Matches in SemOpenAlex for { <https://semopenalex.org/work/W3082055815> ?p ?o ?g. }
Showing items 1 to 86 of 86, with 100 items per page.
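The listing below was produced by the generic triple pattern shown in the header. As a minimal, hedged sketch of how such a listing could be reproduced programmatically, the Python snippet below runs the same pattern against what is assumed to be SemOpenAlex's public SPARQL endpoint (https://semopenalex.org/sparql; verify this URL before relying on it) and groups the returned objects by predicate. The endpoint URL and the helper name `fetch_triples` are illustrative assumptions, not part of the original page; the graph variable `?g` from the header pattern is dropped for simplicity.

```python
# Minimal sketch, assuming SemOpenAlex exposes a SPARQL endpoint at the URL below.
import requests
from collections import defaultdict

ENDPOINT = "https://semopenalex.org/sparql"  # assumed endpoint URL; verify before use


def fetch_triples(entity_uri: str) -> dict:
    """Run the generic { <entity> ?p ?o } pattern and group objects by predicate."""
    query = f"SELECT ?p ?o WHERE {{ <{entity_uri}> ?p ?o . }}"
    resp = requests.get(
        ENDPOINT,
        params={"query": query},
        headers={"Accept": "application/sparql-results+json"},
        timeout=30,
    )
    resp.raise_for_status()
    grouped = defaultdict(list)
    # Standard SPARQL 1.1 JSON results layout: results -> bindings -> variable -> value
    for binding in resp.json()["results"]["bindings"]:
        grouped[binding["p"]["value"]].append(binding["o"]["value"])
    return dict(grouped)


if __name__ == "__main__":
    work = "https://semopenalex.org/work/W3082055815"
    for predicate, objects in fetch_triples(work).items():
        print(predicate, "->", objects[:3], "..." if len(objects) > 3 else "")
```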
- W3082055815 endingPage "646" @default.
- W3082055815 startingPage "636" @default.
- W3082055815 abstract "AbstractThe continued success of deep convolution neural networks (CNN) in computer vision can be directly linked to vast amounts of data and tremendous processing resources for training such non-linear models. However, depending on the task, the available amount of data varies significantly. Particularly robotic systems usually rely on small amounts of data, as producing and annotating them is extremely robot and task specific (e.g. grasping) and therefore prohibitive. Recently, in order to address the aforementioned problem of small datasets in robotic vision, a common practice is to reuse features that are already learned by a CNN within a large-scale task and apply them to different small scale ones. This transfer of learning shows some promising results as an alternative, but nevertheless it can not be compared with the performance of a CNN that is specifically trained from the beginning for that specific task. Thus, many researchers turned to synthetic datasets for training, since they can be produced easily and cost effectively. The main issue of such datasets that already exist, is the lack of photorealism both in terms of background and lighting. Herein, we are proposing a framework for the generation of completely synthetic datasets that includes all types of data that state-of-the-art algorithms in object recognition, and tracking need for their training. Thus, we can improve robotic perception without deploying the robot in time-consuming real-world scenarios.KeywordsRobot visionMachine learningNeural networksSynthetic data" @default.
- W3082055815 created "2020-09-08" @default.
- W3082055815 creator A5009024599 @default.
- W3082055815 creator A5050236530 @default.
- W3082055815 creator A5087452615 @default.
- W3082055815 date "2020-08-29" @default.
- W3082055815 modified "2023-09-27" @default.
- W3082055815 title "Generating 2.5D Photorealistic Synthetic Datasets for Training Machine Vision Algorithms" @default.
- W3082055815 cites W1595452285 @default.
- W3082055815 cites W1965235031 @default.
- W3082055815 cites W1967368660 @default.
- W3082055815 cites W2041376653 @default.
- W3082055815 cites W2093102539 @default.
- W3082055815 cites W2132400125 @default.
- W3082055815 cites W2156222070 @default.
- W3082055815 cites W2161158016 @default.
- W3082055815 cites W2471962767 @default.
- W3082055815 cites W2745471877 @default.
- W3082055815 doi "https://doi.org/10.1007/978-3-030-57802-2_61" @default.
- W3082055815 hasPublicationYear "2020" @default.
- W3082055815 type Work @default.
- W3082055815 sameAs 3082055815 @default.
- W3082055815 citedByCount "0" @default.
- W3082055815 crossrefType "book-chapter" @default.
- W3082055815 hasAuthorship W3082055815A5009024599 @default.
- W3082055815 hasAuthorship W3082055815A5050236530 @default.
- W3082055815 hasAuthorship W3082055815A5087452615 @default.
- W3082055815 hasConcept C119857082 @default.
- W3082055815 hasConcept C121332964 @default.
- W3082055815 hasConcept C154945302 @default.
- W3082055815 hasConcept C160920958 @default.
- W3082055815 hasConcept C162324750 @default.
- W3082055815 hasConcept C187736073 @default.
- W3082055815 hasConcept C18903297 @default.
- W3082055815 hasConcept C206588197 @default.
- W3082055815 hasConcept C2778755073 @default.
- W3082055815 hasConcept C2780451532 @default.
- W3082055815 hasConcept C2781238097 @default.
- W3082055815 hasConcept C41008148 @default.
- W3082055815 hasConcept C45347329 @default.
- W3082055815 hasConcept C50644808 @default.
- W3082055815 hasConcept C51632099 @default.
- W3082055815 hasConcept C62520636 @default.
- W3082055815 hasConcept C64876066 @default.
- W3082055815 hasConcept C81363708 @default.
- W3082055815 hasConcept C86803240 @default.
- W3082055815 hasConcept C90509273 @default.
- W3082055815 hasConceptScore W3082055815C119857082 @default.
- W3082055815 hasConceptScore W3082055815C121332964 @default.
- W3082055815 hasConceptScore W3082055815C154945302 @default.
- W3082055815 hasConceptScore W3082055815C160920958 @default.
- W3082055815 hasConceptScore W3082055815C162324750 @default.
- W3082055815 hasConceptScore W3082055815C187736073 @default.
- W3082055815 hasConceptScore W3082055815C18903297 @default.
- W3082055815 hasConceptScore W3082055815C206588197 @default.
- W3082055815 hasConceptScore W3082055815C2778755073 @default.
- W3082055815 hasConceptScore W3082055815C2780451532 @default.
- W3082055815 hasConceptScore W3082055815C2781238097 @default.
- W3082055815 hasConceptScore W3082055815C41008148 @default.
- W3082055815 hasConceptScore W3082055815C45347329 @default.
- W3082055815 hasConceptScore W3082055815C50644808 @default.
- W3082055815 hasConceptScore W3082055815C51632099 @default.
- W3082055815 hasConceptScore W3082055815C62520636 @default.
- W3082055815 hasConceptScore W3082055815C64876066 @default.
- W3082055815 hasConceptScore W3082055815C81363708 @default.
- W3082055815 hasConceptScore W3082055815C86803240 @default.
- W3082055815 hasConceptScore W3082055815C90509273 @default.
- W3082055815 hasLocation W30820558151 @default.
- W3082055815 hasOpenAccess W3082055815 @default.
- W3082055815 hasPrimaryLocation W30820558151 @default.
- W3082055815 hasRelatedWork W2337926734 @default.
- W3082055815 hasRelatedWork W2766634277 @default.
- W3082055815 hasRelatedWork W2799614062 @default.
- W3082055815 hasRelatedWork W3129634582 @default.
- W3082055815 hasRelatedWork W3136076031 @default.
- W3082055815 hasRelatedWork W3173182854 @default.
- W3082055815 hasRelatedWork W4281780675 @default.
- W3082055815 hasRelatedWork W4285586943 @default.
- W3082055815 hasRelatedWork W4287776258 @default.
- W3082055815 hasRelatedWork W3009789068 @default.
- W3082055815 isParatext "false" @default.
- W3082055815 isRetracted "false" @default.
- W3082055815 magId "3082055815" @default.
- W3082055815 workType "book-chapter" @default.
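Since every `cites` and `hasRelatedWork` target in the listing is itself a SemOpenAlex work URI, the citation graph can be walked with the same generic pattern. The short usage example below reuses the assumed `fetch_triples` helper and endpoint from the earlier sketch to dereference the first cited work from the listing (W1595452285); it is illustrative only.

```python
# Follow one 'cites' edge from the listing above, reusing the assumed
# fetch_triples helper and endpoint from the earlier sketch.
cited = fetch_triples("https://semopenalex.org/work/W1595452285")
print(sorted(cited.keys()))  # predicates available for the cited work
```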