Matches in SemOpenAlex for { <https://semopenalex.org/work/W3025483923> ?p ?o ?g. }
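The same property/value/graph pattern can be retrieved programmatically. A minimal sketch, assuming the public SemOpenAlex SPARQL endpoint at https://semopenalex.org/sparql and the standard SPARQL JSON results format (both are assumptions about the service, not part of the listing below):

```python
import requests

# Fetch all ?p ?o ?g matches for this work from SemOpenAlex.
# Endpoint URL and result format are assumed, not taken from the listing.
ENDPOINT = "https://semopenalex.org/sparql"
QUERY = """
SELECT ?p ?o ?g WHERE {
  GRAPH ?g { <https://semopenalex.org/work/W3025483923> ?p ?o . }
}
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
response.raise_for_status()
for binding in response.json()["results"]["bindings"]:
    print(
        binding["p"]["value"],
        binding["o"]["value"],
        binding.get("g", {}).get("value", ""),
    )
```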
- W3025483923 abstract "A time-consuming challenge faced by camera trap practitioners all over the world is the extraction of meaningful data from images to inform ecological management. The primary methods of image processing used by practitioners include manual analysis and citizen science. An increasingly popular alternative is automated image classification software. However, most automated solutions are not sufficiently robust to be deployed on a large scale. Key challenges include limited access to images for each species and lack of location invariance when transferring models between sites. This prevents optimal use of ecological data and results in significant expenditure of time and resources to annotate and retrain deep learning models. In this study, we aimed to (a) assess the value of publicly available non-iconic FlickR images in the training of deep learning models for camera trap object detection, (b) develop an out-of-the-box location invariant automated camera trap image processing solution for ecologists using deep transfer learning, and (c) explore the use of small subsets of camera trap images in optimisation of a FlickR trained deep learning model for high precision ecological object detection. We collected and annotated a dataset of images of “pigs” (Sus scrofa and Phacochoerus africanus) from the consumer image sharing website FlickR. These images were used to achieve transfer learning using a RetinaNet model in the task of object detection. We compared the performance of this model to the performance of models trained on combinations of camera trap images obtained from five different projects, each characterised by 5 different geographical regions. Furthermore, we explored optimisation of the FlickR model via infusion of small subsets of camera trap images to increase robustness in difficult images. In most cases, the mean Average Precision (mAP) of the FlickR trained model when tested on out-of-sample camera trap sites (67.21-91.92%) was significantly higher than the mAP achieved by models trained on only one geographical location (4.42-90.8%) and rivalled the mAP of models trained on mixed camera trap datasets (68.96-92.75%). The infusion of camera trap images into the FlickR training further improved AP by 5.10-22.32% to 83.60-97.02%. Ecology researchers can use FlickR images in the training of automated deep learning solutions for camera trap image processing to significantly reduce time and resource expenditure by allowing the development of location invariant, highly robust out-of-the-box solutions. This would allow AI technologies to be deployed on a large scale in ecological applications." @default.
- W3025483923 created "2020-05-21" @default.
- W3025483923 creator A5020268777 @default.
- W3025483923 creator A5024455123 @default.
- W3025483923 creator A5039802283 @default.
- W3025483923 creator A5056913502 @default.
- W3025483923 date "2020-05-15" @default.
- W3025483923 modified "2023-10-17" @default.
- W3025483923 title "Location Invariant Animal Recognition Using Mixed Source Datasets and Deep Learning" @default.
- W3025483923 cites W1493083729 @default.
- W3025483923 cites W1832500336 @default.
- W3025483923 cites W1861492603 @default.
- W3025483923 cites W1991779457 @default.
- W3025483923 cites W2031489346 @default.
- W3025483923 cites W2033012377 @default.
- W3025483923 cites W2126194992 @default.
- W3025483923 cites W2133665775 @default.
- W3025483923 cites W2140310924 @default.
- W3025483923 cites W2146352414 @default.
- W3025483923 cites W2161969291 @default.
- W3025483923 cites W2413367505 @default.
- W3025483923 cites W2559553341 @default.
- W3025483923 cites W2570343428 @default.
- W3025483923 cites W2744043610 @default.
- W3025483923 cites W2782689936 @default.
- W3025483923 cites W2811409441 @default.
- W3025483923 cites W2890102334 @default.
- W3025483923 cites W2914978454 @default.
- W3025483923 cites W2941177490 @default.
- W3025483923 cites W2947469701 @default.
- W3025483923 cites W2950062006 @default.
- W3025483923 cites W2952113774 @default.
- W3025483923 cites W2954932437 @default.
- W3025483923 cites W2963556638 @default.
- W3025483923 cites W2964812477 @default.
- W3025483923 cites W2969264236 @default.
- W3025483923 cites W3001360360 @default.
- W3025483923 doi "https://doi.org/10.1101/2020.05.13.094896" @default.
- W3025483923 hasPublicationYear "2020" @default.
- W3025483923 type Work @default.
- W3025483923 sameAs 3025483923 @default.
- W3025483923 citedByCount "1" @default.
- W3025483923 countsByYear W30254839232021 @default.
- W3025483923 crossrefType "posted-content" @default.
- W3025483923 hasAuthorship W3025483923A5020268777 @default.
- W3025483923 hasAuthorship W3025483923A5024455123 @default.
- W3025483923 hasAuthorship W3025483923A5039802283 @default.
- W3025483923 hasAuthorship W3025483923A5056913502 @default.
- W3025483923 hasBestOaLocation W30254839231 @default.
- W3025483923 hasConcept C108583219 @default.
- W3025483923 hasConcept C119857082 @default.
- W3025483923 hasConcept C150899416 @default.
- W3025483923 hasConcept C153180895 @default.
- W3025483923 hasConcept C154945302 @default.
- W3025483923 hasConcept C18903297 @default.
- W3025483923 hasConcept C190470478 @default.
- W3025483923 hasConcept C197352329 @default.
- W3025483923 hasConcept C2776151529 @default.
- W3025483923 hasConcept C2779101711 @default.
- W3025483923 hasConcept C2781238097 @default.
- W3025483923 hasConcept C29376679 @default.
- W3025483923 hasConcept C31972630 @default.
- W3025483923 hasConcept C33923547 @default.
- W3025483923 hasConcept C37914503 @default.
- W3025483923 hasConcept C41008148 @default.
- W3025483923 hasConcept C59822182 @default.
- W3025483923 hasConcept C86803240 @default.
- W3025483923 hasConceptScore W3025483923C108583219 @default.
- W3025483923 hasConceptScore W3025483923C119857082 @default.
- W3025483923 hasConceptScore W3025483923C150899416 @default.
- W3025483923 hasConceptScore W3025483923C153180895 @default.
- W3025483923 hasConceptScore W3025483923C154945302 @default.
- W3025483923 hasConceptScore W3025483923C18903297 @default.
- W3025483923 hasConceptScore W3025483923C190470478 @default.
- W3025483923 hasConceptScore W3025483923C197352329 @default.
- W3025483923 hasConceptScore W3025483923C2776151529 @default.
- W3025483923 hasConceptScore W3025483923C2779101711 @default.
- W3025483923 hasConceptScore W3025483923C2781238097 @default.
- W3025483923 hasConceptScore W3025483923C29376679 @default.
- W3025483923 hasConceptScore W3025483923C31972630 @default.
- W3025483923 hasConceptScore W3025483923C33923547 @default.
- W3025483923 hasConceptScore W3025483923C37914503 @default.
- W3025483923 hasConceptScore W3025483923C41008148 @default.
- W3025483923 hasConceptScore W3025483923C59822182 @default.
- W3025483923 hasConceptScore W3025483923C86803240 @default.
- W3025483923 hasLocation W30254839231 @default.
- W3025483923 hasOpenAccess W3025483923 @default.
- W3025483923 hasPrimaryLocation W30254839231 @default.
- W3025483923 hasRelatedWork W12793662 @default.
- W3025483923 hasRelatedWork W1284803 @default.
- W3025483923 hasRelatedWork W221938 @default.
- W3025483923 hasRelatedWork W2356256 @default.
- W3025483923 hasRelatedWork W2374111 @default.
- W3025483923 hasRelatedWork W2585641 @default.
- W3025483923 hasRelatedWork W2803426 @default.
- W3025483923 hasRelatedWork W7303821 @default.
- W3025483923 hasRelatedWork W8031603 @default.
- W3025483923 hasRelatedWork W9122165 @default.
- W3025483923 isParatext "false" @default.
- W3025483923 isRetracted "false" @default.
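The abstract above describes transfer learning with a RetinaNet detector fine-tuned on FlickR images and then "infused" with small subsets of camera trap images. A minimal sketch of what that fine-tuning step could look like, assuming a COCO-pretrained torchvision RetinaNet and user-supplied detection DataLoaders (the paper's actual training code and hyperparameters are not given here; class count and loader names are hypothetical):

```python
import torch
import torchvision
from torchvision.models.detection.retinanet import RetinaNetClassificationHead


def build_pig_detector(num_classes: int = 2) -> torch.nn.Module:
    """COCO-pretrained RetinaNet with a freshly initialised classification head.

    num_classes = 2 assumes one foreground class ("pig") plus background,
    following torchvision's convention; adjust for your own label set.
    """
    model = torchvision.models.detection.retinanet_resnet50_fpn(weights="DEFAULT")
    model.head.classification_head = RetinaNetClassificationHead(
        in_channels=model.backbone.out_channels,
        num_anchors=model.head.classification_head.num_anchors,
        num_classes=num_classes,
    )
    return model


def train_one_epoch(model, data_loader, optimizer, device="cuda"):
    """One pass over a detection DataLoader yielding (images, targets) pairs,
    where each target dict holds 'boxes' and 'labels' tensors."""
    model.train()
    model.to(device)
    for images, targets in data_loader:
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        losses = model(images, targets)   # dict of classification + box regression losses
        loss = sum(losses.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()


# Usage sketch: fine-tune on FlickR images first, then continue on a small
# "infused" subset of camera trap images (both loaders are placeholders).
# model = build_pig_detector()
# optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
# train_one_epoch(model, flickr_loader, optimizer)
# train_one_epoch(model, camera_trap_subset_loader, optimizer)
```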