Matches in SemOpenAlex for { <https://semopenalex.org/work/W2987685943> ?p ?o ?g. }
Showing items 1 to 72 of 72, with 100 items per page.
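The match pattern in the header can be reproduced programmatically. A minimal sketch, assuming SemOpenAlex exposes its data at the SPARQL endpoint `https://semopenalex.org/sparql` (endpoint URL and the `format=json` parameter are assumptions, not stated on this page); the quad pattern `?p ?o ?g` is expressed here with an explicit `GRAPH` clause:

```python
# Sketch: building the SPARQL request that yields the triples listed below.
# ENDPOINT is an assumed URL; adjust if the service differs.
import urllib.parse

WORK = "https://semopenalex.org/work/W2987685943"
ENDPOINT = "https://semopenalex.org/sparql"  # assumed endpoint

def build_query(work_iri: str) -> str:
    """SPARQL equivalent of the header pattern { <work> ?p ?o ?g. }."""
    return f"SELECT ?p ?o ?g WHERE {{ GRAPH ?g {{ <{work_iri}> ?p ?o . }} }}"

def build_request_url(endpoint: str, query: str) -> str:
    """GET-style request URL, asking for JSON results (assumed parameter)."""
    params = urllib.parse.urlencode({"query": query, "format": "json"})
    return f"{endpoint}?{params}"

url = build_request_url(ENDPOINT, build_query(WORK))
```

The URL can then be fetched with any HTTP client; each result binding corresponds to one bullet line in the listing below.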
- W2987685943 endingPage "101023" @default.
- W2987685943 startingPage "101023" @default.
- W2987685943 abstract "Abstract Every spring, European forest soundscapes fill up with the drums and calls of woodpeckers as they draw territories and pair up. Each drum or call is species-specific and easily picked up by a trained ear. In this study, we worked toward automating this process and thus toward making the continuous acoustic monitoring of woodpeckers practical. We recorded from March to May successively in Belgium, Luxembourg and France, collecting hundreds of gigabytes of data. We shed 50–80% of these recordings using the Acoustic Complexity Index (ACI). Then, for both the detection of the target signals in the audio stream and the identification of the different species, we implemented transfer learning from computer vision to audio analysis. This meant transforming sounds into images via spectrograms and retraining legacy deep image networks that have been made public (e.g. Inception) to work with such data. The visual patterns produced by drums (vertical lines) and call syllables (hats, straight lines, waves, etc.) in spectrograms are characteristic and allow an identification of the signals. We retrained using data from Xeno-Canto, Tierstimmen and a private collection. In the subsequent analysis of the field recordings, the repurposed networks gave outstanding results for the detection of drums (either 0.2–9.9% of false positives, or for the toughest dataset, a reduction from 28,601 images to 1000 images left for manual review) and for the detection and identification of calls (73.5–100.0% accuracy; in the toughest case, dataset reduction from 643,901 images to 14,667 images). However, they performed less well for the identification of drums than a simpler method using handcrafted features and the k-Nearest Neighbor (k-NN) classifier. The species character in drums does not lie in shapes but in temporal patterns: speed, acceleration, number of strikes and duration of the drums. These features are secondary information in spectrograms, and the image networks that have learned invariance toward object size tend to disregard them. At locations where they drummed abundantly, the accuracy was 83.0% for Picus canus (93.1% for k-NN) and 36.1% for Dryocopus martius (81.5% for k-NN). For the three field locations we produced timelines of the encountered woodpecker activity (6 species, 11 signals)." @default.
- W2987685943 created "2019-11-22" @default.
- W2987685943 creator A5038370247 @default.
- W2987685943 creator A5041608491 @default.
- W2987685943 creator A5042439255 @default.
- W2987685943 date "2020-01-01" @default.
- W2987685943 modified "2023-10-14" @default.
- W2987685943 title "Detection and identification of European woodpeckers with deep convolutional neural networks" @default.
- W2987685943 cites W1611245111 @default.
- W2987685943 cites W1967818247 @default.
- W2987685943 cites W2001087393 @default.
- W2987685943 cites W2012284479 @default.
- W2987685943 cites W2018140578 @default.
- W2987685943 cites W2069943693 @default.
- W2987685943 cites W2114455599 @default.
- W2987685943 cites W2115891142 @default.
- W2987685943 cites W2127325495 @default.
- W2987685943 cites W2134653501 @default.
- W2987685943 cites W2317187420 @default.
- W2987685943 cites W2510931882 @default.
- W2987685943 cites W2883595988 @default.
- W2987685943 cites W4230041507 @default.
- W2987685943 cites W4237481021 @default.
- W2987685943 doi "https://doi.org/10.1016/j.ecoinf.2019.101023" @default.
- W2987685943 hasPublicationYear "2020" @default.
- W2987685943 type Work @default.
- W2987685943 sameAs 2987685943 @default.
- W2987685943 citedByCount "20" @default.
- W2987685943 countsByYear W29876859432020 @default.
- W2987685943 countsByYear W29876859432021 @default.
- W2987685943 countsByYear W29876859432022 @default.
- W2987685943 countsByYear W29876859432023 @default.
- W2987685943 crossrefType "journal-article" @default.
- W2987685943 hasAuthorship W2987685943A5038370247 @default.
- W2987685943 hasAuthorship W2987685943A5041608491 @default.
- W2987685943 hasAuthorship W2987685943A5042439255 @default.
- W2987685943 hasConcept C108583219 @default.
- W2987685943 hasConcept C116834253 @default.
- W2987685943 hasConcept C154945302 @default.
- W2987685943 hasConcept C18903297 @default.
- W2987685943 hasConcept C41008148 @default.
- W2987685943 hasConcept C70721500 @default.
- W2987685943 hasConcept C81363708 @default.
- W2987685943 hasConcept C86803240 @default.
- W2987685943 hasConceptScore W2987685943C108583219 @default.
- W2987685943 hasConceptScore W2987685943C116834253 @default.
- W2987685943 hasConceptScore W2987685943C154945302 @default.
- W2987685943 hasConceptScore W2987685943C18903297 @default.
- W2987685943 hasConceptScore W2987685943C41008148 @default.
- W2987685943 hasConceptScore W2987685943C70721500 @default.
- W2987685943 hasConceptScore W2987685943C81363708 @default.
- W2987685943 hasConceptScore W2987685943C86803240 @default.
- W2987685943 hasLocation W29876859431 @default.
- W2987685943 hasOpenAccess W2987685943 @default.
- W2987685943 hasPrimaryLocation W29876859431 @default.
- W2987685943 hasRelatedWork W2731899572 @default.
- W2987685943 hasRelatedWork W2763109982 @default.
- W2987685943 hasRelatedWork W2999805992 @default.
- W2987685943 hasRelatedWork W3116150086 @default.
- W2987685943 hasRelatedWork W3133861977 @default.
- W2987685943 hasRelatedWork W3166467183 @default.
- W2987685943 hasRelatedWork W3192840557 @default.
- W2987685943 hasRelatedWork W4200173597 @default.
- W2987685943 hasRelatedWork W4220996320 @default.
- W2987685943 hasRelatedWork W4312417841 @default.
- W2987685943 hasVolume "55" @default.
- W2987685943 isParatext "false" @default.
- W2987685943 isRetracted "false" @default.
- W2987685943 magId "2987685943" @default.
- W2987685943 workType "article" @default.
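The abstract notes that the species character of drums lies in temporal patterns (speed, acceleration, number of strikes, duration) rather than spectrogram shapes, which is why handcrafted features with k-NN beat the image networks there. An illustrative sketch of that idea, not the authors' exact pipeline: the feature set, distance, and synthetic strike timings below are assumptions chosen only to make the mechanism concrete.

```python
# Illustrative only: temporal drum features + 1-NN classification.
# All strike timings and species labels here are synthetic.
from math import dist

def drum_features(strike_times):
    """Temporal features of one drum: strike count, duration (s),
    mean speed (strikes/s), and an acceleration proxy
    (first minus last inter-strike interval; > 0 means speeding up)."""
    n = len(strike_times)
    duration = strike_times[-1] - strike_times[0]
    intervals = [b - a for a, b in zip(strike_times, strike_times[1:])]
    speed = (n - 1) / duration
    accel = intervals[0] - intervals[-1]
    return (n, duration, speed, accel)

def knn_predict(train, query, k=1):
    """k-NN over (feature_tuple, label) pairs with Euclidean distance."""
    neighbours = sorted(train, key=lambda fl: dist(fl[0], query))[:k]
    labels = [label for _, label in neighbours]
    return max(set(labels), key=labels.count)

# Synthetic reference drums: species A short and decelerating,
# species B long and steady.
a = drum_features([0.00, 0.04, 0.09, 0.15, 0.22])
b = drum_features([i * 0.06 for i in range(30)])
train = [(a, "A"), (b, "B")]

query = drum_features([0.00, 0.05, 0.11, 0.18, 0.26])
print(knn_predict(train, query))  # → A
```

In a real pipeline the features would need scaling (here the raw strike count dominates the distance) and k > 1 with cross-validated feature choices; the point is only that such temporal descriptors are explicit inputs here, whereas a size-invariant image network tends to discard them.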