Files in this item
Learning deep models from synthetic data for extracting dolphin whistle contours
Item metadata
dc.contributor.author | Li, Pu | |
dc.contributor.author | Liu, Xiaobai | |
dc.contributor.author | Palmer, Kaitlin | |
dc.contributor.author | Fleishman, Erica | |
dc.contributor.author | Gillespie, Douglas Michael | |
dc.contributor.author | Nosal, Eva-Marie | |
dc.contributor.author | Shiu, Yu | |
dc.contributor.author | Klinck, Holger | |
dc.contributor.author | Cholewiak, Danielle | |
dc.contributor.author | Helble, Tyler | |
dc.contributor.author | Roch, Marie | |
dc.date.accessioned | 2020-04-21T15:30:05Z | |
dc.date.available | 2020-04-21T15:30:05Z | |
dc.date.issued | 2020-07 | |
dc.identifier | 267536251 | |
dc.identifier | bbb67a66-ee56-4858-90d4-201454b839fb | |
dc.identifier | 85093866240 | |
dc.identifier | 000626021403027 | |
dc.identifier.citation | Li, P, Liu, X, Palmer, K, Fleishman, E, Gillespie, D M, Nosal, E-M, Shiu, Y, Klinck, H, Cholewiak, D, Helble, T & Roch, M 2020, Learning deep models from synthetic data for extracting dolphin whistle contours. in 2020 International Joint Conference on Neural Networks, IJCNN 2020 - Proceedings, 9206992, Proceedings of the International Joint Conference on Neural Networks, IEEE Computer Society, IEEE World Congress on Computational Intelligence (IEEE WCCI) - 2020 International Joint Conference on Neural Networks (IJCNN 2020), Glasgow, United Kingdom, 19/07/20. https://doi.org/10.1109/IJCNN48605.2020.9206992 | en |
dc.identifier.citation | conference | en |
dc.identifier.isbn | 9781728169262 | |
dc.identifier.other | ORCID: /0000-0001-9628-157X/work/115631178 | |
dc.identifier.uri | https://hdl.handle.net/10023/19834 | |
dc.description.abstract | We present a learning-based method for extracting whistles of toothed whales (Odontoceti) in hydrophone recordings. Our method represents audio signals as time-frequency spectrograms and decomposes each spectrogram into a set of time-frequency patches. A deep neural network learns archetypical patterns (e.g., crossings, frequency modulated sweeps) from the spectrogram patches and predicts time-frequency peaks that are associated with whistles. We also developed a comprehensive method to synthesize training samples from background environments and train the network with minimal human annotation effort. We applied the proposed learn-from-synthesis method to a subset of the public Detection, Classification, Localization, and Density Estimation (DCLDE) 2011 workshop data to extract whistle confidence maps, which we then processed with an existing contour extractor to produce whistle annotations. The F1-score of our best synthesis method was 0.158 greater than our baseline whistle extraction algorithm (~25% improvement) when applied to common dolphin (Delphinus spp.) and bottlenose dolphin (Tursiops truncatus) whistles. | |
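The abstract's preprocessing pipeline (audio to time-frequency spectrogram, then decomposition into fixed-size patches) can be sketched as below. The patch size, FFT settings, sample rate, and the chirp-based frequency-modulated sweep standing in for a whistle are all illustrative assumptions, not the authors' actual configuration.

```python
import numpy as np
from scipy.signal import chirp, spectrogram

# Assumed parameters -- the paper's real patch size and FFT
# settings are not stated in the abstract.
FS = 192_000                # sample rate (Hz), plausible for hydrophone recordings
PATCH_F, PATCH_T = 64, 64   # patch height (freq bins) x width (time frames)

def synth_whistle(duration=1.0, f0=5_000, f1=15_000):
    """Synthesize a linear frequency-modulated sweep, a stand-in for
    the archetypical whistle patterns mentioned in the abstract."""
    t = np.linspace(0, duration, int(FS * duration), endpoint=False)
    return chirp(t, f0=f0, t1=duration, f1=f1)

def to_patches(audio):
    """Compute a log-magnitude spectrogram and cut it into
    non-overlapping PATCH_F x PATCH_T time-frequency patches."""
    f, t, sxx = spectrogram(audio, fs=FS, nperseg=256, noverlap=128)
    log_sxx = 10 * np.log10(sxx + 1e-12)  # dB scale, avoid log(0)
    # Trim to whole multiples of the patch size, then tile.
    nf = (log_sxx.shape[0] // PATCH_F) * PATCH_F
    nt = (log_sxx.shape[1] // PATCH_T) * PATCH_T
    trimmed = log_sxx[:nf, :nt]
    patches = (trimmed
               .reshape(nf // PATCH_F, PATCH_F, nt // PATCH_T, PATCH_T)
               .swapaxes(1, 2)
               .reshape(-1, PATCH_F, PATCH_T))
    return patches

patches = to_patches(synth_whistle())
print(patches.shape)  # (46, 64, 64) for these settings
```

In the paper's full pipeline these patches would feed a deep network that outputs per-pixel whistle confidence; here only the decomposition step is shown.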
dc.format.extent | 10 | |
dc.format.extent | 3233718 | |
dc.language.iso | eng | |
dc.publisher | IEEE Computer Society | |
dc.relation.ispartof | 2020 International Joint Conference on Neural Networks, IJCNN 2020 - Proceedings | en |
dc.relation.ispartofseries | Proceedings of the International Joint Conference on Neural Networks | en |
dc.subject | Whistle contour extraction | en |
dc.subject | Deep neural network | en |
dc.subject | Data synthesis | en |
dc.subject | Acoustic | en |
dc.subject | Odontocetes | en |
dc.subject | QA75 Electronic computers. Computer science | en |
dc.subject | QH301 Biology | en |
dc.subject | Software | en |
dc.subject | Artificial Intelligence | en |
dc.subject | 3rd-DAS | en |
dc.subject.lcc | QA75 | en |
dc.subject.lcc | QH301 | en |
dc.title | Learning deep models from synthetic data for extracting dolphin whistle contours | en |
dc.type | Conference item | en |
dc.contributor.institution | University of St Andrews. School of Biology | en |
dc.contributor.institution | University of St Andrews. Sea Mammal Research Unit | en |
dc.contributor.institution | University of St Andrews. Scottish Oceans Institute | en |
dc.contributor.institution | University of St Andrews. Sound Tags Group | en |
dc.contributor.institution | University of St Andrews. Bioacoustics group | en |
dc.contributor.institution | University of St Andrews. Marine Alliance for Science & Technology Scotland | en |
dc.identifier.doi | 10.1109/IJCNN48605.2020.9206992 | |
This item appears in the following Collection(s)
Items in the St Andrews Research Repository are protected by copyright, with all rights reserved, unless otherwise indicated.