Item metadata

dc.contributor.author: Li, Pu
dc.contributor.author: Liu, Xiaobai
dc.contributor.author: Palmer, Kaitlin
dc.contributor.author: Fleishman, Erica
dc.contributor.author: Gillespie, Douglas Michael
dc.contributor.author: Nosal, Eva-Marie
dc.contributor.author: Shiu, Yu
dc.contributor.author: Klinck, Holger
dc.contributor.author: Cholewiak, Danielle
dc.contributor.author: Helble, Tyler
dc.contributor.author: Roch, Marie
dc.date.accessioned: 2020-04-21T15:30:05Z
dc.date.available: 2020-04-21T15:30:05Z
dc.date.issued: 2020-07
dc.identifier: 267536251
dc.identifier: bbb67a66-ee56-4858-90d4-201454b839fb
dc.identifier: 85093866240
dc.identifier: 000626021403027
dc.identifier.citation: Li, P, Liu, X, Palmer, K, Fleishman, E, Gillespie, DM, Nosal, E-M, Shiu, Y, Klinck, H, Cholewiak, D, Helble, T & Roch, M 2020, 'Learning deep models from synthetic data for extracting dolphin whistle contours', in 2020 International Joint Conference on Neural Networks, IJCNN 2020 - Proceedings, 9206992, Proceedings of the International Joint Conference on Neural Networks, IEEE Computer Society, IEEE World Congress on Computational Intelligence (IEEE WCCI) - 2020 International Joint Conference on Neural Networks (IJCNN 2020), Glasgow, United Kingdom, 19/07/20. https://doi.org/10.1109/IJCNN48605.2020.9206992
dc.identifier.citation: conference
dc.identifier.isbn: 9781728169262
dc.identifier.other: ORCID: /0000-0001-9628-157X/work/115631178
dc.identifier.uri: https://hdl.handle.net/10023/19834
dc.description.abstract: We present a learning-based method for extracting whistles of toothed whales (Odontoceti) in hydrophone recordings. Our method represents audio signals as time-frequency spectrograms and decomposes each spectrogram into a set of time-frequency patches. A deep neural network learns archetypical patterns (e.g., crossings, frequency modulated sweeps) from the spectrogram patches and predicts time-frequency peaks that are associated with whistles. We also developed a comprehensive method to synthesize training samples from background environments and train the network with minimal human annotation effort. We applied the proposed learn-from-synthesis method to a subset of the public Detection, Classification, Localization, and Density Estimation (DCLDE) 2011 workshop data to extract whistle confidence maps, which we then processed with an existing contour extractor to produce whistle annotations. The F1-score of our best synthesis method was 0.158 greater than our baseline whistle extraction algorithm (~25% improvement) when applied to common dolphin (Delphinus spp.) and bottlenose dolphin (Tursiops truncatus) whistles. (A sketch of the patch decomposition follows this record.)
dc.format.extent: 10
dc.format.extent: 3233718
dc.language.iso: eng
dc.publisher: IEEE Computer Society
dc.relation.ispartof: 2020 International Joint Conference on Neural Networks, IJCNN 2020 - Proceedings
dc.relation.ispartofseries: Proceedings of the International Joint Conference on Neural Networks
dc.subject: Whistle contour extraction
dc.subject: Deep neural network
dc.subject: Data synthesis
dc.subject: Acoustic
dc.subject: Odontocetes
dc.subject: QA75 Electronic computers. Computer science
dc.subject: QH301 Biology
dc.subject: Software
dc.subject: Artificial Intelligence
dc.subject: 3rd-DAS
dc.subject.lcc: QA75
dc.subject.lcc: QH301
dc.title: Learning deep models from synthetic data for extracting dolphin whistle contours
dc.type: Conference item
dc.contributor.institution: University of St Andrews. School of Biology
dc.contributor.institution: University of St Andrews. Sea Mammal Research Unit
dc.contributor.institution: University of St Andrews. Scottish Oceans Institute
dc.contributor.institution: University of St Andrews. Sound Tags Group
dc.contributor.institution: University of St Andrews. Bioacoustics group
dc.contributor.institution: University of St Andrews. Marine Alliance for Science & Technology Scotland
dc.identifier.doi: 10.1109/IJCNN48605.2020.9206992
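
The abstract above describes decomposing each spectrogram into time-frequency patches that a deep network then scores for whistle energy. The following Python snippet is a minimal illustrative sketch of such a patch decomposition; the frame length, overlap, patch size, and hop are assumptions chosen for illustration, not the parameters published in the paper, and spectrogram_patches is a hypothetical helper name.

    import numpy as np
    from scipy.signal import spectrogram

    def spectrogram_patches(audio, fs, nperseg=512, noverlap=384,
                            patch=(64, 64), hop=(32, 32)):
        # Log-magnitude spectrogram; window and overlap values are
        # illustrative assumptions, not the authors' settings.
        _, _, sxx = spectrogram(audio, fs=fs, nperseg=nperseg,
                                noverlap=noverlap)
        log_s = np.log10(sxx + 1e-10)
        pf, pt = patch   # patch height (freq bins) and width (time bins)
        hf, ht = hop     # strides between neighbouring patches
        patches = [log_s[i:i + pf, j:j + pt]
                   for i in range(0, log_s.shape[0] - pf + 1, hf)
                   for j in range(0, log_s.shape[1] - pt + 1, ht)]
        # In the approach described above, each patch would be scored by
        # the network and the scores reassembled into a whistle
        # confidence map for the contour extractor.
        return np.stack(patches)

    # Example: one second of synthetic noise at a 192 kHz hydrophone rate.
    x = np.random.randn(192000)
    print(spectrogram_patches(x, fs=192000).shape)  # (n_patches, 64, 64)

Overlapping patches let every spectrogram bin be seen in several contexts, which suits the crossing and frequency-modulated-sweep patterns the abstract mentions.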

