Matches in SemOpenAlex for { <https://semopenalex.org/work/W4319986031> ?p ?o ?g. }
Showing items 1 to 59 of 59, with 100 items per page.
- W4319986031 abstract "Underwater acoustic data contain a myriad of sound sources, including marine life such as marine mammals, fishes, and crustaceans; man-made sources such as ships, sonar, airguns, and wind farms; and natural geophysical processes such as earthquakes, hurricanes, and volcanic eruptions. Monitoring these acoustic events is important for many ocean applications, such as biology and ecology, where acoustic sensing of marine mammal and fish sounds is used to infer their behavior and distributions; ocean environmental compliance and mitigation in hydrocarbon prospecting; maritime surveillance and defense; and geophysical environmental monitoring and prediction. Substantial volumes of underwater acoustic data are usually acquired in passive ocean acoustic waveguide remote sensing experiments with a large-aperture coherent hydrophone array system. The human effort required to manually analyze underwater acoustic data limits the applications, especially when data are acquired with a large-aperture coherent hydrophone array, since beamformed signals in multiple distinct directions need to be analyzed. Developing automatic, accurate, and fast algorithms for detection and classification of underwater acoustic events can minimize the human effort required for underwater event monitoring, enabling real-time processing and analysis and hence aiding rapid scientific discoveries at sea. Among underwater acoustic events, marine mammal vocalization classification is one of the most challenging problems due to the transient broadband nature of the calls, the high variation within the calls of a species (intra-class variation), and the high similarity between the calls of some species. Marine mammal-produced sounds are related to different activities, including communication between whales for breeding or feeding purposes, sexual behavior, echolocation, and navigation, as well as physical activities such as sea surface breaching and slapping motions. 
In this thesis, we investigate machine learning approaches for classifying marine mammal vocalizations in real-time applications. We utilize acoustic data from a 160-element coherent hydrophone array and employ the passive ocean acoustic waveguide remote sensing (POAWRS) technique to enable sensing and detection over instantaneous wide areas more than 100 km in diameter from the array. Specifically, we investigate call classification for a wide variety of baleen and toothed whale species, including humpback whale song versus non-song calls. A variety of computational acceleration approaches, combining hardware and software, that make the methods suitable for real-time applications are also developed. Humpback whale behavior, population distribution, and structure can be inferred from long-term underwater passive acoustic monitoring of their vocalizations. Humpback whale vocalizations can be divided into two classes: song and non-song calls. Song vocalizations are composed of repeatable sets of phrases with consistently short inter-pulse intervals. Non-song vocalizations, such as 'bow-shaped' and 'downsweep' moans, have large and highly variable inter-pulse intervals and no repeatable pattern. Here we employ machine learning approaches to classify humpback whale vocalizations into song and non-song calls. We use wavelet signal denoising and coherent array processing to enhance the signal-to-noise ratio. To build a feature vector for each time sequence of the beamformed signals, we apply a Bag of Words approach to time-frequency features. Finally, we apply Support Vector Machine (SVM), Neural Network, and Naive Bayes classifiers to the acoustic data and compare their performances. 
Best results are obtained using Mel Frequency Cepstral Coefficient (MFCC) features and SVM, which leads to 94% accuracy and a 72.73% F1-score for humpback whale song versus non-song vocalization classification, showing the effectiveness of the proposed approach for real-time classification at sea. To classify a large variety of whale species by their calls, we extracted time-frequency features such as minimum, maximum, and average frequencies, bandwidth, and duration, as well as the slope and curvature of the time-frequency function obtained via pitch-tracking of calls in the Power Spectral Density (PSD) of the beamformed signals. We used these features to train three classifiers, SVM, Neural Network, and Random Forest, to classify six whale species: Fin, Sei, Blue, Minke, Humpback, and general Odontocetes. We also trained a set of Convolutional Neural Networks (CNN) to detect and classify each of these six whale vocalization categories directly using Per-Channel Energy Normalization (PCEN) spectrograms. Best results were obtained with the Random Forest classifier, which achieved 95% accuracy and an 85% F1-score. To detect transient sound sources, we first applied PCEN to the PSD of the beamformed signals. PCEN is an alternative to the logarithmic transformation, with the benefits of mitigating the effect of persistent sound sources, such as ship tonals, and of providing adaptive dynamic range compression. We applied thresholding to the PCEN data, followed by morphological image opening, to find potential sound sources and reduce noisy detections. We then applied connected component analysis to obtain the final detected sounds for each bearing. To estimate the Direction of Arrival (DoA) of detected sounds, we applied non-maximum suppression (NMS), which is widely used in object detection applications in computer vision, to the detected sounds, using the mean power of each detected sound as its NMS score. 
To speed up the data processing, we investigated a variety of acceleration approaches, such as analyzing the effect of floating-point precision, applying parallel processing, and implementing fast algorithms to run on GPUs. We implemented and optimized delay-and-sum beamforming, in both the time domain and the Fourier transform domain, on NVIDIA GPUs, and achieved more than a 338x speed-up compared to the basic CPU implementation. All the processes, including beamforming, PSD calculation, DoA estimation, feature extraction, and acoustic event classification, are computed in real time. During an experiment off the US Northeast coast on board the research vessel RV Endeavor in September 2021, we utilized the software and hardware advances developed here to record underwater acoustic data using the Northeastern University in-house-fabricated large-aperture 160-element coherent hydrophone array with a sampling frequency of 100 kHz per element. We monitored a wide range of underwater acoustic events in real time, including marine mammal vocalizations such as Fin whale 20 Hz pulses, Humpback whale songs, and Minke whale buzz sequences, as well as Sperm whale and Dolphin high-frequency echolocation clicks up to 50 kHz; fish-generated sounds; and ship tonal and broadband signals.--Author's abstract" @default.
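The delay-and-sum beamforming mentioned in the abstract can be illustrated with a minimal time-domain sketch. This is not the thesis's optimized GPU implementation; the array size, steering delays, and test signal below are hypothetical, and integer-sample circular shifts stand in for the general fractional-delay case.

```python
import numpy as np

def delay_and_sum(signals, delays_samples):
    """Time-domain delay-and-sum beamformer.

    signals: (n_elements, n_samples) array of hydrophone recordings.
    delays_samples: (n_elements,) integer steering delays, in samples.
    Returns the beamformed time series of length n_samples.
    """
    n_elements, _ = signals.shape
    out = np.zeros(signals.shape[1])
    for ch, d in zip(signals, delays_samples):
        # Undo each channel's arrival delay, then sum coherently.
        out += np.roll(ch, -int(d))
    return out / n_elements

# Toy example: a plane wave arriving across a 4-element array with a
# one-sample inter-element delay; matched steering delays realign it.
n, fs = 4, 100
t = np.arange(256)
wave = np.sin(2 * np.pi * 5 * t / fs)
signals = np.stack([np.roll(wave, k) for k in range(n)])
beam = delay_and_sum(signals, delays_samples=np.arange(n))
```

When the steering delays match the arrival delays, the beam output reproduces the source waveform; mismatched delays sum incoherently and attenuate it, which is what gives the array its directional gain.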
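The transient-detection chain the abstract describes (PCEN, thresholding, morphological opening, connected-component analysis) could be prototyped along the following lines. The smoothing constant, PCEN parameters, threshold, and synthetic spectrogram are illustrative assumptions, not the values used in the thesis.

```python
import numpy as np
from scipy import ndimage

def pcen(E, s=0.025, alpha=0.98, delta=2.0, r=0.5, eps=1e-6):
    """Per-Channel Energy Normalization of a power spectrogram E (freq x time).

    A first-order IIR filter along time yields the smoothed energy M;
    dividing by M**alpha suppresses persistent sources, and the
    (x + delta)**r - delta**r step compresses dynamic range adaptively.
    """
    M = np.empty_like(E)
    M[:, 0] = E[:, 0]
    for t in range(1, E.shape[1]):
        M[:, t] = (1.0 - s) * M[:, t - 1] + s * E[:, t]
    return (E / (eps + M) ** alpha + delta) ** r - delta ** r

# Synthetic spectrogram: a persistent ship-like tonal plus a short transient.
E = np.ones((64, 200))
E[10, :] = 100.0          # persistent tonal (suppressed by PCEN smoothing)
E[29:32, 50:61] = 100.0   # transient event (stands out after PCEN)

P = pcen(E)
mask = P > 2.0                              # threshold the PCEN output
mask = ndimage.binary_opening(mask)         # opening removes isolated pixels
labels, n_detections = ndimage.label(mask)  # connected components = detections
```

Because the IIR smoother tracks the steady tonal, its PCEN value stays near the background level, while the transient's onset outruns the smoother and survives the threshold; only the transient yields a connected component.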
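The DoA estimation step applies non-maximum suppression with mean power as the score; a simple 1-D variant over bearings conveys the idea. The 5-degree suppression window and the example detections are assumptions made for illustration only.

```python
import numpy as np

def nms_bearings(bearings, scores, window_deg=5.0):
    """1-D non-maximum suppression over detection bearings.

    Greedily keeps the highest-scoring remaining detection and suppresses
    all others within window_deg of its bearing, until none remain.
    Returns the indices of the kept detections, best score first.
    """
    order = np.argsort(scores)[::-1]          # highest score first
    suppressed = np.zeros(len(bearings), dtype=bool)
    keep = []
    for i in order:
        if suppressed[i]:
            continue
        keep.append(int(i))
        suppressed |= np.abs(bearings - bearings[i]) < window_deg
    return keep

# Three detections: two within 5 degrees of each other, one isolated.
idx = nms_bearings(np.array([30.0, 32.0, 80.0]),
                   np.array([0.9, 0.5, 0.7]))
```

In this example the 32-degree detection is suppressed by the stronger 30-degree one, leaving one detection per distinct arrival direction.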
- W4319986031 created "2023-02-11" @default.
- W4319986031 creator A5060854576 @default.
- W4319986031 date "2023-02-10" @default.
- W4319986031 modified "2023-09-25" @default.
- W4319986031 title "Machine learning approaches for classification of myriad underwater acoustic events over continental-shelf scale regions with passive ocean acoustic waveguide remote sensing" @default.
- W4319986031 doi "https://doi.org/10.17760/d20467283" @default.
- W4319986031 hasPublicationYear "2023" @default.
- W4319986031 type Work @default.
- W4319986031 citedByCount "0" @default.
- W4319986031 crossrefType "dissertation" @default.
- W4319986031 hasAuthorship W4319986031A5060854576 @default.
- W4319986031 hasBestOaLocation W43199860311 @default.
- W4319986031 hasConcept C111368507 @default.
- W4319986031 hasConcept C121332964 @default.
- W4319986031 hasConcept C127313418 @default.
- W4319986031 hasConcept C18903297 @default.
- W4319986031 hasConcept C24890656 @default.
- W4319986031 hasConcept C2776328434 @default.
- W4319986031 hasConcept C2776384079 @default.
- W4319986031 hasConcept C34951282 @default.
- W4319986031 hasConcept C41008148 @default.
- W4319986031 hasConcept C555745239 @default.
- W4319986031 hasConcept C62649853 @default.
- W4319986031 hasConcept C67467970 @default.
- W4319986031 hasConcept C76155785 @default.
- W4319986031 hasConcept C86803240 @default.
- W4319986031 hasConcept C98083399 @default.
- W4319986031 hasConceptScore W4319986031C111368507 @default.
- W4319986031 hasConceptScore W4319986031C121332964 @default.
- W4319986031 hasConceptScore W4319986031C127313418 @default.
- W4319986031 hasConceptScore W4319986031C18903297 @default.
- W4319986031 hasConceptScore W4319986031C24890656 @default.
- W4319986031 hasConceptScore W4319986031C2776328434 @default.
- W4319986031 hasConceptScore W4319986031C2776384079 @default.
- W4319986031 hasConceptScore W4319986031C34951282 @default.
- W4319986031 hasConceptScore W4319986031C41008148 @default.
- W4319986031 hasConceptScore W4319986031C555745239 @default.
- W4319986031 hasConceptScore W4319986031C62649853 @default.
- W4319986031 hasConceptScore W4319986031C67467970 @default.
- W4319986031 hasConceptScore W4319986031C76155785 @default.
- W4319986031 hasConceptScore W4319986031C86803240 @default.
- W4319986031 hasConceptScore W4319986031C98083399 @default.
- W4319986031 hasLocation W43199860311 @default.
- W4319986031 hasOpenAccess W4319986031 @default.
- W4319986031 hasPrimaryLocation W43199860311 @default.
- W4319986031 hasRelatedWork W136528088 @default.
- W4319986031 hasRelatedWork W2005397798 @default.
- W4319986031 hasRelatedWork W2044305272 @default.
- W4319986031 hasRelatedWork W2050344136 @default.
- W4319986031 hasRelatedWork W2120682028 @default.
- W4319986031 hasRelatedWork W2169232835 @default.
- W4319986031 hasRelatedWork W2367300933 @default.
- W4319986031 hasRelatedWork W2549305500 @default.
- W4319986031 hasRelatedWork W2905319986 @default.
- W4319986031 hasRelatedWork W2941438175 @default.
- W4319986031 isParatext "false" @default.
- W4319986031 isRetracted "false" @default.
- W4319986031 workType "dissertation" @default.