Authors: Jonas Höchst, Hicham Bellafkir, Patrick Lampe, Markus Vogelbacher, Markus Mühling, Daniel Schneider, Kim Lindner, Sascha Rösner, Dana G. Schabo, Nina Farwig and Bernd Freisleben.
Abstract: We present Bird@Edge, an Edge AI system for recognizing bird species in audio recordings to support real-time biodiversity monitoring. Bird@Edge is based on embedded edge devices operating in a distributed system to enable efficient, continuous evaluation of soundscapes recorded in forests. Multiple ESP32-based microphones (called Bird@Edge Mics) stream audio to a local Bird@Edge Station, on which bird species recognition is performed. The results of several Bird@Edge Stations are transmitted to a backend cloud for further analysis, e.g., by biodiversity researchers. To recognize bird species in soundscapes, a deep neural network based on the EfficientNet-B3 architecture is trained, optimized for execution on embedded edge devices, and deployed on an NVIDIA Jetson Nano board using the DeepStream SDK. Our experiments show that our deep neural network outperforms the state-of-the-art BirdNET neural network on several data sets and achieves a recognition quality of up to 95.2% mean average precision on soundscape recordings in the Marburg Open Forest, a research and teaching forest of the University of Marburg, Germany. Measurements of the power consumption of the Bird@Edge components highlight the real-world applicability of the approach. All software and firmware components of Bird@Edge are available under open source licenses.
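To make the described recognition step more concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of classifying bird species in a short audio chunk with an EfficientNet-B3 backbone, assuming PyTorch, torchaudio, and torchvision; the class count, sample rate, chunk length, and multi-label output are illustrative assumptions, not values taken from the paper.

```python
# Hedged sketch: illustrates the general idea of bird species recognition on
# audio chunks with an EfficientNet-B3 classifier. All constants are assumptions.
import torch
import torchaudio
import torchvision

NUM_SPECIES = 82          # hypothetical number of target species
SAMPLE_RATE = 48_000      # assumed microphone sample rate (Hz)
CHUNK_SECONDS = 3         # assumed length of one analysis window

# EfficientNet-B3 backbone with a classification head sized for the species set.
model = torchvision.models.efficientnet_b3(weights=None, num_classes=NUM_SPECIES)
model.eval()

def classify_chunk(waveform: torch.Tensor) -> torch.Tensor:
    """Return per-species scores for one mono audio chunk of shape (1, samples)."""
    # Turn the waveform into a log-mel spectrogram "image".
    mel = torchaudio.transforms.MelSpectrogram(
        sample_rate=SAMPLE_RATE, n_mels=128
    )(waveform)
    spec = torchaudio.transforms.AmplitudeToDB()(mel)
    # Replicate the single channel to the 3 input channels EfficientNet expects.
    x = spec.unsqueeze(0).repeat(1, 3, 1, 1)
    with torch.no_grad():
        logits = model(x)
    # Sigmoid for a multi-label interpretation (several species may be audible).
    return torch.sigmoid(logits).squeeze(0)

if __name__ == "__main__":
    chunk = torch.randn(1, SAMPLE_RATE * CHUNK_SECONDS)  # placeholder audio
    scores = classify_chunk(chunk)
    print(scores.topk(5))
```

In the actual system, such a model would be exported and run through the DeepStream SDK on the Jetson Nano rather than invoked directly from Python as shown here.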