CNN-based multi-class multi-label classification of sound scenes in the context of wind turbine sound emission measurements



Within the scope of the interdisciplinary project WEA-Akzeptanz, measurements of the sound emission of wind turbines were carried out at Leibniz University Hannover. Due to the measurement environment, the recorded signals contain interfering components (e.g. traffic, birdsong, wind, rain). Depending on the subsequent signal processing and analysis, it may be necessary to identify sections containing the raw sound of a wind turbine, recordings of background noise as free of interference as possible, or a specific combination of interfering noises. Given the amount of data, manual classification of the audio signals is usually not feasible, so automated classification becomes necessary. In this paper, we extend our previously proposed multi-class single-label classification model to a multi-class multi-label model, which reflects the real-world acoustic conditions around wind turbines more accurately and allows for finer-grained evaluations. We first give a short overview of the data acquisition and the dataset. We then briefly summarize our previous approach, extend it to a multi-class multi-label formulation, and analyze the trained convolutional neural network with respect to different metrics. Overall, the model delivers very reliable classification results, with an example-based F1-score of about 80 % for a multi-label classification of 12 classes.
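For readers unfamiliar with the evaluation metric, the following sketch illustrates how an example-based (sample-wise) F1-score is typically computed for multi-label classification: a per-example F1 between true and predicted label sets, averaged over all examples. The label names and data below are illustrative and not taken from the paper's dataset.

```python
import numpy as np

def example_based_f1(y_true, y_pred):
    """Average per-example F1 over binary indicator matrices of
    shape (n_samples, n_labels)."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = (y_true & y_pred).sum(axis=1)                  # per-example true positives
    denom = y_true.sum(axis=1) + y_pred.sum(axis=1)     # |true| + |predicted|
    # Convention: an example with no true and no predicted labels scores F1 = 1;
    # np.maximum avoids division by zero in that case.
    f1 = np.where(denom == 0, 1.0, 2 * tp / np.maximum(denom, 1))
    return f1.mean()

# Toy example with 3 hypothetical labels (wind turbine, traffic, birdsong):
y_true = [[1, 0, 1],
          [0, 1, 0]]
y_pred = [[1, 0, 0],
          [0, 1, 1]]
print(example_based_f1(y_true, y_pred))  # each example scores 2/3, mean ~0.667
```

This is equivalent to scikit-learn's `f1_score(..., average='samples')` for binary indicator inputs, up to the convention chosen for empty label sets.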