Noise assessment of taxibotted versus conventional taxiing operations using a phased microphone array
Time: 7:20 am
Author: Bieke von den Hoff
Abstract ID: 1765
In sustainable aviation, the focus lies mostly on greenhouse gas emissions during flight. However, airports have an increasing interest in reducing emissions during ground operations such as taxiing, for example to improve local air quality. In 2020, Amsterdam Airport Schiphol started a pilot for sustainable taxiing with a pilot-controlled hybrid-electric aircraft towing vehicle called TaxiBot. The COVID-19 pandemic created an opportunity for extensive operational testing at a near-empty airport. The low background noise levels in this situation also made it possible to perform a noise assessment of taxiing with the TaxiBot versus conventional two-engine taxiing. This assessment can be used to evaluate the noise levels to which ground workers or neighbouring communities are exposed during TaxiBot operations. A phased microphone array was used for the noise measurements, which allowed not only for a noise level and directionality assessment but also for noise source identification. This paper compares the noise emissions and noise sources of a taxibotted and a conventional taxiing operation. The results show that a taxibotted taxiing operation produces significantly lower noise levels. Additionally, acoustic imaging shows that the TaxiBot engine is the main noise source during a taxibotted pass-by manoeuvre.
Sound Field Projection System using Optical See-Through Head Mounted Display
Time: 7:00 am
Author: Atsuto Inoue
Abstract ID: 1797
There are various ways to grasp the spatial and temporal structure of a sound field. Sound field visualization is an effective technique for understanding spatial sound information; acoustical holography, optical methods, and beamforming, for example, have been proposed and studied. In recent years, augmented reality (AR) technology has developed rapidly and become more familiar. Many sensors, display devices, and ICT technologies have been implemented in AR equipment, enabling interaction between the real and virtual worlds. In this paper, we propose an AR display system that displays the results obtained by the beamforming method. The system consists of a 16-channel microphone array, a real-time sound field visualization system, and an optical see-through head-mounted display (OST-HMD). The real-time sound field visualization system analyses the sound signals recorded by the 16-channel microphone array using the beamforming method. The processed sound pressure data are sent to the OST-HMD via the transmission control protocol (TCP), and a colormap is projected onto the real world. The settings of the real-time sound field visualization system can be changed through a virtual user interface (UI) and TCP. In addition, multiple users can experience the system by sharing the sound pressure and settings data. Using this system, users wearing the OST-HMD can observe sound field information intuitively.
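The beamforming step behind such a visualization can be illustrated with a minimal frequency-domain delay-and-sum sketch. The sketch below assumes a free field, a single analysis frequency, and an invented 16-microphone line array; it is not the authors' implementation, only a toy scan that maps a simulated monopole onto a line of candidate positions.

```python
import numpy as np

c = 343.0          # speed of sound [m/s]
f = 2000.0         # analysis frequency [Hz] (assumed for illustration)
k = 2 * np.pi * f / c

# 16-microphone line array and a single monopole source (toy geometry)
mics = np.c_[np.linspace(-0.3, 0.3, 16), np.zeros(16), np.zeros(16)]
src = np.array([0.1, 0.0, 1.0])

# Simulated microphone spectra for a unit monopole at `src`
r = np.linalg.norm(mics - src, axis=1)
p = np.exp(-1j * k * r) / r
csm = np.outer(p, p.conj())            # cross-spectral matrix

# Scan candidate source positions one metre in front of the array
xs = np.linspace(-0.5, 0.5, 101)
power = np.empty_like(xs)
for i, x in enumerate(xs):
    d = np.linalg.norm(mics - np.array([x, 0.0, 1.0]), axis=1)
    w = np.exp(-1j * k * d) / d        # steering vector for this point
    w /= np.linalg.norm(w)
    power[i] = np.real(w.conj() @ csm @ w)   # delay-and-sum output
```

The `power` vector is the one-dimensional analogue of the colormap projected by the OST-HMD; its maximum falls at the true source position x = 0.1 m.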
Sound field reconstruction in rooms with deep generative models
Time: 6:00 am
Author: Xenofon Karakonstantis
Abstract ID: 1864
The characterization of Room Impulse Responses (RIRs) over an extended region in a room by means of measurements requires dense spatial sampling with many microphones. This can often become intractable and time consuming in practice. Well-established reconstruction methods such as plane wave regression show that the sound field in a room can be reconstructed from sparsely distributed measurements. However, these reconstructions usually rely on assuming physical sparsity (i.e. few waves compose the sound field) or some other trait of the measured sound field, making the models less generalizable and problem specific. In this paper we introduce a method to reconstruct a sound field in an enclosure with the use of a Generative Adversarial Network (GAN), which generates new variants of the data distributions it is trained upon. The goal of the proposed GAN model is to estimate the underlying distribution of plane waves in any source-free region, and to map these distributions from a stochastic, latent representation. A GAN is trained on a large number of synthesized sound fields represented by a random wave field and then tested on both simulated and real data sets of lightly damped and reverberant rooms.
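The plane wave regression baseline mentioned above can be sketched in a few lines: expand the field in a dictionary of plane waves and fit the coefficients to sparse microphone measurements by regularized least squares. The field, microphone count, frequency, and ridge parameter below are all invented for illustration, and a simple Tikhonov penalty stands in for the sparsity-promoting regularizers the abstract alludes to.

```python
import numpy as np

rng = np.random.default_rng(1)
k = 2 * np.pi * 500 / 343.0                    # wavenumber at 500 Hz

# Dictionary of J candidate plane-wave directions (2-D for brevity)
J = 64
theta = np.linspace(0, 2 * np.pi, J, endpoint=False)
K = k * np.c_[np.cos(theta), np.sin(theta)]    # (J, 2) wave vectors

def dictionary(X):
    """Plane-wave basis evaluated at positions X of shape (N, 2)."""
    return np.exp(-1j * X @ K.T)               # (N, J)

# "True" field composed of three plane waves, sampled at 24 microphones
c_true = np.zeros(J, complex)
c_true[[5, 20, 47]] = [1.0, 0.7, 0.4]
Xm = rng.uniform(-0.5, 0.5, (24, 2))
H = dictionary(Xm)
p = H @ c_true

# Ridge-regularized least squares for the plane-wave coefficients
coef = np.linalg.solve(H.conj().T @ H + 1e-3 * np.eye(J), H.conj().T @ p)

# Reconstruct the pressure at a held-out point
x0 = np.array([[0.1, -0.2]])
p_rec = dictionary(x0) @ coef
```

The GAN proposed in the abstract replaces this fixed linear model with a learned mapping from a latent code to plane-wave distributions.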
Optimization of underdetermined hologram points in reconstructing the vibro-acoustic source field based on ESM
Time: 6:20 am
Author: Laixu Jiang
Abstract ID: 1898
The distribution of measurement points is important when reconstructing a vibro-acoustic source field using near-field acoustical holography (NAH) based on the equivalent source method (ESM). Because measurements taken too close to the source impose a limit on the implementation of ESM, an optimal arrangement of the hologram data is needed to enable measurement at a longer distance while the points remain within the near field. In this work, the optimal measurement positions are determined by adopting a method that keeps the measuring positions as independent of one another as possible. Singular value decomposition of the transfer matrix is employed in a loop-iteration fashion, in which the candidate measuring point that most increases the singularity is eliminated at each iteration step. Comparisons are made with uniformly distributed hologram points, the monopole version of the ESM model, and the patch holography method. The test results reveal that the acoustic field of sound sources can be reconstructed meaningfully from the optimized hologram points under an underdetermined condition. For a predetermined reconstruction accuracy, the test results obtained by varying the hologram distance show that underdetermined measurement is possible at a longer distance than with the usual NAH.
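One plausible reading of the loop-iteration selection is a greedy pruning of candidate hologram points driven by the singular values of the transfer matrix. The sketch below is only an illustration of that idea, not the paper's exact criterion: at each step it drops the point whose removal leaves the best-conditioned transfer matrix.

```python
import numpy as np

def prune_points(G, keep):
    """Greedily reduce the rows of transfer matrix G (hologram points x
    equivalent sources) to `keep` points, each step removing the point
    whose removal minimizes the condition number of the remaining matrix.
    Illustrative only; the paper's elimination criterion may differ."""
    rows = list(range(G.shape[0]))
    while len(rows) > keep:
        best_j, best_cond = None, np.inf
        for j in rows:
            sub = G[[r for r in rows if r != j]]
            s = np.linalg.svd(sub, compute_uv=False)
            cond = s[0] / s[-1]                 # conditioning without point j
            if cond < best_cond:
                best_cond, best_j = cond, j
        rows.remove(best_j)
    return rows
```

The surviving rows index the optimized, mutually near-independent measurement positions used for the underdetermined reconstruction.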
A new technology for locating very low frequency and negative signal-to-noise ratio sound sources
Time: 6:40 am
Author: Yazhong Lu
Abstract ID: 2144
This paper presents a new technology that enables one to locate multiple sound sources with a very large dynamic range simultaneously, including very low frequency and negative signal-to-noise ratio sound sources, in a non-ideal environment with random background noise and unknown interfering signals. In particular, the spatial resolution of source localization is frequency independent; in other words, spatial resolution remains very high at very low as well as very high frequencies. The underlying principle of this new technology is a hybrid methodology that combines passive SODAR (SOnic Detection And Ranging), advanced signal processing, and least-squares minimization. Using this technology, engineers will be able to visualize sound sources both in real time and in post processing in an adverse test environment. Live videos of sound source localization inside a crowded machine shop are shown, where there are unknown background noise, unspecified sound reflections and reverberation, and interfering signals.
MEMS microphone intensity array for cabin noise measurements
Time: 8:20 am
Author: Carsten Spehr
Abstract ID: 2288
Aircraft cabin noise measurements in flight are used to quantify the noise level and to identify the entry points of acoustic energy into the cabin. Sound intensity probes are the state-of-the-art measurement technique for this task. During measurements, additional sound-absorbing material is used to ease the rather harsh acoustic measurement environment inside the cabin. In order to decrease the expensive in-flight measurement time, an intensity array approach was chosen. This intensity probe consists of 512 MEMS microphones. Depending on the frequency, these microphones can be combined into an array of hundreds of 3D intensity probes. The acoustic velocity is estimated using a high-order 3D finite difference stencil. At low frequencies, a larger microphone spacing is used to relax the requirement of accurate phase matching between the microphone sensors. Measurements were conducted in the ground-based Dornier 728 cabin noise simulation as well as in flight.
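The principle behind estimating acoustic velocity from pressure microphones can be shown with the simplest case: a first-order, two-microphone finite difference in one dimension (the paper's high-order 3D stencil is not reproduced here). For a unit plane wave, the difference quotient in the linearized Euler equation yields the velocity, and the resulting active intensity comes out close to the exact value; spacing and frequency below are invented.

```python
import numpy as np

rho, c, f = 1.21, 343.0, 1000.0            # air density, sound speed, frequency
omega = 2 * np.pi * f
k = omega / c
dx = 0.02                                  # microphone spacing [m] (assumed)

p1 = np.exp(-1j * k * 0.0)                 # complex pressure at mic 1
p2 = np.exp(-1j * k * dx)                  # complex pressure at mic 2

p_mid = 0.5 * (p1 + p2)                    # pressure at the midpoint
u = -(p2 - p1) / (1j * omega * rho * dx)   # linearized Euler equation
I = 0.5 * np.real(p_mid * np.conj(u))      # active intensity estimate

I_exact = 0.5 / (rho * c)                  # exact value for a unit plane wave
```

The few-percent bias that remains at this spacing and frequency is exactly what higher-order stencils and frequency-dependent spacing are meant to reduce.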
A basic study on estimating location of sound source by using distributed acoustic measurement network
Time: 8:00 am
Author: Itsuki Ikemi
Abstract ID: 2439
Sounds from childcare facilities are often a cause of noise problems with neighbours. However, since the sound power levels of children's play and other sounds in childcare facilities have not been clarified, evaluation methods have not been established, making countermeasures difficult. In order to evaluate the noise, it is necessary to model the location of the sound source and the sound power level. We have been developing a sound source identification system that uses multiple Raspberry Pi-based recording devices to estimate the location of a sound source and its sound power level. By using GPS for time synchronization, the devices can be distributed and placed without connecting cables, which is expected to expand the measurement area significantly. As a method of estimation, the arrival time difference is calculated by cross-correlation of the signals input to each recording device, and the sound source location is estimated from the calculated arrival time differences and the location information of the devices. The effectiveness of this system was verified in an anechoic room and in outdoor fields.
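The core estimation step, the arrival time difference between two devices from cross-correlation, can be sketched as follows. The signal, sample rate, and delay are synthetic; a real deployment would first align the recordings using the GPS timestamps.

```python
import numpy as np

fs = 8000                                  # sample rate [Hz] (assumed)
rng = np.random.default_rng(0)
s = rng.standard_normal(4000)              # broadband source signal (synthetic)
true_delay = 120                           # samples between the two devices

x1 = s                                     # signal at device 1
x2 = np.r_[np.zeros(true_delay), s[:-true_delay]]   # delayed copy at device 2

# The lag of the cross-correlation maximum gives the arrival-time difference
corr = np.correlate(x2, x1, mode="full")
lag = np.argmax(corr) - (len(x1) - 1)
tdoa = lag / fs                            # 120 samples -> 15 ms
```

Each TDOA constrains the source to a hyperbola between a device pair; combining several pairs with the device positions gives the location estimate.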
Performance evaluation on multi-channel Wiener filter based speech enhancement for unmanned aerial vehicles recordings
Time: 6:40 am
Author: Yusuke Hioka
Abstract ID: 2457
Recording speech from unmanned aerial vehicles has been attracting interest due to its broad applications, including filming, search and rescue, and surveillance. One of the challenges in this problem is the quality of the recorded speech due to contamination by various interfering noises. In particular, noise radiated by the unmanned aerial vehicle's rotors significantly impacts the overall quality of the audio recordings. The multi-channel Wiener filter has been a commonly used technique for speech enhancement because of its robustness in practical setups. Existing studies have also utilised such techniques in speech enhancement for unmanned aerial vehicle recordings, such as the well-known beamformer-with-postfiltering framework. However, many variants of the multi-channel Wiener filter have been developed in recent years, such as the speech distortion weighted multi-channel Wiener filter. To address these recent advancements, in this study we compare the performance of these variants. In particular, we explore the benefits these techniques may bring in the setting of audio recordings from an unmanned aerial vehicle.
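The speech distortion weighted variant admits a compact closed form: w = (Φs + μΦn)⁻¹ Φs e_ref, where μ trades speech distortion against noise reduction and μ = 1 recovers the standard multi-channel Wiener filter. The toy covariances below (rank-one speech, white rotor noise, a uniform steering vector) are assumptions for illustration only.

```python
import numpy as np

M = 4                                   # microphones (toy setup)
a = np.ones(M) / np.sqrt(M)             # assumed steering vector
sigma_s2 = 10.0                         # speech power (assumed)
Phi_s = sigma_s2 * np.outer(a, a)       # speech spatial covariance (rank 1)
Phi_n = np.eye(M)                       # rotor noise modeled as white (toy)

def sdw_mwf(Phi_s, Phi_n, mu=1.0, ref=0):
    """SDW-MWF for the given reference channel; mu=1 is the standard MWF."""
    W = np.linalg.solve(Phi_s + mu * Phi_n, Phi_s)
    return W[:, ref]

w = sdw_mwf(Phi_s, Phi_n, mu=1.0)
snr_in = Phi_s[0, 0] / Phi_n[0, 0]                  # single-mic SNR
snr_out = (w @ Phi_s @ w) / (w @ Phi_n @ w)         # filtered SNR
```

In this toy case the filter recovers the full array gain, raising the SNR from 2.5 to 10; larger μ would suppress more rotor noise at the cost of speech distortion.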
Demonstration of a unified approach to beamforming
Time: 8:40 am
Author: Christof Puhle
Abstract ID: 2709
In this paper, we discuss a unification of several well-known frequency-domain beamforming methods into one working principle. The methods under consideration include Functional Beamforming, Asymptotic Beamforming, Adaptive Beamforming and, as a natural limiting case, Standard Beamforming. Common to most of these methods is the underlying eigenvalue decomposition of the cross-spectral matrix. By introducing a weighted power mean (also called a weighted Hölder mean) of these eigenvalues for every map point, each of the above methods can be represented by a certain power p. For this reason, the unified approach is called Power Beamforming throughout this paper. Going from the limiting case p=1 of Standard Beamforming to lower power values results in the attenuation of side lobes and a sharpening of the main lobes in the corresponding beamforming map. We demonstrate this effect using simulations and several real-world measurements.
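One concrete instance of such a power mean over cross-spectral-matrix eigenvalues is functional beamforming, where the map value is (Σᵢ gᵢ λᵢᵖ)^(1/p) with weights gᵢ = |wᴴuᵢ|². The sketch below illustrates that form; the paper's exact Power Beamforming weighting is not reproduced here, and p = 1 reduces to the standard output wᴴCw.

```python
import numpy as np

def power_beamform(csm, w, p):
    """Power-mean beamforming map value for steering vector w.

    Eigendecompose the cross-spectral matrix, project w onto the
    eigenvectors (g_i = |w^H u_i|^2), and return (sum_i g_i * lam_i**p)**(1/p).
    p = 1 gives the standard beamformer w^H C w; smaller p attenuates
    side lobes (functional beamforming with exponent 1/p)."""
    lam, U = np.linalg.eigh(csm)
    lam = np.clip(lam, 0.0, None)       # drop negative numerical noise
    g = np.abs(U.conj().T @ w) ** 2     # projections onto eigenvectors
    return (g @ lam**p) ** (1.0 / p)
```

Sweeping p below 1 for a fixed scan grid reproduces the side-lobe attenuation and main-lobe sharpening described in the abstract.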
Deconvoluting acoustic beamforming maps with a deep neural network
Time: 7:40 am
Author: Wagner Goncalves Pinto
Abstract ID: 3084
Localization and quantification of noise sources is an important scientific and industrial problem, with phased microphone arrays being the standard technique in many applications. Non-physical artifacts appear in the output due to the nature of the method; thus, a supplementary step known as deconvolution is often performed. Data-driven machine learning is a candidate for solving this problem. Neural networks can be extremely advantageous since, unlike classical deconvolution techniques, no hypothesis concerning the environment or the characteristics of the sources is necessary: information on the acoustic propagation is implicitly extracted from pairs of source and output maps. In this work, a convolutional neural network is trained to deconvolute the beamforming map obtained from synthetic data simulating the response of an array of microphones. The quality of the estimation and the computational cost are compared to those of classical deconvolution methods (DAMAS, CLEAN-SC). Constraints associated with the size of the dataset used for training the neural network are also investigated and presented.
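The training data for such a network are pairs of a clean source map and the "dirty" beamforming map it produces. A toy generator can be written by convolving sparse source maps with an idealized point spread function; the Gaussian main lobe and the map size below are assumptions standing in for a real array's simulated response.

```python
import numpy as np

rng = np.random.default_rng(0)

def psf(n=32, width=2.0):
    """Idealized array point spread function: a Gaussian main lobe (toy)."""
    y, x = np.mgrid[-n//2:n//2, -n//2:n//2]
    return np.exp(-(x**2 + y**2) / (2 * width**2))

def make_pair(n=32, n_src=3):
    """One (source map, dirty beamforming map) training pair."""
    src = np.zeros((n, n))
    idx = rng.integers(4, n - 4, size=(n_src, 2))
    src[idx[:, 0], idx[:, 1]] = rng.uniform(0.5, 1.0, n_src)
    # Dirty map = source map convolved (circularly) with the PSF
    kernel = np.fft.fft2(np.fft.ifftshift(psf(n)))
    dirty = np.real(np.fft.ifft2(np.fft.fft2(src) * kernel))
    return src, dirty
```

A convolutional network trained on many such pairs learns to invert the blur, which is the deconvolution that DAMAS and CLEAN-SC perform iteratively.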
Sensor placement for sound field reconstruction in enclosures
Time: 6:00 am
Author: Samuel A. Verburg
Abstract ID: 3095
Sampling spatio-temporal acoustic fields is a challenging problem since it demands a large number of sensors. Typically, to characterize the pressure field inside an enclosure, the number of measurements required increases linearly with frequency and cubically with volume, becoming intractable for rooms of moderate size even at low and mid frequencies. Sparse representation techniques, such as Compressed Sensing, rely on the sparsity of natural signals in a certain representation domain to drastically reduce the number of measurements needed to sample such signals. In this study, we optimize the placement of sensors inside an enclosure in order to reduce the measurements required for a given reconstruction accuracy. The proposed methodology selects a sparse set of sensor positions from a predefined grid via QR factorization of the sensing matrix. Numerical results show an effective reduction in the required number of measurements when their positions are optimized, in contrast to standard random positioning. Unlike the majority of existing approaches, we study the placement problem for wide-band acoustic fields.
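QR-based selection of sensor positions is typically done with column-pivoted QR (e.g. SciPy's `scipy.linalg.qr` with `pivoting=True`); the NumPy-only sketch below implements the equivalent greedy pivoting by hand and is only an illustration of the idea, not the authors' procedure. Rows of the sensing matrix correspond to candidate grid positions, so row selection is column pivoting on the conjugate transpose.

```python
import numpy as np

def select_sensors(H, m):
    """Pick m rows of sensing matrix H (candidate positions x basis
    functions) by greedy column-pivoted QR applied to H^H: at each step,
    choose the candidate with the largest residual norm, then deflate."""
    A = H.conj().T.copy()                     # columns <-> candidate positions
    chosen = []
    for _ in range(m):
        norms = np.linalg.norm(A, axis=0)
        norms[chosen] = -1.0                  # never re-select a position
        j = int(np.argmax(norms))
        chosen.append(j)
        q = A[:, j] / np.linalg.norm(A[:, j])
        A = A - np.outer(q, q.conj() @ A)     # remove the chosen direction
    return chosen
```

The selected positions keep the reduced sensing matrix well conditioned, which is what enables accurate reconstruction from far fewer measurements than random placement.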