Improving spatial perception with individualized dynamic binaural filtering

This project develops an adaptive directional filtering algorithm based on the user's anthropometric measures to enhance spatial sound perception. When sound reaches the ears, it is acoustically filtered by the head-related transfer functions (HRTFs): transfer functions to each ear that depend on the direction of the sound and on the individual's anatomy, in particular the dimensions of the torso, head, and pinnae. Individual, or at least individually adapted, HRTFs are key to providing high-fidelity spatial perception of the environment in virtual acoustic systems (Seeber & Fastl, 2004). However, measuring HRTFs for each individual is a complex and time-consuming process that requires special equipment, which makes HRTF individualization an ongoing research topic.
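To illustrate the underlying filtering operation, the following minimal Python sketch renders a mono signal binaurally by convolving it with a left/right pair of head-related impulse responses (HRIRs, the time-domain counterparts of HRTFs) for one direction. The HRIR data here are placeholder arrays, not measured filters; in practice they would come from an HRTF measurement or database.

```python
# Minimal sketch of static binaural rendering with one HRIR pair.
# The HRIRs below are hypothetical placeholders, not measured data.
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, hrir_left, hrir_right):
    """Filter a mono signal with the left/right HRIRs of one direction."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)  # (samples, 2) binaural signal

# Example: 1 s of noise through dummy 256-tap filters.
fs = 48_000
mono = np.random.default_rng(0).standard_normal(fs)
hrir_l = np.zeros(256); hrir_l[0] = 1.0   # direct, full-level left ear
hrir_r = np.zeros(256); hrir_r[4] = 0.8   # delayed, attenuated right ear
binaural = render_binaural(mono, hrir_l, hrir_r)
```

The interaural delay and level difference encoded in the two filters are what lateralize the source; a dynamic system updates the HRIR pair as the listener or source moves.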

We will investigate machine learning techniques to capture the complex relationship between a person's anthropometric features and their HRTFs. To this end, we will measure the HRTFs of a range of individuals in the anechoic chamber at TUM. A machine learning model will be developed and trained on these data to synthesize personalized HRTFs for new users from a small set of easily obtained anthropometric measures. The proposed approach will then be evaluated against individually measured HRTFs in listening tests using real-time spatialization with the rtSOFE toolkit.
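As a rough illustration of the regression task (not the project's actual model), the sketch below maps a vector of anthropometric measures to an HRTF magnitude spectrum for one direction and ear. The feature count, spectrum length, and training data are hypothetical stand-ins; the real model will be trained on the HRTFs measured at TUM.

```python
# Illustrative sketch: regress HRTF magnitude spectra from anthropometric
# measures. All data below are random placeholders, not measurements.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_subjects, n_features, n_freq_bins = 40, 8, 128   # e.g. head width, pinna height, ...
X = rng.normal(size=(n_subjects, n_features))      # anthropometric measures
Y = rng.normal(size=(n_subjects, n_freq_bins))     # measured HRTF magnitudes (dB)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
)
model.fit(X, Y)

# Synthesize a personalized spectrum from a new user's measures.
new_user = rng.normal(size=(1, n_features))
predicted_hrtf = model.predict(new_user)           # shape (1, n_freq_bins)
```

In practice, one such mapping would be learned per direction (or direction would enter as an input feature), and the predicted spectra would be combined with suitable phase or delay models before rendering.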

Team members involved

Payman Azaripasand, M.Sc.

External funding

since 10/2021: German Academic Exchange Service (DAAD), #57552340