
Assessment of auditory function and abilities in life-like listening scenarios
Project C5 in the Collaborative Research Center SFB 1330 “Hearing Acoustics: Perceptive Principles, Algorithms and Applications” (HAPPAA)
Aim
We study auditory mechanisms for communication in life-like listening scenarios. Toward this aim, the project addresses the following:
- Technology: Improve real-time room acoustic simulation and auralization tools (rtSOFE) and verify them for hearing research. We work on virtual reality techniques to analyze full-body movements, dynamic gestures, head movements and facial expressions of human participants and map them to avatars with high fidelity to study their contribution to communication.
- Content: Develop life-like listening scenarios with multiple sound sources (→ underground station scene), verify them against the measured acoustics of the real space, and develop ways to share the scenes with the community to foster reproducible research.
- Experimental methods: Interaction in life-like listening situations differs strongly from established psychophysical experimental approaches. We therefore develop novel test methods to assess hearing and communication ability in ongoing, interactive situations, using a variety of approaches: interactive reporting, word-by-word speech intelligibility assessment, end-of-trial reporting, head and body motion analysis, facial analysis, interaction analysis (turn-taking), and speech analysis (Lombard effect).
- Auditory function: We study the impact of dynamic changes in the spatial configuration of sound sources on the detection (unmasking) of tones and speech, and the contribution of visual, facial and gestural information to human communication.
Demonstration of our interactive audio-visual VR and our underground scene
Key findings

1. We investigated the contribution of body and head gestures and poses to communication: with increasing noise level, speakers produce beat gestures, particularly complex gestures, more frequently, and listeners increase their confirmative nodding (Hládek & Seeber, 2026).

2. The listener's head orientation and movement strongly influence speech perception. During movement, speech intelligibility can be reduced, an effect that had not been reported before (Hládek & Seeber, 2023; Hládek & Seeber, 2019).
3. The rtSOFE room acoustics simulation and auralization software has been extended to multi-source auralization and binaural rendering; new sound field synthesis methods were tested, verified and published.

4. Life-like acoustic and visual models of the underground station "Theresienstraße" in Munich were created. Auralization with rtSOFE was verified against the real space both acoustically and in a speech test with listeners. The models were published open source along with extensive in-situ acoustic measurements, background sound recordings and documentation to render a complete audiovisual scene (Hládek, Ewert & Seeber, 2021).
5. We created a novel 'in-movement' speech perception test based on individualized movement patterns of participants to study the evolution of speech perception during movement as well as vestibular/proprioceptive influences (see 2. and Hládek & Seeber, Forum Acusticum 2020).
6. We measured spatial unmasking of free-field moving tonal sources: even slow movement of 30°/s (cf. a head turn of ~780°/s) reduces the unmasking benefit. The time course could be mapped to incoming sound reflections, and the dynamic masking effects were modelled with a fast binaural processing stage followed by temporal integration (Kolotzek, Aublin & Seeber, 2023).
7. Control approaches were developed for room acoustic auralizations from the Unreal Engine and for MetaHuman avatars in the Unreal Engine. The room acoustic simulation software rtSOFE can now be used seamlessly with the Unreal Engine to create professional, interactive visualizations with accurate real-time acoustic rendering over a loudspeaker system or a VR headset (Enghofer, Hládek & Seeber, 2021).
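The modelling idea behind finding 6 — a fast binaural stage whose short-time cues are then smoothed by temporal integration — can be illustrated with a minimal sketch. The Python fragment below is not the published DynBU_fast model; the function names, the 2 ms frame length and the 200 ms integration constant are illustrative assumptions only.

```python
import numpy as np

def fast_binaural_stage(left, right, fs, frame_ms=2.0):
    """Hypothetical fast stage: extract interaural cues in very short frames.

    Returns per-frame interaural time differences (via cross-correlation lag)
    and interaural level differences in dB. Illustrative only, not the
    published model.
    """
    n = int(fs * frame_ms / 1000)
    itds, ilds = [], []
    for start in range(0, len(left) - n, n):
        l = left[start:start + n]
        r = right[start:start + n]
        # Cross-correlation lag as a proxy for the interaural time cue
        lag = np.argmax(np.correlate(l, r, mode="full")) - (n - 1)
        itds.append(lag / fs)
        # Level difference in dB (epsilon avoids log of zero)
        ilds.append(10 * np.log10((np.sum(l**2) + 1e-12)
                                  / (np.sum(r**2) + 1e-12)))
    return np.array(itds), np.array(ilds)

def temporal_integration(cue, fs_frames, tau_ms=200.0):
    """Leaky integrator smoothing the fast cue track (assumed time constant)."""
    alpha = np.exp(-1000.0 / (fs_frames * tau_ms))
    out = np.zeros_like(cue)
    acc = 0.0
    for i, c in enumerate(cue):
        acc = alpha * acc + (1 - alpha) * c
        out[i] = acc
    return out
```

In such a two-stage scheme, rapid source movement changes the short-time cues faster than the slow integrator can follow, so the momentary binaural advantage is underestimated — qualitatively mirroring the reduced unmasking benefit measured for moving sources.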


Funding
Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Projektnummer 352015383 – SFB 1330 C5.
Publications
Fichna, S.; van de Par, S.; Seeber, B.U.; Ewert, S.D.: Perceptual Evaluation of Acoustic Level of Detail in Virtual Acoustic Environments. Acoustics 8 (1), 2026
Gerken, M.; Schütze, J.; Kirsch, C.; Ewert, S.D.; Seeber, B.U.; Heeren, J.; Afghah, T.; Wagener, K.C.; Kollmeier, B.; Warzybok, A.: Perceptual measures of normal-hearing and hearing-impaired listeners across defined virtual acoustic scenes. International Journal of Audiology, 2026, 1-33
Hládek, Ľ.; Seeber, B.U.: Head, posture, and full-body gestures in unscripted dyadic conversations in noise. Zenodo/arXiv, 2026. https://doi.org/10.48550/arXiv.2512.03636
Azaripasand, P.; Seeber, B.U.: HRTF Measurement System with Rotating Loudspeakers. Fortschritte der Akustik – DAGA '25, Deutsche Gesellschaft für Akustik e.V. (DEGA), 2025
Fichna, S.; Biberger, T.; Seeber, B.U.; Ewert, S.D.: Effects of Visual Representation and Scene Complexity on Speech Perception, Spatial Hearing, and Loudness in Virtual Environments. 2025 Immersive and 3D Audio: from Architecture to Automotive (I3DA), IEEE, 2025, 1-10
Hohmann, V.; Anemüller, J.; Biberger, T.; Blau, M.; Brand, T.; Doclo, S.; Ewert, S.; Grimm, G.; Schütze, J.; Seeber, B.U.; Wagener, K.C.; Warzybok-Oetjen, A.: Forschung für eine bessere akustische Kommunikation: Perzeptive Prinzipien, Algorithmen und Anwendungen [Research for better acoustic communication: perceptive principles, algorithms and applications]. Akustik Journal 01/25, 2025, 20-31
Kuntz, M.; Seeber, B.U.: Dynamic Loudspeaker Equalization for Participant Movements in a Loudspeaker Array. Proc. Forum Acusticum – Euronoise 2025, EAA, 2025, 6527-6531
Schütze, J.; Kirsch, C.; Gerken, M.; Seeber, B.U.; Kollmeier, B.; Ewert, S.D.: Verified virtual acoustic environments and comparison of speech intelligibility and hearing aid benefit to standard audiological tests. DAGA '25, Deutsche Gesellschaft für Akustik e.V. (DEGA), 2025
Seeber, B.U.: Measuring in free field virtual acoustics – from humans to cars. Noise and Vibration: Emerging Methods, 2025
Seeber, B.U.; Ewert, S.D.; Hládek, Ľ.: Assessment of auditory function and abilities in life-like listening scenarios. DAGA '25, Deutsche Gesellschaft für Akustik e.V. (DEGA), 2025
Seeber, B.U.; Weymar, P.; Ewert, S.D.; van de Par, S.: Perceptual sensitivity to the "open window" in reverberation. Proc. Forum Acusticum – Euronoise 2025, EAA, 2025, 6527-6531
Seeber, B.U.; Weymar, P.; Dong, D.; Ewert, S.D.; van de Par, S.: Mind the gap: Perceptual sensitivity to anisotropy in late reverberation. DAGA '25, Deutsche Gesellschaft für Akustik e.V. (DEGA), 2025
Azaripasand, P.; Seeber, B.U.: Assessment of Head-Related Transfer Function Time-Alignment Preprocessing Through Spatial Principal Component Analysis. Fortschritte der Akustik – DAGA '24, Deutsche Gesellschaft für Akustik e.V. (DEGA), 2024, 1670-1672
Bischof, N.F.; Seeber, B.U.: Analyse der binauralen Merkmale bei der Lokalisation einer bewegten Schallquelle in Rauschen [Analysis of binaural cues in the localization of a moving sound source in noise]. 50. Jahrestagung für Akustik – Programm, Deutsche Gesellschaft für Akustik e.V. (DEGA), 2024
Bischof, N.F.; Seeber, B.U.: Über den Einfluss von Konstantstrom-Adaptern für vorpolarisierte Messmikrofone auf den Frequenzgang [On the influence of constant-current adapters for prepolarized measurement microphones on the frequency response]. Fortschritte der Akustik – DAGA '24, Deutsche Gesellschaft für Akustik e.V. (DEGA), 2024, 919-921
Dietze, A.; Clapp, S.W.; Seeber, B.U.: Static and moving minimum audible angle: Independent contributions of reverberation and position. JASA Express Letters 4 (5), 2024, 054404
Hládek, Ľ.; Jiao, Y.; Seeber, B.U.: Co-speech and listening gestures during casual conversation in a noisy situation. Speech in Noise Workshop, 2024
Hládek, Ľ.; Seeber, B.U.: On speech-hand synchrony during conversations in a virtual underground station. Fortschritte der Akustik – DAGA '24, Deutsche Gesellschaft für Akustik e.V. (DEGA), 2024, 1230-1232
Kuntz, M.; Seeber, B.U.: Adapting sound reproduction to listener position with dynamic loudspeaker equalization. Fortschritte der Akustik – DAGA '24, Deutsche Gesellschaft für Akustik e.V. (DEGA), 2024, 1649-1651
Bischof, N.F.; Aublin, P.G.; Seeber, B.U.: Fast processing models effects of reflections on binaural unmasking. Acta Acustica 7, 2023, 11
Bischof, N.F.; Seeber, B.U.: Dynamic Binaural Unmasking model with fast cue extraction (DynBU_fast) to predict the better-ear and binaural benefit for detecting a dynamic sound source in noise: Software package and experimental data set. 2023
Bischof, N.F.; Seeber, B.U.: Lokalisation der Trajektorien-Endpunkte einer bewegten Schallquelle in Rauschen [Localization of the trajectory end points of a moving sound source in noise]. Fortschritte der Akustik – DAGA '23, Deutsche Gesellschaft für Akustik e.V. (DEGA), 2023, 1193-1196
Bischof, N.F.; Seeber, B.U.: Binaural detection thresholds are differently affected by interaural level differences in S0 and Sπ conditions. Forum Acusticum 2023, 2023
Hládek, Ľ.; Seeber, B.U.: Speech Intelligibility in Reverberation is Reduced During Self-Rotation. Trends in Hearing 27, 2023, 23312165231188619
Hládek, Ľ.; Seeber, B.U.: Behavior in triadic conversations in conditions with varying positions of noise distractors. Fortschritte der Akustik – DAGA '23, Deutsche Gesellschaft für Akustik e.V. (DEGA), 2023, 916-918
Kuntz, M.; Bischof, N.F.; Seeber, B.U.: Sound field synthesis for psychoacoustic research: In situ evaluation of auralized sound pressure level. The Journal of the Acoustical Society of America 154 (3), 2023
Kuntz, M.; Seeber, B.U.: Investigating the Effect of Head Movement on the Perception of Reproduction Artefacts of Moving Sources. Fortschritte der Akustik – DAGA '23, Deutsche Gesellschaft für Akustik e.V. (DEGA), 2023, 1105-1107
Kuntz, M.; Seeber, B.U.: Moving the Ambisonics Sweet-Spot by Adapting Loudspeaker Equalization and Ambisonics Decoding. Forum Acusticum 2023, 2023
Schütze, J.; Ewert, S.D.; Seeber, B.U.; Wagener, K.C.; Kollmeier, B.: Speech intelligibility in live-like virtual acoustic environments. 49. Jahrestagung für Akustik – Programm, Deutsche Gesellschaft für Akustik e.V. (DEGA), 2023
Fichna, S.; Kirsch, C.; Seeber, B.U.; Ewert, S.D.: Perceptual evaluation of simulated and real acoustic scenes with different acoustic level of detail. Proc. 24th International Congress on Acoustics, Acoustical Society of Korea, 2022
Fichna, S.; Seeber, B.U.; Landeau Bobadilla, C.E.; Ewert, S.D.: Evaluation of complex acoustic scenes for hearing research and audiology. 48. Jahrestagung für Akustik – Programm, Deutsche Gesellschaft für Akustik e.V. (DEGA), 2022
Hládek, Ľ.; Seeber, B.U.: Audiovisual models for virtual reality: Underground station. Fortschritte der Akustik – DAGA '22, Deutsche Gesellschaft für Akustik e.V. (DEGA), 2022
Hládek, Ľ.; Seeber, B.U.: Underground station environment. 2022
Hládek, Ľ.; Seeber, B.U.: Behavior during free conversations in a realistic audiovisual simulation of an underground station. Computational Audiology VCCA 2022, June 30 and July 1, University of Oldenburg, 2022
Hládek, Ľ.; Seeber, B.U.: Effects of noise presence and noise position on interpersonal distance in a triadic conversation. INTER-NOISE and NOISE-CON Congress and Conference Proceedings, International Institute of Noise Control Engineering (I-INCE), 2022
Kolotzek, N.; Seeber, B.U.: Onset importance in a binaural detection task during an arc-shaped movement trajectory. Fortschritte der Akustik – DAGA '22, Deutsche Gesellschaft für Akustik e.V. (DEGA), 2022
Kuntz, M.; Seeber, B.U.: Spatial sound reproduction for interactive hearing research. INTER-NOISE and NOISE-CON Congress and Conference Proceedings, International Institute of Noise Control Engineering (I-INCE), 2022
Kuntz, M.; Seeber, B.U.: Investigating the Smoothness of Moving Sources Reproduced with Panning Methods. Fortschritte der Akustik – DAGA '22, Deutsche Gesellschaft für Akustik e.V. (DEGA), 2022
Seeber, B.U.: Reinforcement of binaural cues by floor and ceiling reflections. Proceedings of the 2nd Symposium: The Acoustics of Ancient Theaters, SAAT, 2022
Seeber, B.U.; Hládek, Ľ.; Wang, T.: Dynamic spatial rendering of early and late reverberation of virtual acoustic scenes. Proc. 24th International Congress on Acoustics, Acoustical Society of Korea, 2022
Seeber, B.U.; Kolotzek, N.; Kuntz, M.; Hládek, Ľ.: Testing real hearing in dynamic virtual spaces. 49th Erlanger Kolloquium for audiological research and development, February 10-11, 2022 – Program and Abstracts, WSA Audiology, 2022
Seeber, B.U.; Wackler, T.: Measuring auditory attention effort in virtual audio-visual environments using pupillometry. Proc. 24th International Congress on Acoustics, Acoustical Society of Korea, 2022
Wang, T.; Seeber, B.U.: Extension of the real-time Simulated Open Field Environment for fast binaural rendering. Fortschritte der Akustik – DAGA '22, Deutsche Gesellschaft für Akustik e.V. (DEGA), 2022
van de Par, S.; Ewert, S.D.; Hládek, Ľ.; Kirsch, C.; Schütze, J.; Llorca-Bofí, J.; Grimm, G.; Hendrikse, M.M.E.; Kollmeier, B.; Seeber, B.U.: Auditory-visual scenes for hearing research. Acta Acustica 6, 2022, 55
Fichna, S.; Biberger, T.; Seeber, B.U.; Ewert, S.D.: Effect of Acoustic Scene Complexity and Visual Scene Representation on Auditory Perception in Virtual Audio-Visual Environments. 2021 Immersive and 3D Audio: from Architecture to Automotive (I3DA), IEEE, 2021
Hládek, Ľ.; Ewert, S.D.; Seeber, B.U.: Communication Conditions in Virtual Acoustic Scenes in an Underground Station. 2021 Immersive and 3D Audio: from Architecture to Automotive (I3DA), IEEE, 2021
Pulella, P.; Hládek, Ľ.; Croce, P.; Seeber, B.U.: Auralization of acoustic design in primary school classrooms. 2021 IEEE International Conference on Environment and Electrical Engineering and 2021 IEEE Industrial and Commercial Power Systems Europe (EEEIC / ICPS Europe), IEEE, 2021
Enghofer, F.; Hládek, Ľ.; Seeber, B.U.: An 'Unreal' Framework for Creating and Controlling Audio-Visual Scenes for the rtSOFE. Fortschritte der Akustik – DAGA '21, 2021
Hládek, Ľ.; Seeber, B.U.: Self-rotation behavior during a spatialized speech test in reverberation. Fortschritte der Akustik – DAGA '21, 2021
Kolotzek, N.; Aublin, P.G.; Seeber, B.U.: The effect of early and late reflections on binaural unmasking. Fortschritte der Akustik – DAGA '21, 2021
Kolotzek, N.; Aublin, P.G.; Seeber, B.U.: Fast processing explains the effect of sound reflection on binaural unmasking. Professur für Audio-Signalverarbeitung, 2021
Hládek, Ľ.; Seeber, B.U.: The effect of self-motion cues on speech perception in an acoustically complex scene. Forum Acusticum, 2020
Ewert, S.D.; Hládek, Ľ.; Enghofer, F.; Schutte, M.; Fichna, S.; Seeber, B.U.: Description and implementation of audiovisual scenes for hearing research and beyond. Fortschritte der Akustik – DAGA '20, 2020, 364-365
Hládek, Ľ.; Seeber, B.U.: Speech perception during self-rotation. Joint Conference on Binaural and Spatial Hearing, 2020
Hládek, Ľ.; Seeber, B.U.: The effect of self-orienting on speech perception in an acoustically complex audiovisual scene. Fortschritte der Akustik – DAGA '20, 2020, 91-94
Kolotzek, N.; Seeber, B.U.: Localizing the end position of a circular moving sound source near masked threshold. Forum Acusticum, 2020
Kolotzek, N.; Seeber, B.U.: Localization of circular moving sound sources near masked threshold. 2020
Hládek, Ľ.; Seeber, B.U.: Behavior and Speech Intelligibility in a Changing Multi-talker Environment. Proc. 23rd International Congress on Acoustics, ICA 2019, 2019, 7640-7645
Kolotzek, N.; Seeber, B.U.: Spatial unmasking of circular moving sound sources in the free field. Proc. 23rd International Congress on Acoustics, integrating 4th EAA Euroregio 2019, 2019, 7640-7645