Audio Communication Group

A torso model for dynamic HRTF rendering

Head-related transfer functions (HRTFs) are essential for creating 3-D sound in virtual and augmented reality. While the main localization cues (ITD, ILD, spectral cues) are induced by the listener's head and pinnae, additional cues originate from the torso: if the source, ear, and shoulder are approximately aligned, the shoulder reflection adds a comb filter with a depth of up to 5 dB for frequencies above approximately 700 Hz; for sources well below the horizontal plane, the acoustic shadow of the torso causes severe high-frequency damping of up to 20 dB. These acoustic cues were shown to provide audible coloration and weak localization cues for sources away from the median plane.
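The shoulder-reflection effect described above can be sketched as a simple feedforward comb filter. The delay and reflection gain below are illustrative assumptions, not measured values; the gain is chosen so that the peak-to-notch ripple depth matches the roughly 5 dB mentioned in the text.

```python
import numpy as np

# Illustrative assumptions: a shoulder-reflection delay of ~1 ms and a
# reflection gain g chosen so that 20*log10((1 + g)/(1 - g)) ≈ 5 dB.
delay_s = 1e-3   # reflection delay in seconds (assumption)
gain = 0.28      # reflection gain (assumption)

# Magnitude response of a feedforward comb filter: H(f) = 1 + g * e^{-j 2π f τ}
f = np.linspace(20, 20_000, 2**14)
H = 1 + gain * np.exp(-2j * np.pi * f * delay_s)
mag_db = 20 * np.log10(np.abs(H))

# Peak-to-notch ripple depth across the evaluated band
depth = mag_db.max() - mag_db.min()
print(f"ripple depth: {depth:.2f} dB")  # ≈ 5 dB
```

The ripple spacing is set by the delay (notches every 1/τ = 1 kHz here), while the depth depends only on the reflection gain.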

However, including the torso in dynamic virtual scenes, in which listeners can arbitrarily rotate their head with three degrees of freedom (yaw, pitch, roll), is far from trivial. For this reason, the head-above-torso orientation is omitted in most applications, and head rotations are rendered by rotating the entire head-and-torso model. Subjects reported coloration and localization errors as the most prominent consequences of this simplification, already for sources in the vicinity of the horizontal plane and head rotations to the left and right. More drastic errors are likely to appear for up-down head rotations: for an upward-looking listener and a source below the horizontal plane, the torso may block the direct path and cause severe high-frequency damping if it is rotated together with the head, whereas the effect of the torso will be small if only the head is rotated.

This project aims to model the effect of head-above-torso orientation by means of computationally efficient IIR filters. The filters will be developed based on a database of simulated HRTFs for more than 1,000 head-above-torso orientations.
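As a minimal sketch of what such an efficient IIR approximation could look like, the torso's high-frequency shadow can be mimicked with a single high-shelf biquad (coefficients from the well-known RBJ Audio EQ Cookbook). The 4 kHz transition frequency and the 20 dB cut are illustrative values only; the project's filters will instead be fit to the simulated HRTF database.

```python
import numpy as np

# Illustrative assumptions: approximate the torso shadow with one RBJ
# high-shelf biquad cutting 20 dB above roughly 4 kHz (not the project's
# actual filters, which will be fit to simulated HRTFs).
fs = 44100        # sampling rate in Hz
f0 = 4000         # shelf transition frequency in Hz (assumption)
gain_db = -20.0   # high-frequency cut in dB (illustrative value from the text)

# RBJ Audio EQ Cookbook high-shelf coefficients, shelf slope S = 1
A = 10 ** (gain_db / 40)
w0 = 2 * np.pi * f0 / fs
alpha = np.sin(w0) / 2 * np.sqrt(2)
cw = np.cos(w0)
b = np.array([
    A * ((A + 1) + (A - 1) * cw + 2 * np.sqrt(A) * alpha),
    -2 * A * ((A - 1) + (A + 1) * cw),
    A * ((A + 1) + (A - 1) * cw - 2 * np.sqrt(A) * alpha),
])
a = np.array([
    (A + 1) - (A - 1) * cw + 2 * np.sqrt(A) * alpha,
    2 * ((A - 1) - (A + 1) * cw),
    (A + 1) - (A - 1) * cw - 2 * np.sqrt(A) * alpha,
])

def magnitude_db(f_hz):
    """Evaluate the biquad's magnitude response (dB) at frequency f_hz."""
    z = np.exp(1j * 2 * np.pi * f_hz / fs)
    H = (b[0] + b[1] / z + b[2] / z**2) / (a[0] + a[1] / z + a[2] / z**2)
    return 20 * np.log10(np.abs(H))

low_db = magnitude_db(100)     # ≈ 0 dB: low frequencies pass unchanged
high_db = magnitude_db(20000)  # ≈ -20 dB: high frequencies are damped
print(f"100 Hz: {low_db:.1f} dB, 20 kHz: {high_db:.1f} dB")
```

A single biquad like this costs only five multiplies per sample, which is why low-order IIR filters are attractive for dynamic rendering compared to storing and interpolating full HRTFs per head-above-torso orientation.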

Project team

  • Fabian Brinkmann
  • Stefan Weinzierl
  • Dannie Smith (Meta Reality Labs - Audio)
  • William J. Anderst (University of Pittsburgh, Department of Orthopaedic Surgery)
  • David Lou Alon (Meta Reality Labs - Audio)
  • Sebastià V. Amengual Garí (Meta Reality Labs - Audio)

Funding

Meta Reality Labs - Audio

Publications

Brinkmann, F., Smith, D., Anderst, W. J., Amengual Garí, S. V., Alon, D. L., & Weinzierl, S. (2023, September). Towards Modeling Dynamic Head-Above-Torso Orientations in Head-Related Transfer Functions. Forum Acusticum, Torino, Italy.

Brinkmann, F., Kreuzer, W., Thomsen, J., Dombrovskis, S., Pollack, K., Weinzierl, S., & Majdak, P. (2023). Recent Advances in an Open Software for Numerical HRTF Calculation. Journal of the Audio Engineering Society, 71(7/8), 502–514. https://doi.org/10.17743/jaes.2022.0078.

Brinkmann, F., & Pike, C. (2023). Binauraltechnik. In S. Weinzierl (Ed.), Handbuch der Audiotechnik (2nd ed., pp. 1–23). Springer. https://doi.org/10.1007/978-3-662-60357-4_27-2.