Abstract: Individual head-related transfer functions (HRTFs) are essential for accurate binaural rendering of spatial audio but remain difficult to obtain due to measurement complexity. This study investigates whether photogrammetry-reconstructed (PR) head and ear meshes, acquired with consumer hardware, can provide a practically useful baseline for individual HRTF synthesis. Using the SONICOM HRTF dataset, 72-image photogrammetry captures per subject were processed with Apple's Object Capture API to generate PR meshes for 150 subjects. Mesh2HRTF was used to compute PR synthetic HRTFs, which were compared against measured HRTFs, high-resolution 3D scan-derived HRTFs, KEMAR, and random HRTFs through numerical evaluation, auditory models, and a behavioural sound localisation experiment (N = 27). PR synthetic HRTFs preserved ITD cues but exhibited increased ILD and spectral errors. Auditory-model predictions and behavioural data showed substantially higher quadrant error rates, reduced elevation accuracy, and more front-back confusions than measured HRTFs, with performance worse than random HRTFs on perceptual metrics. Current photogrammetry pipelines can support individual HRTF synthesis but lack the pinna morphological detail and high-frequency spectral fidelity needed to capture the monaural cues required for accurate individual HRTFs.
Abstract: Spatial audio and three-dimensional sound rendering techniques play a pivotal role in immersive audio experiences. Head-Related Transfer Functions (HRTFs) are acoustic filters that represent how sound interacts with an individual's unique head and ear anatomy. Using HRTFs that match the subject's anatomical traits is crucial to ensure a personalized spatial experience. This work proposes an HRTF individualization method based on anthropometric features automatically extracted from ear images using a Convolutional Neural Network (CNN). First, a CNN is implemented and tested to assess the performance of machine learning in positioning landmarks on ear images. The I-BUG dataset, containing ear images annotated with 55 landmarks each, was used to train and test the network. Subsequently, 12 relevant landmarks were selected to correspond to 7 specific anthropometric measurements defined by the HUTUBS database. These landmarks serve as reference points for computing distances in pixels, from which the anthropometric measurements are retrieved. Once the 7 pixel distances are extracted from an ear image, they are converted to centimetres using conversion factors, and a best-match search is performed by computing the Euclidean distance to each entry in a database of 116 ears with their corresponding 7 anthropometric measurements provided by HUTUBS. The closest anthropometric match is identified and the corresponding set of HRTFs retrieved for personalized use. The method is evaluated for its validity rather than the accuracy of its results: the conceptual scope of each stage has been verified to function correctly, and the individual steps and available components of the process are reviewed and challenged to define a complete algorithm for the desired task.
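The best-match stage described in the second abstract reduces to a nearest-neighbour search over 7-dimensional anthropometric measurement vectors. A minimal sketch of that step (function names and example values are illustrative, not taken from the paper or the HUTUBS database):

```python
import math

def best_match(query, database):
    """Return the index of the database entry whose 7 anthropometric
    measurements have the smallest Euclidean distance to the query.

    query:    list of 7 measurements (centimetres) from the ear image
    database: list of 7-element measurement vectors, one per ear
    """
    def euclidean(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # argmin over all stored ears; the matched index would then be used
    # to look up that subject's measured HRTF set
    return min(range(len(database)), key=lambda i: euclidean(query, database[i]))
```

With a real pipeline the `database` would hold the 116 HUTUBS measurement sets; here any list of 7-element vectors works, and ties resolve to the first minimum.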