Sound localization from an information-theoretic perspective
In this talk, we will look at sound localization from an information-theoretic perspective. Sound localization can be viewed as an encoding-decoding operation: spatial information is encoded by the head and ears into the acoustic signals that arrive at our tympanic membranes, and this information is then decoded by the brain; inevitably, some information may be lost along the way. From this viewpoint, our ear morphology has been optimized through evolution as a ‘codec’ operating in the physical domain, so that the spatial information relevant for survival is transferred with maximum efficiency. I will finish by describing the new human HRTF (head-related transfer function) measurement method we developed in the course of this research. The method is low-cost (under 40 euros), fairly easy to carry out, and can be performed at home; it may therefore make personalized audio accessible to the general public.