HEAD acoustics’ MOBILITY Voice and Noise Conference 2024

audioXpress attended HEAD acoustics' MOBILITY Voice and Noise Conference 2024 (May 8-9, 2024), an immersive event with the most advanced solutions and an application-based approach for testing and evaluating sound for automotive. On the southern shore of the Golden Gate strait, on the headlands of the San Francisco Peninsula inside the Presidio, not far from the historic Officers' Club, HEAD acoustics hosted this conference as a way to provide an updated and practical look at all automotive and mobility voice and noise applications.

Under cloudless bright blue skies, complementing the cool deep dark midnight blue of the HEAD acoustics logo, this first US edition of the event will be remembered positively by all those who attended and by HEAD acoustics' own staff.

HEAD acoustics offers automotive customers a range of products and services designed to enhance vehicle sounds as well as voice and audio quality. If it wasn’t clear from the title of the conference, this event was about sound, inside and outside of automobiles.


The Presidio facilities, the location in San Francisco, and the glorious weather all contributed to an excellent first event.


Professor Klaus Genuit, who founded HEAD acoustics in 1986, opened the event with his keynote, "Soundscape – Importance of Engineering a Peaceful Sound Environment."

HEAD acoustics has always been at the forefront of measurement techniques, assessing the requirements of the industry in terms of acoustics, vibration, speech, audio and sound quality, and supporting standardization of measurement regulations. The company’s approach is to constantly find new or better ways to measure (correctly) new audio and acoustic features and events, and the multiple ways the same features can be implemented. 

The Soundscape Standard (ISO 12913) focuses, first, on the definition of a soundscape, and then on the measurement of it. From the standard’s definition, a soundscape exists through human perception of the physical phenomenon that is the acoustic environment. That acoustic environment is defined by the sound at a receiver from all the sound sources as modified by the environment.


The presentations started on Wednesday, May 8, with an opening keynote by Prof. Klaus Genuit.

In his presentation, he addressed the significance of traffic noise and acoustic sounds in urban development and the role of the new ISO 12913 series Soundscape standard.

In his keynote, Dr. Genuit traced the historical development of the Soundscape Standard and its implications for automotive applications. "The measurement and assessment of a Soundscape through binaural measurement, psychoacoustics, and questionnaires is needed to fully understand the human's perception of their acoustic environment." So far, the collection, the analysis, and the storage of the data have been completed; the remaining task is to determine how the collected data will be used. He illustrated the challenges with an example involving Acoustic Vehicle Alerting Systems (AVAS) and the determination of noise sources. The question to answer was: with the superposition of two good sounds, what happens to sound quality? One answer is that the superposition of two acceleration events is better simulated by using noise instead of tones. From that, the individual and combined sound quality can be more easily assessed.


Dr. Genuit ended his presentation with a familiar comment: “Sound is message.” “The sound of an automobile sells the car as much as the sound of the home appliance sells the home appliance,” he said. The popularity of electric vehicles is increasing, “but the synthesized sounds created in the cabin and outside are rarely systematically investigated or evaluated.” 

There were approximately 45 of us attending each day, in addition to the 15 members of the HEAD acoustics staff, both from its European headquarters near Aachen, Germany, and from the company's North American offices. Those in attendance were either users of, or curious about, HEAD acoustics equipment, or interested in techniques specific to the automotive industry and mobility, whether to assess an environment or to validate the quality of a new feature as well as the quality of a virtual simulation.


Attendees’ reception at the historic Presidio Officer’s Club.

Throughout both days there were many presentations from HEAD acoustics. Matt Lutz presented his consulting tech stories in "If You're Talking, You're Not Listening." Jess Gratke presented an evaluation of voice interfaces used on the outside of the vehicle; the example was the voice command "close the door" in a noisy environment. Jacob Soendergaard discussed "Why People Still Use Handsets in the Car" instead of handsfree: because the conversational experience still needs to improve on both the sending path and the receiving path in a vehicle.

One standout on the sending path came in a presentation titled "Finding Your Voice in a Noisy World," and later in a demonstration by Ken Sutton from Yobe, one of the guest companies invited to present at this event. Yobe uses AI-powered technology for voice interfaces and doesn't require any additional hardware; it focuses only on the voice. The process is not looking for what we say, but how we say it. Each individual has a larynx that is uniquely their own in size and shape. Yobe first applies noise suppression and then identifies what is voice, looking for where the human is in the noise based on a voice biometric profile. It can even tag a voice to a person if the system is trained to do so. It takes 5 seconds to extract the information and 3 seconds to train the system.

Yobe gave a live demo of training voices of visitors to their exhibit and extracting only their voice from the noisy background. We expect to write more about this in the near future. It was truly unique and is being developed for automotive use in partnership with suppliers and manufacturers.
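Conceptually, the enroll-then-match approach demonstrated here resembles classic speaker verification with voice embeddings. The Python sketch below is purely illustrative and is not Yobe's algorithm; the toy embeddings, the similarity threshold, and all function names are assumptions. A profile is enrolled by averaging per-frame embeddings, and incoming frames are kept only if they match the enrolled profile by cosine similarity.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def enroll(frames):
    """Average per-frame voice embeddings into a single biometric profile."""
    n = len(frames)
    return [sum(f[i] for f in frames) / n for i in range(len(frames[0]))]

def is_target_speaker(profile, frame, threshold=0.7):
    """Gate a frame: keep it only if it matches the enrolled profile."""
    return cosine_similarity(profile, frame) >= threshold

# Toy 3-dimensional embeddings: the target speaker clusters around one
# direction, an interfering talker around an orthogonal one.
target_frames = [[1.0, 0.1, 0.0], [0.9, 0.0, 0.1], [1.1, 0.05, 0.05]]
profile = enroll(target_frames)

print(is_target_speaker(profile, [0.95, 0.05, 0.0]))  # target speaker: True
print(is_target_speaker(profile, [0.0, 1.0, 0.0]))    # interferer: False
```

In a real system the embeddings would come from a neural speaker encoder operating on short audio frames; the gating decision is what lets a single voice be extracted from a noisy mixture.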


Although the agenda was led mostly by HEAD acoustics engineers, there were guest presentations by companies such as Zoox, Yobe, and Polytec. Pictured is Ken Sutton from Yobe, presenting on the company’s unique voice interface technology.

Yobe voice demonstration.

Matt Lutz (HEAD acoustics US) discussed critical listening systems, expanding listening room capabilities, and speech level measurements.

Roger Shively and Luis Miguel Arango (HEAD acoustics’ West Coast Accounts Manager). There is a story to be told here as Luis mentors local youth in difficult life situations and teaches them technical skills to the point that some have been brought on as apprentice engineers at HEAD acoustics. (Photo by Martin Manuel)

Luis Miguel Arango (HEAD acoustics' West Coast Accounts Manager) showed us around all the demos that were available for automotive, including the new Caller Server from HEAD acoustics, a service (in beta test) that allows end users to call a phone number, play back standard test signals, and record the uplink data measured with a smartphone. Each user registers with their phone number and email address: the phone number is used to authenticate the caller, and the email address is used to deliver the recorded files.
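The flow pairs caller-ID authentication with email delivery. Here is a minimal Python sketch of that flow; every name and the recording stub are hypothetical illustrations, not HEAD acoustics' implementation:

```python
registry = {}  # phone number -> email address

def register(phone, email):
    """A user enrolls with a phone number and an email address."""
    registry[phone] = email

def play_test_signals_and_record():
    """Stub: in the real service, this plays standard test signals
    and captures the uplink audio measured with the smartphone."""
    return b"uplink-recording"

def handle_call(caller_id):
    """Authenticate by caller ID; on success, run the test and
    return the recording addressed to the registered email."""
    email = registry.get(caller_id)
    if email is None:
        return None  # unknown caller: reject the session
    return {"deliver_to": email, "recording": play_test_signals_and_record()}

register("+1-555-0100", "tester@example.com")
print(handle_call("+1-555-0100"))  # delivered to the registered address
print(handle_call("+1-555-9999"))  # None: unregistered caller is rejected
```

The design choice worth noting is that the phone number doubles as the credential, so no app or login is needed on the caller's side.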

Among the other demos I would highlight: HEAD acoustics' Mobile PreSense, a driving simulator for virtual evaluation of sound; Rohde & Schwarz using HEAD acoustics for wireless and hands-free measurements and tuning; and HEAD acoustics' head tracking demo using its latest HMS move°S. The presentation "Positional Robustness Automation" uses a rotating head to capture timbral changes for handsfree quality improvements. And there were demonstrations from HEAD acoustics on its MDAQS (Multi-Dimensional Audio Quality Score), the company's binaural perception-based software tool for audio quality assessment. MDAQS replaces human evaluation with novel metrics in the audio device development process. Another demonstration that generated great interest was HEAD acoustics' own lightweight and modular new HEAD VISOR VMA V Acoustic Camera, tracking the sound of a drone overhead.


Tracking the sound of a drone overhead with HEAD acoustics’ new HEAD VISOR VMA V Acoustic Camera.

Jacob Soendergaard presented on the topic "Why Do People Insist on Using Their Handsets in the Car?" and introduced interesting objective measurements about the quality of the conversational experience using a handset versus a handsfree solution. Digging deeper, he looked at speech and noise suppression quality and measured the listening effort, before ending the presentation with an interesting investigation of the implications of autonomous driving and the technology required to allow multiuser communications in a "silent" vehicle.

Another automotive presentation from HEAD acoustics, "Insights into the Key Methods for Digital Transformation in Acoustic Engineering" by Matthias Wegerhoff, described the process of pre-structuring simulation models with computer-aided engineering (CAE), replacing test-based models with CAE-based models to increase the knowledge base, and ultimately creating a combined test-and-simulation hybrid method to gain more at the end of the development process.

Stefan Hank illustrated with "Sound Design for Electric Vehicles and Global Differences in Perception" that the desired Sound Design differs depending on the speed we like to drive at: Germans drive as fast as possible; drivers in the US are somewhere between the speed limit and fast; Chinese drivers stay at the speed limit. Their Sound Design preferences depend on their driving habits.

Marc Marroquin talked about the different sounds electric vehicles make while driving, while sitting still, and while charging. Apparently, there is a clear disconnect between consumers and the industry. He reported that the wizard-like Sound Design gets turned off by 70% of users (according to the Ford Mach-E forum), but that the throaty V8 sound was very popular. He demonstrated the EV charging noise, which comes from the generator for the charger and causes environmental noise at charging stations. He also demonstrated the noise from the wall plug-in, 50A at 120V (the EVSE relay). The heating and cooling pumps that keep a high-voltage battery charging efficiently need to hold the battery between 60°F and 90°F. As those pumps cycle, the switch repeatedly makes a loud "kerchunk!" in the garage at home.
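The pump cycling described here is classic bang-bang (hysteresis) thermal control: the system changes state at the band edges, and the relay clicks on every transition. A toy Python sketch follows; the 60°F-90°F band comes from the talk, while the inner 70°F/80°F hysteresis thresholds are illustrative assumptions.

```python
def pump_state(temp_f, state):
    """Return the next pump state ('idle', 'heating', 'cooling') given the
    battery temperature in degrees Fahrenheit and the previous state."""
    if temp_f < 60.0:
        return "heating"   # below the band: warm the battery
    if temp_f > 90.0:
        return "cooling"   # above the band: cool the battery
    # Inside the band: keep running until comfortably inside (hysteresis),
    # otherwise tiny fluctuations would cycle the relay ("kerchunk!").
    if state == "heating" and temp_f < 70.0:
        return "heating"
    if state == "cooling" and temp_f > 80.0:
        return "cooling"
    return "idle"

# Each state change below corresponds to one audible relay click.
state = "idle"
for t in (95.0, 85.0, 78.0, 58.0, 65.0, 72.0):
    state = pump_state(t, state)
    print(t, state)
```

Without the inner hysteresis band, the controller would chatter whenever the temperature hovered near 60°F or 90°F, which is exactly the repeated switching noise heard in the garage.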


Some sessions expanded on a more practical perspective with Eric Lawrence (Polytec) talking about using laser vibrometry for acoustic response and sound quality measurements… 

…and Frank Kettler (HEAD acoustics) addressing how to optimize the acoustic environment for passengers and design “acoustic zones” using a combination of techniques.

One last presentation that brought us back to the measurement of a Soundscape came from HEAD acoustics' Frank Kettler (who also developed MDAQS): "Vehicle Interior Acoustics, ASD, RNC, ICC, … Steps Towards Zone Concepts." Frank focused on the use of the ETSI Standard TS 103 558 "Listening Effort" instead of Speech Intelligibility (SI) to evaluate the sound quality of vehicle interior acoustics.

Interior Sound Design, In-Car Communication, and Road Noise Cancellation are seat dependent and audio system speaker content dependent, and the signal-to-noise ratios (SNRs) can vary greatly. The reason not to use SI is that it saturates even at average SNRs, which is why Listening Effort is needed instead. In some cases (e.g., ASD, Automotive Sound Design), the Listening Effort can be masked, which is counterproductive. Assessing listening effort in the presence of ASD requires separating the active sound elements in the vehicle interior to understand the contribution of each element.
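The saturation argument can be made concrete with a toy model (the logistic parameters and the effort mapping below are illustrative assumptions, not the ETSI TS 103 558 method): a psychometric intelligibility curve flattens at 100% well before in-car SNRs stop varying, while an effort-style rating scale still discriminates between conditions.

```python
import math

def speech_intelligibility(snr_db, midpoint=-5.0, slope=0.5):
    """Toy logistic psychometric function: fraction of words understood."""
    return 1.0 / (1.0 + math.exp(-slope * (snr_db - midpoint)))

def listening_effort(snr_db):
    """Toy effort rating on a 1 (extreme effort) to 5 (no effort) scale,
    mapped linearly from SNR and clipped to the scale ends."""
    return min(5.0, max(1.0, 1.0 + 4.0 * (snr_db + 10.0) / 40.0))

# At 10 dB and 20 dB SNR, intelligibility is already saturated near 1.0,
# so the metric cannot tell the two cabin conditions apart ...
print(speech_intelligibility(10.0), speech_intelligibility(20.0))
# ... but the effort scale still separates them by a full rating point.
print(listening_effort(10.0), listening_effort(20.0))  # 3.0 vs. 4.0
```

This is the core of the argument for Listening Effort: at the SNRs typical of a vehicle interior, nearly every word is understood, so what differs between systems is how hard the listener has to work, not how much they understand.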

HEAD acoustics also does tests with a rotating artificial head to evaluate "real" listener behavior and the effect it has on Listening Effort. Automatic Speech Recognition (ASR) is measured with "Communication Effort": the combination of Listening Effort, the human-machine interactions through voice control and speech recognition, and the out-of-car communication through telecommunication.

Testing personal sound zones with headrest speakers, HEAD acoustics found that the effect is "regional" in the car: the co-driver suffers in Listening Effort. The bottom line is that cancelling or bolstering voice in a car is still hard to do. It is still too early to measure the effectiveness of acoustic zones as "sound bubbles"; the technology is just at its beginning stages. This general topic will deserve more exploration, which we hope to do soon in a dedicated article.


Michael Ricci, Senior Director Electroacoustic Engineering at xMEMS, presented “It’s About Time,” a co-authored paper with Jacob Soendergaard from HEAD acoustics, and excited attendees with the opportunity to listen to actual products featuring xMEMS piezo-MEMS drivers. (Photo by Martin Manuel)

It was a unique event with plenty of time to attend practical demonstrations of the latest measurement systems and techniques, including the innovative MDAQS perception-based audio quality assessment method and the unique move°S silent platform for the natural rotation of artificial heads. With the automotive sector facing revolutionary changes that affect vehicle sounds and transform the work of acoustics engineers, there are many reasons to repeat this MOBILITY event.

HEAD acoustics plans to continue these MOBILITY conferences every two years in the US. It is an admirable endeavor. Its application-based approach for testing and evaluating sound for automotive interiors and exteriors is providing the audio industry with insight into the challenges and solutions we face in creating high-quality sound experiences for the occupants of a vehicle and the world through which they drive. We’re looking forward to seeing what the next two years will bring. aX

This article was originally published in The Audio Voice newsletter, (#471), June 6, 2024.

