CES 2024 was the best in a long time. audioXpress attended to report on the solutions and technologies that will enable the audio products of tomorrow, exploring the many hospitality and business suites where the audio industry gathers during CES. We focused on attending closed demonstrations – about which we can reveal very little at this stage – because that is the only way to truly share the product developers’ experience and gain a real perspective on the audio technologies that matter for product development and that will determine the products of 2025 and beyond.
CES 2024 was the best in a long time. Maybe it was just me, but that was the feeling I got, and one I confirmed with many people I talked to. Of course, there are always folks who travel to Las Vegas, NV, every year and experience a totally different show. That is because – as the CTA is increasingly aware – CES has become a way to showcase technologies that will enable the products of tomorrow, and less about actual products available now. That was the show we covered and that was our perspective, and in that regard, it was the strongest ever.
We had never scheduled so many meetings in hospitality and business suites as this year. So much so that we hardly had time to walk the show floor and discover the things you only discover that way – something I deeply regret (but two press days and four show days simply were not enough). We focused on attending closed demonstrations – about which we can reveal very little at this stage – because this was the only way to truly share the product developers’ experience and gain a real perspective on the audio technologies that matter for product development at this stage: those that will determine the products of 2025 and beyond.
Also, as we noted while walking between the Venetian hospitality suites and the Venetian Expo, the crowds were as strong as during the pre-pandemic CES editions. Even while racing across town to reach the West Hall, where the automotive sector was in full force (this year with mobility expanding to the North Hall), or shuttling by Uber and taxi to the many hotels where audio companies still insist on staying for cost reasons, the people were clearly different from the regular Vegas leisure visitor. We could see that attendance numbers were very strong. The exhibition area is hard to compare with past editions, since the West Hall is a completely new expansion for CES. The South Halls remain empty, but the OEMs/ODMs from China returned in full force to the Westgate convention areas and spread out all over town.
According to the official Consumer Technology Association (CTA) report, CES 2024 closed with more than 4300 official exhibitors (the CTA reported 4400 exhibiting companies in 2020), including a record 1400+ startups from around the globe in Eureka Park. The pre-audit figures from the CTA also indicated more than 2.5 million net square feet of exhibits, 15% bigger than CES 2023. Registered attendees surpassed 135,000, establishing a new record, with more than 40% coming from 150 countries, regions, and territories – a very international show. Pre-pandemic numbers were above 150,000, but we have long estimated the real number to be much higher when considering the people who travel to Las Vegas for CES, only take meetings in hotels, and never even bother to register. This year, I estimate that number to be even higher, since there were more companies than ever spread along The Strip’s casinos and hotels. Like it or not, business-to-business (B2B) is what sustains CES on the global tech scene, and it is getting bigger than ever for the audio industry.
Gary Shapiro, president and CEO of the CTA, calls it “a resurgence of CES,” but in reality this is a more complex setting than ever for what used to be a consumer-focused showcase aimed essentially at its distribution channels. Today, CES is more about technologies, investments, partnerships, and meeting the supply chain involved in the design and development of new products. Yet this still makes for a dynamic show, given that many large technology players still need the exposure. “The resurgence of CES proves that face-to-face conversations and meetings are a necessity for the technology industry. For more than 20 years, I’ve said that every company must become a tech company, and the diversity of exhibitors at CES 2024 proves it. The CES footprint and conference programming span the entire tech ecosystem,” stated Shapiro.
I see that more complex reality reflected in the words of Kinsey Fabrizio, CTA Senior VP of CES and membership: “Technology is solving global challenges, and we’re excited to see so many collaborations and partnerships start here in Las Vegas, and produce a show where attendees come to meet, dream, and solve.”
The CES 2024 Innovation Awards program received a record 3000+ submissions and included Artificial Intelligence (AI) as a new category. For the reasons I stated, there’s very little audio in the Innovation Awards, and I can clearly see that some of the nominated “innovative concepts” will never make it to market. As for AI, it was something of a mandatory buzzword for the show – akin to what voice and voice personal assistants were before 2020 – but not exactly grounded in real developments. In fact, some of the concepts we saw mentioning AI were even more farfetched than usual, simply because the mere mention of AI seems to allow companies to make all manner of absurd claims.
That is not so much the case for the audio industry, where AI and Machine Learning (ML) have already been explored for clear use cases and demonstrated to deliver real benefits. I should mention that I explored that very topic in the February 2024 issue of audioXpress, released at the beginning of CES. The potential of AI is indeed enormous, but the consumer electronics industry tends to rush to surf these trends, leading to all sorts of nonsense.
One of the most common problems I saw was a failure to clarify how inference or trained models are applied in the signal flow – and for audio, that is precisely where things can get interesting. AI/ML engines can be extremely beneficial at the front-end, at the core, or at the output stage, depending on the application. Not that it always makes economic sense for now, but as with all Large Language Models and Generative AI, everything seems to be at the stage of pure fascination with the possibilities of the technology, before anyone worries about costs.
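To make that placement question concrete, here is a deliberately simplified sketch of a signal chain with an ML stage at the front-end, conventional DSP at the core, and protection at the output. All the names are hypothetical, and the “model” is plain spectral gating standing in for a trained network – not any vendor’s actual pipeline:

```python
import numpy as np

def ml_denoise_frontend(frame, noise_floor=0.01):
    # Stand-in for a trained noise-suppression model: simple spectral
    # gating. A real product would run an inference engine here.
    spectrum = np.fft.rfft(frame)
    mag = np.abs(spectrum)
    gain = np.clip((mag - noise_floor) / np.maximum(mag, 1e-12), 0.0, 1.0)
    return np.fft.irfft(spectrum * gain, n=len(frame))

def core_dsp(frame, gain_db=3.0):
    # Conventional core processing: a fixed make-up gain.
    return frame * 10 ** (gain_db / 20)

def output_limiter(frame, ceiling=0.98):
    # Output-stage protection: hard clip at the ceiling.
    return np.clip(frame, -ceiling, ceiling)

def process_block(frame):
    # Front-end (ML) -> core (DSP) -> output stage.
    return output_limiter(core_dsp(ml_denoise_frontend(frame)))

# Example: a noisy 1kHz tone in a 256-sample block at 16kHz.
fs, n = 16000, 256
t = np.arange(n) / fs
noisy = 0.5 * np.sin(2 * np.pi * 1000 * t) + 0.005 * np.random.randn(n)
clean = process_block(noisy)
```

The same denoising stage could instead sit at the core (operating on a mixed signal) or at the output (cleaning up what the driver will actually reproduce); where it sits changes both the audible result and the compute budget.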
One example of how early we are in the application of AI became obvious when we attended amazing cutting-edge audio demonstrations of what trained models can do for noise removal, separating sound sources, or eliminating overlapping conversations – even for restoring entirely degraded (and missing) sound signals. I have experimented with the potential of most of these use cases, but it is not yet clear how to apply that type of firepower in current products – even though the demonstration platforms made it clear that the required technology is being created with the strictest cost and power constraints in mind, such as those required for hearing augmentation and smart hearables.
More than 60% of the demonstrations I attended were focused on hearing augmentation and smart hearables – with current wireless earbuds and hearing aids evolving into “computers in our ears,” as happened before with smartphones and wearables. The other 40% were loudspeaker related, since the same technologies are already more viable in those product environments, where they are needed and where the potential for consumer excitement is equally promising.
I’ll mention two extremes where AI and DSP are much needed and will eventually make the impossible possible. The first example was the Nuance Audio Hearing Glasses by EssilorLuxottica. Unlike the Facebook Ray-Ban Meta glasses, which are a socially unacceptable aberration, the Nuance Hearing Glasses actually meet a need: to supply the large segment of the population with mild hearing loss with a socially acceptable solution that is already established and can solve two problems in one. For EssilorLuxottica, the largest conglomerate in the segment, this is a unique opportunity to build another business – one that could generate insurance claims for both prescription lenses AND hearing aids.
There’s just one hurdle. The Nuance Hearing Glasses are two years away from being “good enough.” The speaker driver fitted in the frames still sounds too thin, and the microphone array is still not able to capture a conversation at the far end of a table while closer guests are chatting lively. Never mind the cocktail party. But I can attest that, by combining many of the signal processing solutions and AI-based algorithms I saw demonstrated in countless suites at CES 2024, all this can be sorted… in two years. And unlike stupid AR glasses, the Nuance Hearing Glasses are actually a good idea (one that will probably be killed by much better hearables, but we will have to see).
The New Smart Speakers
Another area where I strongly recommend that speaker designers start paying attention is that of adaptive systems for room and acoustics compensation. As seen in the Sonos Trueplay concept that can optimize the sound of Sonos speakers with the support of an iOS or Android app, or in the Apple HomePod, which essentially does that automatically every time the user moves it to a new location or changes the program source, fitting microphones and sensors in speakers will be considered standard in future designs.
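While Trueplay’s and the HomePod’s internals are proprietary, the basic idea behind automatic room compensation can be sketched in a few lines: measure the in-room magnitude response with the built-in microphones, then apply the inverse as corrective EQ, with the boost limited so the driver is not overdriven trying to fill deep room nulls. Everything below (the function name, the five-bin “room”) is purely illustrative, not any vendor’s algorithm:

```python
import numpy as np

def correction_eq(measured_mag, max_boost_db=6.0):
    # measured_mag: in-room magnitude response per frequency bin (linear).
    # Target: a flat response at the average level of the measurement.
    target = np.mean(measured_mag)
    gain = target / np.maximum(measured_mag, 1e-9)
    # Limit boosts so the driver is not overdriven into deep room nulls.
    max_boost = 10 ** (max_boost_db / 20)
    return np.clip(gain, 0.0, max_boost)

# Example: a room that exaggerates the low bins and dips in the mids.
measured = np.array([2.0, 1.5, 1.0, 0.5, 1.0])
eq = correction_eq(measured)
corrected = measured * eq  # flat at 1.2, except the boost-limited null
```

Real systems smooth the measurement, work per frequency band rather than per bin, and correct phase as well – but the cut-boost-and-limit logic above is the core of the idea.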
We now have soundbars gaining those sophisticated capabilities, combining automatic acoustic compensation with advanced processing for users who will never even consider going through the measurements required by current room compensation solutions used in home cinema (and Dirac knows this). With AI added, users will never even have to think about it. And guess what? Those systems will be available in affordable SoCs and SoMs because they are designed for the comparably much larger volume applications in hearables.
But the reason to go all-in with automatic room compensation designs in speakers is connected to another major trend for which CES 2024 will be remembered: the rise of Bluetooth LE Audio and Auracast. From the very first demonstrations I attended at the show, all the products presented for availability in 2024 and beyond were LE Audio enabled and supported Auracast broadcast modes.
I attended the official Bluetooth SIG Auracast Experience, which is purely focused on explaining use cases for earbuds – which makes sense in public spaces. But dual-mode (Bluetooth Classic/LE Audio) is what the audio industry will be shipping for now, and that is what makes sense in all sorts of speakers and other less obvious applications.
With Auracast-enabled wireless Bluetooth speakers all playing the same source signal in any acoustic environment – and even with two simple stereo-paired speakers – there are many new problems that speaker designers need to address. It all starts with the Bluetooth stream and streaming sources causing inevitable latency, but it only gets worse because of the “air in the room.” Distance and acoustics cause all sorts of problems for Auracast speakers – because they are unpredictable.
No one knows where and in what acoustical conditions the speakers will be playing, and worse, no one knows how two, three, or more speakers will be placed by users. Most likely, they will all be “pointing” at each other in the same room, unleashing phase and comb-filtering hell. Funnily enough, it’s even worse if those speakers already include current automatic acoustic room compensation features. Those systems are not aware of Auracast and party modes, and they will likely try to compensate for strange “echoes” and “reflections” that are in fact caused by other speakers playing the same content. And every time they change – because they are “adaptive” – they create a never-ending sequence of new problems. Not pretty.
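The comb-filtering part of that hell is easy to quantify. Two speakers playing the identical signal arrive at the listener with a relative delay τ, summing to a response of |1 + e^(−j2πfτ)| with deep nulls at odd multiples of 1/(2τ). A minimal sketch (the 1m path difference is an arbitrary illustration):

```python
import numpy as np

def comb_magnitude(freqs_hz, delay_s):
    # Two equal-level sources summed with a relative delay tau:
    # |1 + exp(-j*2*pi*f*tau)|, which equals 2*|cos(pi*f*tau)|.
    return np.abs(1.0 + np.exp(-2j * np.pi * freqs_hz * delay_s))

# A listener sitting 1m closer to one of two Auracast speakers hears
# the second arrival ~2.9ms late (speed of sound ~343 m/s).
tau = 1.0 / 343.0

null_f = 1.0 / (2.0 * tau)  # first cancellation: ~171.5Hz
peak_f = 1.0 / tau          # first reinforcement: 343Hz

print(comb_magnitude(np.array([null_f, peak_f]), tau))  # ~[0, 2]
```

The notches then repeat every 1/τ up the spectrum, and the pattern shifts every time the listener or a speaker moves – which is exactly why a fixed correction cannot solve it.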
And yet, that is why those AI-smart room correction and acoustics compensation systems will be required. So that they can optimize the sound every time an Auracast configuration with two or more speakers is enabled in the same room. This is a completely new set of problems for speaker design, but one that will greatly benefit from the many technologies that I saw at CES, and most of which I cannot talk about.
Believe me, from what I have seen at the many technology demonstrations at CES, it’s a whole new world. It’s not just Bluetooth LE Audio that is coming. I actually saw (and heard) Ultra-Wideband (UWB) wireless audio systems that will pave the way for many more possibilities, unconstrained by the bandwidth restrictions of Bluetooth and enabling hi-res audio to finally become truly wireless. And Wi-Fi (the Wi-Fi Alliance announced Wi-Fi CERTIFIED 7 precisely during CES) is evolving considerably and will remain a key technology for multichannel and home cinema applications. Because it’s not just stereo. It’s Spatial Audio.
I heard excellent demonstrations of truly immersive 3D audio with headphones and without headphones – in all sorts of speakers. All those connected applications will benefit from more “intelligence” when they need to work wirelessly with multiple units in the same room.
When possible, I will be gradually covering those technologies and possibilities in our continuous online coverage of CES 2024, at audioxpress.com.
This article was originally published in The Audio Voice newsletter (#453), January 18, 2024.