Broadcom just announced an AI chipset that translates audio in real time right on the device

Broadcom has partnered with a company called CAMB.AI to move AI audio translation off the cloud and onto a chipset. Devices built around the SoC could perform translation, dubbing and audio description tasks entirely on-device, without ever touching the cloud. In other words, it could significantly improve accessibility for consumers.

The companies promise extremely low latency and better privacy, since all processing happens locally on the user's device. They also expect a significant reduction in Wi-Fi bandwidth usage.


As for audio description, there's a demonstration video showing the tool in action on a clip. You can hear the AI describe the scene in multiple languages, alongside the written translation displayed on screen. That could be incredibly useful, especially for people with vision impairments.

Of course, we have no idea how this technology will perform in real-world scenarios, or how accurate the output will be. That said, the underlying language model is already used by organizations such as NASCAR, Comcast and Eurovision.

The companies boast that this will enable "on-device translations in more than 150 languages," though we don't know when these chips will actually show up in televisions and other devices. Broadcom also recently teamed up with OpenAI to help that company produce its own AI chips.

Updated November 11, 2025 at 12:18 p.m. ET: This story has been updated to clarify the use of the tool in the clip above.
