OpenAI has announced a change to how ChatGPT's voice mode works on the company's website and apps. As part of the update, you can interact with ChatGPT Voice directly in your current chat, letting you see a transcript of your conversation with OpenAI's AI model as it happens, along with visual representations of what ChatGPT is talking about.
You can start a voice chat by simply tapping or clicking the waveform icon next to the ChatGPT text field. Instead of switching you over to the original standalone voice interface, voice chats now take place right inside the conversation, as described above. In the demo video OpenAI posted alongside the announcement, the user could see a transcript of the conversation, followed by a map of popular bakeries and images of the pastries sold at Tartine. OpenAI says that if you prefer the original voice interface, you can restore it by enabling "Separate mode" in the voice mode section of ChatGPT's settings.
Combining visual and spoken responses is a natural extension of ChatGPT's multimodal capabilities. You can already prompt OpenAI's model with your voice and an image or video, so it makes sense for ChatGPT's voice responses to offer the same level of detail. Google has explored similar ways of making Gemini Live more expressive during conversations, including letting the AI highlight certain parts of a live video feed with overlays. OpenAI's feature isn't as reactive, but it can make a voice chat with ChatGPT more informative.