ChatGPT’s boss says you shouldn’t rely on it as your main source of information just yet

When you start a conversation with ChatGPT, you may notice the disclaimer at the bottom of the screen: “ChatGPT can make mistakes. Check important info.” That remains the case with the new GPT-5 model, a senior OpenAI official reiterated this week.

“The thing with reliability is that there’s a big gap between very reliable and 100% reliable, in terms of how you conceive of the product,” Nick Turley, head of ChatGPT at OpenAI, said on The Verge’s Decoder podcast. “Until we are more reliable than a human expert in all domains, not just some, we’re going to continue to advise you to double-check your answer.”


It’s advice we’ve long emphasized in our coverage of AI chatbots, and OpenAI gives it too.

Always double-check. While a chatbot can be useful for plenty of tasks, it can also make things up.

Turley hopes to see improvements on that front.

“I think people will continue to use ChatGPT as a second opinion and not necessarily as a primary source of information,” he said.

The problem is that it’s very tempting to take a chatbot’s answer (or an AI-generated summary at the top of Google search results) at face value. But generative AI tools, not just ChatGPT, tend to “hallucinate,” or make things up. They do this because the models are designed primarily to predict the most likely answer to a question based on patterns in their training data, and AI models of this generation have no concrete understanding of what is true. When you talk to a doctor, therapist, or financial advisor, that person should be able to give you the right answer for your situation, not just the most likely one. In most cases, AI gives you the answer that looks probable, without the specific expertise to verify it.
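
To make that concrete, here is a toy Python sketch. It is not how ChatGPT actually works, and the training data is invented: the “model” simply answers with whatever response followed the question most often in what it has seen.

from collections import Counter

# Hypothetical training data: question/answer pairs the "model" has seen.
# The wrong answer appears more often, as misconceptions often do online.
training_data = [
    ("capital of australia", "Sydney"),
    ("capital of australia", "Sydney"),
    ("capital of australia", "Canberra"),  # correct, but seen less often
]

def most_likely_answer(question):
    """Return the answer that most frequently followed the question."""
    counts = Counter(a for q, a in training_data if q == question)
    answer, _ = counts.most_common(1)[0]
    return answer

print(most_likely_answer("capital of australia"))  # "Sydney": likely, but wrong

The most frequent answer wins even when it’s the wrong one, which is roughly the failure mode a hallucination represents.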

Even when the AI guesses well, those are still guesses. Turley acknowledged that the tool works best when paired with something that gives it a better grasp of the facts, such as a traditional search engine or a company’s internal data. “I still think an LLM connected to ground truth is absolutely the right product, and that’s why we brought search into ChatGPT, and I think it makes a huge difference,” he said.
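
As a rough illustration of that grounding idea (a sketch of the general pattern, not OpenAI’s implementation; the function names and sources here are assumptions), the approach is to retrieve trusted documents first and then constrain the model’s answer to them:

def build_grounded_prompt(question, documents):
    """Ask the model to answer only from retrieved sources, rather than
    from the statistical guesses baked into its training data."""
    context = "\n".join(f"- {d}" for d in documents)
    return (
        "Answer using only the sources below. "
        "If they don't contain the answer, say you don't know.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

# Hypothetical usage: the document list stands in for any retrieval step,
# such as a web search or a lookup in a company's internal database.
docs = ["Canberra has been Australia's capital since 1913."]
print(build_grounded_prompt("What is the capital of Australia?", docs))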


Don’t expect ChatGPT to get everything right just yet

Turley said GPT-5, the latest large language model that powers ChatGPT, represents a “huge improvement” on hallucinations, but it is far from perfect. “I’m confident we’ll eventually solve hallucinations, and I’m confident we’re not going to do it in the next quarter,” he said.

When I tested GPT-5, I found it still made mistakes. While trying out the new model’s personalities, it got confused about college football schedules, telling me that games scheduled throughout the fall would all take place in September.

Make sure you check any information you get from a chatbot against a reliable, authoritative source. That could be an expert, such as a doctor, or a trusted source on the internet. Even if a chatbot gives you information with a link to a source, don’t assume the bot summarized that source correctly. It may have distorted the facts on their way to you.

If you’re going to make a decision based on information, only take what the AI tells you at face value if you don’t care at all whether you make the right decision.
