Whisper Leak reveals how your encrypted AI chats can be spied on

  • Microsoft says Whisper Leak exposes privacy flaws in encrypted AI traffic
  • Encrypted AI chats can still leak clues about what users are discussing
  • Attackers can infer conversation topics from packet size and timing

Microsoft has discovered a new type of cyberattack called “Whisper Leak” that can reveal topics that users discuss with AI chatbots, even if the conversations are completely encrypted.

The company's research suggests that attackers can examine the size and timing of encrypted packets exchanged between a user and a large language model to infer what is being discussed.

“If a government agency or ISP were to monitor traffic to a popular AI chatbot, it could reliably identify users asking questions about specific sensitive topics,” Microsoft said.

Whisper Leak attacks

This means that “encrypted” does not necessarily mean invisible: the vulnerability lies in how LLMs send responses.

Instead of waiting to generate a complete answer, these models stream their output token by token, creating small patterns in packet size and timing that attackers can analyze.

As more samples are collected, these patterns become clearer, allowing increasingly precise guesses about the nature of the conversations.
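To make the idea concrete, here is a minimal, hypothetical sketch of such an analysis: an eavesdropper records the size and inter-arrival time of each encrypted packet in a streamed response, then trains an off-the-shelf classifier to guess whether a target topic is being discussed. The feature layout and model choice are illustrative assumptions, not Microsoft's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

def featurize(trace, max_packets=50):
    """Flatten one capture (a list of (size_bytes, gap_seconds) pairs)
    into a fixed-length vector, zero-padding short traces."""
    vec = np.zeros(max_packets * 2)
    for i, (size, gap) in enumerate(trace[:max_packets]):
        vec[2 * i] = size
        vec[2 * i + 1] = gap
    return vec

def train_topic_sniffer(traces, labels):
    """traces: encrypted-packet captures; labels: 1 = sensitive topic, 0 = other."""
    X = np.array([featurize(t) for t in traces])
    X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.2)
    clf = GradientBoostingClassifier().fit(X_train, y_train)
    print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
    return clf
```

Note that nothing here touches plaintext: the classifier sees only metadata that survives encryption, which is exactly what makes the attack hard to stop at the network layer.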

This technique doesn’t decrypt messages directly, but it exposes enough metadata to draw informed conclusions, which is probably just as worrying.

After Microsoft’s disclosure, OpenAI, Mistral and xAI said they quickly deployed mitigations.

One solution adds a “random variable-length text string” to each response, breaking the consistency of token sizes that attackers rely on.
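As a rough illustration of that mitigation, the sketch below pads each streamed chunk with a random-length string before it is encrypted, so ciphertext sizes no longer track token lengths. The JSON field name is an assumption for illustration; real providers use their own wire formats.

```python
import secrets
import string

def pad_chunk(token_text: str, max_pad: int = 32) -> dict:
    """Attach a random-length junk string to a streamed chunk so that,
    once encrypted, its size no longer tracks the token's length."""
    pad_len = secrets.randbelow(max_pad + 1)  # uniformly random 0..max_pad
    junk = "".join(secrets.choice(string.ascii_letters) for _ in range(pad_len))
    return {"content": token_text, "obfuscation": junk}  # clients discard "obfuscation"
```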

In the meantime, Microsoft advises users to avoid sensitive discussions over public Wi-Fi, use a VPN, or stick to non-streaming modes.
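For the last of those options, a non-streaming request delivers the whole reply in one payload rather than a token-by-token pattern. A minimal sketch using the official OpenAI Python SDK (the model name is a placeholder) might look like this:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize this article."}],
    stream=False,  # the reply arrives as one payload, not token by token
)
print(resp.choices[0].message.content)
```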

The findings are accompanied by new evidence showing that several open-weight LLMs remain vulnerable to manipulation, particularly in multi-turn conversations.

Researchers at Cisco AI Defense found that even large enterprise models struggle to maintain safety controls as conversations grow more complex.

Some models, they said, showed “a systemic inability … to maintain safety barriers during prolonged interactions.”

In 2024, reports indicated that an AI chatbot platform exposed more than 300,000 files of personal data and that hundreds of LLM servers were left unprotected, raising questions about how secure AI chat platforms really are.

Traditional defenses, such as antivirus software or firewall protection, cannot detect or block side-channel leaks like Whisper Leak, and these results show that AI tools can inadvertently increase exposure through traffic monitoring and inference.
