Microsoft Flags Whisper Leak That Risks AI Conversation Exposure
Amid ongoing debate over AI’s widespread adoption and the privacy and ethical concerns it raises, the technology, often described as a double-edged sword, continues to reach into nearly every aspect of human activity. Over the weekend, Microsoft revealed in a detailed blog post a sophisticated new side-channel attack that can expose the topics of AI chatbot conversations even when the traffic is encrypted.
Attackers Can Expose Chat Topics Despite Encryption
The tech giant flagged the vulnerability, dubbed “Whisper Leak,” which allows snoopers to deduce sensitive prompt details by analyzing network packet sizes and timing patterns. It warned of this discovery amid growing privacy risks as AI tools become increasingly embedded in daily life, from routine personal to complex professional tasks involving sensitive information.
According to Microsoft, attackers can classify prompts on specific topics without decrypting any data, simply by analyzing the streaming responses from large language models (LLMs).
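The core idea is that even under TLS, a passive observer still sees the size and timing of each encrypted record in a streamed reply, and those patterns correlate with the content. The following minimal sketch illustrates the concept only; the feature set, the toy nearest-neighbor classifier, and all traffic numbers are invented for illustration and are far simpler than the machine-learning models Microsoft describes.

```python
# Hypothetical sketch of the Whisper Leak side-channel idea: an observer
# cannot read encrypted packets, but can measure their sizes and timing.
# All names and traces below are fabricated for illustration.

def features(packets):
    """packets: list of (timestamp_seconds, size_bytes) for one response stream."""
    sizes = [s for _, s in packets]
    gaps = [t2 - t1 for (t1, _), (t2, _) in zip(packets, packets[1:])]
    mean_gap = sum(gaps) / len(gaps) if gaps else 0.0
    return (len(sizes), sum(sizes), mean_gap)

def distance(f1, f2):
    # Plain Euclidean distance between feature vectors.
    return sum((a - b) ** 2 for a, b in zip(f1, f2)) ** 0.5

# Toy "training" traces labelled by topic (fabricated numbers):
known = {
    "target-topic": features([(0.00, 120), (0.05, 340), (0.11, 300)]),
    "other":        features([(0.00, 90),  (0.30, 110), (0.65, 95)]),
}

def classify(packets):
    """Label a new encrypted stream by its nearest known traffic profile."""
    f = features(packets)
    return min(known, key=lambda k: distance(known[k], f))
```

In practice, Microsoft's researchers trained far more capable classifiers on large volumes of real streaming traffic, but the principle is the same: no decryption is ever needed.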
For tech enthusiasts and privacy advocates, the vulnerability is particularly alarming for users under oppressive regimes, where online discussions about protests, elections, or banned topics could expose them to surveillance and targeting.
According to Microsoft researchers, tests showed that a cyberattacker could achieve 100% precision, meaning every conversation flagged as related to a target topic would be correct, while still capturing 5–50% of all target conversations.
In simple terms, nearly every conversation the attacker identifies as suspicious would indeed involve the sensitive topic, leaving virtually no false alarms. This level of accuracy would allow attackers to operate with high confidence, knowing they are not wasting resources on false positives.
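In classification terms, the figures above describe precision (the share of flagged conversations that are truly on the target topic) and recall (the share of all target conversations that get flagged). A small worked example, using hypothetical numbers consistent with the reported range, makes the trade-off concrete:

```python
# Illustrating the reported figures: 100% precision with 5-50% recall.
# The conversation counts below are hypothetical.

def precision(true_pos, false_pos):
    # Of everything flagged, how much was correct?
    return true_pos / (true_pos + false_pos)

def recall(true_pos, false_neg):
    # Of everything on the target topic, how much was caught?
    return true_pos / (true_pos + false_neg)

# Suppose 1,000 conversations on the monitored topic pass the tap,
# and the attacker flags 200 of them -- every flag correct.
flagged_correct = 200   # true positives
flagged_wrong = 0       # false positives
missed = 800            # false negatives

print(precision(flagged_correct, flagged_wrong))  # 1.0 -> no false alarms
print(recall(flagged_correct, missed))            # 0.2 -> 20% of target chats caught
```

An attacker in this position misses most target conversations, but every conversation it does act on is a genuine hit, which is exactly why the researchers consider it dangerous for surveillance.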
The research indicates that even though all communications are encrypted, if a government agency or internet service provider were monitoring traffic to a popular AI chatbot, they could reliably identify users discussing specific sensitive topics like money laundering, political dissent, or other monitored subjects.
Microsoft warns that the threat could worsen over time: the attack grows more effective as attackers gather more training data, and tests showed its accuracy continuing to improve as the dataset grows. A cyberattacker with "patience and resources," the company noted, could achieve even higher success rates.
Microsoft said its teams conducted responsible disclosure with the affected vendors. OpenAI, Mistral, Microsoft, and xAI have since deployed protective measures, demonstrating a commitment to user privacy across the AI ecosystem.
Additionally, OpenAI and Microsoft Azure introduced a mitigation that appends a random sequence of variable-length text to each streaming response, under a field named "obfuscation," masking the length of the underlying tokens. Microsoft observed that this substantially reduces the effectiveness of the attack, leaving no significant practical risk. Mistral, meanwhile, added a new parameter, named "p," with a similar effect.
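The padding mitigation can be sketched in a few lines. The field name "obfuscation" follows the article; the chunk shape, the `pad_chunk` helper, and the length range are assumptions for illustration, not the vendors' actual implementation.

```python
import secrets
import string

# Hypothetical sketch of the padding-style mitigation: append a random,
# variable-length field to each streamed chunk so that the on-the-wire
# size no longer tracks the real token length.

def pad_chunk(chunk: dict) -> dict:
    junk_len = secrets.randbelow(64) + 1  # 1..64 random characters
    junk = "".join(secrets.choice(string.ascii_letters) for _ in range(junk_len))
    return {**chunk, "obfuscation": junk}

chunk = {"delta": "Hello"}
padded = pad_chunk(chunk)
# The padded chunk still carries the real text, but its serialized size
# now varies independently of the token's length.
```

Because each chunk's size is now dominated by random padding, the packet-size signal the attack relies on is largely destroyed, at a modest cost in bandwidth.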
Even so, Microsoft advises users to avoid discussing highly sensitive topics with AI chatbots on untrusted networks, to use a VPN to secure the connection, to choose providers that have implemented mitigations, to prefer non-streaming LLM providers, and to stay informed about providers' security practices.
Amid claims that the Whisper Leak has already affected several AI chatbots, including Copilot, ChatGPT, and Gemini, Microsoft emphasized that it has worked with multiple vendors to mitigate the risk and ensured that its own language model frameworks are protected.
Researchers Discover ChatGPT Flaws Allowing Data Leaks
Days before Microsoft’s blog on “Whisper Leak,” researchers revealed a new set of vulnerabilities affecting OpenAI’s ChatGPT, which could allow attackers to steal personal information from users’ memories and chat histories.
According to the researchers, seven vulnerabilities and associated attack techniques were identified in OpenAI’s GPT-4o and GPT-5 models, and OpenAI has since addressed some of them. These flaws expose the AI system to indirect prompt injection attacks, enabling attackers to manipulate the expected behavior of the large language model and trick it into performing unintended or malicious actions.

Unverified Reports Suggest Recent AI Chatbot Data Leaks
An unconfirmed media report, citing international sources, claims that a Whisper Leak breach exposed user conversations on AI chatbots, including Copilot, ChatGPT, and Gemini, disclosing personal and professional chat content worldwide.
According to the report, the attackers were able to access and analyze cloud-stored conversation files that were insecurely stored, obtaining sensitive data such as usernames, email addresses, chat logs, trade secrets, and software code, which were reportedly offered for sale on black markets.
The report adds that the incident has raised concerns among tech communities and companies relying on AI, prompting developers to strengthen security measures.
Our team searched for further international reports on the incident, but found none. The report therefore remains unverified.
Read More
The Intercept: YouTube Removed 700+ Videos Documenting Israeli Violations Against Palestinians
Canada’s Manitoba Introduces an Election Misinformation Bill