Microsoft finds security flaw in AI chatbots that could expose conversation topics

Your conversations with AI assistants such as ChatGPT and Google Gemini may not be as private as you think. Microsoft has revealed a serious flaw in the large language models (LLMs) that power these AI services, one that could expose the topics of your conversations. Researchers dubbed the vulnerability “Whisper Leak” and found that it affects nearly all the models they tested.
