
Hundreds of private chats are being exposed in Meta AI's new app due to a major privacy flaw
Back in April, Meta launched a new standalone Meta AI app, offering several of the AI chatbot features previously accessible through Messenger, Instagram, or WhatsApp, including writing assistance, image generation, and memory of user preferences, all powered by its Llama models. Just a couple of months later, however, the app is facing serious scrutiny after reports revealed a major privacy flaw: hundreds of users' private conversations, including audio recordings, images, and deeply sensitive content, have been inadvertently published to a public feed.
The scope of the exposed data is extensive. Personal topics such as mental health crises, private medical diagnoses, confessions of cheating, legal issues, tax evasion, sexual matters, and even home addresses or social security numbers have been published alongside real usernames, often accompanied by voice messages. Commenting on these posts is open to everyone, raising risks of harassment, shaming, and identity theft. Many affected users only realized their posts were public after being warned by strangers, highlighting a serious lack of notice. Several users have compiled dozens of clearly private conversations in threads on X, showing the scale of sensitive information now exposed publicly.
This privacy breach is the result of a serious UX failure: Meta AI's Share button offers no clear warning, making it easy for users to publish content publicly without realizing it. Unlike ChatGPT or Google Gemini, which provide private links and clear controls, Meta defaults to public sharing with only a small, easy-to-miss disclaimer. Users can limit exposure by going to Settings > Data and privacy > Manage your information to hide past prompts and disable suggestions on Facebook and Instagram. Still, Meta has not addressed the issue with any clear or transparent changes to how the sharing process works, and the fact that users must manually adjust their settings to avoid public exposure only underscores the severity of the flaw.


We couldn't have known...