
Slack's AI assistant exploit exposes sensitive data to unauthorized users
Security researchers have discovered a way to exploit Slack AI assistant to reveal sensitive information to unauthorized users through specific prompts. The platform launched its AI tool in September 2023 to help with summarizing unread messages, answering questions, and searching for files. The vulnerability allows attackers to manipulate the AI into disclosing data from private Slack channels, even if the attacker is not a member of those channels.
The security firm PromptArmor explained that attackers could extract API keys placed in private channels by developers. The attack involves creating a public Slack channel and embedding a malicious prompt that directs the AI’s Large Language Model (LLM) to provide a clickable URL, sending the API key data to an attacker-controlled site. The flaw could also be used to access files uploaded to Slack, as the AI reads and processes these files, potentially exposing their contents.
Hackers can exploit this vulnerability without being in the Slack workspace by embedding malicious prompts in documents like PDFs and convincing a member to upload them. Once uploaded, the AI processes the hidden instructions, leading to data leaks. Although Salesforce has patched the issue in private channels, the vulnerability remains in public channels, which Salesforce considers "intended behavior" since all workspace members can view public messages.