
OpenAI introduces GPT-5.4-Cyber, its response to Anthropic’s Mythos for cybersecurity
OpenAI has announced GPT-5.4-Cyber, a cybersecurity-focused version of GPT-5.4 positioned as a direct response to Anthropic’s Claude Mythos Preview, which was announced last week. Like Mythos, the model is aimed at defensive cybersecurity work, but OpenAI frames its approach as broader access for verified defenders rather than a tool limited to a small group of major companies.
GPT-5.4-Cyber is described as more “cyber-permissive,” meaning it has lower refusal boundaries for legitimate security tasks than general-purpose AI models. It also adds binary reverse-engineering capabilities, allowing analysts to inspect programs and apps for malicious code. Access will still be restricted through OpenAI’s Trusted Access for Cyber program, with a new tiered verification system in which the highest tier unlocks GPT-5.4-Cyber.
Anthropic’s Project Glasswing gives more context for why these models are being released under controlled access. The company says Claude Mythos Preview has already found thousands of high-severity vulnerabilities, highlighting both the defensive value and the potential risk of this type of AI capability, which is why the company has decided to be more restrictive with access for now. OpenAI says GPT-5.4-Cyber is part of its preparation for more capable cybersecurity models expected later this year.
Comments
Oh, so now we're racing downwards. Who's next? Qwen, making it accessible for everyone instantly with their CyberSec alternative AI? Not sure how to feel if many actors show up on this field using LLMs. Can we stop calling large language models AI at some point, eventually BEFORE Skynet is booting? sigh