OpenAI launches GPT‑5.3-Codex with faster and broader agentic coding capabilities

OpenAI has announced GPT‑5.3-Codex, the latest update to its family of coding models and its most capable agentic coding model to date. The new version combines improvements in coding performance, reasoning ability, and professional knowledge with a significant speed boost: GPT‑5.3-Codex runs 25 percent faster than its predecessor.

Building on these upgrades, GPT‑5.3-Codex now lets users handle continuous, interactive, and long-duration research and execution tasks without losing context. While previous Codex releases focused on code writing and review, this version supports nearly every activity developers and professionals typically perform on a computer, such as debugging, deployment, monitoring, writing product requirements documents, editing copy, user research, and more.

GPT‑5.3-Codex demonstrates its expanded capabilities by setting new industry highs on the SWE-Bench Pro and Terminal-Bench coding and agentic benchmarks. It also posts strong results on OSWorld and GDPval, benchmarks that reflect real-world computer use. The model can build complex, functional games and applications from scratch over multiple days, further highlighting its competency on multi-phase projects. GPT‑5.3-Codex is available with paid ChatGPT plans across various platforms, including the app, the command-line interface, IDE extensions, and the web, with API access coming soon.

by Paul


Comments

SleipnirTheHorse
0

Wow, a bubble that's only profitable for a few monopoly-type companies. I hope AI can fulfill its promises in medicine, science, and actual creativity, but the big data center model isn't sustainable or profitable. For a while now (even a decade before AI), there have been cheap-to-build small supercomputers such as Raspberry Pi clusters, plus bigger, less cheap but still far less expensive supercomputers (only a little less capable than what was built a decade back), like the one Japanese scientists created out of graphics cards. With machines like these, we could begin using local models to build personal data centers and mini data centers. Yes, a lot of the people talking about local AI models are the creeps who used them for that crap on Twitter (I refuse to call it X!), but local AI actually looks like not just the future for that stuff, but the only future for AI, since the big data centers will be hit worse than the dinosaurs when the AI bubble bursts.

The fact is, the decisions of the big OS giants Microsoft, Google, and Apple, plus the new blossoming of government-led Linux movements, will result in more efficient and adaptable OSs, meaning we can build very efficient personal data centers and mini data centers that could be powered by solar. There's a whole website powered off solar; it goes offline sometimes, but not as often as you'd think. My dad uses solar to power his house and charge his Tesla, and he's got a solution for when the sun ain't shining: a large rechargeable battery the size of a tall storage bin. He's middle middle class and got it when it was much more expensive, same with the solar panels on the roof, and the house is small and only partially covered. As for cooling, since these machines would be built from much more traditional parts, we could use traditional methods as well as supercooled liquid gases.

But there will need to be advances in the AIs' programming itself to get further. Remember, computers only advanced into personal devices after they were allowed to become hobbyist things in the 70s; before that, even the Enigma machine couldn't match a mid-priced calculator.

Artificial General Intelligence is not something that will happen in our lifetimes unless we extend them, and robotic AI is not going to replace as many jobs as you'd think. Some jobs are going to be automated, but not most, and there's going to need to be human oversight. Basically, this isn't that much of an improvement.

UserPower
0

I like that the announcement ends with "We are grateful to Nvidia for their partnership" when Nvidia cancelled the $100B deal with OpenAI just a few days before (well, that deal may never have existed in the first place except in Altman's dreams).

The product is obviously fine for some, and that's a good thing given that this technology is in an endless beta, but the company recklessly has no long-term vision at all and won't ever have a clue whether it will still exist in six months, even with the whole genAI world bound to it.

1 reply
SleipnirTheHorse

This and all other corporate AI cannot be sustained even if we found far more fresh water and started rapidly advancing in power production. Google "Raspberry Pi cluster" (they've existed for a long time as cheap small supercomputers), then do some research into local models. I'm an AI skeptic, but I believe this version of AI giving way (if that can be done) will lead to a hobbyist generation of AI, like what occurred with the beginnings of personal computers in the 60s and 70s. The question is just who's going to do it first?

superstickynotemealt
1

My overall experience with it has been fairly good. It's been a solid upgrade over 5.2-codex. It tends to write code faster and more compactly with fewer tokens, handles large refactors well, and is better at spotting potential issues. I've been comparing it to Opus 4.6 lately, and 5.3-codex pretty much solidly beats it in everything I throw at the two. Opus 4.6 can get a little more creative with solving a problem, which makes it better for intense troubleshooting, but so far that's the only area where I've seen it edge out 5.3-codex. Now if only OpenAI could get their image-gen models up to the same quality: right now, Nano Banana is beating them by a mile, Rave is beating them by half a mile, and in most things even Seedream 4.5 bests them, yet their image-gen prices are still among the highest in the industry.

1 reply
LR88

Agreed. 5.3 made me cancel my Claude subscription and return to OpenAI: faster, smarter, cheaper (in API dollars). I haven't found a real need for Claude's 1M context window so far.

The only thing is, I wish OpenAI supported plugins with hooks and all, but I can easily have 5.3 create a wrapper and work with the plugins I use.

Gu