OpenAI introduces GPT-4.5, its newest and largest non-reasoning AI model

OpenAI has unveiled GPT-4.5, its largest and most knowledgeable non-reasoning language model to date, surpassing the capabilities of its predecessor, GPT-4o. While not classified as a frontier model, GPT-4.5 boasts enhanced world knowledge, improved writing, and a refined interaction style. It excels in writing, programming, and problem-solving, with improved pattern recognition and conversational abilities that early testers describe as more emotionally intelligent and engaging.

GPT-4.5 supports features like search, file and image uploads, and canvas, but lacks native multimodal capabilities such as Voice Mode, video, and screen sharing at launch. It is also compute-intensive, which may affect deployment and scalability. In benchmark tests, GPT-4.5 scored 38% on the SWE-bench Verified benchmark, a 2-7 percentage-point improvement over GPT-4o. It also hallucinates significantly less than previous models, enhancing its reliability. However, GPT-4.5 is not intended to replace reasoning models like o3-mini or specialized tools like deep research, serving instead as a more general-purpose model.

The research preview of GPT-4.5 is now available to ChatGPT Pro users through the model picker on web, mobile, and desktop, with availability expanding to ChatGPT Plus and Team users next week.

by Mauricio B. Holguin

Comments

UserPower
2

The amount of data and energy burned to train it seems, from many sources, enormous for an improvement that seems minor. And I can only say "seems" because, outside OpenAI's announcement, we don't have enough data about the real performance and relevance of its answers, but it "seems" this model ("Orion") needed six months and $0.5B to train, which is the pinnacle of "bigger = better". Half a billion dollars! Not so long ago we used to ask ourselves how LLMs are supposed to help us, but given the money burnt and the CO2 emissions (directly by GPUs, and indirectly by crawling the massive web), this is not for the good of humanity.
