Anthropic unveils Claude 3 LLM series, outperforming GPT-4 and Gemini in benchmarks
Anthropic has launched its new Claude 3 series of large language models, which it claims surpasses competitors such as OpenAI's GPT-4 and Google's Gemini 1.0 Ultra on multiple benchmarks. The flagship model, Claude 3 Opus, posts the strongest scores on benchmarks spanning undergraduate-level knowledge, general knowledge, and grade-school math. The series also brings more fluent chatbot responses and multimodal capabilities.
The Claude 3 series' multimodal capabilities let users upload a variety of content types, such as images, documents, and charts. On these visual inputs, Anthropic says the models perform on par with other top-tier multimodal models.
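For developers, image input is supplied alongside text in a single request to Anthropic's Messages API. The sketch below assembles such a request payload offline, without sending it; the model identifier and content-block layout follow Anthropic's documented format, but the image bytes are a placeholder standing in for a real chart or document scan.

```python
import base64

# Placeholder bytes standing in for a real PNG chart the user uploads.
fake_image_bytes = b"\x89PNG\r\n\x1a\n"

# A Messages API request mixing an image block and a text block in one
# user turn; images are passed base64-encoded with an explicit media type.
payload = {
    "model": "claude-3-opus-20240229",
    "max_tokens": 1024,
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/png",
                        "data": base64.b64encode(fake_image_bytes).decode("ascii"),
                    },
                },
                {"type": "text", "text": "Summarize the trend shown in this chart."},
            ],
        }
    ],
}

print(payload["model"])
```

Sending this payload would additionally require an API key and the `anthropic` client library; the structure itself is what matters here — image and text arrive as sibling content blocks in the same message.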
The series includes the high-end Opus model and the mid-range Sonnet model, which balances cost and performance for enterprise applications. The smallest model, Claude 3 Haiku, is designed for fast responses to simpler queries and can summarize up to about 150,000 words of input, a significant upgrade over Claude 2. Opus and Sonnet are available in 159 countries, with Haiku set to follow soon. Users can try the Sonnet-powered chatbot for free at claude.ai, while access to Opus requires a paid Claude Pro subscription at $20 per month.

Comments
Hmm, many have claimed to outperform GPT-4, citing "benchmarks". I've yet to see any outperform it in real life. Let's see what the "user benchmark" results are.