Anthropic has launched an improved version of its LLM, Claude Instant 1.2, for business applications through an API
Anthropic has introduced an improved version of its large language model (LLM), Claude Instant 1.2, tailored for business applications. The model, accessible through an API, strikes a balance between speed and cost-effectiveness. It incorporates features of Anthropic's recently updated flagship model, Claude 2, bringing notable improvements in math, coding, and reasoning, as well as in tasks such as casual conversation, text analysis, summarization, and document comprehension. It is also more resistant to jailbreak attempts.
The new model scored 58.7% on the Codex HumanEval Python programming benchmark (pass@1), up from the previous version's 52.8%. It also performed better on GSM8K, a benchmark of grade-school math problems, scoring 86.7% versus the earlier version's 80.9%. The new version generates longer, more structured responses and adheres more closely to formatting instructions. Other improvements include better quote extraction, multilingual support, and question answering. Claude Instant 1.2 also produces fewer "hallucinations", i.e. incorrect or nonsensical text.
Despite minor regressions on some benchmarks compared to the previous version, the overall improvements in Claude Instant 1.2 are substantial. Unlike Claude 2, which users can access directly on the Anthropic website, Claude Instant 1.2 is available only to businesses via the API. However, third-party services such as Quora's Poe, DuckDuckGo's DuckAssist, and the Notion AI Assistant provide access to Claude Instant 1.2, and in some cases to other LLMs, such as Meta's new Llama 2 model.
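For developers curious what an API call to the model looks like, the following is a minimal sketch that only builds the request headers and JSON body for Anthropic's text-completions endpoint, without sending anything over the network. The endpoint URL, header names, and Human/Assistant prompt format follow Anthropic's documented conventions at the time of this release, but the API key is a placeholder and exact details should be checked against the official API reference.

```python
import json

# Anthropic text-completions endpoint (as documented at release time).
API_URL = "https://api.anthropic.com/v1/complete"

def build_claude_request(user_message: str, api_key: str, max_tokens: int = 256):
    """Build headers and a JSON body for a Claude Instant 1.2 completion call.

    The prompt must use Anthropic's Human/Assistant turn format. Actually
    sending the request (e.g. with requests.post) requires a real API key.
    """
    headers = {
        "x-api-key": api_key,            # placeholder; supply your own key
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    }
    body = {
        "model": "claude-instant-1.2",
        "prompt": f"\n\nHuman: {user_message}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
    }
    return headers, json.dumps(body)

headers, payload = build_claude_request(
    "Summarize this contract in two sentences.", "sk-ant-PLACEHOLDER"
)
print(json.loads(payload)["model"])
```

A real integration would POST `payload` with those headers to `API_URL` and read the completion text from the JSON response.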