
Google unveils Gemini 1.5 AI with enhanced efficiency and million-token context window
Google has launched an updated version of its conversational AI system, Gemini 1.5, with improvements in efficiency, performance, and long-form reasoning. Its enhanced architecture delivers performance comparable to the larger Gemini 1.0 Ultra model while using fewer computing resources. The headline upgrade is an experimental million-token context window, a major step up from the model's standard 128,000-token context.
The million-token window lets Gemini 1.5 take in far more continuous information in a single prompt, roughly one hour of video, 30,000 lines of code, or 700,000 words, strengthening its extended reasoning. Google CEO Sundar Pichai demonstrated the system summarizing complex material, using examples such as the Apollo 11 mission transcript and silent films. Demis Hassabis, co-founder and CEO of Google DeepMind, noted that the expanded context allows the system to effectively analyze, classify, and summarize large amounts of content within a prompt, with preliminary results showing that performance holds up as the context window grows.
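As a rough sanity check on those figures, the sketch below converts a one-million-token budget into words, code, and video using typical tokenizer averages; the per-unit token costs are illustrative assumptions, not numbers Google has published.

```python
# Back-of-the-envelope check of what a 1,000,000-token context window holds.
# Per-unit token costs are rough assumptions, not figures published by Google.
CONTEXT_TOKENS = 1_000_000

TOKENS_PER_WORD = 1.4             # typical ratio for English prose
TOKENS_PER_CODE_LINE = 30         # rough average for source code
TOKENS_PER_VIDEO_MINUTE = 16_700  # assumes sampled frames plus an audio transcript

words = CONTEXT_TOKENS / TOKENS_PER_WORD                   # ~714,000 words
code_lines = CONTEXT_TOKENS / TOKENS_PER_CODE_LINE         # ~33,000 lines
video_minutes = CONTEXT_TOKENS / TOKENS_PER_VIDEO_MINUTE   # ~60 minutes

print(f"~{words:,.0f} words, ~{code_lines:,.0f} lines of code, ~{video_minutes:.0f} min of video")
```

Under these assumptions the totals land close to the capacities Google cites: on the order of 700,000 words, 30,000 lines of code, or about an hour of video.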
Google is also integrating the Gemini API into Firebase through two new extensions, "Build Chatbot with the Gemini API" and "Multimodal Tasks with the Gemini API", letting developers add generative AI features to their Firebase apps with minimal setup.
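Both extensions ultimately issue requests to the Gemini API. As a minimal sketch of the kind of call involved, assuming the `google-generativeai` Python SDK and placeholder model and environment-variable names, a direct request looks roughly like this:

```python
import os

import google.generativeai as genai

# API key from Google AI Studio; the environment variable name is just a convention.
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# Model name is illustrative; use whichever Gemini model your project has access to.
model = genai.GenerativeModel("gemini-pro")

response = model.generate_content(
    "Draft a friendly reply to this customer question about shipping times: ..."
)
print(response.text)
```

The Firebase extensions run this kind of call server-side on your project's behalf, so client apps don't need to embed the SDK or an API key directly.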
Although public availability of the million-token version has not been confirmed, Google is offering a limited preview to developers and enterprise customers via its Vertex AI platform, and a waitlist is open for early-access requests. The release follows Google's recent rebranding of its AI system from Bard to Gemini, which included the introduction of a paid Gemini Advanced tier powered by the Ultra 1.0 model.