
Google introduces Gemma: a new open-source AI model for developers and researchers
Google has introduced Gemma, a new open-source AI model developed by its DeepMind team. Aimed at developers and researchers who want to run models locally, Gemma is built from the same research and technology as its larger counterpart, Google Gemini. Gemma comes in two sizes, Gemma 2B and Gemma 7B, each released as a pre-trained base model and an instruction-tuned variant, and both able to run on local desktops and laptops as well as in the cloud. The models are optimized for Nvidia GPUs and Google Cloud TPUs for better performance and flexibility.
Unlike Gemini, Gemma is open-source, allowing developers to experiment with and build on the models more freely. They are available through Kaggle, Hugging Face, Nvidia’s NeMo framework, and Google’s Vertex AI, and can be used free of charge in Colab notebooks. Despite their smaller size, Gemma 2B and 7B outperform significantly larger language models, including Meta’s Llama 2, on key benchmarks. Gemma is released under terms that permit commercial use and distribution by organizations of all sizes; first-time Google Cloud users get $300 in free credits, and researchers can apply for up to $500,000 in Google Cloud credits to run Gemma at scale.
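For developers who want to try the models locally, the Hugging Face release is the most direct route. The snippet below is a minimal sketch using the transformers library; it assumes the instruction-tuned 2B checkpoint is published under the model id google/gemma-2b-it and that you have accepted the model's terms on the Hugging Face Hub. Adjust the model id and device settings for your hardware.

```python
# Minimal local-inference sketch for Gemma via Hugging Face transformers.
# The model id below is an assumption; swap in the checkpoint you actually have access to.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b-it"  # assumed instruction-tuned 2B checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps the 2B model within a single consumer GPU
    device_map="auto",           # place weights on GPU if available, otherwise fall back to CPU
)

prompt = "Explain the difference between a pre-trained and an instruction-tuned model."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate a short completion; sampling settings here are illustrative, not tuned.
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same code runs in a free Colab notebook or on a local machine; the 7B variant needs more memory but follows the identical loading pattern.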
Earlier this month, Google rebranded its Bard chatbot as Gemini and previewed Gemini 1.5 Pro, a forthcoming model with a one-million-token context window. For full Gemma launch details, see the official announcement.

