Google introduces new MedGemma and MedSigLIP open AI models for health applications
Google has expanded its MedGemma portfolio by launching new open AI models designed specifically for health applications, all of which are capable of running on a single graphics processing unit (GPU). Leading the release is MedGemma 27B Multimodal, which adds support for complex multimodal reasoning and interpretation of longitudinal electronic health record (EHR) data. It builds on the earlier 4B Multimodal and 27B text-only models, extending the family to richer and more complex healthcare data.
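For developers who want to try the model, the release is designed to slot into the standard Hugging Face workflow. The sketch below is a minimal, illustrative example of multimodal inference in Python; the checkpoint id google/medgemma-27b-it, the local image path, and the prompt are assumptions for illustration, not details from the announcement.

```python
import torch
from PIL import Image
from transformers import pipeline

# Assumed Hugging Face id for the 27B multimodal checkpoint.
pipe = pipeline(
    "image-text-to-text",
    model="google/medgemma-27b-it",
    torch_dtype=torch.bfloat16,  # reduced precision helps fit a single GPU
    device_map="auto",
)

image = Image.open("chest_xray.png")  # placeholder local image

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": "Describe the key findings in this chest X-ray."},
        ],
    }
]

out = pipe(text=messages, max_new_tokens=256)
# The pipeline returns the full conversation; the last turn is the model's reply.
print(out[0]["generated_text"][-1]["content"])
```

Loading the weights in bfloat16 with device_map="auto" is what makes single-GPU deployment of a 27B-parameter model plausible; at full 32-bit precision the weights alone would be roughly four times larger.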
Alongside this, Google has introduced MedSigLIP, a lightweight image and text encoder focused on efficient classification, search, and related tasks within medical imaging. MedSigLIP is built on the same image encoder technology that powers the MedGemma 4B and 27B multimodal models. While MedGemma is positioned for generative tasks such as report generation or visual question answering, MedSigLIP is suited to imaging workflows that need structured outputs, such as classification and retrieval.
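In practice, that split means MedSigLIP can be used like any other CLIP- or SigLIP-style dual encoder. Below is a minimal zero-shot classification sketch, assuming the encoder is published on Hugging Face under an id like google/medsiglip-448; the image path and candidate labels are placeholders.

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

# Assumed Hugging Face id for the MedSigLIP encoder.
model_id = "google/medsiglip-448"
model = AutoModel.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("dermoscopy_case.png")  # placeholder image
candidate_labels = [
    "a dermoscopy image of melanoma",
    "a dermoscopy image of a benign nevus",
]

inputs = processor(
    text=candidate_labels,
    images=image,
    padding="max_length",  # SigLIP-style text encoders expect fixed-length padding
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)

# SigLIP-style models score each image-text pair independently,
# so a sigmoid (not a softmax over labels) converts logits to probabilities.
probs = torch.sigmoid(outputs.logits_per_image)
for label, p in zip(candidate_labels, probs[0]):
    print(f"{p.item():.2%}  {label}")
```

The same embeddings also support retrieval: images and texts can be encoded separately and matched by similarity, which is why one lightweight encoder covers both classification and search workflows.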
All models in this release support deployment on a single GPU, and both MedGemma 4B and MedSigLIP can be adapted for mobile hardware environments. These models were created by training a medically optimized image encoder and fine-tuning the Gemma 3 base models with medical data. Since the MedGemma collection is fully open, developers can download, customize, and further fine-tune these models for their specific research or product needs.
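Because the weights are open, customization does not require a hosted API. One common lightweight route is parameter-efficient fine-tuning with LoRA adapters; the sketch below is one possible approach, assuming the Hugging Face peft library and a hypothetical text-only checkpoint id google/medgemma-27b-text-it.

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed id for the text-only MedGemma checkpoint.
model_id = "google/medgemma-27b-text-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# LoRA trains small low-rank adapter matrices instead of all 27B weights,
# keeping fine-tuning within a single GPU's memory budget.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total weights
```

The wrapped model can then be trained with the standard transformers Trainer on a domain-specific dataset, and only the small adapter weights need to be saved and shared afterward.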
