Google Announces Gemma 2, Its Latest Open LLM For Developers

After releasing the first version earlier this year, Google has announced Gemma 2. The model is available in two sizes, 9 billion and 27 billion parameters, and is designed to deliver efficient results at a fraction of the cost of comparable solutions. Gemma 2 stands out because it runs well on a single piece of hardware, such as an NVIDIA H100 GPU.

One of the main goals with Gemma 2 was to make it easy for developers to use in their own projects. It is compatible with popular tools and platforms such as Hugging Face and PyTorch, and it can run on different types of hardware, from powerful desktops to cloud-based systems. Starting next month, Google will also simplify the process of deploying Gemma 2 on its cloud platform, Google Cloud.

The press release reads, “We’ve continued to grow the Gemma family with CodeGemma, RecurrentGemma and PaliGemma — each offering unique capabilities for different AI tasks and easily accessible through integrations with partners like Hugging Face, NVIDIA and Ollama.

Now we’re officially releasing Gemma 2 to researchers and developers globally. Available in both 9 billion (9B) and 27 billion (27B) parameter sizes, Gemma 2 is higher-performing and more efficient at inference than the first generation, with significant safety advancements built in.”

How Does Gemma 2 Help Researchers And Developers?

Gemma 2 helps developers and researchers who want to test and build new AI applications. Tasks that previously took far longer or required more resources can now be completed with Google's new model. Researchers can process large datasets for linguistic analysis or tackle complex problems in industries such as healthcare and autonomous driving.

Gemma 2 is also compatible with well-known AI frameworks such as PyTorch and JAX, which many developers already use. This compatibility means you can integrate Gemma 2 into your existing projects without needing to rewrite your code.
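As a rough illustration of that kind of integration, here is a minimal sketch that loads Gemma 2 through the Hugging Face transformers API. The model id "google/gemma-2-9b-it" and the single-turn chat format are assumptions based on Gemma's published conventions; check the Hugging Face hub for the exact identifiers.

```python
def build_gemma_prompt(user_message: str) -> str:
    """Assemble a single-turn prompt in Gemma's chat format
    (<start_of_turn> / <end_of_turn> control tokens)."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )


def generate(user_message: str, model_id: str = "google/gemma-2-9b-it") -> str:
    """Run one generation round trip. transformers is imported lazily so the
    prompt helper above stays usable without the heavy dependency installed."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    inputs = tokenizer(build_gemma_prompt(user_message), return_tensors="pt")
    outputs = model.generate(**inputs.to(model.device), max_new_tokens=128)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

Because the model weights are open, the same pattern works on a local workstation or a cloud instance; only the hardware it maps onto changes.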

Google's license for Gemma 2 allows developers and researchers to share and even sell their AI innovations. Google is also working to make deploying Gemma 2 on its cloud services nearly automatic, which means less setup work and more time to focus on developing your AI projects.

How Is Google Maintaining Responsible Use?

To encourage responsible use of Gemma 2, Google has designed a toolkit that promotes ethical and responsible development. It includes a set of guides for enforcing safety measures and the Learning Interpretability Tool (LIT) for analysing Gemma's responses; LIT can show why the model made a given decision.

Google added, “We’re committed to providing developers and researchers with the resources they need to build and deploy AI responsibly, including through our Responsible Generative AI Toolkit. The recently open-sourced LLM Comparator helps developers and researchers with in-depth evaluation of language models.”

Quick Setup And Practical Help

Setting up Gemma 2 is about to get easier for Google Cloud users. Google is working to automate the process on its cloud services, so users can start working with Gemma 2 faster and spend more time developing their AI projects rather than setting things up manually.

The Gemma Cookbook is another valuable resource. It contains practical examples and step-by-step instructions for applying Gemma 2 to specific tasks. Think of it as a guidebook that helps you adjust and improve the model for different uses, with advice on how to achieve the best results.
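To give a flavour of the kind of per-task tuning such a guide covers, the sketch below assembles an illustrative fine-tuning configuration. Every hyperparameter name and value here is a hypothetical placeholder for illustration, not an actual Cookbook recipe.

```python
def make_finetune_config(task: str) -> dict:
    """Build an illustrative fine-tuning configuration for a given task.
    All names and values are hypothetical placeholders, not Gemma Cookbook
    recipes."""
    base = {
        "model_id": "google/gemma-2-9b",  # assumed Hugging Face model id
        "learning_rate": 2e-5,
        "max_steps": 1_000,
        "max_input_tokens": 1_024,
    }
    # Task-specific overrides, in the spirit of per-use-case guidance.
    overrides = {
        "summarization": {"max_input_tokens": 2_048},
        "classification": {"learning_rate": 5e-5, "max_input_tokens": 512},
    }
    return {**base, **overrides.get(task, {})}
```

The point is that the same base model can be steered toward different uses by swapping a small set of settings rather than rewriting the pipeline.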