In February 2024, Google released a family of open-source models, Gemma 2B and 7B. Now, at the Google I/O 2024 event, the tech giant announced the upcoming launch of Gemma 2 in June. This new Gemma 2 model will also be open-source and will have 27 billion parameters.
Google stated that developers and researchers have been requesting a larger yet easy-to-use open model, and Gemma 2 is the result of that effort. According to Google, the Gemma 2 27B model will outperform "some models that are more than twice its size."
Additionally, the company claims that Gemma 2 will be lightweight enough to run efficiently on GPUs or a single TPU hosted in Vertex AI. So far, Meta has been leading the open-source AI space with its Llama 3 models. The Llama 3 8B and 70B models have demonstrated remarkable performance despite their relatively small sizes.
Now, we will have to wait and see how the upcoming Gemma 2 model performs. With numerous open-source models from Mistral, Meta, Microsoft, and other AI companies competing in this space, the competition is fierce. Are you excited about the Gemma 2 open-source model? Let us know in the comments below.