AI อะไรเนี่ย

Google Gemini 2.0 Updates: Flash Goes GA, Pro Experimental, and Flash-Lite Unveiled

Hey AI enthusiasts! Big news from Google DeepMind today as they've rolled out some exciting updates for the Gemini 2.0 family of models. We're seeing the general availability of Gemini 2.0 Flash, a brand-new, super cost-efficient model called Gemini 2.0 Flash-Lite, and an experimental version of Gemini 2.0 Pro that's set to impress with its coding prowess.

What it's for

Let's dive into what these new models bring to the table:

  • Gemini 2.0 Flash is Now Generally Available: This model is your go-to "workhorse" for high-volume, high-frequency tasks. It's incredibly efficient, boasts a massive 1 million token context window, and is great for multimodal reasoning. Plus, image generation and text-to-speech capabilities are on their way soon! It was already available to Gemini app users, and now developers can integrate it into production applications.

  • Gemini 2.0 Pro Experimental for Power Users: Google is calling this their "best model yet" for coding performance and tackling complex prompts. It features an impressive 2 million token context window—the largest released so far!—and can even call tools like Google Search and code execution. If you're into serious development, this one's for you.

  • Introducing Gemini 2.0 Flash-Lite: Get ready for the "most cost-efficient model yet"! Flash-Lite is in public preview and offers better quality than 1.5 Flash at the same speed and cost, outperforming it on most benchmarks. It also has a 1 million token context window and multimodal input. Imagine generating captions for about 40,000 photos for less than a dollar in Google AI Studio's paid tier – that's some serious efficiency!

  • Gemini 2.0 Flash Thinking Experimental: For those using the Gemini app, Flash Thinking Experimental, which combines Flash's speed with enhanced reasoning for complex problems, is now available in the model dropdown on desktop and mobile.
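To see how that sub-dollar captioning figure for Flash-Lite could pencil out, here's a back-of-envelope sketch. The per-token prices, the ~258-token cost per image, and the caption length are all illustrative assumptions, not official numbers; check Google's pricing page for the real rates.

```python
# Back-of-envelope cost estimate for captioning photos with Gemini 2.0 Flash-Lite.
# Every number below is an ASSUMED illustrative value, not official pricing.

PHOTOS = 40_000
TOKENS_PER_IMAGE = 258       # assumption: typical input-token count for one image
CAPTION_TOKENS = 10          # assumption: a one-line caption per photo

INPUT_PRICE_PER_M = 0.075    # assumed USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.30    # assumed USD per 1M output tokens

input_cost = PHOTOS * TOKENS_PER_IMAGE / 1_000_000 * INPUT_PRICE_PER_M
output_cost = PHOTOS * CAPTION_TOKENS / 1_000_000 * OUTPUT_PRICE_PER_M
total = input_cost + output_cost

print(f"input: ${input_cost:.3f}, output: ${output_cost:.3f}, total: ${total:.3f}")
```

Under these assumptions the input side dominates (40,000 images is about 10.3M tokens), and the whole job still lands under a dollar.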

All these new Gemini 2.0 models feature multimodal input with text output right out of the gate, with more modalities planned for general availability in the coming months.
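As a concrete sketch of what "multimodal input, text output" looks like at the API level, here's how a minimal request body for the Gemini API's `generateContent` REST endpoint could be assembled. The endpoint URL shape and the inline-image field names are assumptions based on the public REST API surface (the `gemini-2.0-flash` model ID matches the GA model above), so verify the details against the official docs before relying on this.

```python
import base64
import json

# Hypothetical sketch: assemble (but don't send) a generateContent request
# that mixes an image and a text prompt -- multimodal in, text out.
API_KEY = "YOUR_API_KEY"  # placeholder, not a real key
MODEL = "gemini-2.0-flash"
URL = (
    "https://generativelanguage.googleapis.com/v1beta/"
    f"models/{MODEL}:generateContent?key={API_KEY}"
)

fake_jpeg_bytes = b"\xff\xd8\xff\xe0fake"  # stand-in for real image bytes
payload = {
    "contents": [{
        "parts": [
            {"inline_data": {
                "mime_type": "image/jpeg",
                "data": base64.b64encode(fake_jpeg_bytes).decode("ascii"),
            }},
            {"text": "Write a one-line caption for this photo."},
        ]
    }]
}

body = json.dumps(payload)  # this JSON body would be POSTed to URL
print(MODEL, len(body))
```

The same body shape works across the 2.0 family; only the model ID in the URL changes.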

Why it matters

These updates are a game-changer for developers and users alike. With Gemini 2.0 Flash becoming generally available, building robust, high-performance applications just got easier and more reliable. The experimental 2.0 Pro pushes the boundaries for complex coding and advanced reasoning, giving developers a powerful tool for intricate problems.

Flash-Lite's focus on cost-efficiency means more accessible, high-quality AI for a wider range of applications, democratizing advanced capabilities. This also highlights Google's commitment to continuous improvement, delivering better quality while maintaining speed and affordability.

Google has also incorporated robust safety measures into the Gemini 2.0 lineup. They're using reinforcement learning techniques where Gemini actually critiques its own responses for accuracy and improved handling of sensitive prompts. Plus, automated red teaming is in place to assess risks like indirect prompt injection, ensuring a safer AI experience. You can find more details on these advancements on the Google DeepMind Blog.

Where you get it

Ready to try these out? Here's where you can access the new Gemini 2.0 models:

  • Gemini 2.0 Flash (GA): Available via the Gemini API in Google AI Studio and Google Cloud Vertex AI for developers. It's also available to all users of the Gemini app on desktop and mobile.
  • Gemini 2.0 Pro Experimental: Available experimentally to developers in Google AI Studio and Google Cloud Vertex AI, and to Gemini Advanced users in the Gemini app.
  • Gemini 2.0 Flash-Lite (Public Preview): Available in Google AI Studio and Google Cloud Vertex AI.
  • Gemini 2.0 Flash Thinking Experimental: Available to Gemini app users via the model dropdown on desktop and mobile.
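For the tool calling that 2.0 Pro Experimental supports, the request body can declare tools alongside the prompt. Here's a minimal sketch; the `google_search` and `code_execution` tool entries and the experimental model ID are assumptions drawn from the public API surface, so double-check them against the docs.

```python
import json

# Hypothetical sketch: a generateContent request body that enables the
# Google Search and code-execution tools mentioned for 2.0 Pro Experimental.
MODEL = "gemini-2.0-pro-exp"  # assumed experimental model ID

payload = {
    "contents": [{
        "parts": [{"text": "What is the 20th Fibonacci number? Verify with code."}]
    }],
    "tools": [
        {"google_search": {}},   # lets the model ground answers in search results
        {"code_execution": {}},  # lets the model run code to check its work
    ],
}

body = json.dumps(payload)  # this JSON body would be POSTed to the model's endpoint
print(len(body))
```

The model decides at generation time whether to invoke a declared tool, so the prompt itself needs no special syntax.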

For all the specifics, including detailed pricing information for these models, make sure to check out the Google for Developers Blog.

Try it: Learn more about these updates via the links above, and access the models directly in Google AI Studio and Vertex AI.