Google Unveils Gemini 3 Flash: Fast and Cost-Effective Frontier AI
In December 2025, Google announced Gemini 3 Flash: frontier intelligence built for speed at a fraction of the cost. It combines Gemini 3 Pro-grade reasoning with Flash-level latency and efficiency, so you get strong reasoning and multimodal capabilities without the wait or the premium price. It is now the default model in the Gemini app and is rolling out in AI Mode in Search for everyone.
What it's for
Gemini 3 Flash is for when you need smart answers fast: high-volume apps, real-time chat, quick summaries, agentic workflows, and coding. It's tuned for low latency and strong price/performance so more developers and teams can ship capable AI without breaking the bank.
Why it matters
You get frontier-level quality (e.g., strong scores on GPQA Diamond and MMMU Pro) with Flash speed and lower cost than Gemini 2.5 Pro. That means better defaults in the Gemini app and Search for everyday users, and a better price/performance option for developers and enterprises. More people can build and use serious AI without the usual trade-offs between quality, speed, and cost.
Where you get it
- Everyone: Gemini app (default model) and AI Mode in Search as it rolls out globally.
- Developers: Google AI Studio, Android Studio, and the Gemini API.
- Enterprises: Vertex AI and Gemini Enterprise.
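For developers, a minimal sketch of calling the model through the Gemini API with the `google-genai` Python SDK. The model identifier `gemini-3-flash` is an assumption here; check the API's model list for the exact name:

```python
import os

# Assumed model identifier for Gemini 3 Flash; verify against the
# Gemini API model list before use.
MODEL = "gemini-3-flash"


def quick_summary(text: str) -> str:
    """Ask Gemini 3 Flash for a one-sentence summary.

    Requires `pip install google-genai` and a GEMINI_API_KEY
    environment variable holding a valid API key.
    """
    from google import genai  # imported lazily so the key is only needed at call time

    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
    response = client.models.generate_content(
        model=MODEL,
        contents=f"Summarize in one sentence: {text}",
    )
    return response.text


if __name__ == "__main__":
    print(quick_summary("Gemini 3 Flash pairs frontier reasoning with low latency."))
```

The same request shape works from Vertex AI by constructing the client with project and location instead of an API key.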
Try it: read the Gemini 3 Flash announcement, then use it in the Gemini app or AI Studio.