Google’s Gemini 2.5 Flash introduces ‘thinking budgets’ that can cut AI output costs nearly sixfold when reasoning is dialed down


Google has launched Gemini 2.5 Flash, a major upgrade to its AI lineup that gives businesses and developers unprecedented control over how much “thinking” their AI performs. The new model, released today in preview through Google AI Studio and Vertex AI, represents a strategic effort to deliver improved reasoning capabilities while maintaining competitive pricing in the increasingly crowded AI market.

The model introduces what Google calls a “thinking budget” — a mechanism that allows developers to specify how much computational power should be allocated to reasoning through complex problems before generating a response. This approach aims to address a fundamental tension in today’s AI marketplace: more sophisticated reasoning typically comes at the cost of higher latency and pricing.

“We know cost and latency matter for a number of developer use cases, and so we want to offer developers the flexibility to adapt the amount of the thinking the model does, depending on their needs,” said Tulsee Doshi, Product Director for Gemini Models at Google DeepMind, in an exclusive interview with VentureBeat.

This flexibility reveals Google’s pragmatic approach to AI deployment as the technology increasingly becomes embedded in business applications where cost predictability is essential. By allowing the thinking capability to be turned on or off, Google has created what it calls its “first fully hybrid reasoning model.”

Pay only for the brainpower you need: Inside Google’s new AI pricing model

The new pricing structure highlights the cost of reasoning in today’s AI systems. When using Gemini 2.5 Flash, developers pay $0.15 per million tokens for input. Output costs vary dramatically based on reasoning settings: $0.60 per million tokens with thinking turned off, jumping to $3.50 per million tokens with reasoning enabled.

This nearly sixfold price difference for reasoned outputs reflects the computational intensity of the “thinking” process, where the model evaluates multiple potential paths and considerations before generating a response.
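For developers sizing up that gap, a back-of-the-envelope calculation makes it concrete. The sketch below uses the published preview rates with illustrative token counts; note that, per Google, thinking tokens are billed at the output rate, so real reasoned requests also pay for tokens the user never sees in the answer.

```python
# Back-of-the-envelope cost comparison at Gemini 2.5 Flash preview rates.
# Rates are dollars per 1M tokens; the per-request token counts are illustrative.
INPUT_RATE = 0.15             # $ / 1M input tokens
OUTPUT_RATE_NO_THINK = 0.60   # $ / 1M output tokens, thinking off
OUTPUT_RATE_THINKING = 3.50   # $ / 1M output (and thinking) tokens, thinking on

def request_cost(input_tokens: int, output_tokens: int, thinking: bool) -> float:
    """Estimate the dollar cost of a single request at the preview rates."""
    out_rate = OUTPUT_RATE_THINKING if thinking else OUTPUT_RATE_NO_THINK
    return (input_tokens * INPUT_RATE + output_tokens * out_rate) / 1_000_000

# Example: 2,000 input tokens and 1,000 output tokens per request.
print(request_cost(2_000, 1_000, thinking=False))  # ~$0.0009
print(request_cost(2_000, 1_000, thinking=True))   # ~$0.0038
```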

“Customers pay for any thinking and output tokens the model generates,” Doshi told VentureBeat. “In the AI Studio UX, you can see these thoughts before a response. In the API, we currently don’t provide access to the thoughts, but a developer can see how many tokens were generated.”

The thinking budget can be adjusted from 0 to 24,576 tokens, operating as a maximum limit rather than a fixed allocation. According to Google, the model intelligently determines how much of this budget to use based on the complexity of the task, preserving resources when elaborate reasoning isn’t necessary.
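In practice, the budget is a single request parameter. Here is a minimal sketch using the google-genai Python SDK as documented at the preview launch; the model ID and field names are preview-era assumptions and may change, so verify them against current documentation.

```python
# Minimal sketch of setting a thinking budget with the google-genai Python SDK.
# Preview-era model ID and config names; treat as assumptions, not a stable API.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.5-flash-preview-04-17",  # preview model ID at launch
    contents="Explain why this cantilever beam fails under a 2 kN end load.",
    config=types.GenerateContentConfig(
        # Maximum reasoning tokens for this request: 0 disables thinking,
        # values up to 24,576 are allowed; the model may use less than the cap.
        thinking_config=types.ThinkingConfig(thinking_budget=4096),
    ),
)

print(response.text)
# Token accounting, including how many thinking tokens were actually generated,
# is reported in the response's usage metadata.
print(response.usage_metadata)
```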

How Gemini 2.5 Flash stacks up: Benchmark results against leading AI models

Google claims Gemini 2.5 Flash demonstrates competitive performance across key benchmarks while maintaining a smaller model size than alternatives. On Humanity’s Last Exam, a rigorous test designed to evaluate reasoning and knowledge, 2.5 Flash scored 12.1%, outperforming Anthropic’s Claude 3.7 Sonnet (8.9%) and DeepSeek R1 (8.6%), though falling short of OpenAI’s recently launched o4-mini (14.3%).

The model also posted strong results on technical benchmarks like GPQA diamond (78.3%) and AIME mathematics exams (78.0% on 2025 tests and 88.0% on 2024 tests).

“Companies should choose 2.5 Flash because it provides the best value for its cost and speed,” Doshi said. “It’s particularly strong relative to competitors on math, multimodal reasoning, long context, and several other key metrics.”

Industry analysts note that these benchmarks indicate Google is narrowing the performance gap with competitors while maintaining a pricing advantage — a strategy that may resonate with enterprise customers watching their AI budgets.

Smart vs. speedy: When does your AI need to think deeply?

The introduction of adjustable reasoning represents a significant evolution in how businesses can deploy AI. With traditional models, users have little visibility into or control over the model’s internal reasoning process.

Google’s approach allows developers to optimize for different scenarios. For simple queries like language translation or basic information retrieval, thinking can be disabled for maximum cost efficiency. For complex tasks requiring multi-step reasoning, such as mathematical problem-solving or nuanced analysis, the thinking function can be enabled and fine-tuned.
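Building on the sketch above (same assumed SDK call and client), that optimization amounts to choosing a budget per request: zero for cheap lookups, a larger cap for multi-step work.

```python
# Per-request tuning, reusing the client from the earlier sketch.
from google.genai import types

def ask(client, prompt: str, budget: int):
    # budget=0 turns thinking off; larger values allow deeper reasoning.
    return client.models.generate_content(
        model="gemini-2.5-flash-preview-04-17",
        contents=prompt,
        config=types.GenerateContentConfig(
            thinking_config=types.ThinkingConfig(thinking_budget=budget),
        ),
    )

translation = ask(client, "Translate 'good morning' into French.", budget=0)
analysis = ask(client, "Walk through the bending-stress analysis for this beam design.", budget=8192)
```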

A key innovation is the model’s ability to determine how much reasoning is appropriate based on the query. Google illustrates this with examples: a simple question like “How many provinces does Canada have?” requires minimal reasoning, while a complex engineering question about beam stress calculations would automatically engage deeper thinking processes.

“Integrating thinking capabilities into our mainline Gemini models, combined with improvements across the board, has led to higher quality answers,” Doshi said. “These improvements are true across academic benchmarks – including SimpleQA, which measures factuality.”

Google’s AI week: Free student access and video generation join the 2.5 Flash launch

The release of Gemini 2.5 Flash comes during a week of aggressive moves by Google in the AI space. On Monday, the company rolled out Veo 2 video generation capabilities to Gemini Advanced subscribers, allowing users to create eight-second video clips from text prompts. Today, alongside the 2.5 Flash announcement, Google revealed that all U.S. college students will receive free access to Gemini Advanced until spring 2026 — a move interpreted by analysts as an effort to build loyalty among future knowledge workers.

These announcements reflect Google’s multi-pronged strategy to compete in a market dominated by OpenAI’s ChatGPT, which reportedly sees over 800 million weekly users compared to Gemini’s estimated 250-275 million monthly users, according to third-party analyses.

The 2.5 Flash model, with its explicit focus on cost efficiency and performance customization, appears designed to appeal particularly to enterprise customers who need to carefully manage AI deployment costs while still accessing advanced capabilities.

“We’re super excited to start getting feedback from developers about what they’re building with Gemini Flash 2.5 and how they’re using thinking budgets,” Doshi said.

Beyond the preview: What businesses can expect as Gemini 2.5 Flash matures

While this release is in preview, the model is already available for developers to start building with, though Google has not specified a timeline for general availability. The company indicates it will continue refining the dynamic thinking capabilities based on developer feedback during this preview phase.

For enterprise AI adopters, this release represents an opportunity to experiment with more nuanced approaches to AI deployment, potentially allocating more computational resources to high-stakes tasks while conserving costs on routine applications.

The model is also available to consumers through the Gemini app, where it appears as “2.5 Flash (Experimental)” in the model dropdown menu, replacing the previous 2.0 Thinking (Experimental) option. This consumer-facing deployment suggests Google is using the app ecosystem to gather broader feedback on its reasoning architecture.

As AI becomes increasingly embedded in business workflows, Google’s approach with customizable reasoning reflects a maturing market where cost optimization and performance tuning are becoming as important as raw capabilities — signaling a new phase in the commercialization of generative AI technologies.


