
- Google’s first reasoning model is finally here. The “Gemini 2.0 Flash Thinking” model can solve complex reasoning, math, and coding problems.
- It supports multimodal inputs such as images, videos, and audio files.
- It uses more compute resources and time to re-evaluate its response before generating the final answer.
After OpenAI introduced its o1 reasoning model, which takes some time to “think” before responding, Google has finally released its own thinking model. The new AI model is “Gemini 2.0 Flash Thinking”, aka gemini-2.0-flash-thinking-exp-1219. It’s an experimental preview model and is already available on AI Studio for testing and feedback.
The Gemini 2.0 Flash Thinking model follows the new paradigm of test-time compute that OpenAI introduced in September. Basically, it allows the model to use more compute resources and time to re-evaluate its response before generating the final answer.
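The intuition behind test-time compute can be sketched with a toy self-consistency loop (a simple illustrative technique, not Google’s actual method): sample several candidate answers from a stochastic solver and return the majority vote, so spending more samples, i.e. more inference-time compute, makes the final answer more reliable.

```python
import random
from collections import Counter

def majority_vote(candidates):
    """Pick the most common answer among sampled candidates."""
    return Counter(candidates).most_common(1)[0][0]

def noisy_solver(rng):
    # Toy stand-in for a stochastic model: correct ("42") about 70%
    # of the time, otherwise a random wrong answer.
    return "42" if rng.random() < 0.7 else str(rng.randint(0, 99))

rng = random.Random(0)
samples = [noisy_solver(rng) for _ in range(50)]  # extra inference-time compute
print(majority_vote(samples))  # far more likely to be "42" than any single sample
```

The more candidates you sample, the more the occasional wrong answers get outvoted, which is the core trade: latency and compute in exchange for reliability.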
Early research suggests that when AI models are given more time to “think” during inference, they can perform far better than much larger models, making test-time compute an alternative to simply scaling up parameters.
Google has released its first thinking model with the smaller Gemini 2.0 Flash model, but it’s expected that inference scaling will come to the larger Gemini 2.0 Pro model (Gemini-Exp-1206) as well.
Google says Gemini 2.0 Flash Thinking can solve complex reasoning questions and difficult math and coding problems. And unlike OpenAI’s o1, it shows the model’s raw thinking process, which is great for transparency.
Not to mention, the new Thinking model can process multimodal inputs such as images, videos, and audio files. Finally, its knowledge cutoff date is August 2024.
I briefly tested the Gemini 2.0 Flash Thinking model on AI Studio. It failed the popular Strawberry question on the first try, but on the next run, it got the answer right and said there are three r’s in the word “Strawberry”. Next, I asked it to find Indian states that don’t have ‘a’ in their names. Again, it got the answer wrong.
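For reference, the ground truth for both quick tests above can be checked with a few lines of Python (assuming standard English spelling and the 28 Indian states as of 2024):

```python
# The 28 Indian states (as of 2024).
states = [
    "Andhra Pradesh", "Arunachal Pradesh", "Assam", "Bihar",
    "Chhattisgarh", "Goa", "Gujarat", "Haryana", "Himachal Pradesh",
    "Jharkhand", "Karnataka", "Kerala", "Madhya Pradesh", "Maharashtra",
    "Manipur", "Meghalaya", "Mizoram", "Nagaland", "Odisha", "Punjab",
    "Rajasthan", "Sikkim", "Tamil Nadu", "Telangana", "Tripura",
    "Uttar Pradesh", "Uttarakhand", "West Bengal",
]

# Count the r's in "Strawberry".
print("Strawberry".lower().count("r"))  # 3

# States with no 'a' anywhere in their name.
print([s for s in states if "a" not in s.lower()])  # ['Sikkim']
```

So the correct answers are three r’s, and Sikkim as the only state without an ‘a’, the kind of character-level question that still trips up token-based models.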
I think we should wait for the larger Gemini 2.0 Pro Thinking model, which should deliver stronger performance and better demonstrate the power of inference scaling. Meanwhile, on the LMSYS Chatbot Arena leaderboard, Gemini’s thinking model has topped the chart across all categories.

Passionate about Windows, ChromeOS, Android, and security and privacy issues, with a penchant for solving everyday computing problems.

- Apple and Google have officially confirmed their team-up for the next-gen Siri AI.
- The Cupertino giant will be using Google’s Gemini for a more personalized Siri as well as other Apple Intelligence features.
- We can expect the next-gen Siri to come out with iOS 26.4, sometime in March or April.
Apple has officially confirmed that it is joining forces with Google to use its Gemini AI model to power the next-generation Siri. It will offer a more personalized experience and will arrive with the iOS 26.4 update. Apple also plans to leverage Gemini’s capabilities for other Apple Intelligence features down the line.
The Next-Gen Siri will be powered by Google’s Gemini AI
Apple officially confirmed that it will be partnering with Google in a statement to CNBC. Here’s what it stated: “After careful evaluation, we determined that Google’s technology provides the most capable foundation for Apple Foundation Models, and we’re excited about the innovative new experiences it will unlock for our users.”
Later, Google also shared a post on X confirming the tie-up, “Apple and Google have entered into a multi-year collaboration under which the next generation of Apple Foundation Models will be based on Google’s Gemini models and cloud technology. These models will help power future Apple Intelligence features, including a more personalized Siri coming this year.”

Image Credit: X/@NewsFromGoogle
Both statements clearly mention that the Cupertino giant will be using Gemini to power its assistant, Siri. This was already rumored, as Apple’s attempts to acquire Perplexity went nowhere. With Gemini, Siri will get a major AI update. It will be able to handle more nuanced conversations and provide better results, something long-time Apple users have been requesting for years.
The next-gen Siri will arrive with the iOS 26.4 update, which will launch sometime in March or April. And it is only going to be available for Apple Intelligence-supported devices.
Something else worth noting is that Google’s statement mentions Gemini will power Apple Intelligence features. This leads us to believe that Apple could use Gemini’s multimodal capabilities for its Writing Tools, Image Playground, and Message summaries, too.
Elon Musk Not Happy With Apple and Google Tie Up
xAI CEO Elon Musk also responded to Google’s announcement post on X, sharing his thoughts on the matter: “This seems like an unreasonable concentration of power for Google, given that they also have Android and Chrome.” Though we don’t expect either Apple or Google to respond to Elon, we will provide updates as the situation progresses.
It is worth noting that xAI is the company behind Grok, which is currently in hot water due to its inappropriate image generation fiasco, and has been getting backlash from multiple news outlets, X users, and even government authorities.

With over 4 years of experience under my belt, I cover all facets of consumer tech, from smartphones and other consumer electronics to our favorite social media apps and the growing realm of AI and LLMs. As an Apps and AI writer at Beebom, I provide my expertise in all these areas, weaving stories that help you get familiar with the tech around you. You will also find me playing NYT daily puzzles in my free time.