Google is racing to put Gemini at the center of Android before Apple's AI reboot

Google unveiled Gemini Intelligence at its May 12 Android Show, embedding agentic AI across phones, cars, and laptops weeks before Apple's WWDC AI reboot.

Google used its May 12 Android Show to make its biggest AI play yet: a rebranded, deeply integrated version of Gemini called Gemini Intelligence, rolling out across phones, cars, watches, glasses, and laptops over the next year. The timing is not a coincidence. Apple is expected to reveal its own rebooted, partially Gemini-powered version of Apple Intelligence at WWDC in June, and Google clearly wants to set the terms of the conversation first.

From operating system to intelligence system

The line that captures the announcement comes from Sameer Samat, who runs Android at Google. "We're transitioning from an operating system to an intelligence system," he told CNBC. That sounds like executive-speak, but it maps onto a real change in how Android is being built.

Until now, Gemini on Android has been an app you open or an overlay you summon. Gemini Intelligence is meant to be the layer Android runs on top of. Google said it will be able to move across apps, understand what's on the screen, and complete tasks that would normally require jumping between multiple services. Instead of asking it a question and getting an answer, you give it a goal and it works across your apps to get there.

What it can actually do

The demo Google leaned on was planning a backyard barbecue. Samat described asking Gemini to look at the guest list, build a menu, add ingredients to an Instacart cart, and return for approval before checkout. That single example covers a lot of ground: reading context from one app, generating new content, taking action in a third-party service, and pausing for human approval before spending money.

Other use cases Google showed include pulling relevant information from Gmail without manually opening it, filling out forms, organizing shopping lists, and stitching together multi-step workflows that used to require constant app switching.

The "human in the loop" promise

The biggest worry with agentic AI is software taking actions on your behalf that you didn't sanction. Google addressed it directly. Samat said Gemini will come back to the user before completing a transaction, with the human always in the loop.

In practice, that means Gemini can do the legwork, like assembling the cart, drafting the message, or booking the slot, but it pauses before pulling the trigger on anything irreversible. Whether that promise holds up across the full range of agentic tasks is a question for the rollout, not the keynote, but it's the right design instinct.
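The gating pattern described above can be sketched in a few lines. This is a hypothetical illustration, not a real Gemini API: the `Action`, `Agent`, and `approve` names are invented here to show the design instinct, where the agent freely does reversible legwork but stops at anything irreversible unless a human says yes.

```python
# Hedged sketch of a "human in the loop" agent: reversible steps run freely,
# irreversible ones (paying, sending) are gated on an approval callback.
# All class and function names are illustrative assumptions, not Gemini's API.

from dataclasses import dataclass, field

@dataclass
class Action:
    description: str
    irreversible: bool  # e.g. checkout or sending a message

@dataclass
class Agent:
    log: list = field(default_factory=list)

    def run(self, plan, approve):
        """Execute a plan, asking `approve` before any irreversible action."""
        for action in plan:
            if action.irreversible and not approve(action):
                self.log.append(f"skipped: {action.description}")
                continue
            self.log.append(f"done: {action.description}")
        return self.log

# The barbecue demo, recast as a plan: only the final checkout needs approval.
plan = [
    Action("read guest list from calendar", irreversible=False),
    Action("draft barbecue menu", irreversible=False),
    Action("add ingredients to Instacart cart", irreversible=False),
    Action("check out and pay", irreversible=True),
]

# Deny approval: the cart gets assembled, but no money moves.
log = Agent().run(plan, approve=lambda action: False)
```

In a real system the `approve` callback would be a UI prompt rather than a lambda; the point is simply that the irreversible step is structurally unable to run without a human decision.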

Where it shows up

This is the part of the announcement that might matter most in the long run. Gemini Intelligence is not launching as a phone feature. It's launching as a platform feature across Google's entire device footprint.

The app automation features will roll out in waves, starting with the latest Samsung Galaxy and Google Pixel phones this summer, before expanding across watches, cars, glasses, and laptops later in the year. Android Auto is getting a parallel rebuild around Gemini, which matters because Android Auto is in more than 250 million cars. The same announcement included the biggest maps update in a decade and Gemini-powered help with tasks like ordering dinner while driving.

That breadth across phone, car, watch, laptop, and glasses is the thing Apple cannot easily match in a single keynote.

The Apple wrinkle

There's a strange twist to the rivalry framing. Google is racing Apple, but Google is also one of Apple's suppliers. Four months after announcing its Gemini deal with Google, Apple is under pressure to show a more capable version of Apple Intelligence, which has been a relative laggard. Apple is using Gemini models to power parts of the new Siri.

So the competitive picture is unusual. Google wants to demonstrate that on Android, its AI is more capable and more deeply integrated than the version Apple will ship using the same underlying technology. The bet is that integration depth beats brand polish.

Why Wall Street is on board

The market response so far has been emphatic. Alphabet's stock is up more than 140% in the past year, compared to Apple's roughly 40% gain. Investors want to see Gemini become more central to the products people actually use, and the May 12 announcements are essentially Google's answer to that demand. Google I/O on May 19 is expected to bring more detail.

What to watch next

A few specific things will tell us whether Gemini Intelligence lands the way Google wants it to. How well the cross-app actions work in the real world, not just in stage demos. Whether the rollout stays on schedule across the device categories Google promised. How Apple responds at WWDC, and whether its tighter hardware-software integration produces a more polished, if narrower, version of the same idea. And how quickly third-party developers get tools to plug into Gemini's agentic layer, since that's what determines whether this is a Google ecosystem feature or a genuine platform.

For now, Google has clearly set the tempo. The question is whether the rest of the year delivers on what May 12 promised.
