Gemini, Everywhere, All the Time
We’re on the cusp of developer conference season, with a host of events set to kick off in the coming weeks as major technology companies set out their stall for the rest of 2025 and beyond. This includes Google I/O, which takes place next week and usually sees the company articulate its strategy for all its software platforms, with a healthy focus on AI. We expect AI on devices to be a major theme of multiple events, as we wrote about here.
However, Google broke from its usual routine to focus on the Android ecosystem story a week ahead of I/O. This has given us an early look at what’s coming to more than 3 billion active devices using Android worldwide. The approach makes sense; Google has a vast number of announcements to present during I/O (last year, the narrative was a little confused owing to an avalanche of announcements), so breaking some of the news early gives next week’s keynote more breathing room.
The most important update? Gemini is coming to more devices. Google’s AI assistant will begin to expand beyond smartphones, with the smartwatch an important first beachhead. Gemini will officially be rolled out to Wear OS, with Google promising that the natural language interface will adapt smoothly to wearable products. Google says that Gemini on the wrist will provide access to the same model in the cloud used on phones, with some responses customized to be more concise for the wrist-worn experience. However, this requires internet connectivity, which is no surprise given the limited computing horsepower of most wearables.
In addition to wrist-worn devices, Gemini will be tightly embedded with the Android XR platform and debut in Samsung’s forthcoming headset, codenamed Project Moohan, which launches this year. We’re still waiting to hear more about this and expect Samsung to take the lead on most announcements. But based on what we’ve seen so far, it’s clear that Gemini is deeply infused into the headset and will be core to the overall user interface. It seems near certain that a pair of Android XR-powered glasses will arrive soon to challenge the Ray-Ban Meta — I’ll be extremely surprised if there’s no such device on the market by the end of 2025.
Google has also teased that new earbuds from Sony and Samsung will bring Gemini to the ears. We expect this will be similar to the smartwatch approach, using connectivity on the smartphone as the conduit for any AI offerings. It would be no surprise to see improved voice command interaction at the center of any offering here, and language translation — previously seen in Google’s Pixel Buds — could also come to the fore as an AI-enhanced feature.
Still, whether it’s the wrist, the eyes or the ears, it’s clear that Google will be integrating Gemini into more and more devices as it seeks to make this the AI assistant of choice — and to beat out competition from the likes of OpenAI and Meta, which are vying for a similar position. And with Apple currently on the back foot with its Apple Intelligence efforts, Google will also be keen to capitalize on this opportunity. We’re seeing ever-more attempts to add AI to all manner of devices and applications, and it seems that positioning here is critical for all major companies seeking to recoup their heavy investments in the technology.
Elsewhere, Android Auto will also get a healthy dose of Gemini to help users on the road. Google is promising a more intelligent user experience to help people plan routes, manage in-car entertainment or handle other voice-related tasks such as providing a run-down of their diary or sharing a weather forecast. Similarly, Google TV viewers will benefit from Gemini’s ability to suggest what to watch or provide another jumping-off point for everyday search queries in the home.
Beyond Gemini news, Android 16 itself is getting a lick of paint, with the new Material 3 Expressive language designed to make Android more fun and personal. The new interface features livelier animations and a playful feel, and certainly pops in Google’s shared imagery. It’s interesting to note some areas where Google has drawn inspiration from iOS, most obviously with a new “glanceable notifications” window, which takes a cue from Apple’s Dynamic Island. Some Android licensees, such as Honor, seem to have already started down this path with their own user interface skins, suggesting Google is keen to standardize an approach before it loses control of this feature.
The same approach to design has also reached Wear OS. Google says the new iteration will take better advantage of round displays and provide useful, contextual and easily accessible information on the go.
Some apps and features in the Android platform will also be getting a revamp. Find My Device is reintroduced as Find Hub, a more powerful offering to help users share their location with friends and family, as well as keep track of their Android devices and other connected items, such as smart luggage and smart tags — with support also coming for ultra-wideband-enabled tags. Improvements later this year include support for satellite connectivity to allow for better tracking in remote locations, and integrations with some airlines to allow sharing of tag locations and help resolve lost luggage queries more quickly.
All told, these early announcements from Google provide a clear look at how the Android experience will improve over the coming year and set the stage nicely for I/O next week. We expect to hear far more about Gemini specifically and how Google’s extensive research into AI is powering these improvements. By shining a spotlight on Android now, Google has cleared the way for a deeper dive into the behind-the-scenes elements that make the ecosystem tick.
CCS Insight will be attending Google I/O, with coverage planned for clients through CCS Insight Connect.