Google began its annual developers’ conference on May 20 with a keynote session in which artificial intelligence took center stage. The firm announced significant improvements to its Gemini AI models, gave an early look at new agentic AI features, rolled out tools for AI-powered Search, and provided a peek at its maturing Android XR platform for smart glasses and headsets.
Key Announcements and Highlights of the Google I/O 2025 Keynote

Gemini App Updates
Google revealed that Gemini Live’s screen-sharing and camera feature, built on Project Astra, will begin rolling out to Android and iOS this week. The feature lets users hold a live conversation with Gemini about whatever is on their screen or in view of their phone’s camera.
Gemini Live is also being integrated more deeply into Google’s app ecosystem. Soon, users will be able to get directions from Google Maps, set reminders, and interact with other services through Gemini’s conversational interface.
In addition, Gemini’s Deep Research mode is getting an upgrade, enabling users to upload files and images to receive more contextual and detailed responses.
Veo 3 and Imagen 4 Models
The keynote also introduced two new generative models: Imagen 4 and Veo 3. Imagen 4, Google’s next-generation text-to-image model, produces imagery with far greater detail and noticeably better rendering of text within images. Veo 3, Google’s newest text-to-video model, pushes video generation further with smoother motion and natively generated, synchronized audio that complements the visuals.
AI Mode for Search
Google also outlined new features for AI Mode, its AI-driven Search experience. AI Mode will deliver more interactive, personalized, and visually engaging results. For instance, users will be able to search complex topics such as finance or sports statistics and receive AI-generated charts that visualize the answer. For shopping queries, AI Mode will power capabilities such as a “try it on” feature that lets users see how a piece of clothing would look on them.
AI Mode will also become more personalized, tailoring results to a user’s past searches and drawing on connections to other Google services, starting with Gmail, for even more contextual responses. Google confirmed that AI Mode is rolling out to all US users this week, with broader expansion planned later.
Android XR
Google also showed off its Android XR platform, outlining how it intends to bring mixed-reality experiences to smart glasses and headsets. In a live demo, Android XR-powered smart glasses handled everyday tasks such as messaging, navigation, and real-time language translation through built-in lens displays.
Google has partnered with eyewear companies such as Gentle Monster and Warby Parker to create Android XR-based smart glasses, with more announcements coming in the near term.
Agentic AI Experience
Another dominant theme at the event was agentic AI: systems that can carry out multi-step tasks on a user’s behalf. With its experimental Project Mariner agent, Google demonstrated how tasks like buying tickets or shopping online can be completed entirely by the AI, without the user having to navigate websites themselves. Likewise, the Gemini app is gaining an Agent Mode that can pull in and summarize relevant web information in real time, streamlining workflows that would otherwise span multiple apps and steps.
Beam 3D Video Communication
During the keynote, Google also showcased Beam, its new 3D video communication platform. Beam combines software with dedicated hardware, including a six-camera array and a proprietary light field display, to render a highly realistic 3D image of the person on the other end of a call. The result is a rich video experience that recreates the feeling of meeting face to face. Google confirmed it is working with partners such as Zoom and HP to bring Beam to enterprises, with HP set to ship the first Beam-enabled devices soon.
Conclusion
Google I/O 2025 showcased the company’s bold strides in artificial intelligence, immersive experiences, and next-gen communication. With major updates to the Gemini app, breakthroughs in generative models like Imagen 4 and Veo 3, the rollout of AI Mode in Search, and advancements in Android XR, Google is positioning itself at the forefront of the AI-driven future. Features like agentic AI and Beam 3D communication highlight a shift toward more natural, intuitive digital interactions. As these innovations begin rolling out globally, users and developers alike can expect a more connected, intelligent, and immersive tech ecosystem powered by Google.