Google’s annual I/O event returned this year, pushing the boundaries of AI further than ever before. What began with Google CEO Sundar Pichai’s keynote, highlighting the tech giant’s milestones, quickly built into an exciting showcase of AI-powered advancements and new generative AI tools. From the new AI Mode in Google Search and Gemini Live, to the launch of Veo 3, Imagen 4, and Flow, to the unveiling of Android XR and Samsung’s Project Moohan – Google pulled one AI rabbit after another out of its hat. Of all that was said and shown, this blog brings you the 8 biggest AI breakthroughs and launches announced at Google I/O 2025.
Google has taken video calling to a whole new level with Google Beam – an evolution of Project Starline that offers immersive 3D video communication. The system captures the speaker from 6 different camera angles, tracking their movement at 60 fps. An AI video model then fuses these streams into a realistic 3D rendering of the speaker, making it feel like the person is right in front of you. Aimed at making virtual interactions feel more lifelike, Google Beam will first be available to Google Meet users in the US and then in other countries.
Complementing this, Google Meet now features real-time speech translation. Powered by AI, the feature picks up your dialect, tone, and nuances to deliver accurate translations in real time. It initially supports English and Spanish, with more languages planned soon, facilitating seamless multilingual conversations during video calls. The feature has already rolled out to US users and will soon launch worldwide; Google Workspace enterprise users will also get access toward the end of this year.
The biggest announcement at Google I/O 2025 has got to be the new AI Mode in Google Search. Building on the widespread adoption of AI Overviews in Google Search, Google has now brought the power of AI directly to the search bar with AI Mode. The feature lets users search conversationally, just as they would with ChatGPT, Gemini, or any other AI chatbot.
With an expanded search window, users can add more context and ask multiple questions within a single search query. Using a technique Google calls query fan-out, Search breaks the query into multiple smaller sub-queries and categories and runs parallel searches on all of them. With AI-powered reasoning, it then stitches all the information together into a comprehensive, contextual response, transforming Google Search into a far more interactive experience.
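To make the idea concrete, here is a minimal sketch of the query fan-out pattern in Python. The decomposition and search helpers are purely illustrative stand-ins – Google has not published how AI Mode implements this internally.

```python
import asyncio

def fan_out(query: str) -> list[str]:
    # In AI Mode, a model decomposes the query; here we hard-code
    # illustrative sub-queries just to show the shape of the technique.
    return [
        f"{query} - overview",
        f"{query} - comparisons",
        f"{query} - reviews",
    ]

async def run_search(sub_query: str) -> str:
    # Stand-in for a real retrieval backend.
    await asyncio.sleep(0.1)  # simulate network latency
    return f"results for {sub_query!r}"

async def ai_mode_search(query: str) -> str:
    sub_queries = fan_out(query)
    # Fire all sub-searches in parallel rather than one after another.
    results = await asyncio.gather(*(run_search(q) for q in sub_queries))
    # A reasoning model would synthesize these; we simply join them.
    return "\n".join(results)

print(asyncio.run(ai_mode_search("best mirrorless camera under $1000")))
```

The win of fanning out is latency: the response time is bounded by the slowest sub-search rather than the sum of all of them, while the final synthesis step gives the answer its conversational shape.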
Google Search’s new AI Mode offers 7 new features.
The AI Mode on Google Search is currently being rolled out to US users. Google plans to make it available in other countries soon.
The biggest Gemini chatbot update announced at the Google I/O event this year was Gemini Live. An extension of Google’s Project Astra, Gemini Live is designed to be a universal AI assistant. Users can have live video conversations with the AI-powered Gemini chatbot and get real-time assistance for anything and everything: interactive camera conversations, on-the-go translations, and screen or camera sharing for contextual help. The feature is now available in over 45 languages across 150+ countries, for both Android and iOS users.
At Google I/O 2025, the company demoed Agent Mode – a Project Mariner-based AI agent with computer-use capabilities. This superagent can handle up to 10 tasks at once: making calls, searching the web, finding YouTube videos, offering suggestions, answering questions, and much more. It is also intelligent enough to learn a workflow from one task and apply it to similar tasks, using a technique called ‘teach and repeat’.
Agent Mode is designed to be personal, proactive, and powerful. It can access your calendar, see upcoming events, and set reminders or prep you for an event before you even ask. This level of autonomy and intelligence hasn’t been seen in general-use AI agents before, and it helps automate a number of everyday tasks like scheduling, note-taking, and interview prep.
Google took this a step further by integrating it with Google Search’s AI Mode to give users an agentic search feature. With it, users can run multiple web searches and search-based tasks in the background, which the agent completes autonomously.
For example, you can use this feature to set up agentic checkout for online shopping. Once you find a product you wish to buy, agentic search can track it against your budget: Google Search keeps monitoring the price across websites, and once it drops into your range, it places the order with just one tap. You can even pay via Google Pay, again with a single tap.
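Conceptually, this is a watch-and-trigger loop. The sketch below is purely hypothetical – `fetch_price` and `place_order` are invented stand-ins for the agent’s tools, not any published Google API.

```python
import random
import time

def fetch_price(product: str) -> float:
    # Hypothetical: a real agent would query retailers' listings.
    # Here we simulate a fluctuating price.
    return round(random.uniform(80, 120), 2)

def place_order(product: str, price: float) -> None:
    # Hypothetical: stands in for the one-tap checkout step.
    print(f"Ordering {product} at ${price:.2f}")

def watch_and_buy(product: str, budget: float, poll_seconds: float = 1.0) -> None:
    """Poll the price and trigger checkout once it falls within budget."""
    while True:
        price = fetch_price(product)
        if price <= budget:
            place_order(product, price)
            break
        time.sleep(poll_seconds)

watch_and_buy("noise-cancelling headphones", budget=90.0)
```

The real feature presumably surfaces a confirmation prompt rather than ordering outright, but the monitor-compare-trigger structure is the core of the idea.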
On the interoperability front, Google announced that the Gemini API and SDK will be able to use tools defined with Anthropic’s Model Context Protocol (MCP), complementing Google’s own Agent2Agent protocol. Google will also soon roll out Project Mariner’s computer-use capabilities to developers via the Gemini API. Meanwhile, an experimental version of the multitasking Agent Mode is now available to Google AI Ultra subscribers in the US.
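As a rough sketch of what that looks like with the Python google-genai SDK: an MCP client session can be passed to the model as a tool. Note that MCP support was announced as experimental, and the model name and MCP server command below are assumptions for illustration.

```python
import asyncio
from google import genai
from google.genai import types
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

client = genai.Client()  # reads the API key from the environment

# Any stdio-based MCP server works here; this command is a placeholder.
server_params = StdioServerParameters(command="npx", args=["-y", "your-mcp-server"])

async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            response = await client.aio.models.generate_content(
                model="gemini-2.5-flash",  # assumed model name
                contents="What's the weather in London right now?",
                config=types.GenerateContentConfig(
                    tools=[session],  # the MCP session is exposed as a tool
                ),
            )
            print(response.text)

asyncio.run(main())
```

Passing the session in `tools` lets the SDK handle tool discovery and invocation automatically, so the model can call whatever the MCP server exposes without hand-written glue code.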
Google announced some of its latest and most advanced generative AI tools at the Google I/O 2025 event, including Veo 3, its newest video generation model with native audio; Imagen 4, its latest image generation model; and Flow, an AI-powered filmmaking tool built on top of them.
These advanced tools are now available to Google AI Pro and Ultra plan subscribers and will gradually be integrated into the Gemini chatbot.
Google I/O 2025 was arguably more about Gemini than about AI itself, as CEO Sundar Pichai’s on-stage word counter playfully proved. Several announcements about Google’s Gemini chatbot were made at the event, including updates to Deep Research and Canvas and integrations with Google’s latest generative AI tools.
Here’s a list of all the Gemini updates revealed at the Google I/O event this year.
These updates will be rolled out to subscribers in the coming weeks.
Android XR is Google’s first Android platform built for extended reality. Powered by Gemini, it is designed to deliver immersive, real-time experiences across headsets and glasses. Samsung’s Project Moohan, a newly designed XR headset, will be the first device to ship with Android XR, while Gemini-powered Android XR smart glasses offer features like real-time navigation, translation, and camera live-streaming, aiming to enhance how users interact with the digital world.
With the headset, you can watch live events from home as if you were sitting in the front row of the stadium, and its 3D Google Maps view can visually take you places in real time for a strikingly realistic experience. The smart glasses, meanwhile, come with memory and can answer questions, click pictures, make bookings, and even translate audio to text – real-time AI assistance, much like a human companion. And unlike most other smart glasses that come in a single sci-fi-inspired design, these will be offered in various styles by Gentle Monster and Warby Parker.
Apart from all these launches and updates, Google also introduced two new subscription plans at its annual event: Google AI Pro ($19.99/month) and the premium Google AI Ultra ($249.99/month).
Google I/O 2025 was quite the show, giving us all a glimpse into Google’s ambitious AI plans. From enhancing everyday tools like Google Search and Google Meet to building advanced creative tools like Flow and Veo 3, Google’s innovations aim to redefine the boundaries of AI. As these updates and models roll out, I’m sure AI will become an integral part of everyday life. Be it Project Astra, Project Mariner, or Android XR – these developments mark a significant step toward a more intuitive and immersive digital future, powered by AI.