Revolutionizing Search: Google's AI-Powered Features Set to Transform User Experience

Google’s new AI Mode, together with agent capabilities and multimodal tools, signals significant changes ahead for search.

Google’s main focus is now on “AI Mode,” which has recently been made available to all users in the United States. First showcased last year, this mode is designed to handle longer, more intricate queries, enabling a more personalized, multimodal interaction akin to conversing with an agent. Google describes it as one of the most remarkable shifts in the company’s history.

The aim is to transform search into a more conversational experience. After an initial query, users can ask follow-up questions, upload images or graphics, and explore results through richer, more visual presentations of the data. The system runs on Gemini 2.5, Google’s most advanced AI model to date.
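AI Mode itself exposes no public API, but the follow-up-question pattern it describes can be approximated with Google’s public Gemini SDK (google-generativeai). Below is a minimal sketch, assuming an API key from Google AI Studio and the gemini-2.5-flash model id; the in-product AI Mode pipeline remains opaque and is not reproduced here:

```python
# Sketch of a conversational, multimodal session using the public Gemini
# SDK (google-generativeai). Illustrates the follow-up pattern only; it is
# not Google's AI Mode implementation.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # assumption: key from Google AI Studio
model = genai.GenerativeModel("gemini-2.5-flash")  # assumed model id

chat = model.start_chat(history=[])

# Initial long-form query.
first = chat.send_message(
    "Plan a weekend in Lisbon for two vegetarians who like modern art."
)
print(first.text)

# Follow-up question reuses the accumulated conversation context.
followup = chat.send_message("Which of those museums are free on Sundays?")
print(followup.text)

# Multimodal follow-up: attach an image alongside the text prompt.
photo = Image.open("storefront.jpg")  # hypothetical local file
visual = chat.send_message(
    ["What restaurant is this, and is it vegetarian-friendly?", photo]
)
print(visual.text)
```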

AI Mode also introduces new agentic capabilities derived from Google’s Project Mariner research. It can now perform tasks like booking tickets, reserving tables at restaurants, or monitoring prices. The AI scans offers in real time, fills out forms, and suggests top options, while users retain the final say on purchases or reservations.
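Google has not disclosed how the Mariner-derived agent is implemented, but the flow described here (scan offers, rank them, wait for the user’s go-ahead) maps onto a simple human-in-the-loop loop. The sketch below is purely illustrative; every function and data shape in it is hypothetical:

```python
# Purely illustrative human-in-the-loop agent loop: scan live offers, rank
# them, and act only after user confirmation. Nothing here reflects Project
# Mariner's actual implementation.
from dataclasses import dataclass

@dataclass
class Offer:
    vendor: str
    description: str
    price: float

def scan_offers(query: str) -> list[Offer]:
    """Stand-in for the agent browsing ticket or restaurant sites in real time."""
    return [
        Offer("TicketHub", "2x concert tickets, balcony", 120.0),
        Offer("SeatWave", "2x concert tickets, floor", 180.0),
    ]

def rank(offers: list[Offer]) -> list[Offer]:
    """Suggest top options, cheapest first."""
    return sorted(offers, key=lambda o: o.price)

def confirm(offer: Offer) -> bool:
    """The user retains the final say, as the article notes."""
    answer = input(f"Book '{offer.description}' from {offer.vendor} "
                   f"for ${offer.price:.2f}? [y/N] ")
    return answer.strip().lower() == "y"

best = rank(scan_offers("2 tickets, Saturday show"))[0]
if best and confirm(best):
    print("Proceeding to fill out the booking form...")  # agent would act here
else:
    print("No action taken.")
```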

Moreover, AI Mode can integrate with other Google services like Gmail and Google Drive, allowing for more tailored recommendations based on past searches, bookings, or documents. For instance, when planning a trip, the AI might suggest activities or dining options that match your preferences.

Google has not elaborated extensively on how AI Mode interacts with the broader Internet from which it derives its answers. Although AI-driven search does reference external sources, preliminary studies indicate that users often avoid clicking on these links, a trend that could undermine the business models of publishers and other website owners.

Additionally, Google is rolling out new AI-driven shopping tools. Soon, users will be able to upload photos to virtually try on clothes using a specialized image generation model tailored for the fashion industry. This marks the first substantial integration of a virtual fitting experience into search.

Once users find something they like, the new agent-driven checkout system can complete the purchase once the item reaches a price the user sets. The AI monitors price changes, alerts users to discounts, and can handle the entire purchase process, including payment. According to the company, this is supported by Google’s Shopping Graph, which tracks over 50 billion products and is updated hourly.
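How the checkout agent consults the Shopping Graph is not public, but the behavior described (track a price, alert on drops, buy at a user-set target) reduces to a polling loop. A minimal sketch, with hypothetical stand-ins for the price feed and the purchase step:

```python
# Illustrative price-watch loop for the agent-driven checkout described
# above: poll a product's price, flag discounts, and hand off to checkout
# once it reaches the user's target. The price feed and checkout call are
# hypothetical stand-ins; the Shopping Graph exposes no public API here.
import itertools
import time

# Fake feed standing in for the hourly Shopping Graph updates Google cites.
_prices = itertools.chain([89.99, 84.50, 79.00, 74.99], itertools.repeat(74.99))

def get_current_price(product_id: str) -> float:
    return next(_prices)

def checkout(product_id: str, price: float) -> None:
    print(f"Completing purchase of {product_id} at ${price:.2f}")

def watch(product_id: str, target_price: float, poll_seconds: float = 1.0) -> None:
    last = None
    while True:
        price = get_current_price(product_id)
        if last is not None and price < last:
            print(f"Price drop: ${last:.2f} -> ${price:.2f}")  # discount alert
        if price <= target_price:
            checkout(product_id, price)  # user pre-authorized this target
            return
        last = price
        time.sleep(poll_seconds)  # hourly in production, shortened for the demo

watch("sneakers-123", target_price=75.00)
```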

Google states that these changes are just the beginning of even more extensive transformations. The ultimate goal is to create a universal AI assistant that is multimodal, capable of not only answering questions but also performing tasks, anticipating needs, and enhancing productivity across all devices—from desktops and phones to XR headsets. Much of this work is being executed under Project Astra.

A new feature dubbed “Real-Time Search” enhances multimodal search capabilities. Users can point their camera at their surroundings and ask questions in real time, whether identifying an object, translating text, or solving everyday problems. The AI analyzes the camera image, explains what it sees, and provides links to additional resources. Initially developed as part of Project Astra, this feature is now accessible to users in the U.S. via a real-time button in Google Search.
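The production pipeline behind the camera feature is not public, but the basic pattern (capture a frame, send it with a question to a multimodal model) can be sketched with OpenCV and the public Gemini SDK. The model id and API key handling are assumptions carried over from the earlier sketch:

```python
# Sketch of the camera-based query pattern: grab a frame from the webcam
# and ask a multimodal model about it. This only illustrates the pattern;
# it is not the in-product "Real-Time Search" pipeline.
import cv2
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # assumption: key from Google AI Studio
model = genai.GenerativeModel("gemini-2.5-flash")  # assumed model id

cap = cv2.VideoCapture(0)  # default camera
ok, frame_bgr = cap.read()
cap.release()
if not ok:
    raise RuntimeError("Could not read a frame from the camera")

# OpenCV delivers BGR; convert to RGB before handing the frame to PIL.
frame = Image.fromarray(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))

response = model.generate_content(
    ["What is this object, and what is it used for?", frame]
)
print(response.text)
```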

[Source](https://the-decoder.com/google-pushes-ai-powered-search-with-agents-multimodality-and-virtual-shopping/)