Google’s annual developer conference, Google I/O, was held earlier this week, giving us insight into how Search is set to transform going forward.

Last year, Google officially introduced the idea of generative AI entering the search engine results pages, whereby users are encouraged to use Search as a ‘jumping-off point’ and engage in a more conversational way of searching. Since then, Google has been testing and refining this in Search Labs.

During Tuesday’s keynote, Google confirmed that its natively multi-modal large language model AI assistant, Gemini, will begin to power generative AI results in the US this week, with plans to roll this out to over a billion people by the end of the year.

AI Integrations

There are a few different ways we can expect to see Gemini being incorporated into search results. Gemini can now search, plan, research, and brainstorm for you using advanced, multi-step reasoning. Essentially, ‘Google can do the Googling for you’. 

There has been a lot of speculation about how the rollout of the AI Overview panel could negatively impact click-through rate (CTR), as organic results will naturally sit further down the initial SERP. That said, the panel does provide visibility opportunities, as it will pull the featured text from the source that answers the question most accurately, not necessarily the highest-ranking result.

In this article, Liz Reid, Google’s Head of Search, says that “we see that the links included in AI Overviews get more clicks than if the page had appeared as a traditional web listing for that query”.

According to Google’s Senior Director of Product, Hema Budaraju, there won’t initially be a way in Google Search Console to differentiate clicks and impressions triggered from an AI Overview from those triggered by a regular search result. This will make it difficult for marketers to report on, or analyse, the impact of AI in the SERPs.
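In the meantime, the most practical option is to watch aggregate trends. Below is a minimal sketch, assuming a service account key with read access to the Search Console property; the site URL and key file path are placeholders. It pulls daily clicks, impressions and CTR for US traffic from the Search Console API, so any shift after AI Overviews launch shows up in the overall numbers even without a dedicated AI Overview filter.

```python
# Sketch: pull daily US clicks/impressions/CTR from the Search Console API.
# Assumes a service account key with read access to the property;
# "sc-domain:example.com" and the key path are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
)
service = build("searchconsole", "v1", credentials=creds)

request = {
    "startDate": "2024-05-01",
    "endDate": "2024-05-31",
    "dimensions": ["date"],
    # Restrict to US traffic, where AI Overviews roll out first.
    "dimensionFilterGroups": [{
        "filters": [{
            "dimension": "country",
            "operator": "equals",
            "expression": "usa",
        }]
    }],
    "rowLimit": 1000,
}

response = service.searchanalytics().query(
    siteUrl="sc-domain:example.com", body=request
).execute()

for row in response.get("rows", []):
    date = row["keys"][0]
    print(f"{date}: clicks={row['clicks']}, impressions={row['impressions']}, "
          f"ctr={row['ctr']:.2%}, position={row['position']:.1f}")
```

Trending this daily series before and after the US rollout is about as close as reporting can get until (or unless) Google adds an AI Overview breakdown to Search Console.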

AI Overviews

AI Overviews are one of the biggest changes we will see integrated into searches. The idea of these overview panels is that users can ask more complex questions without having to break them down into smaller, digestible queries. Instead, we can expect quick responses which include anything that’s relevant to the topic searched. 

Overviews won’t be shown for all queries; Google says it will reserve them for queries where generative AI can add value beyond a normal search. These panels will include options to simplify the content or break it down into more detail.

AI Organised SERP

In addition to the AI Overviews panel, Google is also introducing a custom AI Organised SERP for queries which call for inspiration. These will be rolled out for dining and recipe queries first, but will extend to music, books, hotels, shopping and more in the long run.

This result will pull in perspectives and multiple content types, and will rely heavily on E-E-A-T signals to surface trustworthy results. Taking up considerably more space than the AI Overviews, the Organised SERP is where the multi-step reasoning and planning capabilities will really shine.

One of the examples used in I/O was a search for restaurants to celebrate a special occasion. The response was able to break down the query and cluster results using contextual factors, each under an AI-generated headline. For example, it considered the time of year and the weather at the user’s location to provide an extra cluster of rooftop patio restaurants alongside indoor ones, and plotted nearby restaurants on Google Maps with distance, rating and other details.

Another example shown was for meal planning. Gemini is able to use contextual reasoning to understand that you won’t want to eat the same meal three times a day for a week, and planning abilities to make the response interactive: you can swap options out by selecting a meal and relaying how you’d like it adapted.

Ask-With-Video Feature

The ask-with-video feature is a brand-new way of searching using advanced video understanding. Users will be able to record a video of a query, with a voiced question, to be answered in an AI Overview panel. Gemini has been designed to be multi-modal, as that’s how humans interact with and understand the world, and Google is now bringing this to the forefront of the SERPs.

Using the example from I/O, Google was shown a video of a record player where the arm was slipping off the record, along with the question, “why will this not stay in place?”. Gemini used deep visual understanding alongside speech models to generate an AI response with instructions and other media to answer the query.

The video was fed into Gemini’s long context window and broken down frame by frame to understand the motion of the drifting tonearm in relation to the spoken question. Gemini then identified the make and model of the record player, clarified the name of the part that was drifting (the user didn’t know it was called a ‘tonearm’, so couldn’t have searched for it by name), and searched the index for relevant information to return.

Credit: techpalacio.com
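The consumer Search feature isn’t something developers can call directly, but the same multimodal pattern is already exposed through the Gemini API. As an illustration only, here is a minimal sketch using the google.generativeai Python SDK; the API key, file path and model name are assumptions, and this is not the Search ask-with-video feature itself.

```python
# Illustration only: asking a question about a video via the Gemini API
# (google.generativeai SDK), not the Search ask-with-video feature itself.
# The API key, file path and model name are assumptions for this sketch.
import os
import time

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# Upload the clip; the Files API processes video before it can be used.
video = genai.upload_file(path="record_player.mp4")
while video.state.name == "PROCESSING":
    time.sleep(5)
    video = genai.get_file(video.name)
if video.state.name == "FAILED":
    raise RuntimeError("Video processing failed")

# Pass the video and the question together as one multimodal prompt.
model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content(
    [video, "Why will this not stay in place?"]
)
print(response.text)
```

The model receives the sampled frames and the question in a single prompt, which is essentially the pattern described above, just without Search’s grounding in the live index.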

Summary

In summary, the changes coming to the SERPs are going to shake up the norm for both Organic and Paid results, but we won’t fully understand the impact until they roll out in the wild.

It will be interesting to observe any changes over the coming weeks, especially in CTR, across US-based clients.

Got a project in mind?

We work with companies of all sizes, and our clients include many non-profits, charities and start-ups. Get in touch here.