When is Google inspired by the user experience?


An overview of some patents filed by Google about the user experience, and of how this experience influences the search engine.

Many SEOs agree on the benefit of taking the user experience into account in SEO. After all, attracting traffic is good, but the end goal is to convert the visitor. On the other hand, there is still skepticism about the real impact of user signals on Google's results pages. So let's take a look at some patents filed by Google about the user experience, and at how this experience affects the search engine.

What is the user experience?

From a web point of view, the user experience is what a user feels as a result of interacting with a website. The goal of a good user experience is to provide an enjoyable visit that meets both the expectations of the user and those of the site owner in terms of conversions or branding. The user experience is closely linked to human psychology and to the emotions a site can generate.

This is why fields such as web ergonomics, web design, content style or page response time all have an influence. Google's founders, Sergey Brin and Larry Page, have always aimed to offer relevant results and have long been interested in building mechanical, objective systems that can adapt to this subjective value.

Technical constraints have not always made this possible, or at least not in a satisfactory way. But the evolution of the algorithms and today's computing power make it a factor that really counts in SEO, and it is not completely new...

The random surfer


One of Google's criteria for ranking a page is the number of links it receives from other pages (internal or external). In this system, a page is considered important if other pages link to it, and the weight of a link, or the juice it transmits, depends on the popularity of the linking page. There is therefore a quantitative aspect both in the number of links received and in the popularity of the linking pages.

The random surfer theory consists in imagining a user who navigates from page to page, following links at random. The likelihood that this random surfer ends up on a given page represents its PageRank: the more links point to a page, the more likely the random surfer is to visit it.

This system, like the definition of PageRank itself, dates back to the origin of Google in 1998. Its problem is that every link on a page carries the same weight and passes the same juice to the linked pages.
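To make this concrete, here is a minimal sketch of the random surfer model in Python, where every outgoing link on a page receives an equal share of that page's rank. The example graph, damping factor and iteration count are illustrative values, not Google's actual parameters.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}

    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outgoing in links.items():
            if not outgoing:
                # Dangling page: spread its rank evenly over all pages
                for other in pages:
                    new_rank[other] += damping * rank[page] / n
            else:
                # Random surfer: every link gets the same share of the rank
                share = rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += damping * share
        rank = new_rank
    return rank

graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
print(pagerank(graph))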

The random surfer method has the advantage of being easy to calculate and easy to deploy on Google's machines. And although this first version of PageRank made the search engine successful, the model has reached its limits, as shown by the abuse of netlinking strategies to manipulate the results pages.

The reasonable surfer

In 2004, Google filed a patent on the value of links and the influence of user behavior.

This document describes a model based on a reasonable surfer who is more likely to follow certain links than others. The reasonable surfer model is thus closer to "normal" user behavior, because the links on a page no longer all have the same probability of being followed. The patent also explains that a new rank is associated with the page, a rank I have decided to call the "behavior rank".

To assign this probability of clicking on a link, this system is based on two types of indicator:

  1. The characteristics of a link
  2. User behavior data, also called "usage criteria".

The characteristics of a link

The document gives us some examples of characteristics that can be taken into account, not all of which are necessarily used. Thus, according to this patent:

  • The font size of the link text
  • Its position in the page
  • The semantic context of the link
  • The number of words in the link
  • etc.

User behavior data

At the time, these user data were obtained through the Google Toolbar; today they are collected via Google Chrome. Still according to this patent filed in 2004, the usage information collected about a page can include:

  • Navigation information, i.e. which links are selected,
  • The interests of users,
  • How often a link is clicked,
  • How often no link is clicked on the page,
  • The language of the user,
  • Etc.

The weight of a link, and its ability to transmit PageRank (or juice) to another page, depends on these criteria. Moreover, a page can be discovered, crawled and indexed by Google solely through its usage data, even if no link points to it (and it appears in no sitemap).
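As a rough illustration of the reasonable surfer idea, the sketch below splits a page's transmittable rank in proportion to a per-link weight built from link characteristics and usage data. The feature names (position in the main content, font size, observed click-through rate) and the scoring function are assumptions made for the example; the patent does not disclose an actual formula.

def link_weight(link):
    """Assumed scoring of a link from its characteristics and usage data."""
    score = 1.0
    if link.get("in_main_content"):
        score *= 2.0  # a prominent link is assumed more likely to be followed
    score *= link.get("font_size", 12) / 12  # larger anchor text, higher probability
    score *= 1.0 + 5.0 * link.get("observed_ctr", 0.0)  # usage data, when available
    return score

def distribute_rank(page_rank, outgoing_links):
    """Split a page's transmittable rank proportionally to the link weights."""
    weights = [link_weight(link) for link in outgoing_links]
    total = sum(weights) or 1.0
    return {link["target"]: page_rank * weight / total
            for link, weight in zip(outgoing_links, weights)}

links = [
    {"target": "B", "in_main_content": True, "font_size": 14, "observed_ctr": 0.12},
    {"target": "C", "in_main_content": False, "font_size": 10, "observed_ctr": 0.01},
]
print(distribute_rank(1.0, links))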

Usage criteria

If SEO had to be cut into three main parts, which is necessarily reductive, it could look like this:

  • On-site criteria: semantic relevance of a page for a given query, number of links to the page from the same domain, position of the page in the structure of the site...
  • Off-site criteria: number of links from other sites, number of times the site is mentioned on the web in relation to a given query (co-citation), links from thematic and authoritative websites.
  • Usage criteria: number of times a result is clicked on a results page, number of clicks on a link, time spent on a page, rate of return to the results page, and other data related to the user experience...

Usage criteria are difficult for a webmaster or an SEO to manipulate, so they represent a genuine relevance signal for Google. But for them to be really meaningful, Google needs a significant amount of user data for a given query and page. In practice, these usage criteria are therefore mainly used on generic or semi-generic expressions. The long tail is also impacted by the user experience, though; the Hummingbird algorithm is a good example.

Which user signals does Google take into account?
User Search History

In 2005, Google filed a patent on how it re-orders results based on mouse activity. This document tells us that analyzing mouse activity on Google's results pages allows it to fine-tune the order of results, including OneBox results (rich results such as Google links, sitelinks, etc.).

Another patent filed in 2005 makes it possible to determine which type of result is the most suitable based on user data, the query, and the format of the results page. It is directly linked to universal search results (images, videos, maps...).

In 2007, a very interesting patent focused particularly on the user experience within a site. Here is a translated extract from this patent:

Under certain circumstances, the usage information used to "profile" a user may include the number of clicks or visits to a site, a page, or a group of sites over a given period of time. Other characteristics of user behavior can also be used, such as the time a user spends interacting with the site, the share of the site browsed by the user, the actions taken by that user while visiting the site, and the activity following these interactions with the site.
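The metrics quoted above are easy to picture as a small usage profile. The sketch below aggregates a hypothetical session log into the kind of figures the patent mentions (visits, time spent, share of the site browsed); the log format and field names are my own assumptions for illustration.

def site_usage_profile(session_log, total_pages_on_site):
    """session_log: list of page views, each with a 'page' and 'dwell_seconds'."""
    visits = len(session_log)
    time_spent = sum(view["dwell_seconds"] for view in session_log)
    pages_seen = {view["page"] for view in session_log}
    return {
        "visits": visits,
        "time_spent_seconds": time_spent,
        "share_of_site_browsed": len(pages_seen) / total_pages_on_site,
    }

log = [
    {"page": "/home", "dwell_seconds": 30},
    {"page": "/pricing", "dwell_seconds": 95},
    {"page": "/home", "dwell_seconds": 10},
]
print(site_usage_profile(log, total_pages_on_site=40))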

There are of course other patents referring to the use of data on user behavior and certainly others to come.

How do some ugly sites still manage to rank so well?

It should be borne in mind that all of Google's ranking criteria are relative: relative to a semantic universe and to the competitive landscape. A site A with a poor web design but rich, well-optimized content may well rank if no site in the same niche combines great design, good content and solid optimization. But it also means that site A leaves an opening for its competitors to overtake it.

Conclusion

As a result, ergonomics, site design, page load time, even content marketing: anything that can affect the user experience also has an impact on organic search rankings.
However, it is clear that this is not yet perfect; the proof lies in the Panda and Penguin filters, which penalize, in particular, over-optimization and "abusive" linking. If link usage were taken into account that well, Google would not need to roll out filters that ultimately compensate for the shortcomings of its main algorithm. The reason, as already explained, is the need to have enough data, but also that Google cannot do without its other ranking criteria.

