Will Google Use Facts to Rank Sites in the Future?

If the New Scientist magazine is correct, Google's researchers will start judging the value of websites in a markedly different way. Instead of looking primarily at how many links lead to a particular site, Google plans to place greater value on the accuracy of a page's information when determining its rank within its SERPs.

Currently, a site with great, accurate content can languish far behind sites that are riddled with errors, simply because the latter have a greater number of other sites linking back to them. Google has always judged websites that are linked to by many other sites as trustworthy. However, because link exchanges can be arranged between individual webmasters, and consequently boost sites that don't have great content, this method has never been infallible. It can also be argued that it's unfair to penalise websites that are great sources of information just because few other sites link to them.

New Websites Could Achieve Higher Rankings More Quickly

There are two words that we should familiarise ourselves with here: 'exogenous' and 'endogenous'. Exogenous signals are external to a web page, i.e. links, while endogenous signals come from within the page itself, including how accurate its information is. If Google starts treating content as the most important measure of a website's worth, then new websites will also have a better chance of making an impact quickly. Under Google's current ranking method, it nearly always takes time for a new website to achieve prominence within the SERPs, because a new site is unlikely to accumulate backlinks quickly. If accurate content does start to carry more weight with Google, then even some established websites may see their rankings adversely affected.

Google’s Knowledge Vault

In what seems like something out of science fiction, Google has already been gathering all the knowledge it can. This is being done via something called the Knowledge Vault, and, because of the immense amount of knowledge it has already collected, Google's bots can refer back to it to check the accuracy of the information on a web page. Any new information that the bots find can then be added to what is stored in the Knowledge Vault.
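To make that check-and-extend idea concrete, here is a minimal sketch of how extracted facts could be compared against a stored collection and new ones added back. Facts are modelled as the subject–predicate–object 'triples' mentioned in the next section; the data, function name and in-memory store are purely illustrative assumptions, not Google's actual Knowledge Vault implementation.

```python
# A minimal sketch of the check-and-extend loop described above.
# The store, facts and function are illustrative assumptions only.

# Facts are stored as (subject, predicate, object) triples.
knowledge_vault = {
    ("Paris", "capital_of", "France"),
    ("Mount Everest", "height_m", "8849"),
}

def check_page_facts(extracted_facts):
    """Compare facts extracted from a page against the stored vault."""
    confirmed, contradicted, unknown = [], [], []
    for subject, predicate, obj in extracted_facts:
        known_objects = {o for s, p, o in knowledge_vault
                         if s == subject and p == predicate}
        if (subject, predicate, obj) in knowledge_vault:
            confirmed.append((subject, predicate, obj))
        elif known_objects:
            contradicted.append((subject, predicate, obj))
        else:
            unknown.append((subject, predicate, obj))
    return confirmed, contradicted, unknown

page_facts = [
    ("Paris", "capital_of", "France"),    # matches the vault
    ("Paris", "capital_of", "Germany"),   # contradicts the vault
    ("Berlin", "capital_of", "Germany"),  # new, previously unseen
]
confirmed, contradicted, unknown = check_page_facts(page_facts)

# New, previously unseen facts can then be folded back into the store.
knowledge_vault.update(unknown)
```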

Promising Early Results

Though it is still early days, Google's initial tests on factual accuracy and inaccuracy across websites have proved promising. By extracting facts as 'triples' (a subject, a predicate and an object), Google has pulled information from around 5.6 million websites and almost 120 million relevant pages. So far, Google has been happy with how reliable its method has proved to be.
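As a very rough illustration of how factual accuracy could be turned into a score (a toy calculation under assumed data, not the scoring model described in Google's research), one could measure the share of a site's extracted triples that agree with a trusted reference store:

```python
# A toy "accuracy score" for a site: the fraction of its extracted
# triples that agree with a trusted reference store. Purely illustrative.

reference_facts = {
    ("Paris", "capital_of"): "France",
    ("Mount Everest", "height_m"): "8849",
    ("Water", "boiling_point_c"): "100",
}

def accuracy_score(site_triples):
    """Return the share of checkable triples that match the reference."""
    checkable = [(s, p, o) for s, p, o in site_triples
                 if (s, p) in reference_facts]
    if not checkable:
        return None  # nothing on this site can be verified
    correct = sum(1 for s, p, o in checkable if reference_facts[(s, p)] == o)
    return correct / len(checkable)

site_a = [("Paris", "capital_of", "France"),
          ("Water", "boiling_point_c", "100")]
site_b = [("Paris", "capital_of", "Germany"),
          ("Water", "boiling_point_c", "100")]

print(accuracy_score(site_a))  # 1.0 - every checkable fact matches
print(accuracy_score(site_b))  # 0.5 - one of two checkable facts matches
```

A site full of verifiable, correct statements would score highly on a measure like this, regardless of how many backlinks it has.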

A Radical Change

The Panda (2011) and Penguin (2012) algorithm updates radically changed how pages were ranked, and many websites were affected, but a shift towards judging content accuracy would be the most radical change Google has made yet.

 
