Can SEO Tools Track The Google Weather?

Jan 19, 2016 - 7:53 am


As most of you know, it is incredibly rare for me to have stories here from outside writers. But based on the story I wrote yesterday, titled Google: Some Of Those SEO Weather Tools Pick Up Wrong Signals, I wanted one of the more well-known people behind one of the most popular tools to write up his take. I did update the original story with some comments from the tool providers, but here is Dr. Pete Meyers.

Dr. Peter J. Meyers (AKA "Dr. Pete") is Marketing Scientist for Seattle-based Moz, where he works with the marketing and data science teams on product research and data-driven content. He has spent the past 4 years building research tools to monitor Google, including the MozCast Project, and he curates the Google Algorithm History, a chronicle of Google updates back to 2002. He can usually be found on Twitter @dr_pete.

Here is his story...

Over the past few days, multiple tools that measure fluctuations in Google search results have noticed high amounts of activity. This naturally leads the SEO community to push Google representatives for comment, and can create drama and misunderstandings. Recently, John Mueller suggested that Google trackers may be measuring the wrong signals - or, at least, measuring things Google doesn’t consider important. It's a complicated question, and I'm going to try to address it.

Full disclosure – I built the MozCast system, a Google "weather" tracking tool, in the spring of 2012 and still maintain it. I can't speak to the inner workings of other Google tracking tools, but I know many of the architects of those tools, and they include smart SEOs whom I admire. While I may not always agree with Google and its motivations as a corporation, I do believe that there are many people at Google, including the Search Quality team, who care about search quality and the user experience and are sincerely trying to build a better product every day.

Are We Measuring The Wrong Signals?

Most Google weather/flux trackers measure changes in ranking. Put simply, how much did rankings move over time? For MozCast, that's a daily measurement of page 1 - for other tools, it might be hourly, etc. We track these changes for one simple reason - it's what the SEO community cares about. We eat, drink and breathe rankings, and understanding how they move can help us determine if Google is making changes that could impact our own sites.
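To make that concrete, here is a minimal Python sketch of how a per-keyword flux score could be computed from two daily snapshots of page 1. It illustrates the general approach only – the function name, the drop-off penalty, and the averaging are assumptions for the example, not MozCast's actual formula.

```python
# Hypothetical sketch of a page-1 flux score; NOT MozCast's actual formula.
# Given yesterday's and today's top-10 URLs for a keyword, score how much
# the rankings moved. URLs that vanish are treated as falling off page 1.

def flux_score(yesterday, today, max_rank=10):
    """Average absolute change in position across yesterday's page-1 URLs."""
    today_pos = {url: i + 1 for i, url in enumerate(today)}
    deltas = []
    for i, url in enumerate(yesterday):
        old_pos = i + 1
        # If a URL dropped off page 1, treat it as falling to max_rank + 1.
        new_pos = today_pos.get(url, max_rank + 1)
        deltas.append(abs(new_pos - old_pos))
    return sum(deltas) / len(deltas) if deltas else 0.0

# Example: b.com dropped from #2 to #5, c.com moved up one spot.
yesterday = ["a.com", "b.com", "c.com"]
today = ["a.com", "c.com", "d.com", "e.com", "b.com"]
print(flux_score(yesterday, today))  # ~1.33
```

An overall "temperature" would then aggregate a score like this across every tracked keyword and scale it to a familiar range.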

What Signals Does Google Care About?

So, how could rankings be the wrong signal? I think it's a matter of perspective. Google tests algorithm updates for impact prior to launch, and they tend to look at overall quality factors, including the impact on things like Click-Through Rates (CTR). Please understand - I'm not saying CTR is a ranking signal; I'm saying that it's used in testing the potential impact of an update. Google cares about whether people are engaging more with results and uses metrics that they believe signal whether a result set is objectively better. Those metrics aren’t flawless, but that’s a discussion for another time – the key is that Google needs to look at the big picture across all results.

So, who's right? We both are. Google has to think across results as a whole. We have to think about how any given change impacts the ranking of our business, employer, or customer. It doesn’t matter how "white hat" you are – at the end of the day, you still care whether your rankings moved up or down, because that’s a proxy for search traffic. Google’s job is not the same as our job.

Can We Separate Signal from Noise?

Search results are noisy, especially for competitive, newsworthy queries. The web is changing by the second, and a high-volume search like "Trump" is going to fluctuate daily, even hourly. Worse yet, Google can make feature and UI changes that can lead to false alarms. For example, if Google starts showing a lot more vertical results (News, Images, In-depth Articles, and now Twitter), this can impact rankings, because each block of verticals consumes one organic position.

We deal with day-to-day noise primarily by looking at a lot of historical data. It's not a perfect solution, but over time we've been able to establish baselines that tell us what a "normal" day looks like. For MozCast, we tend to want a pretty large deviation from normal before we’ll declare an event happened, and even then we’re going to dig into that data and try to validate it.
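As a rough illustration of that baseline idea, the sketch below flags a day as a possible event only when its flux sits well above the recent norm. The window size and threshold are invented for the example; they are not MozCast's actual rules.

```python
# Hypothetical baseline check; the 30-day window and 2-sigma threshold are
# illustrative only. Flags a day as a possible "event" when its flux is
# well above what recent history says a "normal" day looks like.

from statistics import mean, stdev

def is_possible_event(history, today_flux, window=30, num_sd=2.0):
    """Compare today's flux against a rolling baseline of recent days."""
    baseline = history[-window:]
    if len(baseline) < 2:
        return False  # not enough history to establish a baseline
    mu, sigma = mean(baseline), stdev(baseline)
    return today_flux > mu + num_sd * sigma

# Example: a quiet stretch, then a day with roughly double the usual movement.
history = [1.1, 1.3, 1.2, 1.0, 1.4, 1.2, 1.3, 1.1, 1.2, 1.3]
print(is_possible_event(history, 2.6))  # True: well outside the baseline
```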

We also track other signals. MozCast, for example, tracks metrics like average page-1 result count and "diversity", along with SERP features. So, if we see ranking fluctuations and suspect another explanation, like a change in vertical results, we'll check those vertical results. Finally, we track possible UI changes and previously unseen features. While we only report one temperature, there are many other signals available for verification. I suspect other Google weather trackers have similar ways to slice and dice their data.
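A simplified version of those secondary checks might look like the following sketch, which computes a page-1 result count, a naive domain "diversity" ratio, and a tally of vertical blocks for a single SERP. The metric definitions here are assumptions for illustration, not MozCast's exact ones.

```python
# Simplified, hypothetical secondary metrics for a single SERP:
# page-1 result count, domain "diversity", and a tally of vertical blocks.

from urllib.parse import urlparse
from collections import Counter

def serp_metrics(organic_urls, feature_blocks):
    domains = [urlparse(u).netloc for u in organic_urls]
    return {
        "result_count": len(organic_urls),
        "domain_diversity": len(set(domains)) / len(domains) if domains else 0.0,
        "features": Counter(feature_blocks),  # e.g. News, Images, Twitter blocks
    }

print(serp_metrics(
    ["https://a.com/1", "https://a.com/2", "https://b.com/", "https://c.com/"],
    ["news", "images"],
))
# {'result_count': 4, 'domain_diversity': 0.75, 'features': Counter({'news': 1, 'images': 1})}
```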

Are We Measuring The Wrong Metrics?

This is a tougher question. Measuring how rankings change over time across a large data set is a surprisingly complex task. Any time you reduce a large amount of data to one number, you lose a lot of information and risk changing the meaning of that data. That’s the potential hazard of statistics.

Over the history of the MozCast project, we've tried a number of metrics, some much more complex than the public "temperature" that we currently use. Those metrics yield different temperatures, but what we've found is that they tend to have very similar patterns over time. In other words, the peaks and valleys in the data are roughly the same. So, at this point, we're fairly confident that our current metrics are representative of real patterns in the data.
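One simple way to see whether two candidate metrics share the same peaks and valleys is to correlate their daily series, as in the hypothetical sketch below. The numbers are invented, and this is just to illustrate the idea of comparing patterns rather than absolute values.

```python
# Illustrative check that two candidate flux metrics "move together" over
# time, even when their absolute scales differ.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Two made-up daily temperature series with the same spike on day 4.
metric_a = [68, 70, 69, 102, 71, 70, 67]
metric_b = [1.1, 1.2, 1.1, 2.4, 1.2, 1.2, 1.0]
print(round(pearson(metric_a, metric_b), 3))  # close to 1.0
```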

Are Our Keyword Sets Representative?

This is the trickiest and probably fairest criticism. In statistics, whenever we take a subset or sample of a population, we have to be mindful of whether that sample is representative of the entire population. Google searches aren’t a fixed population – people are trying new phrases every day, especially in the very long tail. Trying to pick a perfectly representative sample is nearly impossible.

MozCast tracks a fixed 10,000-keyword data set. We keep it fixed to remove some of the noise of using changing data sets (customer data, for example). This data set tends toward higher-volume, commercial keywords and is spread out evenly across 20 industry categories.

When the system was first built, it was a 50-keyword prototype. It doesn't take an advanced degree in statistics to know that's not a very reliable sample. Over time it grew – to 100, 500, 1K, 10K – and eventually we hit a point where a larger data set provided diminishing returns: the noise just wasn't decreasing much. The 10K is actually divided into 10 separate weather "stations", and we compare those stations daily (the temperature is usually based on the median of the stations).
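The station-and-median idea can be sketched in a few lines: split a fixed keyword set into groups, compute a flux reading per group, and report the median so that one odd station can't skew the overall temperature. The group sizes and numbers below are illustrative only.

```python
# Sketch of the "stations" idea: per-station averages, median across stations.
# Details (station assignment, scaling) are assumptions for illustration.

from statistics import median

def station_temperatures(keyword_flux, num_stations=10):
    """keyword_flux: list of per-keyword flux scores in a fixed order."""
    size = len(keyword_flux) // num_stations
    stations = [keyword_flux[i * size:(i + 1) * size] for i in range(num_stations)]
    return [sum(s) / len(s) for s in stations]

def overall_temperature(keyword_flux):
    # Median is robust: one station running hot doesn't move the headline number much.
    return median(station_temperatures(keyword_flux))

# Example with 20 keywords split across 10 stations of 2 keywords each.
flux = [1.0, 1.2, 0.9, 1.1, 1.3, 1.0, 1.2, 0.8, 1.1, 1.0,
        1.4, 1.2, 0.9, 1.0, 1.1, 1.3, 1.0, 1.2, 0.9, 1.1]
print(overall_temperature(flux))
```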

Even with all of that, and with other trackers taking similar precautions, any given sample could lead to a false alarm – seeing a signal that's only relevant to a few data centers, industries, or websites. So, as with any measurement, we look for corroboration. If MozCast alone indicates an update, and no one else does, I’m going to be skeptical. If MozCast indicates an update, other tracking tools confirm it, and people like Barry are detecting large amounts of "chatter" across SEOs and webmasters, then our confidence will be much higher. Every tool and observation is using a different sample, and the more consistency we see, the more we know we’re dealing with something real.

Why Do We Track the Algorithm?

This is a question people, even SEOs, ask me a lot. Originally, I built MozCast out of a core frustration – Google's Eric Schmidt revealed that there were 516 changes in 2010, and we confirmed/named fewer than 10. In 2012, Google had 635 "launches" based on an incredible 7,018 live experiments. While not all of those launches were algorithm updates in the sense we typically mean them, it's clear that we were (and are) missing a huge number of important changes and relying too heavily on Google's official confirmation.

Over time, though, the compliment I've heard the most is that tracking the Google weather helps people know they're not crazy. As SEOs, we need to know whether rankings changed because of something we did (did it help or hurt, and should we keep doing it?) or something beyond our control. Tracking Google helps answer that question.

I don't believe our primary job is to chase the algorithm. Our websites have much more important audiences than Google, and we should be building for those audiences. However, much of our traffic still depends on Google and its dominant search share, and so awareness is important. I do what I do so that, hopefully, you can spend more of your day doing your job.

Forum discussion continued at Google+ & Twitter.

 
