Google's Gary Illyes said at SMX Advanced last night that Google knows of over 30 thousand trillion URLs on the web. But that number is not new; I thought it was, but it was actually cited back in 2013 when Google released the Inside Search portal.
What Gary did say, though I don't think he meant it this way, was that Google doesn't have the server capacity to store them all. He went on to explain crawl capacity, PageRank, and how Google decides what to index and what not to - all basic stuff for SEOs.
I don't think Gary meant to say that Google lacks the ability to store all the URLs, but rather that Google chooses not to.
Travis Wright tweeted what I thought was a pretty accurate quote from Gary last night at SMX:
There are Thirty Thousand Trillion URLs in the Google index, but they don't have the server capacity to store all of them. @methode #smx
— Travis Wright (@teedubya) June 3, 2015
But Gary later responded on Twitter that the quote was incorrect, writing:
@teedubya that's incorrect: we know about that many URLs, we don't have them all indexed
— Gary Illyes (@methode) June 3, 2015
Either way, crawl priority is important for SEOs to understand. Google doesn't want to bother indexing and storing web pages that are not useful.
Forum discussion at Twitter.