Google shares the same crawl budget between Googlebot, its organic web search crawler, and AdsBot, its paid ads crawler. Keep in mind, Google has dozens of crawlers and they all likely share the same crawl budget.
This is one of the rare instances where Google's Search team shares anything with the Google Ads team, I guess?
This came up because I saw Microsoft say that crawl budget is shared between organic and ads crawling. So I asked John Mueller of Google if the same is true at Google. John did not clearly say yes or no. What he said on Twitter was "The goal is to prevent crawling from taking down the server, essentially, so you have to count all the requests, regardless of the crawler type."
Here is the Q&A:
The goal is to prevent crawling from taking down the server, essentially, so you have to count all the requests, regardless of the crawler type.
— 🍌 John 🍌 (@JohnMu) November 17, 2020
Sounds like that is a yes to me?
I mean, I asked John later for a yes or no answer and he gave me the "it's not new" line - so yes, it is shared:
It's not new :-). Also the crawl stats show combined requests.
— 🍌 John 🍌 (@JohnMu) November 17, 2020
And yes, it is not new; Glenn Gabe covered John saying this back in March:
How are accurate is the crawl stats report in GSC? Via @johnmu: The report is accurate. It's based on what we pull out of our crawling logs. But, it includes urls fetched from other Google services (like Google Ads, product search, etc.) It's not just Gbot https://t.co/jKlOsn5vSj pic.twitter.com/K8vNgwSZ42
— Glenn Gabe (@glenngabe) March 10, 2020
And Dawn Anderson shared some screen shots of John saying this years ago:
And here is John’s reply from 7 Oct 2016 pic.twitter.com/YyvCnhpD6S
— Dawn Anderson (@dawnieando) November 17, 2020
It makes sense; Google does not want its systems, as a whole, to impact your site's performance.
I don't think this is some big conspiracy - it simply makes sense that Google shares these resources across all its crawlers, which essentially use the same robots.txt and Googlebot infrastructure. If not for internal Google efficiency, then for your own site's health.
But it is also a rare case where organic Google Search works with paid Google Ads.
John also added this:
Most sites are not restricted by crawl budget, so you wouldn't see any change there. Also, we crawl to refresh a lot of things where we don't necessarily expect new content (eg, 404s), so it's often easy to reprioritize without having a negative effect overall.
— 🍌 John 🍌 (@JohnMu) November 17, 2020
Do they share the same index? Generally not...
Requests from different user-agents tend not to get mixed in processing. For example, some sites specifically block some user-agents, and it would be bad to just fill in the blanks based on crawls from other user-agents.
— 🍌 John 🍌 (@JohnMu) November 17, 2020
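John's point about not mixing user-agents is easy to see in robots.txt. Here is an illustrative sketch (the paths are made up for this example) of how a site might block the paid AdsBot crawler while still letting the organic Googlebot in - which is exactly why Google can't just "fill in the blanks" from one crawler's fetches for another:

```
# Illustrative robots.txt - these paths are hypothetical.

# Block the paid AdsBot crawler entirely:
User-agent: AdsBot-Google
Disallow: /

# Allow the organic Googlebot everywhere except one directory:
User-agent: Googlebot
Disallow: /private/
```

One caveat worth knowing: Google documents that its AdsBot crawlers ignore the generic `User-agent: *` rules, so a site has to name AdsBot explicitly, as above, for the block to apply.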
Forum discussion at Twitter.