John Mueller of Google said it would be "a really bad idea which will cause all sorts of problems" if you block Google or other search engines from crawling pages that return a 404 server status code. He said "billions of 404 pages are crawled every day" by Google and that this is a normal part of the web.
One webmaster wrote that his "website automatically blocks user agents that get more than 10 404 errors, including Googlebot, so that's a problem." John responded on July 15, 2020 that this is a really bad idea, saying, "That sounds like a really bad idea which will cause all sorts of problems.. You can't avoid that Googlebot & all other search engines will run into 404s. Crawling always includes URLs that were previously seen to be 404."
He said in a different tweet, the same day, "Billions of 404 pages are crawled every day - it's a normal part of the web, it's the proper way to signal that a URL doesn't exist. That's not something you need to, or can, suppress."
So while you can clean up your 404 pages through other means, automatically blocking Google from accessing URLs that return a 404, without knowing how Google is reaching those URLs, is a really bad idea.
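To illustrate the behavior Mueller describes, here is a minimal sketch (not from the article) of a server that simply returns a 404 status code for URLs that don't exist, rather than blocking or rate-limiting the requester after some number of 404 hits. The KNOWN_PATHS set and the handler are hypothetical placeholders for however your site actually resolves pages.

```python
# Minimal sketch: answer unknown URLs with a 404 status code instead of
# blocking the client (e.g. Googlebot) for triggering "too many" 404s.
from http.server import BaseHTTPRequestHandler, HTTPServer

KNOWN_PATHS = {"/", "/about"}  # hypothetical list of pages that exist


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in KNOWN_PATHS:
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<html><body>Page content</body></html>")
        else:
            # Signal that the URL doesn't exist; no blocking, no counting.
            self.send_response(404)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<html><body>404 Not Found</body></html>")


if __name__ == "__main__":
    HTTPServer(("", 8000), Handler).serve_forever()
```

The point is that the 404 response itself is the signal search engines expect; there is nothing additional to suppress or defend against.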
Forum discussion at Twitter.