Google has updated its manual actions documentation around site reputation abuse to expand on the next steps and actions you can take if you receive this manual action. Google now more clearly says that if you use the noindex rule, you should not block that content with your robots.txt file.
Glenn Gabe spotted this change and posted on X, "OK, I wrote a post recently about how disallowing via robots.txt is NOT a valid approach when dealing with site reputation abuse. Noindexing is the way to go. Looks like Google is making this clearer now!"
The documentation now provides four bullet points covering what you can do; previously, it just said:
Decide what to do with the violating content and take action. For example, moving the violating content to a new domain, using noindex to exclude the content from Search indexing, or redoing the content as first-party content. If you moved the content to a new domain, and you link from the old site to the new site, use the nofollow attribute in those links. Avoid redirecting URLs from the old site to the new site, as redirecting may introduce the site reputation abuse issue again.
Now it says:
- Move the violating content to a new domain. If you link from the old site to the new site, use the nofollow attribute in those links. Avoid redirecting URLs from the old site to the new site, as redirecting may introduce the site reputation abuse issue again.
- Use noindex to exclude the violating content from Search indexing. To make sure your noindex rule is effective, don't block that content with your robots.txt file.
- Redo the violating content as first-party content.
- Remove the violating content from your site.
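As a sketch of the noindex option above: the rule can be applied either with a meta tag in the page's HTML or with an HTTP response header (the tag and header names below are Google's documented robots directives; any paths implied are hypothetical examples):

```html
<!-- On each violating page: tells search engines not to index this page.
     Googlebot must be able to CRAWL the page to see this tag, which is
     why the same URLs must not be disallowed in robots.txt. -->
<meta name="robots" content="noindex">
```

For non-HTML files (like PDFs), the equivalent can be sent as a response header:

```text
X-Robots-Tag: noindex
```

Either way, the key point from Google's updated wording is that the affected URLs must remain crawlable for the directive to take effect.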
Here is a screenshot of the new page:
Here is the old page I captured back in January 2025:
OK, I wrote a post recently about how disallowing via robots.txt is NOT a valid approach when dealing with site reputation abuse. Noindexing is the way to go. Looks like Google is making this clearer now!
— Glenn Gabe (@glenngabe) March 7, 2025
Forum discussion at X.
Update: The Google Search Liaison commented on LinkedIn to explain:
If a site gets a manual action, we've actually seen content appearing in relation to violative practices. Manual actions don't happen proactively ahead of something. So if a site robots.txt blocks content that it *thinks* might violate our spam policies, and we never index that content because we can't crawl it -- great. We never saw it in the first place, so it can't cause a manual action to follow.

Again, however, if a site actually got a manual action, we saw the content. With site reputation abuse, to remove that action, the site needs to take one of the four corrective options listed. Which option is up to the site.
If they go the noindex route, they can't robots.txt the content. That's because robots.txt blocks content from being crawled but not necessarily being linked to, because we will link to some content that we only know about through other links. In addition, and more important, a robots.txt block prevents us crawling the pages to see that noindex has been applied -- so they may remain fully listed even though the site is trying to comply by using noindex.
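To illustrate the failure mode the Search Liaison describes, a robots.txt like this hypothetical one (the `/coupons/` path is made up for the example) would defeat a noindex rule on those pages, because Googlebot never fetches them and so never sees the directive:

```text
# Counterproductive when trying to comply via noindex:
# this blocks crawling, so the noindex tag on pages under
# /coupons/ is never seen, and the URLs can stay indexed
# through links from elsewhere.
User-agent: *
Disallow: /coupons/
```

Removing the Disallow line lets Googlebot recrawl the pages, see the noindex, and drop them from the index.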