Search engines use automated crawlers, also known as robots or spiders, to scour the Internet's content and add it to their indices. Once a website's pages are in an index that is used to provide search results, the site is revisited on a regular basis to determine whether any new content has been added or whether there have been significant updates to currently indexed content. Since there is so much information available on the Internet, sites generally get re-crawled based on a variety of factors, including the frequency of content updates or even a directive in a page's code that asks the spider to return every "X" days. However, except on rare occasions, spiders will not re-crawl the entire website in a single visit.
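The page-level directive referred to above is usually the "revisit-after" meta tag, sketched below with an illustrative seven-day value. It is worth hedging here: major search engines are generally understood to treat this tag as a hint at best, and often to ignore it entirely in favor of their own scheduling signals.

<!-- Minimal sketch of the "revisit-after" hint; the 7-day value is illustrative only -->
<head>
  <meta name="revisit-after" content="7 days">
</head>

In practice, re-crawl frequency tends to be driven far more by the kinds of factors discussed in the thread, such as how often the content actually changes and how the engine values the page.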
A recent thread at Cre8asite Forums starts with a member asking "How do they do that?" He explains that he occasionally examines his log files and finds varying degrees of robot activity, and asks how search engines determine how deep to dig. An initial answer by Moderator softplus offers some good ideas and finishes with the thought that:
In the end, the main element I have seen for crawl frequency is page "value"; a page with good value is crawled more frequently than a page with little value... Even a static high-value page is crawled frequently, it doesn't make that much sense to me, but there must be reasoning behind it. Perhaps the frequency would be even higher if the content were to change frequently?

The member who asked the original question then poses the theory that the Google Toolbar could be involved, with crawls somehow directed toward pages where visitors spend more time. The thread then diverges slightly into an interesting conversation about how Google Sitemaps (now Google Webmaster Tools) works to help get pages crawled and indexed. Do you think you know why some pages are crawled and others are not? Join the discussion at Cre8asite Forums.