I have always had a thing for spiders. Not the creepy, crawly kind, but the ones made of bits and bytes that scour the web for new documents to download and index. They are so predictable, yet they surprise you when you least expect it. How the hell did they do that, or find that page? Many a webmaster has scratched their head in disbelief at a crawler at one time or another. There is a thread on WebmasterWorld asking new questions about how search engine crawling technology works, and about the bare-bones infrastructure a search engine uses to go from finding a page to ultimately listing it in its search results. The thread covers the nuts and bolts of the technology, and it also updates previously known information with new questions and answers.
So how do search engine robots work and what comprises them?
Spider: a robotic, browser-like program that downloads web pages.
Crawler: a wandering spider that automatically follows links found on pages.
Indexer: a blender-like program that dissects web pages downloaded by spiders.
The Database: a warehouse of the pages downloaded and processed.
Search Engine Results Engine: digs search results out of the database.
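To make those definitions concrete, here is a minimal sketch of how the five pieces fit together, assuming nothing beyond Python's standard library. The names (fetch_page, extract_links, index_page, crawl, search) are illustrative, not from the thread, and real engines are vastly more sophisticated:

```python
import re
import urllib.request
from collections import defaultdict, deque

database = {}                      # the "warehouse" of downloaded pages
inverted_index = defaultdict(set)  # word -> urls, built by the indexer

def fetch_page(url):
    """Spider: download a page, much as a browser would."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

def extract_links(html):
    """Crawler: find links on the downloaded page to follow next."""
    return re.findall(r'href="(http[^"]+)"', html)

def index_page(url, html):
    """Indexer: dissect the page into words and file them away."""
    database[url] = html
    for word in re.findall(r"[a-z]+", html.lower()):
        inverted_index[word].add(url)

def crawl(seed, max_pages=10):
    """Drive the spider from page to page via discovered links."""
    queue, seen = deque([seed]), {seed}
    while queue and len(database) < max_pages:
        url = queue.popleft()
        try:
            html = fetch_page(url)
        except OSError:
            continue  # skip pages that fail to download
        index_page(url, html)
        for link in extract_links(html):
            if link not in seen:
                seen.add(link)
                queue.append(link)

def search(word):
    """Results engine: dig matching pages out of the database."""
    return sorted(inverted_index[word.lower()])
```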
Pageoneresults takes it a step further by creating this thread to ask new questions about search engine robots, for those not already familiar with how they work.
1. Do robots accept cookies?
2. What happens if my site forces a cookie?
3. Do robots execute JavaScript functions?
4. Could I be doing something technically that is stopping a robot from indexing my site?
5. How do robots interpret my page?
6. In what order do robots index my page? What is the very first step that a robot takes?

Continued discussion on WebmasterWorld - How Do Robots Work?
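As a rough illustration of the first three questions, here is a sketch of the kind of plain HTTP fetch a basic robot performs, again assuming only Python's standard library (the bot name is hypothetical). No cookie jar is attached, so a Set-Cookie header is received but never sent back, and any JavaScript in the body arrives as inert text that is never executed:

```python
import urllib.request

req = urllib.request.Request(
    "http://example.com/",
    headers={"User-Agent": "ExampleBot/1.0"},  # hypothetical bot name
)
with urllib.request.urlopen(req, timeout=10) as resp:
    set_cookie = resp.headers.get("Set-Cookie")  # seen, but not honored
    html = resp.read().decode("utf-8", errors="replace")

print("Server tried to set a cookie:", bool(set_cookie))
print("<script> appears only as text:", "<script" in html.lower())
```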