Most of you know about my little project, The Search Engine Relevancy Challenge. Beyond a user's perceived relevance of a search response, how do the PhDs and scientists define relevancy? I particularly like the way Orion clearly described three ways to measure relevancy in a thread started by another researcher named nanocontext, titled The relevance of "relevance". Orion said that "relevancy has a lot to do with perception" and then laid out three types of "perception":
1. Which content is relevant according to the user's perception?
2. Which content is relevant according to the scoring functions used by a machine (an IR system or search engine)?
3. Which documents, already scored and prequalified as relevant by a search engine's algorithm, are actually relevant according to the user's perception and to the query that was used?
Orion says that we are trying to measure number three with RustySearch (by the way, please make this your default browser for the next two weeks to help the study). Nanocontext believes that "#3 is the most critical question, because that's where the money is." I am also told that I should refresh my memory on the topic of "precision versus recall," which I promise to do, and I will write a brief entry on it here. This thread, of course, piqued my interest.
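While I do my promised homework on precision versus recall, here is a minimal sketch of the two measures as I understand them. The function name, document IDs, and numbers are my own illustration, not anything from the thread: precision is the fraction of what the engine returned that was actually relevant, and recall is the fraction of all the relevant documents that the engine managed to return.

```python
def precision_recall(retrieved, relevant):
    """Return (precision, recall) for a single query.

    precision = |retrieved AND relevant| / |retrieved|
    recall    = |retrieved AND relevant| / |relevant|
    """
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant  # documents the engine got right
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical example: the engine returns 5 documents, 3 of which are
# truly relevant, out of 6 relevant documents that exist in total.
retrieved = ["d1", "d2", "d3", "d4", "d5"]
relevant = ["d1", "d3", "d5", "d7", "d8", "d9"]

p, r = precision_recall(retrieved, relevant)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.60 recall=0.50
```

Notice the tension Orion's point #3 hints at: an engine can pad its result list to boost recall at the cost of precision, but only the user's perception decides whether the returned documents were actually relevant.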