Dawn Anderson reshared a tweet from Alexander Bondarenko, who sat in on an information retrieval conference where Microsoft demoed how Bing Search can generate clarifying questions for certain queries in the search results. The photo he posted on Twitter is a (not very good quality) screen shot of this in action.
Here is the photo:
As you can see, for a search on [blue screen] it asks which version of Windows you are running. For a search on [20th anniversary gifts] it asks if you are shopping for a husband or a wife. For a search on [nsfl] it asks if you mean not safe for life or northern state...
Dawn told me this is related to a research paper named Generating Clarifying Questions for Information Retrieval by Hamed Zamani, Susan Dumais, Nick Craswell, Paul Bennett and Gord Lueck:
Search queries are often short, and the underlying user intent may be ambiguous. This makes it challenging for search engines to predict possible intents, only one of which may pertain to the current user. To address this issue, search engines often diversify the result list and present documents relevant to multiple intents of the query. An alternative approach is to ask the user a question to clarify her information need. Asking clarifying questions is particularly important for scenarios with “limited bandwidth” interfaces, such as voice-only and small-screen devices. In addition, our user studies and large-scale online experiment show that asking clarifying questions is also useful in web search. Although some recent studies have pointed out the importance of asking clarifying questions, generating clarifying questions for open-domain search tasks remains unstudied and is the focus of this paper. A lack of training data for this task, even within the major search industry, makes it challenging. To mitigate this issue, we first identify a taxonomy of clarification for open-domain search queries by analyzing large-scale query reformulation data sampled from Bing search logs. This taxonomy leads us to a set of question templates and a simple yet effective slot filling algorithm. We further use this model as a source of weak supervision to automatically generate clarifying questions for training. Furthermore, we propose supervised and reinforcement learning models for generating clarifying questions learned from weak supervision data. We also investigate methods for generating candidate answers for each clarifying question, so users can select from a set of pre-defined answers. Human evaluation of the clarifying questions and candidate answers for hundreds of search queries demonstrates the effectiveness of the proposed solutions.
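To give a rough feel for the template-and-slot-filling idea the abstract mentions, here is a minimal, hypothetical Python sketch. The templates, the per-query "aspect" data and the function name are invented for illustration; they are not Bing's system or the paper's actual taxonomy, which is mined from query reformulation logs rather than hard-coded.

```python
# Toy sketch of template-based clarifying question generation with slot filling.
# Everything below (templates, aspect data, names) is illustrative only, not
# the paper's or Bing's real implementation.

# Question templates with placeholder slots.
TEMPLATES = [
    "What would you like to know about {query}?",
    "Which {slot} are you interested in?",
]

# Hypothetical aspect data keyed by query; a real system would derive these
# slots and candidate answers from search logs, not a static dictionary.
QUERY_ASPECTS = {
    "blue screen": {
        "slot": "version of Windows",
        "answers": ["Windows 10", "Windows 8", "Windows 7"],
    },
    "20th anniversary gifts": {
        "slot": "recipient",
        "answers": ["husband", "wife"],
    },
}

def generate_clarifying_question(query: str) -> tuple[str, list[str]]:
    """Return a clarifying question and candidate answers for a query."""
    aspects = QUERY_ASPECTS.get(query)
    if aspects is None:
        # Fall back to a generic question when nothing is known about the query.
        return TEMPLATES[0].format(query=query), []
    return TEMPLATES[1].format(slot=aspects["slot"]), aspects["answers"]

if __name__ == "__main__":
    for q in ["blue screen", "20th anniversary gifts", "nsfl"]:
        question, answers = generate_clarifying_question(q)
        print(f"[{q}] -> {question} {answers}")
```

In the paper's framing, questions produced this way serve as weak supervision for training supervised and reinforcement learning models, rather than being shown to users directly.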
Interesting...
Forum discussion at Twitter.