Jason Barnard does these awesome interviews on his YouTube channel, and one he recently released was with Nathan Chalmers, a Program Manager on the Bing Search Relevance Team at Microsoft. Here is the video interview, along with the tidbits Glenn Gabe summarized on Twitter.
Tidbits from the interview:
- Darwin: The name of the algorithm that does rich element replacements
- Six unique teams work on different areas of the search results page
- Bing uses various metrics to track searcher satisfaction of the search results
- The main ranking algorithm helps determine whether the search results get richer elements
- Answer placement is mostly based on intent type and market
- Rankings are machine learning based; no one hand-codes the rankings, rather they code the machine learning models
- and much more, with GIFs from Glenn
It's Breakfast with Bing time. Another great interview from @jasonmbarnard with Nathan Chalmers, a PM on the Search Relevance Team. First up, the name of the algo that does placement of rich elements is called Darwin. Fitting :) Each link is timestamped: https://t.co/U5EmMUhaIg pic.twitter.com/mXtyhc77LX
— Glenn Gabe (@glenngabe) April 29, 2020
There are six teams on the whole page team that cover different areas of the SERP, including 10-blue links, rich answer cards, right rail, ads, captions, etc. All work together to build the "whole page" https://t.co/JbqGBrHMSn pic.twitter.com/HcQ64VsIT5
— Glenn Gabe (@glenngabe) April 29, 2020
The ranking system's ultimate goal is optimizing for user satisfaction. Teams use online metrics for measuring that (like how users are interacting w/the page, where they are clicking, & NOT clicking). Offline metrics are used too from "human judges" https://t.co/fO9yPfNzzI pic.twitter.com/bYo7ewPDwK
— Glenn Gabe (@glenngabe) April 29, 2020
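To make the online-vs-offline distinction concrete, here is a minimal sketch of how the two kinds of satisfaction signals might be blended. The field names, the 0-4 judge scale, and the 70/30 weighting are all assumptions for illustration; Bing's actual metrics are not public.

```python
# Minimal sketch of blending online and offline satisfaction signals.
# All field names and weights are hypothetical, not Bing's real metrics.

def online_satisfaction(impressions):
    """Estimate satisfaction from user interaction logs.

    Each impression records whether the user clicked a result and,
    if so, whether they bounced straight back to the SERP ("pogo-sticking").
    """
    if not impressions:
        return 0.0
    good = sum(1 for imp in impressions
               if imp["clicked"] and not imp["returned_quickly"])
    return good / len(impressions)

def offline_satisfaction(judge_ratings):
    """Average rating from human judges, normalized to 0..1 (assumed 0-4 scale)."""
    if not judge_ratings:
        return 0.0
    return sum(judge_ratings) / (4 * len(judge_ratings))

def combined_satisfaction(impressions, judge_ratings, online_weight=0.7):
    """Blend the two signals; the weight here is an arbitrary assumption."""
    return (online_weight * online_satisfaction(impressions)
            + (1 - online_weight) * offline_satisfaction(judge_ratings))

impressions = [
    {"clicked": True, "returned_quickly": False},
    {"clicked": True, "returned_quickly": True},   # pogo-stick: likely dissatisfied
    {"clicked": False, "returned_quickly": False}, # no click at all
]
print(combined_satisfaction(impressions, judge_ratings=[3, 4, 2]))
```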
The 10-blue links are the baseline that the ranker uses to determine if adding more elements creates a better user experience, or worse. E.g. Does adding rich card or right rail elements produce a better SERP? If so, add them: https://t.co/Kemo4MwXAl pic.twitter.com/HfyPbZUnjM
— Glenn Gabe (@glenngabe) April 29, 2020
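The decision Chalmers describes is essentially a comparison: score the 10-blue-links baseline, score each enriched candidate layout, and keep the winner. Here is a rough sketch of that structure; the predicted-satisfaction scores and candidate layouts are invented, and `predict_satisfaction` stands in for whatever learned model Bing actually uses.

```python
# Rough sketch of the "baseline vs. enriched SERP" decision.
# Layouts and scores are made up for illustration.

def predict_satisfaction(layout):
    """Placeholder for a learned model scoring a whole-page layout.
    Here we just read a precomputed score from the layout dict."""
    return layout["predicted_satisfaction"]

def choose_layout(baseline, candidates):
    """Keep the 10-blue-links baseline unless an enriched layout beats it."""
    best = baseline
    for candidate in candidates:
        if predict_satisfaction(candidate) > predict_satisfaction(best):
            best = candidate
    return best

baseline = {"elements": ["10 blue links"], "predicted_satisfaction": 0.62}
candidates = [
    {"elements": ["10 blue links", "rich answer card"], "predicted_satisfaction": 0.71},
    {"elements": ["10 blue links", "right-rail panel"], "predicted_satisfaction": 0.58},
]
print(choose_layout(baseline, candidates)["elements"])
# -> ['10 blue links', 'rich answer card']
```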
For answer placement, Bing looks at intent types, markets, etc. (hundreds of combinations). They built a system that can learn from what's happening in production & teach itself the best way by learning from human interaction https://t.co/3gcLXs4OVR pic.twitter.com/FWrm4Xo66Q
— Glenn Gabe (@glenngabe) April 29, 2020
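A system that "learns from what's happening in production" for each intent/market combination can be pictured as a simple bandit-style learner: track observed satisfaction per (intent, market, placement) and mostly serve the best-known placement while occasionally exploring. This toy epsilon-greedy version, and every number in it, is an assumption for illustration only.

```python
# Toy sketch of learning answer placement per (intent, market) from live feedback.
# The epsilon-greedy policy and all values are assumptions, not Bing's system.

import random
from collections import defaultdict

class PlacementLearner:
    def __init__(self, placements, epsilon=0.1):
        self.placements = placements          # e.g. "top", "right-rail", "none"
        self.epsilon = epsilon                # exploration rate (assumed)
        # running satisfaction stats keyed by (intent, market, placement)
        self.stats = defaultdict(lambda: {"n": 0, "total": 0.0})

    def choose(self, intent, market):
        """Mostly exploit the best-known placement, occasionally explore."""
        if random.random() < self.epsilon:
            return random.choice(self.placements)
        def avg(p):
            s = self.stats[(intent, market, p)]
            return s["total"] / s["n"] if s["n"] else 0.0
        return max(self.placements, key=avg)

    def record(self, intent, market, placement, satisfaction):
        """Feed back an observed satisfaction score (0..1) from production."""
        s = self.stats[(intent, market, placement)]
        s["n"] += 1
        s["total"] += satisfaction

learner = PlacementLearner(["top", "right-rail", "none"])
learner.record("weather", "en-US", "top", 0.9)
learner.record("weather", "en-US", "right-rail", 0.4)
print(learner.choose("weather", "en-US"))   # most often "top"
```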
It's important to know that nobody is coding how Bing ranks the results... It's machine learning-based. You tell the ML model what you want & it works out the best results. It's a satisfaction & optimization problem after that: https://t.co/xIZHTWSufG pic.twitter.com/K7ZUWrEJ1X
— Glenn Gabe (@glenngabe) April 29, 2020
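"You tell the ML model what you want" can be illustrated in a few lines: hand a learner feature vectors and satisfaction labels, and let it work out the scoring function instead of hand-coding rules. The features, data, and choice of a gradient-boosted regressor below are all assumptions; Bing's real models and feature sets are far larger and not public.

```python
# Minimal illustration of learning a ranking function from satisfaction labels
# rather than hand-coding it. Data and features are invented.

from sklearn.ensemble import GradientBoostingRegressor

# Each row: [query-document relevance signal, click-through rate, freshness]
X_train = [
    [0.9, 0.30, 0.8],
    [0.4, 0.05, 0.2],
    [0.7, 0.20, 0.9],
    [0.2, 0.01, 0.1],
]
y_train = [1.0, 0.2, 0.8, 0.0]   # observed satisfaction: the target we "want"

model = GradientBoostingRegressor(n_estimators=50)
model.fit(X_train, y_train)

# Rank candidate results for a new query by predicted satisfaction.
candidates = {"doc_a": [0.8, 0.25, 0.7], "doc_b": [0.3, 0.02, 0.3]}
ranked = sorted(candidates, key=lambda d: model.predict([candidates[d]])[0],
                reverse=True)
print(ranked)   # -> ['doc_a', 'doc_b']
```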
The ads team looks at which dials can be turned to make the page more profitable, BUT they also have goals for user satisfaction. Sure, Bing could spam the results with ads, but they would lose users. https://t.co/zMREiJifnj pic.twitter.com/pP42HpOgCP
— Glenn Gabe (@glenngabe) April 29, 2020
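The ads trade-off reads naturally as a constrained choice: maximize revenue subject to a user-satisfaction floor. This sketch makes that structure explicit; every number, including the 0.6 floor, is invented for illustration.

```python
# Sketch of the ads trade-off as a constrained choice: pick the ad load that
# maximizes revenue without dropping user satisfaction below a floor.
# All numbers are assumptions for illustration.

ad_configs = [
    {"ads": 0, "revenue": 0.00, "satisfaction": 0.80},
    {"ads": 2, "revenue": 0.40, "satisfaction": 0.72},
    {"ads": 4, "revenue": 0.65, "satisfaction": 0.61},
    {"ads": 8, "revenue": 0.90, "satisfaction": 0.35},  # "spam the results"
]

SATISFACTION_FLOOR = 0.6  # assumed guardrail

viable = [c for c in ad_configs if c["satisfaction"] >= SATISFACTION_FLOOR]
best = max(viable, key=lambda c: c["revenue"])
print(best)   # -> the 4-ad config: most revenue while staying above the floor
```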
For queries not seen before (or for longer-tail queries), the ranker has historical data for similar types of queries, knows how the blue links perform, & knows how the answer type behaves for this type of query. So it uses that data to craft the best SERP https://t.co/xYePFEBiD6 pic.twitter.com/lPPWZlDc6d
— Glenn Gabe (@glenngabe) April 29, 2020
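For unseen queries, the key idea is backing off from exact-query history to history for the query's type. A toy version of that back-off looks like this; the crude classifier and the stats tables are stand-ins, as the real system is far richer.

```python
# Toy back-off for unseen/long-tail queries: if we have no stats for the exact
# query, fall back to stats for its query class (intent type).
# The classifier and numbers are stand-ins for illustration.

exact_query_stats = {
    "weather seattle": {"answer_card_ctr": 0.55},
}
query_class_stats = {
    "weather": {"answer_card_ctr": 0.50},
    "navigational": {"answer_card_ctr": 0.05},
}

def classify(query):
    """Crude intent classifier, purely for illustration."""
    return "weather" if "weather" in query else "navigational"

def answer_card_ctr(query):
    """Prefer exact-query history; otherwise back off to the query class."""
    if query in exact_query_stats:
        return exact_query_stats[query]["answer_card_ctr"]
    return query_class_stats[classify(query)]["answer_card_ctr"]

print(answer_card_ctr("weather seattle"))   # exact history: 0.55
print(answer_card_ctr("weather tallinn"))   # unseen query, class back-off: 0.50
```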
Bing wants to get the user to a successful state as quickly as possible. That's why some SERPs are shorter than others. And if there's a commercial aspect, & it knows there are strong ads, it might even include those ads w/a shortened SERP: https://t.co/04cGDRCjoo pic.twitter.com/Pt37pwV5D9
— Glenn Gabe (@glenngabe) April 29, 2020
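One way to picture the shortened SERP: if the top result is very likely to satisfy the query, truncate the page, and if the intent is commercial and strong ads exist, keep those too. The thresholds and fields below are assumptions for illustration only.

```python
# Sketch of the "shortened SERP" idea. Thresholds are assumed, not Bing's.

def build_serp(results, ads, commercial_intent):
    top_confidence = results[0]["predicted_success"]
    # Shorten the page when the top result is very likely to satisfy.
    serp = results if top_confidence < 0.9 else results[:3]
    # Keep strong ads on commercial queries even on a shortened page.
    if commercial_intent and any(a["strength"] > 0.7 for a in ads):
        serp = [a for a in ads if a["strength"] > 0.7] + serp
    return serp

results = [{"url": "a.example", "predicted_success": 0.95},
           {"url": "b.example", "predicted_success": 0.40},
           {"url": "c.example", "predicted_success": 0.30},
           {"url": "d.example", "predicted_success": 0.10}]
ads = [{"ad": "sponsored.example", "strength": 0.8}]
print(build_serp(results, ads, commercial_intent=True))
```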
Forum discussion at Twitter.