Google has slightly updated its Google crawlers and fetchers documentation to say that Googlebot will pick whichever protocol, HTTP/1.1 or HTTP/2, "provides the best crawling performance." In fact, it may even switch protocols between crawling sessions if it needs to.
The change was so small that Google didn't even note it on the documentation's updates page.
Google wrote:
Google's crawlers and fetchers support HTTP/1.1 and HTTP/2. The crawlers will use the protocol version that provides the best crawling performance and may switch protocols between crawling sessions depending on previous crawling statistics. The default protocol version used by Google's crawlers is HTTP/1.1; crawling over HTTP/2 may save computing resources (for example, CPU, RAM) for your site and Googlebot, but otherwise there's no Google-product specific benefit to the site (for example, no ranking boost in Google Search). To opt out from crawling over HTTP/2, instruct the server that's hosting your site to respond with a 421 HTTP status code when Google attempts to access your site over HTTP/2. If that's not feasible, you can send a message to the Crawling team (however this solution is temporary).
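The key detail for site owners who want to opt out is the 421 (Misdirected Request) status code. Here is a minimal sketch of that behavior in Go, assuming a site served directly by Go's net/http package; on a typical stack you would configure this in your web server (nginx, Apache, etc.) instead. The rejectHTTP2 helper and the certificate file names are illustrative, not from Google's documentation.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// rejectHTTP2 wraps a handler and answers any HTTP/2 request with
// 421 Misdirected Request, the signal Google documents for opting
// out of HTTP/2 crawling. HTTP/1.1 requests pass through untouched.
func rejectHTTP2(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.ProtoMajor == 2 {
			w.WriteHeader(http.StatusMisdirectedRequest) // 421
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "served over %s\n", r.Proto)
	})
	// Go's TLS server negotiates HTTP/2 via ALPN by default; the
	// middleware above turns those requests away with a 421, after
	// which, per Google's documentation, Googlebot falls back to
	// HTTP/1.1. The certificate paths are placeholders.
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", rejectHTTP2(mux)))
}
```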
Google began crawling a limited number of URLs over HTTP/2 in November 2020, and a year later it was crawling about half the web over that protocol. HTTP/2 has no direct SEO benefit, and you cannot force Google to crawl over HTTP/2.
Gagan Ghotra spotted this change and wrote on X that the documentation was updated on November 19th. He shared this before and after, which I, of course, verified:
Forum discussion at X.