Just as you can sometimes see listings in Google Search for URLs that are disallowed in robots.txt, those URLs can also "collect" links. What this means is that while Google is not permitted to crawl the URL, if people are linking to the URL, Google can and will pick up on those links.
That is why you sometimes see search results with snippets that read "No information is available for this page." Google may list the page if (a) the query is specific enough and (b) there are enough links to that page to give Google enough hints that the page is relevant to the query, even though Google cannot crawl the page to see what content is on it. Instead, Google uses the links pointing to the page and their anchor text, among other signals, to figure that out.
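To make "disallowed for crawling" concrete, here is a minimal sketch using Python's standard urllib.robotparser module; the example.com domain and the /private/ path are hypothetical, purely for illustration. A Disallow rule blocks fetching the page's content, but it says nothing about links pointing at that URL from other pages, which Google can still see:

```python
import urllib.robotparser

# Hypothetical robots.txt rules for example.com (illustrative only).
robots_lines = [
    "User-agent: *",
    "Disallow: /private/",
]

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_lines)

# A well-behaved crawler (Googlebot included) may not fetch this URL...
print(rp.can_fetch("Googlebot", "https://example.com/private/page.html"))  # False

# ...but other pages that link to /private/page.html are still crawlable,
# so those links and their anchor text remain visible to Google.
print(rp.can_fetch("Googlebot", "https://example.com/public/page.html"))   # True
```

The key point: Disallow only controls fetching. Whether the bare URL gets shown in Search, driven by those external links, is a separate decision on Google's end.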
Thus, when Google's John Mueller said on Twitter, "If a URL is disallowed for crawling in the robots.txt, it can still "collect" links, since it can be shown in Search as well (without its content though)," he was stating something true and valid, even if some might not like it.
Here is the context:
If a URL is disallowed for crawling in the robots.txt, it can still "collect" links, since it can be shown in Search as well (without its content though).
— 🍌 John 🍌 (@JohnMu) May 2, 2019
So yes, you can technically do link building for URLs that are disallowed. That is, if you are bored and need a challenge.
Forum discussion at Twitter.