Ever visit a web page that just keeps redirecting back to itself? For example, you visit Google.com and it reloads Google.com over and over again.
If you send a spider/crawler, such as GoogleBot, into such a redirect loop, it can get dizzy and never actually reach the content on that page.
That is the issue one webmaster ran into in a Google Webmaster Help thread. His .com home page was redirecting back to the exact same URL, the .com home page. He has since changed it to redirect to the .net, but you can still see the issue when you run the page through Google Translate, which returns the error: "The page you requested attempted to redirect to itself, which could cause an infinite loop."
John Mueller said in the thread that once the recursive redirect is removed, Google should be able to crawl and index the content. John said:
The main problem we're seeing is that your homepage is redirecting to itself -- so we can't actually crawl it at all. Once you remove that recursive redirect, we'll be able to focus more on the content of your pages.
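If you want to check your own site for this, the logic a crawler uses is simple: follow each redirect and bail out as soon as a URL repeats. Here is a minimal sketch of that check, where the hypothetical `redirects` dict stands in for real HTTP Location headers (in practice you'd fetch each URL with redirects disabled and read the Location header instead):

```python
def follow_redirects(start_url, redirects, max_hops=10):
    """Follow a chain of redirects (url -> Location target) and return
    the final URL, or raise ValueError if a loop is detected.

    `redirects` is a plain dict standing in for live HTTP responses;
    a URL missing from the dict is treated as a normal 200 page.
    """
    seen = set()
    url = start_url
    while url in redirects:
        if url in seen:
            # We have been here before: this is a redirect loop,
            # like a homepage that redirects to itself.
            raise ValueError("redirect loop detected at " + url)
        seen.add(url)
        url = redirects[url]
        if len(seen) > max_hops:
            raise ValueError("too many redirects")
    return url

# A page that redirects to itself -- the case from the thread:
loop = {"https://example.com/": "https://example.com/"}

# A healthy redirect chain (http -> https -> www):
chain = {
    "http://example.com/": "https://example.com/",
    "https://example.com/": "https://www.example.com/",
}
```

Running `follow_redirects` on the `chain` mapping resolves to the final URL, while the `loop` mapping raises immediately, which is exactly why GoogleBot can never reach the content behind a self-redirect.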
Of course, this is where the Fetch and Render tool comes in handy, to see if this is only happening to spiders.
Forum discussion at Google Webmaster Help.