I missed Nate Silver’s NY Times blog post last week about the history of the NCAA basketball tournament viewed through preseason rankings (instead of merely seeds). Teams that were unranked in the AP preseason poll tend to underperform in the tournament compared to other teams with the same seed.
[T]he preseason poll is essentially a prediction of how the teams are likely to perform. The writers who vote in the poll presumably consider things like coaching, the quality of talent on the roster, and how the team has performed in recent seasons. Although we all like to make fun of sportswriters, these predictions are actually pretty decent. Since 2003, the team ranked higher in the A.P. preseason poll (excluding cases where neither team received at least 5 votes) has won 72 percent of tournament games. That’s exactly the same number, 72 percent, as the fraction of games won by the better seed. And it’s a little better than the 71 percent won by teams with the superior Ratings Percentage Index, the statistical formula that the seeding committee prefers. (More sophisticated statistical ratings, like Ken Pomeroy’s, do only a little better, with a 73 percent success rate.)
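The 72 percent figure is simply the fraction of tournament games in which the team with the better preseason rank won. A minimal sketch of that computation, using made-up game records (the ranks and results below are hypothetical illustrations, not Silver's actual data):

```python
# Fraction of tournament games won by the team with the better
# (lower-numbered) AP preseason rank. Game records are hypothetical,
# not Nate Silver's data set.

games = [
    # (winner's preseason rank, loser's preseason rank)
    (3, 14),   # higher-ranked team won
    (7, 2),    # upset: lower-ranked team won
    (1, 9),
    (12, 5),
    (4, 20),
]

wins_by_better_rank = sum(1 for winner, loser in games if winner < loser)
success_rate = wins_by_better_rank / len(games)
print(f"Better preseason rank won {success_rate:.0%} of games")  # → 60% here
```

Run over the real 2003-onward tournament results, this tally is what yields Silver's 72 percent; the same loop with seeds or RPI in place of preseason rank gives the 72 and 71 percent comparisons.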
When I teach multiobjective decision analysis, I mention how cognitive biases lead us to be overconfident in our initial information. Nate Silver’s example, however, suggests the opposite: we tend to undervalue the original predictions in favor of metrics available at the end of the season (win-loss records, RPI, various team rankings, etc.). It’s a nice counterexample for showing that bias is a two-way street.
As far as your bracket is concerned, Nate Silver’s blog post suggests that teams like Notre Dame, which was unranked when the season began, are unlikely to get as far in the tournament as their seed might suggest.