Covering the Spread: Why Ken Pomeroy’s score predictions are usually wrong. And why they’re still better than yours.

Last month, Ken Pomeroy excitedly tweeted that he was on the verge of correctly predicting an NCAA Tournament game score for the first time ever. Alas, California was unable to foul Syracuse to extend the game in the final seconds and Pomeroy’s 68-60 prediction fell just short: Syracuse 66, Cal 60.

Based on his tongue-in-cheek tweet, I imagine near misses in predictions don’t faze Pomeroy very much. Basketball, like life, involves randomness. Pomeroy could have the most theoretically sound and predictively accurate forecast in the world and he’d still miss. Occasionally, he’ll miss because a key player gets injured or a bat flies onto the floor. But more often than not, he’ll miss because of statistical variability. For the same reason that you might flip a coin 10 times and get 7 heads, a team might outperform their long-run average in a single game.

For some reason, this bothers people. Hey geeks, if Florida is so good, how’d they lose to Ole Miss? It’s because Florida played a weak schedule. It’s because Marshall Henderson has “guts.” It’s because Florida can’t win close games. But statistical variability? Get that nerd stuff out of here!

Why are Ken Pomeroy’s score predictions usually wrong?

The short answer: Variability.

The long answer: College basketball scores are roughly normally distributed. Below is the distribution of scores for the away team and for the home team in 5321 games between Midnight Madness (November 9, 2012: the first day of the season) and March Madness (March 17, 2013: the last day of conference tournaments). These graphs exclude overtime games.

NCAA Score Distributions

Pretty cool, huh? On a side note, see how the away curve is centered around a slightly lower average than the home curve (it’s further to the left)? That’s a graphical representation of home court advantage.

Anyway, scores fit a normal distribution pretty well. Which means it’s pretty easy to get close to the correct score (68% of scores fall within one standard deviation of the mean. 95% within two. 99.7% within three), but it’s really tough to pinpoint the exact location along the curve. So the long answer also boils down to one word: Variability.
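To make that concrete, here’s a minimal sketch in Python of what the 68-95-99.7 rule implies for score predictions. The mean and standard deviation below are made-up illustrative numbers, not the actual fitted values behind the graphs above: even if you guess the single most likely score every time, you almost never nail it exactly.

```python
from scipy.stats import norm

# Hypothetical numbers for illustration -- NOT the actual fitted values
# behind the graphs above.
mean, sd = 68.0, 11.0
dist = norm(loc=mean, scale=sd)

# The 68-95-99.7 rule: probability of landing within 1, 2, and 3
# standard deviations of the mean.
for k in (1, 2, 3):
    p = dist.cdf(mean + k * sd) - dist.cdf(mean - k * sd)
    print(f"within {k} SD: {p:.1%}")

# Chance of hitting one exact (integer) score, approximated as the
# probability mass in a 1-point-wide bin around the most likely score.
p_exact = dist.cdf(mean + 0.5) - dist.cdf(mean - 0.5)
print(f"best single guess hits exactly: {p_exact:.1%}")
```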

The distribution of Pomeroy’s errors (error = prediction – the actual outcome) looks like this:

Pomeroy's Predictions

Which is pretty much what we’d expect a good distribution of guesses to look like. It’s centered around zero (mean error = 0.02; on average, his guesses run just two hundredths of a point high). It’s roughly normally distributed (see the overlaid normal curve). The distribution leans a little bit high, meaning it overestimates the score a little more often than it underestimates it (60% of errors are greater than or equal to 0, 51% are less than or equal to 0; errors of exactly 0 count in both groups). But overall, it’s a pretty solid fit.
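If you want to reproduce these summary numbers yourself, the bookkeeping is only a few lines. Here’s a sketch, with a short hypothetical list of predictions and final scores standing in for the real data:

```python
import numpy as np

# Hypothetical (prediction, actual) score pairs standing in for the real data.
predicted = np.array([68, 72, 61, 75, 66, 70])
actual    = np.array([66, 75, 58, 77, 66, 64])

errors = predicted - actual  # error = prediction - actual outcome

print("mean error:", errors.mean())                   # bias; should sit near 0
print("mean absolute error:", np.abs(errors).mean())  # typical size of a miss
print("share >= 0:", (errors >= 0).mean())            # overestimates plus exact hits
print("share <= 0:", (errors <= 0).mean())            # underestimates plus exact hits
```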

That being said, surely somebody can do better.

Ask the experts

One of the knocks on Pomeroy and on predictive models in general is that they’re too cold, too calculating. They can’t adjust for changes that are easily apparent to the human eye. A change to the lineup, an injury, a change in momentum, an inspirational story line, clutch, the hot hand. These are things that can’t be fed into a machine.

Basically, Pomeroy’s knowledge is a mile wide and a few inches deep. His system can speak intelligently about averages and numbers and whatnot, but it doesn’t know that a team’s star player just tweeted about hitting the gym. It doesn’t know that a team’s shooting guard is battling the flu. It doesn’t know about rumors that a team’s coach is going to start giving a freshman some more playing time. That’s the type of mile-deep, inch-wide knowledge of a die-hard fan.

Many message boards run prediction contests each season. I used HoosierSportsNation’s predictions for Indiana’s regular season games because the data was already aggregated. If this common critique of Pomeroy’s system is accurate, then given the specific knowledge Indiana fans have, I’d expect the fans to outperform Pomeroy when picking Indiana’s scores and Pomeroy to outperform the fans when picking Indiana’s opponents’ scores.

Below is a distribution of errors for Indiana scores from the folks over at HoosierSportsNation:

Indiana Error

And a distribution of errors for Indiana’s opponents’ scores from the same folks:

Opponents Error


It turns out that being well informed actually makes your predictions worse. Pomeroy’s cold, calculated machine outperforms the experts by a pretty solid margin. Even more surprisingly, the uninformed predictions about opponents’ scores outperform the informed predictions about Indiana’s scores.

Comparison

This isn’t to say that there aren’t better predictive models than Pomeroy’s. It’s just to say that the predictive model most people judge Pomeroy against — their own set of eyes — isn’t very good.

This also isn’t to say that die-hard fans or the people of HoosierSportsNation don’t know what they’re talking about. I happen to be both a die-hard fan and a lurker over at HoosierSportsNation and I like to think that I have at least some idea of what I’m talking about. It’s just to say that we’re only human. We’re subject to a long list of cognitive biases. And that’s something to be aware of next time you try to wage war on Pomeroy using only your eyes, your gut, and your years of experience.



Is this the year a 16 seed pulls off the upset?

The recurring narrative of the 2012-2013 college basketball season is parity. Top seed projections have bounced around all season long. Does that mean we’ll finally see a 16 seed pull off an upset?

Probably not.

Average Initial Win Probability for #1 Seeds Overlaid on the Spread of Initial Win Probabilities for #1 Seeds for the 2003-2013 NCAA Tournaments.


The green line is the average initial win probability for #1 seeds for each NCAA Tournament from 2003-2013 (the green dash represents the projection for this year. Because the field hasn’t been announced yet, I used the current #1 vs. #16 matchups in Jerry Palm’s latest bracketology).

The grey shading is the spread between the highest initial win probability and the lowest initial win probability for a #1 seed each season. So 2007 and 2008 had 4 very good #1 seeds that were all very unlikely to lose. 2006 had a larger spread between the strongest #1 seed (Duke) and the weakest (Memphis), but they were still all very unlikely to lose.

All initial win probabilities come from Ken Pomeroy’s data. For 2010-2012, initial win probabilities are readily available in his game recaps. For 2003-2009 (and for this year since the field hasn’t been announced yet), I had to calculate initial win probabilities.

For the less math inclined, you can use this handy spreadsheet that redditor /u/Concision put together to calculate the initial win probabilities for yourself.

For the math inclined, you can use the Log5 Formula (where Win% is the Pythagorean Winning Percentage):
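In its standard Log5 form (writing Win_A and Win_B for the two teams’ Pythagorean Winning Percentages), that works out to:

$$P(A\ \text{beats}\ B) \;=\; \frac{\mathrm{Win}_A - \mathrm{Win}_A\,\mathrm{Win}_B}{\mathrm{Win}_A + \mathrm{Win}_B - 2\,\mathrm{Win}_A\,\mathrm{Win}_B}$$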

For the truly math inclined, you can read about how we arrive at this formula.
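For the code inclined, here’s a minimal Python version of the same calculation (the example Pythagorean percentages are made up for illustration, not taken from Pomeroy’s data):

```python
def log5(win_a: float, win_b: float) -> float:
    """Probability that team A beats team B, given each team's
    Pythagorean Winning Percentage (the Log5 formula above)."""
    return (win_a - win_a * win_b) / (win_a + win_b - 2 * win_a * win_b)

# Made-up example: a strong #1 seed (0.97) against a weak #16 seed (0.35).
print(round(log5(0.97, 0.35), 3))  # ~0.984
```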

One minor caveat: the 2003-2009 and 2013 figures aren’t the actual initial win probabilities for each team. To get those, I’d have to know each team’s Pythagorean Winning Percentage on the day of its matchup. Instead, I used the end-of-season data for 2003-2009 and the current data for 2013. Because a few extra games are unlikely to move the season-long numbers very much, the actual initial win probabilities shouldn’t differ much from these estimates.

Why aren’t the #1 seeds any more likely this year to lose against a #16 seed? Perhaps because the claims of parity have been overblown. Or perhaps because, even if the top teams are more closely bunched, it’s unlikely that the middle of the pack teams are any better than usual.

That doesn’t mean we won’t see a #1 seed fall in the second round. It’s roughly as likely as flipping 5 heads in a row on a fair coin. Rare enough to call unlikely, but not nearly rare enough to call impossible. But I’m still going to pencil the #1 seeds through to the third round and you probably should too.
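For what it’s worth, the coin-flip comparison is easy to sanity check. A quick sketch (the four win probabilities below are ballpark assumptions for illustration, not the actual values behind the chart):

```python
# Assumed initial win probabilities for the four #1 seeds -- illustrative
# ballpark values, not the actual figures behind the chart above.
p_one_seeds = [0.990, 0.992, 0.993, 0.995]

p_all_survive = 1.0
for p in p_one_seeds:
    p_all_survive *= p

p_at_least_one_upset = 1 - p_all_survive
print(f"P(at least one #1 seed loses): {p_at_least_one_upset:.1%}")  # ~3.0%
print(f"P(5 heads in a row):           {0.5 ** 5:.1%}")              # ~3.1%
```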


Rush the Court Redux

Already this week, we’ve seen Minnesota, Penn State, Southern California, and Virginia take to the floor. This year, we’ve seen students with a combined 9 National Championships and 39 Final Fours disgrace themselves like no traditional power ever would. Just how bad is this current court rushing epidemic that we’re suffering?

Not very.

It’s no secret that I have no problem with rushing the court. So I had a little bit of fun and put together a (hopefully) comprehensive list of court rushes for the 2012-2013 season. It’s a list that I plan to expand to cover past seasons in hopes of starting a database of historic court rushes. And I had a little bit more fun and compiled a (far less) comprehensive list of court rushes for the 1994-1995 season just to show that students felt emotions even back when Brad Stevens was still in high school.

NC State Upsets Duke

Our Current Epidemic…Isn’t That Bad

Methodology

I started with a list of every game so far this season (thanks Ken Pomeroy for the handy .txt file!) and used a little bit of Excel wizardry to add AP Poll rankings (thanks Basketball-Reference for the handy .csv file!). From there, I coded every game with a few additional criteria: most importantly, whether it was played at a neutral site (Neutral), whether the home team won (HomeWin), and whether the home team plays in the SEC (IsSEC), which fines schools whose fans rush the court.

From there, it was mostly grunt work. I filtered first to non-SEC, non-neutral games where an unranked home team won over a ranked away team because I figured these were my most likely court rushing candidates (there were 59 games. 26 ended with a rushed court). After that, it was just a lot of Google searching. Plus some additional fiddling with the filters (overtime games, close games, lower ranked teams beating higher ranked teams, etc.) and some guesswork and memory to fill in the rest. That’s why I call it a (hopefully) comprehensive list.
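If you’re curious what that first filter looks like in code, here’s a rough sketch in Python. The column names follow the flags above, but the file name and the ranking columns are placeholders I made up, not the actual layout of Pomeroy’s file:

```python
import pandas as pd

# Placeholder file name and column layout -- the real inputs are Pomeroy's
# game file plus Basketball-Reference's AP Poll rankings. Assumes the flag
# columns are stored as booleans.
games = pd.read_csv("games_2012_13.csv")

# Likeliest court-rushing candidates: non-neutral site, non-SEC home team,
# and an unranked home team beating a ranked away team.
candidates = games[
    (~games["Neutral"])
    & (games["HomeWin"])
    & (~games["IsSEC"])
    & (games["HomeRank"].isna())    # home team unranked
    & (games["AwayRank"].notna())   # away team ranked
]
print(len(candidates))  # should roughly match the 59 candidate games above
```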

Results

You can see the full spreadsheet here. I’ll do my best to update it each week as new games are played.

As it turns out, our current epidemic isn’t actually that bad. There have been 32 court rushes in 5042 games so far this season. That’s 0.63%.

Of course, that’s not a very interesting number. So let’s drop the games where a court rush was highly unlikely to begin with and keep only those where Neutral = FALSE, HomeWin = TRUE, and IsSEC = FALSE.

Now we’re looking at 32 court rushes in 3045 games. 1.05%. Still hardly an epidemic.

Here are some other (mildly) interesting fractions:

  • 43.10% (25/58) of unranked upsets over ranked teams (non-SEC, non-neutral, yadda, yadda, yadda) ended in a court rush.
  • 6.15% (4/65) of one-possession games that ended in overtime (non-SEC, non-neutral, yadda, yadda, yadda) did too.
  • 25.93% (7/27) of the teams who rushed have at least one National Championship.
  • 59.26% (16/27) of the teams who rushed have at least one Final Four appearance.
  • 42.86% (3/7) of the teams who have at least one National Championship have rushed more than once.

And some other interesting tidbits:

  • Duke has been rushed against the most.
  • Butler, Colorado, Miami, NC State, and Oregon have all rushed and been rushed against.

Our Current Epidemic…Isn’t That New

Methodology

This research was a little bit uglier because I didn’t plan as far ahead. Initially, I was just curious about some less recent court rushes, so I did a little bit of web searching. I somehow ended up on George Washington’s upset of #1 UMass in 1995 and from there, I just started guessing some other 1994-1995 celebrations. Instead of starting from every game of the season and working my way down, I started from “big” games and worked my way up.

I flipped through each week of the AP Poll (thanks College Poll Archive for making this easy!) and wrote down the Top 10 teams from that week. Then I found the away games that each team lost while ranked in the Top 10 (this wasn’t so easy; I had to use a variety of sources: StatSheet, Basketball-Reference, media guides, individual team fan pages, etc.). From there, it was just a painful process of searching through Google News archives.

So this list is admittedly much less comprehensive than the 2012-2013 one.

Results

I don’t have a fancy spreadsheet for this data (although I’ll hopefully get far enough back with my first spreadsheet that it will eventually catch up to 1994-1995). Instead, I just have a list:

  • 12/27/1994: #3 Kansas @ Indiana: No documented rush
  • 1/1/1995: #5 Kentucky @ Louisville: No documented rush
  • 1/4/1995: #3 Arkansas @ Ole Miss: No documented rush
  • 1/4/1995: #1 North Carolina @ NC State: Documented rush
  • 1/5/1995: #2 UCLA @ Oregon: Documented rush
  • 1/14/1995: #5 Arkansas @ Auburn: Documented rush
  • 2/4/1995: #8 Arkansas @ Mississippi State: No documented rush
  • 2/4/1995: #1 UMass @ George Washington: Documented rush
  • 2/7/1995: #1 North Carolina @ Maryland: Documented rush
  • 2/17/1995: #9 Missouri @ Oklahoma: No documented rush
  • 2/23/1995: #5 UMass @ Temple: Documented rush
  • 2/27/1995: #4 UConn @ Providence: Documented rush
  • 3/8/1995: #9 Michigan State @ Iowa: Documented rush

GWU Upsets UMass

Don’t forget that there are certainly court rushes from 1994-1995 that fall outside of my narrowly defined criteria. For instance, #11 Villanova @ Providence.

61.54% (8/13) of unranked upsets over Top 10 teams (non-neutral; I was able to drop the SEC filter because the rule didn’t exist yet) ended in a rushed court. 73.91% (17/23) of the games drawn with the same parameters this season did. That jumps up to 89.47% (17/19) if you drop out the SEC. On the flip side, if the rule had existed 20 years ago, the 1994-1995 number would jump up to 70.00% (7/10).

Either way, the numbers are small enough that they likely fall within the expected margin of error. If there has been a true uptick in the propensity to rush the court over the past 20 years, it’s pretty minor.

