Basketball players, commentators, and fans have always believed in the “hot hand,” the idea that a player can temporarily get hot and start shooting much better than usual. When that happens, conventional wisdom urges you to “feed the hot hand.” That is, get the ball to that player as often as possible, to take advantage of his or her temporarily elevated shooting ability.
Every basketball fan has witnessed this. Basketball writers and TV commentators invoke it all the time. The problem is that it’s not a real thing, at least for NBA players. (It could be different for players who are not trained athletes at the highest level.) Statistical analyses done over the years have shown again and again that the best predictor of whether an NBA player’s next shot will go in is not the last shot or shots he took, but his normal shooting percentage for the season. In other words, if a player who normally hits half his shots suddenly hits five or seven or ten in a row, the odds of him hitting the next shot are still just 50%.
“But wait!” I hear you saying. “I once saw Klay Thompson (a very good shooter on the Golden State Warriors) score 37 points in a single quarter. He hit thirteen out of thirteen shots! His odds of hitting those shots were obviously much better than whatever his actual shooting percentage was that season. He clearly had the hot hand that quarter!”
We remember Klay’s 37-point quarter, but we somehow don’t count all the times he hit a few shots, thought he might be “in the zone,” but then missed the next two and decided, “Ah well, I guess I wasn’t.” We really just honestly forget those times, which is understandable because they aren’t very noteworthy, and so they don’t detract from our belief in the hot hand. However, the statisticians count them all. And when you take every minute of every game into account, and not just the ones that support your belief, you find no evidence of the hot hand.
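One way to see what the statisticians are measuring is a toy simulation (my own sketch, not the actual studies’ method): generate shots from a shooter whose true ability never changes, and check whether a streak of makes predicts anything about the next shot.

```python
import random

random.seed(0)

# A shooter whose true make probability never changes.
P_MAKE = 0.5
N_SHOTS = 1_000_000
shots = [random.random() < P_MAKE for _ in range(N_SHOTS)]

# Overall make rate across all shots.
overall = sum(shots) / N_SHOTS

# Make rate restricted to shots taken right after three straight makes.
after_streak = [shots[i] for i in range(3, N_SHOTS)
                if shots[i - 3] and shots[i - 2] and shots[i - 1]]
conditional = sum(after_streak) / len(after_streak)

print(f"overall make rate:      {overall:.3f}")
print(f"make rate after streak: {conditional:.3f}")
# Both come out near 0.5: the streak carries no information.
```

If a real hot hand existed, the second number would come out meaningfully higher than the first; for a constant-ability shooter, the streaks are there, but they predict nothing.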
Then how did Klay hit 13 out of 13 shots in that quarter? It was just normal randomness. Any shooter is bound to have some long streaks of makes or misses, just by chance. Steph Curry once went 0 for 10 on three-point shots in a game. Even a coin, flipped 100 times, is almost guaranteed to have at least one streak of five or six consecutive heads or tails, and is more likely than not to have a streak of seven. Yet, no matter how many heads or tails come up in a row, the odds for the next flip are still 50-50. And it’s the same for Klay Thompson. (Not exactly 50-50, but whatever his actual shooting percentage is for a given type of shot.)
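You can verify the coin-flip claim with a quick simulation (a sketch of my own, with trial counts chosen for speed rather than precision):

```python
import random

random.seed(1)

def longest_run(flips):
    """Length of the longest run of identical outcomes."""
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

# Flip a fair coin 100 times, many times over, and record the longest
# streak of consecutive heads or tails in each sequence.
TRIALS = 20_000
runs = [longest_run([random.randint(0, 1) for _ in range(100)])
        for _ in range(TRIALS)]

p5 = sum(r >= 5 for r in runs) / TRIALS
p7 = sum(r >= 7 for r in runs) / TRIALS
print(f"P(streak of 5+ in 100 flips): {p5:.2f}")  # nearly certain
print(f"P(streak of 7+ in 100 flips): {p7:.2f}")  # better than even odds
```

Streaks that look remarkable are simply what fair coins, and consistent shooters, do.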
The selectiveness of memory that creates sports myths like the hot hand is a form of “confirmation bias,” a logical fallacy that includes various ways that we avoid dealing with evidence that contradicts our beliefs. When Thompson makes several baskets in a row, it’s memorable, and it feeds our belief in the hot hand. When he seems to get hot but then it doesn’t pan out, we don’t end up counting that against our belief, probably because we have no reason to even remember it.
Here are some other examples of confirmation bias:
- When someone believes some trinket they own is a lucky charm because good things have happened to them while they’re holding it, forgetting about all the times they had it and nothing good happened to them, or all the times good things happened to them when they weren’t holding it.
- When someone with an extreme belief (religious or political or whatever) believes they’re less of a minority than they really are – or even that they constitute the majority – because most of their friends are of a like mind, and they mostly follow news sources aligned to their belief, and all they ever see in their social networking feeds is people who agree with them. (This effect, of the Internet in particular, is also called the “echo chamber.”)
- When someone cites only those scientific studies that support some position of theirs, while forgetting or ignoring or finding some way to explain away the studies that contradict it.
There’s another logical fallacy I want to discuss, but I haven’t been able to find its name. (If you know it, please tell me!) For now, I’m going to call it “confirmation exaggeration.”
The great hot hand debate was revived in 2014 by a Harvard study that examined the data more deeply, taking into account not just the percentage of shots a player hit, but the difficulty of those shots. What it discovered was that, while the percentage of made shots stayed the same for an apparently “hot” player (as the stats had always indicated), the difficulty of those shots slightly increased. (Which makes sense. When a player is perceived to be getting hot, the other team might start guarding him more closely, and he might also attempt some outrageously difficult shots, or “heat checks,” on his own, confident in his hot hand. Both of these would increase the average difficulty of his shots.) When a player takes more difficult shots while maintaining the same percentage of makes versus misses, he is, in effect, shooting better.
The true believers were ecstatic. The hot hand had been proven! Those nerdy statheads (the first ones, who denied it, not the second set, who proved it after all) were full of baloney! However, the thing that was proven was not at all the thing everybody had imagined.
To any NBA fan or player or media member, what the hot hand suggests is a huge increase in shooting skill, something on the order of 20 or 30% or even more. They imagine 50% shooters temporarily becoming 90% shooters, and based on that, they rightly insist that the player be force-fed the ball no matter what, because how can you beat a 90% chance to score? But the effect that the second study found was nowhere near 30% or even 10%. It was 1.2%. In other words, a 50% shooter can sometimes transform, not into a 90% shooter or even a 60% shooter, but into a 51.2% shooter.
The change is so small that it can’t possibly make any practical difference in a game. For one thing, there’s no way to know, after just two or three consecutive makes, whether what you’re experiencing is just normal randomness or the 1.2% increase of the hot hand. For another, even if you could detect the hot hand as it was happening, there’d be very little point in altering your strategy around it: it’s only a 1.2% difference. That minuscule an increase doesn’t justify a player deliberately attempting low-percentage shots, or refusing to pass to a teammate who has a higher-percentage shot. Taking good shots rather than bad shots is always going to be worth more than 1.2%.
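The expected-value arithmetic makes this concrete. The numbers below are illustrative assumptions of mine (only the 1.2% figure comes from the study): a “hot” 50% shooter taking a contested two-pointer versus passing to a teammate with a modestly better open look.

```python
# All shot qualities here are hypothetical, for illustration only.
base = 0.50           # the hot player's normal make rate
hot_bonus = 0.012     # the ~1.2% effect the 2014 study measured
teammate_open = 0.55  # an assumed higher-quality look for a teammate

hot_contested = base + hot_bonus  # 0.512

# Expected points from a two-point attempt = 2 * make probability.
ev_hot = 2 * hot_contested
ev_pass = 2 * teammate_open
print(f"hot player, contested shot: {ev_hot:.3f} points")   # 1.024
print(f"teammate, open shot:        {ev_pass:.3f} points")  # 1.100
# The shot-quality gap (0.076 points here) dwarfs the hot-hand bump (0.024).
```

Even granting the hot hand its full measured effect, ordinary shot selection swamps it.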
So for all practical purposes, the hot hand does not exist, or at the very least, should not be guiding on-court tactics nearly as much as it currently does. To continue believing that it should is to focus entirely on the direction of the effect the Harvard study found while ignoring its scale, and this is the essence of confirmation exaggeration:
- You imagine a huge effect.
- The effect is indeed proven, but turns out to be small.
- You act as though what was proven is the huge effect you imagined in the first place.
Here are a couple more examples of confirmation exaggeration:
- You learn that your favorite food has been shown to triple the risk of a certain kind of cancer, and immediately quit eating it, but then a friend points out that the risk for that particular cancer was only 0.001% over the course of your lifetime in the first place, so all the food would do is increase that to 0.003%, still a vanishingly small risk. Nevertheless, you’re spooked and continue to avoid that food as if it would practically guarantee that you would get cancer.
- A sports team wins the championship four games to three in a seven-game series, with every game hard fought and the final game decided by a single point. Suddenly, everybody thinks of the winning team as clearly superior and the losing team as deeply flawed, even though the result was practically a coin flip.
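The relative-versus-absolute-risk arithmetic in the first example above is worth spelling out, because “triples the risk” and “adds 0.002% of risk” describe the same fact:

```python
# Lifetime-risk numbers from the cancer example above.
baseline = 0.001 / 100  # a 0.001% lifetime risk, as a probability
relative_risk = 3       # the food "triples the risk"

new_risk = baseline * relative_risk
increase = new_risk - baseline

print(f"baseline risk: {baseline:.4%}")  # 0.0010%
print(f"tripled risk:  {new_risk:.4%}")  # 0.0030%
print(f"absolute bump: {increase:.4%}")  # 0.0020%
```

A headline number like “3x” is the direction of the effect; the absolute bump is its scale, and it is the scale that should drive your decisions.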
Confirmation bias and confirmation exaggeration are powerful forces that distort your view of reality. And they can feed one another: you credit only the evidence that supports your belief, and you exaggerate those bits of evidence on top of that. This strengthens your belief, which makes it even easier to practice confirmation bias in the future. This is a treadmill to delusion. The way to get off it is to weigh all the evidence, and to give each piece of evidence its proper weight.
I plan to write future blog posts in which I discuss examples of confirmation bias and confirmation exaggeration. You can find them by searching for posts in the “confirmation-fallacies” category.