In my estimation, The Legend of Zelda: Breath of the Wild is the greatest video game of all time. There are a lot of reasons I think this is true. But, for now, I want to focus on the reason applicable to this website: game theory.
For the narrow set of readers familiar with both BotW and game theory, this might seem puzzling. Game theory is, by definition, the study of strategic interdependence. BotW is a one-player game and would thus seem relegated to the field of decision theory.
On the contrary, a good portion of your play time is actually a two player game. It’s you and the game developer, and it’s a cooperative interaction.
Introducing the Game

BotW gives you the premise of the game-within-the-game right off the bat. As Link departs the Shrine of Resurrection, the camera guides you down a clear path. The first stop is a chat with the mysterious man at the campfire. Immediately afterward, Link finds a pointed ledge overlooking a pond.
Without telling you, the game is communicating a clear message. It is giving you a diving board shaped rock, equipped with its own water lily target. The game begs you to jump in. It then rewards Link with his first Korok seed.
Mr. Korok then tells you how the rest of this subquest will play out. Other Koroks are hidden, and you need to find them.
From here, the developers could have gone in two directions. One would have been a disaster. The other would make the game fun. They chose the latter.
Focal Points and Thomas Schelling

To get the fun, we must first take a step back and learn about focal points. In The Strategy of Conflict, Thomas Schelling proposes the following problem. Suppose I instantly transport you and a friend to New York City. You have no way to communicate with each other but need to reunite. Where do you go to meet, and when do you do it?
This question has no “correct” answer. Anything you say—no matter how clever or ridiculous—could be right, as long as your friend would say the same thing. Yet despite the infinite number of possibilities, Schelling found that people had a predisposition to choose Grand Central Station at noon.
Why? Schelling describes such a time and place as a “focal point.” They stand out for one reason or another, which makes them easier to coordinate on. Noon is the middle of the day, so it is a sensible time to meet someone else. Grand Central is a crossroads of New York, so it also seems appropriate.
The Developer’s Strategy

Nintendo could have placed the Koroks in the most obscure of places and made it as hard as humanly possible to find all of them. But they didn’t. Instead, they made a good faith effort to play a focal point coordination game with the player.
Open up your map and spot another pool of water nearby? If you head there, a Korok is probably waiting for you. Is one tree much, much taller than the others? You should probably climb it. See a strange rock pattern in the distance? Time to investigate.
In fact, once you discover that category of Koroks, you look at your map in a whole new light:
Tons of these perfectly circular dots of rocks are all over the map. It makes you want to scour the map for more, and it excites you when you inadvertently discover one while using the map for something else.
Once you realize the game developers are trying to make the Koroks easy to find, it completely changes your mindset. You start asking yourself “if I were a game developer who wanted to ‘hide’ Koroks in obvious places, where would I put them?”
Granted, not every Korok ends up feeling that way. The game has 900 of them, and the focal point is in the eye of the beholder. Many times, I was left dumbstruck, asking myself how anyone thought that someone could reasonably find that particular seed. Still, most of BotW rewards a type of strategic thinking that you rarely see in a one-player game.
Suppose you are a Pokémon trainer, and a fellow trainer approaches you with the following proposition:
In any real life scenario, this should be a hard pass. But the situation provides a teachable moment on two fronts to justify that claim.
Adverse Selection

Suppose someone tells you they want to engage in a trade. What does that willingness tell you about whether you should do it?
In many contexts, it tells you a lot. Imagine the Exeggutor was the strongest one possible. Would the person want to exchange it for another Exeggutor? Absolutely not—no matter the strength of the other Exeggutor the person could receive in return, it would be worse than what he currently has. So the strongest possible Exeggutor would never be offered up in a trade.
Iterating this logic has an interesting implication. Consider the incentives of someone who has the second-strongest possible Exeggutor. Would that person want to exchange it? The only way this could be worth the time is if the other trainer has the strongest possible Exeggutor. But we just discovered that another trainer would not offer such an Exeggutor. So the person with the second-strongest should have no interest in trading either.
What about the third? Well, the only way this would be good is if a strongest or second-strongest Exeggutor were available. But they won’t be. So someone who owns the third-strongest should not engage in a trade either.
That logic unravels all the way down. Even a trainer with a very, very weak Exeggutor should have no interest in a trade. Although it is true that the average Exeggutor is much stronger than the one the trainer currently has, the average Exeggutor being offered for trade is not. In fact, trainers would only want to engage in a trade if the Exeggutor is the worst possible.
This result is well-known in game theory as a product of adverse selection—a situation where one party has private information about the value of a transaction. More specifically, it is an application of the market for lemons, which uses the same mechanism to explain why it is so difficult to buy a quality used car.
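The unraveling argument translates directly into a few lines of code. As a minimal sketch: qualities are numbered 1 (worst) to 10 (best), and the rule that an owner stays in the trade pool only while some other offer could beat their own is an illustrative assumption, not part of the original story.

```python
# Toy model of adverse selection: an owner offers a trade only if the
# best Exeggutor still on the market could beat the one they hold.

def trade_pool(qualities):
    """Iteratively remove owners who would never trade; return survivors."""
    pool = set(qualities)
    while len(pool) > 1:
        # The owner of the best quality still in the pool can only do
        # worse by trading, so they withdraw their offer.
        pool.remove(max(pool))
    return pool

print(trade_pool(range(1, 11)))  # the logic unravels down to {1}, the worst
```

Each pass of the loop mirrors one step of the argument: once the best remaining Exeggutor is withdrawn, the second-best becomes the new best and withdraws in turn.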
Cheap Talk

“But wait!” one might say. “The fellow trainer has assured me that he is very proud of his Exeggutor. It must be not so bad after all.”
Not quite. This is classic cheap talk. The message the other trainer is communicating could just as easily be conveyed by a trainer who actually thinks that his Exeggutor is terrible. Moreover, he does not have a common interest with you in communicating truthful information. Combining these pieces of information, you should ignore the message altogether.
To make this more concrete, imagine that you were to take whatever the trainer said at his word. Then would a trainer with a truly terrible Exeggutor want to lie? If doing so would convince you to make the trade, then the answer is yes. As a result, you cannot differentiate between an honest assessment and a fib. In turn, the message should not change your beliefs about the Exeggutor’s quality at all.
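The Bayesian version of that claim fits in one function: if every type of trainer, proud or not, sends the same boast, the message cannot move your beliefs. The prior of 0.5 below is an arbitrary illustrative number.

```python
# Bayes' rule for the receiver: P(good Exeggutor | boastful message).
# If both the good and the bad type send the boast with probability 1,
# the posterior equals the prior -- the message is uninformative.

def posterior(prior_good, p_msg_if_good, p_msg_if_bad):
    num = prior_good * p_msg_if_good
    return num / (num + (1 - prior_good) * p_msg_if_bad)

print(posterior(0.5, 1.0, 1.0))  # 0.5: exactly the prior, belief unchanged
```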
Nintendo and Strategic Theory

All that said, this is Nintendo we are talking about. They are not interested in teaching you interdependent strategic thinking. The man actually has an exotic Exeggutor from a region you cannot access in the game. The only way to acquire such an Exeggutor is through a trade. You therefore most definitely should make the deal with him.
But this leads to a deeper question: why doesn’t the trainer just lead with that point in the first place? Rather than speaking to your common interest—he has an Exeggutor from his region and wants one from your region, you have an Exeggutor from your region and want one from his—he just speaks strategic nonsense.
Here’s hoping that George Akerlof appears in Pokémon Sword and Shield offering the same trade. But this time, when you accept, you get a level 1 Exeggutor with all the minimum stats.
Imagine you are playing Jenga, and it is your turn. The tower looks like this:
To save you some time, almost the entire tower is “spent”—either the middle block or the two side blocks are missing from just about every level. The only exceptions are the top two rows. Jenga’s rules only allow you to take blocks from below a completed row, so the top is off limits. Without doing some ridiculous Jenga acrobatics, this gives you exactly three blocks to choose from, all of them in the second row from the top.
If you have played Jenga before, you know what normally comes next. Jenga blocks are not completely uniform; some are slightly smaller or slightly larger than others. Consequently, the tower’s weight is not usually distributed evenly across a row. If the middle block is slightly larger, the side blocks will be loose; if the middle block is slightly smaller, it alone will be loose.
Under normal circumstances, you should poke around that row to figure out which is the case. If the sides are loose, you would take one of them; if the middle is loose, you would take it.
But backward induction suggests caution here. Broadly, backward induction tells us that the best way to strategize for right now is to consider how others will respond to whatever you can do. After doing so, choose the option that maximizes your benefits given how others will maximize their benefits later. In short, the future matters; if you ignore it, trouble will find you.
Let’s say that the middle block is loose. What happens if you take it? Your opponent is in deep trouble. There are no blocks that can be removed in isolation without crumbling the tower. Instead, your opponent must take a row with the middle block missing, slowly re-position a side block into the middle slot, and then remove the other block. This is ridiculously difficult. Your opponent is almost certain to lose here, so your initial decision is straightforward.
Life is more complicated if the side blocks are loose. Suppose you take one of them. Your opponent will immediately follow up by taking the remaining side block. When the game comes back to you, the new top row will still only have two blocks on it. This means you will be forced to do the aforementioned nearly impossible task. You will almost certainly lose.
Your only other feasible option is to force out the middle block. This is a tall order—it is not loose precisely because it is supporting the weight above it. You are definitely more likely to lose this turn if you try taking it. However, if you are successful, you will place your opponent in the near-hopeless situation and almost certainly win.
Thus, depending on how you perceive those probabilities, your optimal play may be taking the more difficult block!
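The tradeoff can be made concrete with a couple of made-up probabilities: p_mid is your chance of prying out the tight middle block this turn, and p_rescue is your chance of later pulling off the near-impossible re-positioning move. Both numbers are illustrative assumptions, not measurements.

```python
# Backward induction on the Jenga endgame described above. Taking a
# loose side is safe now but leaves you attempting the near-impossible
# rescue later; forcing the middle is risky now but wins if it succeeds.

def win_prob(choice, p_mid=0.4, p_rescue=0.05):
    if choice == "middle":
        # Succeed now (prob p_mid) and the opponent inherits the
        # near-hopeless position, so you essentially win.
        return p_mid
    # Take a side: the opponent takes the other side, and you must later
    # complete the rescue move to survive.
    return p_rescue

best = max(["middle", "side"], key=win_prob)
print(best)  # with these numbers, the risky middle block is optimal
```

Change the two probabilities and the comparison flips, which is exactly the "depending on how you perceive those probabilities" caveat above.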
Over the past seven days, you have either grown to love or hate executive orders. But regardless of your political perspective, Trump’s actions are showcasing one of the major findings of bargaining theory: namely, that proposal power is one of the greatest sources of bargaining power.
What Is Proposal Power?
Put simply, proposal power is the ability to structure the terms of a possible settlement. It is distinct from receivership, which is the ability to say yes or no to any particular offer.
Perhaps surprisingly, whether you can make an offer has an enormous impact on your welfare. Imagine a seller needs at least $30 to sell and a buyer will buy for no more than $50. Clearly, a transaction should take place, as there is a $20 surplus (the buyer’s maximum minus the seller’s minimum).
If the seller can make a take-it-or-leave-it offer, then he can set the price at $50. The buyer, only having the ability to accept or reject, buys the good—it is just enough to convince him to buy.
Now imagine the buyer makes the take-it-or-leave-it offer. She can set the price at $30. The seller, only having the ability to accept or reject, sells the good—it is just enough to make him willing to part with it.
Who proposes the offer makes a major difference. If we endow the seller with the proposal power, he walks away with the entire $20 surplus. Yet if we endow the buyer with the proposal power, the exact opposite happens: the buyer receives the entire $20 surplus. It’s night and day!
The lesson here is simple: without the ability to make offers or counteroffers, the deals reached will look very bad for you.
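The $30/$50 example condenses into a one-line pricing rule. This is a sketch that assumes an indifferent responder accepts the offer; the proposer simply prices the other side down to their walk-away value.

```python
# Take-it-or-leave-it bargaining: whoever proposes captures the surplus
# by offering the responder exactly their reservation value.

def ultimatum_price(seller_min, buyer_max, proposer):
    assert buyer_max >= seller_min  # a surplus exists, so trade can occur
    # The proposer pushes the price to the responder's reservation value.
    return buyer_max if proposer == "seller" else seller_min

print(ultimatum_price(30, 50, "seller"))  # 50: seller pockets the $20 surplus
print(ultimatum_price(30, 50, "buyer"))   # 30: buyer pockets the $20 surplus
```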
Proposal Power in Action
In many contexts, proposal power is easy to acquire. At a flea market or car dealership, for instance, nothing stops a buyer or seller from making the initial offer or responding with a counteroffer of their own. They just do it.
Government is different. Some systems have proposal structures baked into the legal system. And this has major consequences for the types of laws that are ultimately implemented.
In the United States, most of the proposal power rests with Congress. That is because only Congress can write laws; the President can merely sign or veto whatever lands on his desk. Thus, if Congress and the President both agree that a law needs fixing, the solution implemented is going to be much closer to what Congress wants than what the President wants.
There is a key exception to that rule, however, and it is what we have been seeing over the last week. Executive orders essentially flip the script—the President institutes the policy, and it is up to Congress to accept it (by doing nothing) or reject it (by creating a bill that undoes the policy, and then overriding the inevitable presidential veto).
It should be obvious that the President choosing his own policies results in outcomes closer to his ideological preference. I suspect liberals spotted this issue weeks ago but only recently grasped how bad it would be for them. Bargaining theory goes deeper and explains why. It isn’t just that the President fares better when choosing the policy than when he signs a Congressional law. It is that a strategic, forward-thinking President can set the policy so far to his liking that Congress is just barely willing to go along with it. And if the gulf between Congressional and Presidential preferences is substantial, so too will be the gap between the policy outcomes produced by executive orders and by standard legislation.
This logic sets expectations for what is to come for both fans and foes of Trump. Foes should continue to fear executive orders and should not be particularly worried about laws passed through Congress. Liberals see Paul Ryan as their last remaining hope. And they might find solace there—but only for standard laws. Undoing executive orders requires having a much larger share of Congress on board, so Paul Ryan does not get you very far.
For Trump supporters, it is the opposite: they will find standard laws relatively unexciting but be thrilled with the executive orders.
After rewatching The Empire Strikes Back over the weekend, it became clear that two major blunders ultimately doom the Empire. (Original trilogy spoilers below, obviously.)
Boba Fett’s Terrible Contract
After leaving the asteroid field, the Millennium Falcon’s hyperdrive fails (again). As a Star Destroyer prepares to blow it up, Han Solo turns the Falcon around and attaches it to the Destroyer.
The plan works. Imperial troops have no idea what happened to the Falcon and assume it jumped to hyperspace. Han, meanwhile, plans to sit there until the Destroyer empties its garbage and then drift away while camouflaged in the trash.
Darth Vader, apparently well-versed in comparative advantage, had commissioned bounty hunter Boba Fett (among others) to track down the Falcon.
Contract theory tells us that the reward for finding the Falcon needs to be substantial, going above and beyond basic expenses—otherwise, a bounty hunter would not have incentive to exert maximal effort in a risky and uncertain process. Furthermore, because the Empire cannot observe the effort a bounty hunter exerts, it needs to offer that large amount regardless of the actions leading up to capture.
Resolving this moral hazard problem created a second problem, however, and it was a problem Vader did not fix. If Boba Fett knew exactly what the Falcon was up to, he had incentive to conceal that information. He could wait for the Falcon to do its thing, catch it, and then claim the substantial reward designed to induce high effort despite only exerting very low effort.
And that’s exactly what happened—Boba hid in the garbage along with the Falcon, followed it through hyperspace, and immediately contacted the Empire upon arrival in Cloud City.
Vader could have solved that problem by paying the high amount even if a bounty hunter gave away the information immediately. But he didn’t. (Or a credible commitment problem stopped him, though apparently the bounty hunters are not particularly worried about this.)
So instead of capturing the Falcon on the Empire’s terms and setting up a confrontation with Luke on a Star Destroyer, they end up in Cloud City. This led to the next blunder…
Vader Miscalculates Lando’s Reservation Value
After landing in Cloud City, Vader and Lando Calrissian engage in crisis bargaining, a subject international relations scholars have explored at length. A mutually beneficial agreement certainly existed—rejection would have led to costly conflict, and we have a nice theorem that says there is a range of settlements mutually preferable to war under these conditions.
Something like that appears to have happened in Cloud City. Vader’s initial offer was generous, demanding Han and giving Lando autonomy otherwise. But Vader kept demanding more—including Chewbacca and Leia—causing Lando to eventually begin a fight.
This proved fatal to the Empire. Lando freed Leia. Freeing Leia allowed them to recover Luke. Luke ultimately turned Vader, and Vader killed the Emperor. Meanwhile, Lando led the mission to destroy the second Death Star. All because Vader kept demanding more.
In short, this is because your first offer—if it was truly optimal in the first place—maximizes your payoff conditional on acceptance. So if you try to extract more, the potential gains cannot be worth the additional risk of rejection.
The Jedi are supposed to be master negotiators. I can only conclude that Anakin slept through his bargaining classes as a padawan.
On Season 1, Episode 9 of The Newsroom, Will McAvoy pitches a new structure for a potential Republican primary debate hosted by cable news channel ACN. Rather than asking questions and letting candidates speak freely, Will wants the ability to interrupt any time a candidate goes off topic or drifts away from the question. Predictably, the Republican National Committee hates the idea and doesn’t give the network a debate.
Maggie’s recap of the day’s events clearly shows where the crew went wrong:
It was exactly as crazy as it sounded. Maggie is applying non-strategic thinking to a situation where there is clear strategic interdependence. Each network can choose what to offer the RNC, and the RNC can pick the terms most favorable to it. This gives each network an incentive to undercut the other until no one is willing to undercut any further. Standard bargaining theory tells us that basically all of the surplus will go to the RNC under these conditions.
But there is another facet of the interaction here that extends past basic bargaining theory. In standard price negotiations, if I don’t ultimately buy the good, I don’t care at all what you paid. That is not the case here. The lower the “price” a network is willing to offer, the more all the non-winners suffer—i.e., if CNN captured the debate by conceding the farm, journalists at CBS News, NBC News, and ACN are all worse off than had CNN captured the debate without compromising its integrity. So what we have here is essentially a collective action problem, which is just a prisoner’s dilemma with more than two players. Everyone is worse off in equilibrium than had all players agreed to cooperate, but individual incentives mandate that all parties defect.
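A toy payoff function shows the structure. The numbers below are illustrative assumptions, chosen only so that conceding terms to the RNC (defecting) strictly dominates while universal defection leaves every network worse off than universal cooperation.

```python
# An n-player prisoner's dilemma in the spirit of the networks' bidding
# war. Each other defector erodes everyone's payoff; defecting yourself
# gives a small private edge regardless of what the others do.

def payoff(my_move, n_other_defectors):
    base = 10 - 3 * n_other_defectors
    return base + (2 if my_move == "defect" else 0)

# Defection strictly dominates for any number of other defectors...
assert all(payoff("defect", k) > payoff("cooperate", k) for k in range(4))
# ...yet with four players, all-defect is worse than all-cooperate:
assert payoff("defect", 3) < payoff("cooperate", 0)
print("defect dominates, but everyone defecting leaves everyone worse off")
```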
There is some irony here. Earlier, the news team reduced Sloan Sabbath’s airtime to run stories on the Casey Anthony trial. They needed to do this to improve ratings to make them an attractive host for the debate. But Sloan Sabbath has an economics PhD from Duke and is generally frustrated that no one around her understands basic economics or is willing to spend any time learning. The one person who could have saved them from the situation was ignored! (Or maybe she never spoke up as a perverse way of getting her revenge…)
TL;DR: McAvoy et al. do not know how the prisoner’s dilemma works.
With the death of Justice Antonin Scalia over the weekend, the scramble has begun to make sense of the nomination process. Senate Republicans are (predictably) arguing that the seat should remain unfilled until after the 2017 election, presumably so a Republican president could potentially select the nominee. Senate Democrats and President Obama (predictably) feel differently.
Overall, it seems that people doubt that Obama will resolve the problem. But most of the arguments for why this will happen fail to understand basic bargaining theory. That’s what this post is about. In sum, nominees exist that would make both parties better off than if they fail to fill the vacancy. Any legitimate argument for why the seat will remain unfilled until 2017 must address this inefficiency puzzle.
You can watch the video above for a more thorough explanation, but the basic argument is as follows. The Supreme Court has some sort of status quo ideological positioning. This factors in who the current median justice is, the average median of the lower courts (which matters because lower courts break any 4-4 splits from the Supreme Court), and (most importantly) expected future medians. That is, one could think about the relative likelihoods of each presidential candidate winning the election and the type of nominee that president would select, and project that into this “status quo” ideology.
Confirmation of a new justice under Obama would change that ideological position. In particular, Obama’s goal is to shift the court to the left. Republicans want to minimize the shift as much as possible.
Nevertheless, failing to fill the position is costly and inefficient for the court. In other words, ideology aside, leaving the seat unfilled hurts everyone. These costs come from overworking the existing justices, wasting everyone’s time debating these issues incessantly, and generally making the federal government look bad. Due to these costs, each side is better off slightly altering the ideological position of the court in a disadvantageous way to avert the costs.
Visually, you can think of it like this:
(If that isn’t clear, I draw it step-by-step in the video.)
Thus, any nominee to the right of the Republicans’ reservation point and to the left of the Democrats’ reservation value is mutually preferable to leaving the seat unfilled.
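As a sketch of that interval logic, put ideology on a left-right number line (negative is more liberal). The specific reservation points below are made-up assumptions; in the model they are determined by each side's costs of bargaining breakdown.

```python
# A nominee is confirmable when both sides prefer them to the costly
# unfilled seat: to the right of the Republicans' reservation point and
# to the left of the Democrats'.

def acceptable(nominee, rep_reservation, dem_reservation):
    return rep_reservation <= nominee <= dem_reservation

# Higher breakdown costs widen the interval; these bounds are illustrative.
print(acceptable(0.0, -0.3, 0.4))   # True: a moderate is mutually preferable
print(acceptable(-0.8, -0.3, 0.4))  # False: too far left for Republicans
```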
This simple theoretical model helps make sense of the debate immediately following Scalia’s death. Senate Majority Leader Mitch McConnell says “The American people should have a voice in the selection of their next Supreme Court justice. Therefore, this vacancy should not be filled until we have a new president.” But what he’s really saying is “Our costs of bargaining breakdown are low, so you’d better nominate someone who is really moderate, otherwise we aren’t going to confirm him.”
And when Senator Elizabeth Warren says…
Abandoning their Senate duties would also prove that all the Republican talk about loving the Constitution is just that – empty talk.
…what she really means is “Mitch, your costs are high, so you are actually going to confirm whomever we want.”
To be clear, the existence of mutually preferable justices does not guarantee that the parties will resolve their differences. But it does separate good explanations for bargaining breakdown from bad ones. And unfortunately, the media almost exclusively give us bad ones, essentially saying that they will not reach a compromise because compromise is not possible. Yet we know from the above that this intuition is misguided.
So what may cause the bargaining failure? One problem might be that Obama overestimates how costly the Republicans view bargaining breakdown. If Obama believed the Republicans thought it was really costly, he’d be tempted to nominate someone very liberal. But if the Republicans actually had low costs, such a nominee would be unacceptable, and we’d see a rejection. (This is an asymmetric information problem.)
A more subtle issue is that presidents have a better idea of a nominee’s true ideology than senators do. Maya Sen and I explored this issue in a recent paper. Basically, such uncertainty creates a commitment problem, where the Senate sometimes rejects apparently qualified nominees so as to discourage the president from nominating extremists. Unfortunately, this problem gets worse as the Senate and president become more ideologically opposed, and polarization is at an all-time high.
In any case, I think the nomination process highlights the omnipresence of bargaining theory. Knowing the very basics—even just a semester’s worth of topics—helps you identify arguments that do not make coherent sense. And you will be hearing a lot of such arguments in the coming months regarding Scalia’s replacement.
I will assume that most readers are familiar with Hotelling’s game/the median voter theorem. If not, the basic idea is that two ice cream vendors are on a beach that stretches the 0-1 interval. Customers are uniformly distributed along that interval. The vendors simultaneously select a position. Customers go to the closest vendor and split themselves evenly if the vendors choose an identical position. Each vendor wants to maximize its number of customers.
(You can reframe the question as two candidates placing themselves along an ideological spectrum, with citizens voting for whichever one is closest.)
The Nash equilibrium is for both vendors to select the median location (.5); doing this guarantees each vendor half the business, but deviating to any other point generates strictly less. Full details here:
But what happens when there are more than two players? Someone posed that question to me earlier today. It turns out, the solution for this game is complicated when there are an odd number of players. But fairly simple pure strategy Nash equilibria exist for an even number of players:
Proposition. For an even number of players n, the following is a pure strategy Nash equilibrium to Hotelling’s game: exactly two players choose each of the locations 1/n, 3/n, …, (n-1)/n.
So, for example, for n = 2, two players occupy the position 1/2. (This is the median voter theorem.) For n = 4, two players occupy 1/4 and two players occupy 3/4. (No one occupies the median!) For n = 6, two players occupy 1/6, two players occupy 3/6, and two players occupy 5/6. (The median is back!) And so forth.
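The proposition's locations are easy to generate. A quick sketch in Python, using exact fractions to avoid floating-point noise:

```python
# Equilibrium locations for an even number of vendors: two vendors at
# each of 1/n, 3/n, ..., (n-1)/n.
from fractions import Fraction

def equilibrium_positions(n):
    assert n % 2 == 0, "the proposition covers even n only"
    spots = [Fraction(k, n) for k in range(1, n, 2)]
    return [p for p in spots for _ in range(2)]  # two vendors per spot

print(equilibrium_positions(4))  # two vendors at 1/4, two at 3/4
print(equilibrium_positions(6))  # pairs at 1/6, 1/2, and 5/6
```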
This has two interesting implications. First, the median voter theorem basically disappears. In half of these cases (i.e., when n/2 is itself even), no player occupies the median at all. In the other half of the cases, only two do. And even then, as n increases, the percentage of total players occupying the median goes to 0.
Second, quality of life for consumers is greatly enhanced compared to the n = 2 version of the game. Under those circumstances, some consumers would have to travel a full half-interval to reach one of the ice cream vendors. But as n increases, the vendors progressively spread out more and more. Things aren’t perfect—the vendors could spread out further (out of equilibrium) by dividing themselves uniformly—but it’s a step up if you previously thought that all of the vendors would take the median.
The proof isn’t terribly insightful, but here goes. In equilibrium, each individual earns a payoff of 1/n. This is because each position attracts the 1/n share of customers on either side of it. That 2/n sum is divided two ways between the two players occupying the position, so each earns 1/n.
The rest of the proof involves showing that there are no profitable deviations. There are three cases to consider. First, consider deviating to any other occupied position. Now three individuals are placed at the same spot. Their collective domain remains only a 2/n portion of the interval. This is now split three ways, giving the individual a payoff of 2/(3n), which is strictly less than 1/n.
Second, consider a deviation to a position beyond one of the more extreme locations—i.e., less than 1/n or greater than (n-1)/n. The argument is symmetric, so I will only consider an amount less than 1/n. Let x < 1/n be the deviation position. Consider the following hastily made but nonetheless helpful figure:
Because customers go to the closest location, the deviator takes all the customers between 0 and x as well as all customers to the left of the midpoint between x and 1/n, which is (x + 1/n)/2. The picture clearly shows that this is not a profitable deviation: the deviator’s share is now strictly smaller than the interval between 0 and 1/n, which is how much the deviator would have received if he stayed put. (Some algebra quickly verifies this.)
Finally, consider a deviation to a position between two locations already being occupied. Formally, we can describe this as any position between (m+1)/n and (m+3)/n, for any even integer m. For example, that could be any spot between 1/n and 3/n, 3/n and 5/n, and so forth. Letting y be the deviator’s new position, here’s another hastily made but helpful figure:
Now the deviator’s captured customers are bounded on the left by the midpoint between (m+1)/n and y and bounded on the right by the midpoint between y and (m+3)/n. It takes some algebra to show, but this winds up being exactly equal to 1/n. In other words, the deviation isn’t profitable.
That exhausts all possible deviations. None are profitable, and thus this is a Nash equilibrium.
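The three deviation cases can also be checked numerically. The helper below computes each vendor's market share on [0, 1], splitting ties evenly among co-located vendors; it is a verification sketch, not part of the proof itself.

```python
# Market shares on [0, 1]: each distinct location captures the interval
# between the midpoints to its neighbors, split among co-located vendors.
from fractions import Fraction

def shares(positions):
    pos = [Fraction(p) for p in positions]
    distinct = sorted(set(pos))
    cuts = ([Fraction(0)]
            + [(a + b) / 2 for a, b in zip(distinct, distinct[1:])]
            + [Fraction(1)])
    share_at = {p: (cuts[i + 1] - cuts[i]) / pos.count(p)
                for i, p in enumerate(distinct)}
    return [share_at[p] for p in pos]

n = 4
eq = [Fraction(1, 4)] * 2 + [Fraction(3, 4)] * 2
assert shares(eq) == [Fraction(1, n)] * n  # everyone earns 1/n in equilibrium

# The three deviation cases for vendor 0: join an occupied spot (payoff
# 2/(3n)), move outside the extremes, or slot between occupied spots
# (payoff exactly 1/n). None is profitable.
for dev in (Fraction(3, 4), Fraction(1, 8), Fraction(1, 2)):
    assert shares([dev] + eq[1:])[0] <= Fraction(1, n)
print("no profitable deviation found for n = 4")
```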
A couple notes. First, this doesn’t extend to odd cases, which tend to have…well…odd solutions. Second, there are other Nash equilibria for an even number of players, including some weird mixing.
Powerball is rolling over once again, with organizers projecting a $1.3 billion jackpot on Wednesday. I have read lots of articles discussing some of the pure statistics of the game—the return on investment of each ticket, the likelihood that no one wins—but not so much on the interesting strategic facets of the game. That is what this post is about.
Unfortunately, we cannot discuss strategy without briefly talking about money. So let’s start there.
The Jackpot Is Huge—So Powerball Is Profitable, Right?
Not quite. Showing that it isn’t requires a lot of math, so I have saved that for a separate post. But the summary is as follows. Based on the lump-sum prize, lottery officials are expecting to sell ~413 million tickets for Wednesday’s draw. (This is probably a gross underestimation; we will get to that in a moment.) Taking that at face value, suppose you had the bankroll to purchase all 292 million or so combinations. The binomial distribution gives a good estimate of how many people you would have to share your winnings with; your expected jackpot share winds up at roughly $525 million. Factor in an additional $93 million in consolation prizes, and your expected haul is $618.6 million. With the cost of tickets only $584 million, it would appear you have a $34 million profit…
…until taxes come. While the IRS allows you to deduct gambling losses—here, a nontrivial $584 million—you still must pay approximately $47 million in taxes. Thus, your scheme operates at a slight loss.
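The jackpot-sharing step of that calculation can be sketched. The ticket-sales figure comes from the estimate above; the lump-sum jackpot value is an assumed placeholder, and the closed-form identity E[1/(1+K)] = (1 - (1-p)^(t+1)) / ((t+1)p) for K ~ Binomial(t, p) does the work.

```python
# Expected jackpot share if you hold a winning ticket while t other
# tickets are in play, each hitting your numbers with probability p.

p = 1 / 292_201_338      # one jackpot combination out of ~292.2 million
t = 413_000_000          # other tickets sold (the estimate above)
jackpot = 800_000_000    # assumed lump-sum value; illustrative only

# E[1 / (1 + K)] for K ~ Binomial(t, p), in closed form:
share_factor = (1 - (1 - p) ** (t + 1)) / ((t + 1) * p)
print(f"expected share of the pot: {share_factor:.3f}")
print(f"expected jackpot payout: ${jackpot * share_factor / 1e6:.0f} million")
```

With these sales numbers you expect to keep only a bit over half the pot, which is why the headline jackpot so badly overstates the value of cornering every combination.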
What would happen if the prize were significantly higher, perhaps because no winner emerges on Wednesday? Well, even factoring in an anticipated spike in regular purchases, history suggests all the profits will quickly disappear. Beginning in 2005, multiple groups realized that Massachusetts’ Cash Winfall lottery provided profitable draws on a semi-regular basis. (The full story is long but fascinating.) Yet each of these groups began pouring more money into the scheme, driving down the profitability for everyone. In essence, economic theory prevailed—the “free money” quickly disappeared.
Massachusetts: Home of Beatable Lotteries
Cash Winfall was also unique in that exploiting it did not require buying every single number combination, which meant investors were not exposed to massive risk. That risk is exactly what an Australian group encountered in 1992 when it tried to purchase all seven million combinations of numbers in the Virginia lottery. The group only managed to buy five million tickets before time ran out. Note that those five million tickets amount to just 1.7% of the roughly 292 million combinations you would have to buy to cover Powerball.
Thus, even if Powerball were to roll over one more time, we would likely see a similar phenomenon. Only groups that had mastered the logistics could hope to exploit the system, and those that did would keep Powerball virtually unprofitable, though less unprofitable than under regular circumstances. Players who would not otherwise tolerate the risk will therefore buy tickets, if they have the time to endure long lines.
You cannot change the probability that you win the jackpot, but you can raise your expected payoff by picking numbers that others will not choose.
To illustrate this, think of a simple $1 million lottery with just two number combinations and two players who can each purchase one ticket. Any ticket you buy has a 50% chance of winning the jackpot. If you buy the combination the other player does not, you have a 50% chance at $1 million. But if you purchase the same combination, you have a 50% chance at just $500,000. You have cut your expected value in half.
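The toy lottery’s arithmetic is easy to sketch in a few lines of Python (a hypothetical illustration of the sharing effect, not anything specific to Powerball):

```python
# Toy lottery: two combinations, $1,000,000 jackpot, one ticket each.
JACKPOT = 1_000_000
P_WIN = 0.5           # either combination is equally likely to be drawn

# You hold a DIFFERENT combination from the other player:
# when your number comes up, the jackpot is all yours.
ev_different = P_WIN * JACKPOT        # $500,000

# You both hold the SAME combination: a win means splitting the pot.
ev_same = P_WIN * (JACKPOT / 2)       # $250,000

print(ev_different, ev_same)
```

Same odds of winning in both cases; picking the crowded number simply halves the payoff when you do win.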
In practical terms, this suggests that you should avoid picking numbers 1-31. Many players like to choose combinations that contain their birthdays, which leaves those numbers over-selected.
(Of course, if all lottery players read this and followed my advice, then suddenly numbers 1-31 look attractive. Behold the complications of strategic interaction!)
Perhaps you knew about the birthday trap. But did you know to steer clear of fortune cookies as well? Apparently, some fortune writers duplicate their suggested lottery picks ad nauseam. Once, a fortune cookie got it mostly right, matching five numbers but not the Powerball. An astonishing 110 people won the second prize of $100,000, with some garnering an additional $400,000 for buying a more expensive ticket.
While second prize pays a fixed amount, it would have been a disaster had they hit the Powerball number as well: each person’s lump-sum payoff would have been only $122,727. That’s less than those fancy expensive tickets would have paid for second prize!
It’s a trap!
Forecasting Jackpot Amounts
Lottery officials face a difficult challenge in forecasting the jackpot for Wednesday’s drawing. Under normal circumstances, forecasting is easy—they have plenty of data on regularly-sized jackpots, so they can use the historical record to make clean estimates. The $1.3 billion jackpot is unprecedented, though, and extrapolating far beyond the data you have is really difficult.
Obviously, for the sake of credibility and paying for everything, Powerball would not want to advertise a jackpot it will not actually reach. But Powerball officials are playing a more subtle game. Their incentive is to sell as many tickets as humanly possible. One way they can push for more sales is through advertisements. And this is why it behooves them to deliberately underestimate the jackpot. As the drawing gets closer, lottery officials can update the jackpot with progressively higher numbers. They then send out press releases, and news organizations are all too happy to put the update into the cycle once again.
In other words, deliberate underestimation is a crafty media ploy.
And that’s some of the game theory behind the record lottery.
No one hit Powerball’s record $913 million jackpot on Saturday. This puts us in uncharted waters, with the annuity payoff for the next drawing sure to exceed $1 billion.
Such large numbers have already started making people wonder whether Powerball is profitable—that is, could an investment in a lottery ticket actually return money in expectation?
That is what this post explores. Spoiler alert: the answer is no, but it is remarkably close.
Heading into Mathland
Okay, we need to do a lot of math to answer this question. I am going to make an assumption that makes a lot of these calculations more exact. If Powerball were profitable, you would want to buy all the lottery tickets you could. This not only generates a larger payoff for you on average, but it also hedges your risk: if you buy every combination of numbers, you are guaranteed to hit the jackpot. As such, I will be calculating the value of buying every combination of tickets. (Sidebar: If you have the $584.4 million necessary to do this, you and I should become friends.)
Now to calculate your expected winnings. Let’s focus on the jackpot first. There are layers of complication here. To obtain your share of the winnings, we first need to know how likely you are to share your jackpot with other players. But we cannot calculate that without knowing how many other tickets are sold. And once we have that information, we need to figure out just how large the jackpot is going to be.
Let’s tackle each of those problems one at a time.
Step 1: Calculate the Number of Other Tickets Sold
As it turns out, finding this information is difficult because (1) the lottery does not make it easy to figure out how many tickets it expects to sell and (2) lottery officials are not really sure at this point anyway, since we are in uncharted jackpot territory.
Nevertheless, some detective work will help here. If we can figure out how much money Powerball adds to the jackpot per ticket sold, we can use the estimated jackpot as an approximation. Unfortunately, this figure is also difficult to find—in no small part because officials want to obfuscate the small percentage that actually goes to the winner. I read a bunch of news articles throwing various figures around but without citing any source. According to this official memo (warning: awkward Word document download link), Powerball has paid $16.5 billion to winners against $55.8 billion in sales. The implication here is that roughly 30% of ticket revenue funds the jackpot.
(An earlier version of this post cited an NBC News article about the breakdown of ticket revenue. A helpful user noted the problem, launching me into this deeper investigation.)
This gives us a convenient method to estimate the number of tickets sold. Extrapolating from the Powerball memo, the formula for tickets sold is:
($0.60) × (Number of Tickets Sold) = Jackpot Contribution
Substituting $248 million for Jackpot Contribution and doing some algebra, we arrive at 413,333,333 and a third tickets sold. I will generously round down to 413,333,333 for the remainder of this post.
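For the spreadsheet-averse, the estimate is a one-liner (using the ~30% revenue share implied by the memo):

```python
TICKET_PRICE = 2.00
JACKPOT_CUT  = 0.30                   # ~30% of revenue feeds the jackpot, per the memo
per_ticket   = TICKET_PRICE * JACKPOT_CUT     # $0.60 per ticket

contribution = 248_000_000            # this drawing's addition to the jackpot
tickets_sold = contribution / per_ticket
print(round(tickets_sold))            # 413333333
```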
Note that this is a rosy outlook. Lottery officials seem to consistently underestimate future jackpots. (This is partly because they do not want to falsely advertise more money but mostly because updating the jackpot to progressively higher amounts keeps the lottery in the news cycle.) If the jackpot goes up, that is a bad thing for you—it means more potential winners are diluting your winnings while each adds only an additional 60 cents to the jackpot.
Step 2: Calculate Odds of Other Winners
Guaranteeing a win does not ensure that you will be the only winner. Indeed, with 413,333,333 other tickets being sold, there is a very good chance that you will have to share the jackpot. (To put this in perspective, under this scenario, the only way you would be a solo winner is if no one would have won in your absence.)
Fortunately, the binomial distribution makes these relative probabilities easy to calculate. To find the probability of obtaining n winners, all you need to feed the binomial distribution is n, the probability that each ticket wins the jackpot, and the number of tickets sold.
(Math note: This assumes that each set of numbers is picked with the same probability. This is true for quick picks—which are random—but not true for human players, who like birthday numbers. Still, Powerball’s website notes that the split between manual winners and quick pick winners roughly matches the split between manual and quick pick ticket sales, meaning this assumption does not stretch the truth much.)
We know each of those numbers, so we can generate the likelihood of each of these outcomes:
This does not look too bad. Almost a quarter of the time, you will be the sole winner. Another third of the time, you will share with one other person. Another quarter of the time, you will share with two. That leaves only 17% of the time when the jackpot will be heavily diluted.
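For anyone who wants to reproduce those figures, here is a sketch. It uses the exact count of 292,201,338 possible combinations and approximates the binomial with a Poisson distribution, which is numerically indistinguishable at this scale:

```python
from math import exp, factorial

N = 413_333_333                  # other tickets sold
p = 1 / 292_201_338              # chance any single ticket hits the jackpot

# Binomial(N, p) with huge N and tiny p is effectively Poisson
# with rate lam = N * p, the expected number of other winners.
lam = N * p                      # ~1.414

def p_other_winners(k):
    """Probability that exactly k OTHER tickets hit the jackpot."""
    return lam**k * exp(-lam) / factorial(k)

for k in range(4):
    print(k, round(p_other_winners(k), 3))   # ~0.243, 0.344, 0.243, 0.115
```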
Step 3: Calculate Your Share of the Jackpot Conditional on Number of Winners
At first pass, this step might seem easy: simply divide the advertised $806 million by the number of winners, and that will be your share conditional on that number of winners. But there is an extra wrinkle to the system. Because you are buying more than 292 million lottery tickets, you will single-handedly increase the jackpot by about $175 million.
Factoring this into the system, the jackpot will be worth $981,320,802.80. I am sure the lottery officials will do some pleasant rounding, but I will keep the number as-is throughout, since I am using spreadsheets for all my calculations.
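The arithmetic behind that figure, using the $0.60-per-ticket contribution from earlier:

```python
COMBOS     = 292_201_338              # every possible Powerball ticket
PER_TICKET = 0.60                     # jackpot contribution per $2 ticket
ADVERTISED = 806_000_000              # advertised lump-sum jackpot

boost   = COMBOS * PER_TICKET         # ~$175.3 million from your own purchase
jackpot = ADVERTISED + boost          # ~$981,320,802.80
```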
Expected Jackpot Winnings
Now that we have gone through those three steps, we can multiply across the expectations and sum them to calculate your expected winnings:
Whoa! Even with other people sharing the prize, you still rake in $525 million on average. That is a lot of money!
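That expectation is just the probability of each winner count times your share of the jackpot in that case, summed up. A sketch:

```python
from math import exp, factorial

JACKPOT = 981_320_802.80             # lump sum including your own contribution
lam = 413_333_333 / 292_201_338      # expected number of other winners

# Weight your share (jackpot split k+1 ways) by the Poisson probability
# of exactly k other winners, then sum.
expected_share = sum(
    (lam**k * exp(-lam) / factorial(k)) * JACKPOT / (k + 1)
    for k in range(30)               # terms past ~30 winners are negligible
)
print(round(expected_share / 1e6))   # ~525 (million dollars)
```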
Unfortunately, you also have to buy tickets. A lot of them. At $2 apiece, more than 292 million tickets set you back $584 million and change. That gives you a net loss of about $59 million. It looks like this will not be profitable…
But wait! Powerball offers lesser prizes for non-jackpot tickets. For example, four numbers plus the Powerball nets you $50,000. And since you will be playing every single combination of numbers, you are going to win a lot of these smaller prizes.
Here is a breakdown of each of those possibilities, their odds, the number of times you will win each prize, and the overall value of the non-jackpot winnings:
This turns out to be critical: the ~$93 million here plus the previous ~$525 million from before gives us an astonishing ~$618 million. Recalling that the ticket price was only $584 million, you wind up ~$34 million in the black. Hark, a profit!
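Because you hold every combination, the number of tickets in each prize tier is pure combinatorics: choose which of the 5 drawn white balls your ticket matches, which of the 64 undrawn ones fill the rest, and whether the Powerball matches. A sketch, assuming the post-2015 prize matrix (5 white balls from 69, Powerball from 26):

```python
from math import comb

# (white balls matched, Powerball matched?, prize) under the 2015 matrix
TIERS = [
    (5, False, 1_000_000),
    (4, True,     50_000),
    (4, False,       100),
    (3, True,        100),
    (3, False,         7),
    (2, True,          7),
    (1, True,          4),
    (0, True,          4),
]

total = 0
for white, pb_matched, prize in TIERS:
    # match `white` of the 5 drawn white balls; fill the rest from the 64 undrawn
    count = comb(5, white) * comb(64, 5 - white)
    # 1 matching Powerball number versus 25 non-matching ones
    count *= 1 if pb_matched else 25
    total += count * prize

print(round(total / 1e6))   # ~93 (million dollars)
```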
The Tax Man Cometh
Ah, yes, the tax problem. While the IRS is not nearly as much of an issue as you might initially suspect, it winds up being pivotal.
Why is Uncle Sam a phantom menace? The IRS allows you to deduct lottery losses—they kindly tell you that right here. And you are going to be claiming a ridiculous $584 million in losses. Going back to your jackpot winnings, note that there is only one way to exceed $584 million in prizes—you must be the sole jackpot winner. This means you will pay no taxes at all unless you take home everything.
Nevertheless, we still need to adjust for that ideal case. Let’s be optimistic about your home and suppose you live in a state with no income tax. (And let’s face it—if you have invested this much in the scheme, you have certainly migrated to one of those havens by now.) The top federal income tax bracket is 39.6%. Adjusting for that gives us this table:
So close! But sadly, we wind up about $13 million short. This lottery is not profitable.
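To reproduce the bottom line: tax only the sole-winner outcome, since that is the only case whose winnings exceed the $584 million in deductible losses. A sketch, using my own reconstructed figure for the non-jackpot winnings:

```python
from math import exp, factorial

TICKET_COST  = 584_402_676.0          # 292,201,338 tickets at $2 each
JACKPOT      = 981_320_802.80         # lump sum including your own contribution
MINOR_PRIZES = 93_466_048.0           # ~$93M non-jackpot winnings (my calculation)

lam = 413_333_333 / 292_201_338       # expected number of other winners
p_sole = exp(-lam)                    # probability no one else hits the jackpot

# Expected jackpot share across 0, 1, 2, ... other winners
expected_share = sum(
    (lam**k * exp(-lam) / factorial(k)) * JACKPOT / (k + 1) for k in range(30)
)

# Only the sole-winner outcome exceeds the deductible losses,
# so it is the only outcome that owes tax (at the 39.6% top rate).
tax_if_sole = 0.396 * (JACKPOT + MINOR_PRIZES - TICKET_COST)
expected_tax = p_sole * tax_if_sole   # ~$47 million

net = expected_share + MINOR_PRIZES - TICKET_COST - expected_tax
print(round(net / 1e6))               # ~ -13 (million dollars)
```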
Note: For reasons unclear, the IRS does not allow nonresident aliens to deduct gambling losses. So if you are a nonresident alien, you are really screwed.
What Would It Take for Powerball to Be Profitable?
As a fun bonus question, I was curious to find a reasonable breakeven point for Powerball. Given what we have seen above, this would require an even higher jackpot, which in turn would result in even heavier ticket sales. That makes reaching a breakeven point more difficult, since the lottery is more profitable the fewer other players there are.
To keep things simple, I held the number of other tickets sold fixed at 750 million. What lump-sum carryover from the previous drawing would put you in the black? About $850 million. The table looks like this:
Yes, the lump sum jackpot under these conditions is almost $1.5 billion.
Of course, if we were at this point, there would probably be a heck of a lot more lottery tickets sold. Nevertheless, it is interesting to see that we are close to this point. The current lump sum jackpot is $806 million. If that jumps to $850 million before the next drawing and no one hits the jackpot, then we would be here.
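For those who want to poke at the breakeven themselves, here is a simplified model of the whole calculation as a function of the carryover. The ~$93 million in minor prizes and the Poisson sharing approximation are my own reconstructions, so the exact crossing point may differ a bit from the $850 million figure:

```python
from math import exp, factorial

COMBOS       = 292_201_338
TICKET_COST  = 2.0 * COMBOS
OTHER_SALES  = 750_000_000            # held fixed, per the text
MINOR_PRIZES = 93_466_048             # ~$93M non-jackpot winnings (my calculation)
lam = OTHER_SALES / COMBOS            # expected number of other winners

def net_profit(carryover):
    """Expected after-tax profit of buying every combination,
    given the lump-sum carryover from the previous drawing."""
    # the jackpot grows by ~$0.60 per ticket, including your own 292 million
    jackpot = carryover + 0.60 * (OTHER_SALES + COMBOS)
    share = sum((lam**k * exp(-lam) / factorial(k)) * jackpot / (k + 1)
                for k in range(40))
    # only the sole-winner outcome exceeds the deductible ticket cost
    tax = exp(-lam) * 0.396 * (jackpot + MINOR_PRIZES - TICKET_COST)
    return share + MINOR_PRIZES - TICKET_COST - tax
```

Scanning carryover values, this sketch turns positive a bit above $800 million, in the same neighborhood as the $850 million estimate.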
Powerball is not profitable now. And even if it were, we have not yet begun getting into the logistics of pulling a stunt like this. How would you buy 292 million lottery tickets? How would you even sort 292 million lottery tickets looking for the one winner? Where do you store the losing tickets in case of a tax audit? If you have more than half a billion dollars floating around, don’t you have better things to do with your money? It’s a giant mess.
That said, Powerball is less unprofitable now than it is under normal circumstances. So if you are inclined to buy lottery tickets in general, now is a good time for it.
* * *
I have saved the code and tables for all of the calculations I made here. If you think you have spotted something I missed, please email me or leave a comment.