Blog

Fun with Institutional Design: Seattle Seahawks Edition

A central lesson that game theory and institutional design teach us is that actors play to the rules that the institution creates, not the behaviors that the institution intends to elicit. Thus, you have to think through the perverse incentives that an institution might create.

Contract incentives in sports are among the worst offenders in this regard. These incentives usually read “if you achieve x milestone, you will receive $y bonus.” The key design problem is the discontinuity. There is basically no substantive difference from a team’s perspective between 210 innings pitched and 209 and 2/3rds innings pitched. Yet you will have contracts that create massive paydays at 210 but provide nothing extra at 209 and 2/3rds. (See Phil Hughes as an example.)

Today’s contract failure comes from the Seattle Seahawks. Wide receiver David Moore’s contract called for a $100,000 bonus if he caught 35 passes this season. With 22 seconds left in Week 17 and Seattle up three points, Moore sat at 34. In a world that makes sense, Seattle would take a knee and win the game. Coach Pete Carroll called for just that.

Instead, this happened:

Apparently Russell Wilson knew about the incentive and changed the play in the huddle so that Moore could hit the guarantee—a great response to the incentive structure put in front of them! But also a completely unnecessary injury risk for a team that had secured a win and a spot in the playoffs.

The solution to this type of problem is to eliminate the crazy discontinuities in contract incentives. If Seattle thinks that Moore should be incentivized to hit catch goals, they should incrementally pay some amount of money per catch rather than concentrate all of the value in the 35th.
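
To make the fix concrete, here is a minimal Python sketch comparing the two payment schemes. The $100,000 and the 35-catch threshold come from the story above; the per-catch rate is an invented illustration, not Seattle's actual contract language.

```python
def cliff_bonus(catches, threshold=35, bonus=100_000):
    """All-or-nothing incentive: $0 at 34 catches, $100,000 at 35."""
    return bonus if catches >= threshold else 0

def per_catch_bonus(catches, rate=100_000 / 35):
    """Smoother alternative: the same total money, paid out per catch."""
    return round(catches * rate)

for c in (34, 35):
    print(c, cliff_bonus(c), per_catch_bonus(c))
# 34     0  97143
# 35 100000 100000
# Under the cliff design, the 35th catch is worth $100,000 by itself; under
# the incremental design it is worth about $2,857, so no single meaningless
# pass is worth an injury risk with the game already won.
```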

Explainer: The Median Voter Theorem, Turnout, and the Democratic Primary

[The following is a lightly-edited and annotated transcript of the above video.]

This primary season is a rare time that results from game theory are receiving mainstream media attention. As such, it is worth taking a moment to talk about how the median voter theorem and turnout are affecting the battle between Bernie Sanders and Joe Biden.

Let’s start with the central argument Biden’s camp is making. It is basically an appeal to the median voter theorem. In practice, the median voter theorem says that, in a two-candidate election, the candidate with the ideological position closest to the median voter is most likely to win.
To diagram this, imagine we put every voter in the United States on a left-right spectrum. Here, to make things simple, let’s suppose there are only five.

We call the middle-most person the “median voter”.

Now consider the ideological positions of Donald Trump and Joe Biden. Biden is a moderate democrat. He’s left-of-center, but not by too much. Trump, on the other hand, is far to the right.

If everyone votes for the candidate closest to their preferred position, whomever the median voter supports will win the election. This is because everyone to the median voter’s left or everyone to the median voter’s right will vote the same way as the median voter does, and that is enough to guarantee more than half the vote.

Here, the median voter supports Biden. As such, Biden is likely to win.

In contrast, Sanders is a very progressive candidate. He has an ideological position far to the left. In fact, it is further to the left of the median voter than Trump is to the right of the median voter. Now the median voter supports Trump, so Trump is more likely to win the election. This is why moderate Democrats say that Biden is the more “electable” candidate.

Sanders supporters have an interesting counterargument. In their view, the median voter theorem isn’t “wrong”, just underspecified. Who the median voter is depends on who is turning out for the election. Their bet is that Sanders will inspire a bunch of young, liberal voters to come to the polls.
And with more liberal voters, the median voter shifts to the left. Now Sanders can win despite his ideology.
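
Here is a small Python sketch of both arguments. The ideology scores are invented for illustration (negative is left, positive is right); the point is the logic of closest-candidate voting, not an empirical estimate.

```python
import statistics

def vote_count(voters, candidates):
    """Each voter backs the closest candidate; return each candidate's total."""
    votes = {name: 0 for name in candidates}
    for v in voters:
        closest = min(candidates, key=lambda name: abs(candidates[name] - v))
        votes[closest] += 1
    return votes

# Invented positions: Biden slightly left of center, Trump far right,
# Sanders even further left than Trump is right.
biden_matchup = {"Biden": -0.2, "Trump": 0.8}
sanders_matchup = {"Sanders": -0.9, "Trump": 0.8}

electorate = [-0.8, -0.4, 0.1, 0.5, 0.9]
print(statistics.median(electorate))            # 0.1: the median voter
print(vote_count(electorate, biden_matchup))    # Biden wins 3-2
print(vote_count(electorate, sanders_matchup))  # Trump wins 3-2

# The Sanders camp's bet: new young, liberal voters change who the median is.
expanded = electorate + [-0.7, -0.6]
print(statistics.median(expanded))              # -0.4: the median has moved left
print(vote_count(expanded, sanders_matchup))    # now Sanders wins 4-3
```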

There are two counterarguments to the Sanders camp’s position. First, we haven’t seen young voter turnout rise substantially during the primary season.

Second, as best we political scientists can estimate, the turnout necessary to compensate for Sanders’ more extreme ideology would be unprecedented. That doesn’t mean it won’t happen—Sanders’ campaign has been historic for many reasons—just that it isn’t obvious that it will.

If you are interested in learning more about the median voter theorem, click the link for a lecture just on the subject. It’s also a topic I cover in Chapter 4 of Game Theory 101: The Complete Textbook.

Focal Points in Breath of the Wild

In my estimation, The Legend of Zelda: Breath of the Wild is the greatest video game of all time. There are a lot of reasons I think this is true. But, for now, I want to focus on the reason applicable to this website: game theory.

For the narrow set of readers familiar with both BotW and game theory, this might seem puzzling. Game theory is, by definition, the study of strategic interdependence. BotW is a one player game and would thus seem relegated to the field of decision theory.

On the contrary, a good portion of your play time is actually a two player game. It’s you and the game developer, and it’s a cooperative interaction.

Introducing the Game
BotW gives you the premise of the game-within-the-game right off the bat. As Link departs the Shrine of Resurrection, the camera guides you down a clear path. The first stop is a chat with the mysterious man at the campfire. Immediately afterward, Link finds a pointed ledge overlooking a pond.

[Image (korok): the ledge overlooking the pond]

Without telling you, the game is communicating a clear message. It is giving you a diving-board-shaped rock, equipped with its own water lily target. The game begs you to jump in. It then rewards Link with his first Korok seed.

Mr. Korok then tells you how the rest of this subquest will play out. Other Koroks are hidden, and you need to find them.

From here, the developers could have gone in two directions. One would have been a disaster. The other would make the game fun. They chose the latter.

Focal Points and Thomas Schelling
To get the fun, we must first take a step back and learn about focal points. In The Strategy of Conflict, Thomas Schelling proposes the following problem. Suppose I instantly transport you and a friend to New York City. You have no way to communicate with each other but need to reunite. Where do you go to meet, and when do you do it?

This question has no “correct” answer. Anything you say—no matter how clever or ridiculous—could be right, as long as your friend would say the same thing. Yet despite the infinite number of possibilities, Schelling found that people had a predisposition to choose Grand Central Station at noon.

Why? Schelling describes such a time and place as a “focal point.” They stand out for one reason or another, which makes them easier to coordinate on. Noon is the middle of the day, so it is a sensible time to meet someone else. Grand Central is a crossroads of New York, so it also seems appropriate.
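
For readers who want to see the game behind that story, here is a minimal Python sketch of a meeting game with three made-up locations (only Grand Central at noon comes from the text). Every profile in which the friends pick the same spot is a Nash equilibrium, which is exactly why the theory alone cannot select one; the focal point does that work.

```python
from itertools import product

locations = ["Grand Central at noon", "Times Square at noon", "The Met at 3pm"]

def payoff(a, b):
    # Both friends get 1 if they meet, 0 otherwise.
    return (1, 1) if a == b else (0, 0)

def is_nash(a, b):
    # Neither friend gains by unilaterally switching to another location.
    u_a, u_b = payoff(a, b)
    return (all(payoff(alt, b)[0] <= u_a for alt in locations) and
            all(payoff(a, alt)[1] <= u_b for alt in locations))

print([profile for profile in product(locations, repeat=2) if is_nash(*profile)])
# Every "same location" pairing is an equilibrium; the game itself cannot
# break the tie between them. Schelling's focal point is what does.
```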

The Developer’s Strategy
Nintendo could have placed the Koroks in the most obscure of places and made it as hard as humanly possible to find all of them. But they didn’t. Instead, they made a good faith effort to play a focal point coordination game with the player.

Open up your map and find another pool of water nearby? If you head there, a Korok is probably waiting for you. Is one tree much, much taller than the others? You should probably climb it. See a strange rock pattern in the distance? Time to investigate.

In fact, once you discover that category of Koroks, you look at your map in a whole new light:

Tons of these perfectly circular dots of rocks are all over the map. It makes you want to scour the map for more, and it excites you when you inadvertently discover one while using the map for something else.

Once you realize the game developers are trying to make the Koroks easy to find, it completely changes your mindset. You start asking yourself “if I were a game developer who wanted to ‘hide’ Koroks in obvious places, where would I put them?”

Granted, not every Korok ends up feeling that way. The game has 900 of them, and the focal point is in the eye of the beholder. Many times, I was left dumbstruck and asking myself how anyone thought that someone could reasonably find that particular seed. Still, most of BotW rewards a type of strategic thinking that you rarely see in a one player game.

Adverse Selection and Cheap Talk in Pokémon: Let’s Go

Suppose you are a pokémon trainer, and a fellow trainer approaches you with the following proposition:

In any real life scenario, this should be a hard pass. But the situation provides a teachable moment on two fronts to justify that claim.

Adverse Selection
Suppose someone tells you they want to engage in a trade. What does that willingness tell you about whether you should do it?

In many contexts, it tells you a lot. Imagine the Exeggutor were the strongest one possible. Would the person want to exchange it for another Exeggutor? Absolutely not—no matter the strength of the other Exeggutor the person could receive in return, it would be worse than what he currently has. So the strongest possible Exeggutor would never be offered up in a trade.

Iterating this logic has an interesting implication. Consider the incentives of someone who has the second-strongest possible Exeggutor. Would that person want to exchange it? The only way this could be worth the time is if the other trainer has the strongest possible Exeggutor. But we just discovered that another trainer would not offer such an Exeggutor. So the person with the second-strongest should have no interest in trading either.

What about the third? Well, the only way this would be good is if a strongest or second-strongest Exeggutor were available. But they won’t be. So someone who owns the third-strongest should not engage in a trade either.

That logic unravels all the way down. Even a trainer with a very, very weak Exeggutor should have no interest in a trade. Although it is true that the average Exeggutor is much stronger than the one the trainer currently has, the average Exeggutor being offered for trade is not. In fact, a trainer would only want to engage in a trade if his Exeggutor is the worst possible.
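
The unraveling can be made mechanical. Here is a hedged Python sketch with made-up quality scores: a trainer stays in the market only if someone else is offering a strictly better Exeggutor, so the best remaining one drops out round after round.

```python
# Invented quality scores for the Exeggutor each trainer owns.
market = [10, 8, 7, 5, 3, 1]

# A trainer only wants to trade if someone else on the market owns a strictly
# better Exeggutor. The owner of the best one therefore pulls out, then the
# next best, and so on -- the unraveling described above.
while any(any(other > q for other in market) for q in market):
    market.remove(max(market))
    print(sorted(market))
# Prints [1, 3, 5, 7, 8], then [1, 3, 5, 7], ..., ending with [1]:
# only the very worst Exeggutor ever remains on offer.
```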

This result is well-known in game theory as a product of adverse selection—a situation where one party has private information about the value of a transaction. More specifically, it is an application of the market for lemons, which uses the same mechanism to explain why it is so difficult to buy a quality used car.

Cheap Talk
“But wait!” one might say. “The fellow trainer has assured me that he is very proud of his Exeggutor. It must not be so bad after all.”

Not quite. This is classic cheap talk. The message the other trainer is communicating could just as easily be conveyed by a trainer who actually thinks that his Exeggutor is terrible. Moreover, he does not have a common interest with you in communicating truthful information. Combining these pieces of information, you should ignore the message altogether.

To make this more concrete, imagine that you were to take the trainer at his word. Then would a trainer with a truly terrible Exeggutor want to lie? If doing so would convince you to make the trade, then the answer is yes. As a result, you cannot differentiate between an honest assessment and a fib. In turn, the message should not change your beliefs about the Exeggutor’s quality at all.
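
A quick Bayesian way to see the same point, as a sketch with an invented prior: if both a proud trainer and a disappointed one would send the identical reassuring message, hearing it leaves your beliefs exactly where they started.

```python
# Invented prior belief that the offered Exeggutor is good.
prior_good = 0.5

# Talk is free and persuading you is valuable, so both types send the message
# "I'm very proud of this Exeggutor" with probability 1.
p_msg_given_good = 1.0
p_msg_given_bad = 1.0

# Bayes' rule: P(good | message)
p_msg = prior_good * p_msg_given_good + (1 - prior_good) * p_msg_given_bad
posterior_good = prior_good * p_msg_given_good / p_msg
print(posterior_good)  # 0.5 -- identical to the prior; the message is cheap talk
```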

Nintendo and Strategic Theory
All that said, this is Nintendo we are talking about. They are not interested in teaching you interdependent strategic thinking. The man actually has an exotic Exeggutor from a region you cannot access in the game. The only way to acquire such an Exeggutor is through a trade. You therefore most definitely should make the deal with him.

But this leads to a deeper question: why doesn’t the trainer just lead with that point in the first place? Rather than speaking to your common interest—he has an Exeggutor from his region and wants one from your region, you have an Exeggutor from your region and want one from his—he just speaks strategic nonsense.

Here’s hoping that George Akerlof appears in Pokémon Sword and Shield offering the same trade. But this time, when you accept, you get a level 1 Exeggutor with all the minimum stats.

Backward Induction and Jenga

Imagine you are playing Jenga, and it is your turn. The tower looks like this:

To save you some time, almost the entire tower is “spent”—either the middle block or the two side blocks are missing from just about every level. The only exceptions are the top two rows. Jenga’s rules only allow you to take blocks from below a completed row, so the top is off limits. Without doing some ridiculous Jenga acrobatics, this gives you exactly three blocks to choose from, all of them in the second row from the top.

If you have played Jenga before, you know what normally comes next. Jenga blocks are not completely uniform; some are slightly smaller or slightly larger than others. Consequently, the tower’s weight is not usually distributed evenly across a row. If the middle block is slightly larger, then the side blocks will be loose; if the middle block is slightly smaller, then it alone will be loose.

Under normal circumstances, you should poke around that row to figure out which is the case. If the sides are loose, you take one of them; if the middle is loose, you take it.

Backward induction suggests caution here. Broadly, backward induction tells us that the best way to strategize for right now is to consider how others will respond to whatever you can do. After doing so, choose the option that maximizes your benefits given how others will maximize their benefits later. In short, the future matters; if you ignore it, trouble will find you.

Let’s say that the middle block is loose. What happens if you take it? Your opponent is in deep trouble. There are no blocks that can be removed in isolation without crumbling the tower. Instead, your opponent must take a row with the middle block missing, slowly re-position a side block into the middle slot, and then remove the other block. This is ridiculously difficult. Your opponent is almost certain to lose here, so your initial decision is straightforward.

Life is more complicated if the side blocks are loose. Suppose you take one of them. Your opponent will immediately follow up by taking the remaining side block. When the game comes back to you, the new top row will still only have two blocks on it. This means you will be forced to do the aforementioned nearly impossible task. You will almost certainly lose.

Your only other feasible option is to force out the middle block. This is a tall order—it is not loose precisely because it is supporting the weight above it. You are definitely more likely to lose this turn if you try taking it. However, if you are successful, you will place your opponent in the near-hopeless situation and almost certainly win.

Thus, depending on how you perceive those probabilities, your optimal play may be taking the more difficult block!
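
Here is that backward-induction comparison as a Python sketch. All of the probabilities are invented placeholders; the point is the structure of the calculation, not the exact numbers.

```python
# Invented probabilities, purely to illustrate the backward induction.
P_SURVIVE_SIDE   = 0.95  # you remove a loose side block without toppling
P_SURVIVE_MIDDLE = 0.50  # you force out the tight middle block without toppling
P_WIN_IF_OPPONENT_STUCK = 0.90  # opponent inherits the near-impossible move
P_WIN_IF_YOU_GET_STUCK  = 0.05  # the near-impossible move comes back to you

# Option 1: take the easy side block. Your opponent mirrors you, and the
# near-impossible move cycles back around to you.
win_take_side = P_SURVIVE_SIDE * P_WIN_IF_YOU_GET_STUCK

# Option 2: gamble on the middle block. If you survive, your opponent is the
# one left in the near-hopeless position.
win_take_middle = P_SURVIVE_MIDDLE * P_WIN_IF_OPPONENT_STUCK

print(win_take_side, win_take_middle)
# roughly 0.05 vs 0.45: with these numbers, the "harder" move is by far
# the better play -- exactly the backward-induction logic above.
```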

What we are observing here is a special case of a more general principle in zero-sum games. There are two paths to victory in such interactions: you winning and your opponent losing. Thus, you should not exclusively focus on what puts you in a good position; you need to pay equal attention to what puts your opponent in a bad position. This is why you should not try to maximize your score in Words with Friends or Scrabble. It also helps explain why trades in Monopoly or Catan don’t work with only two players.

Bargaining Theory in Action: Donald Trump and Executive Orders

Over the past seven days, you have either grown to love or hate executive orders. But regardless of your political perspective, Trump’s actions are showcasing one of the major findings of bargaining theory: namely, that proposal power is one of the greatest sources of bargaining power.

What Is Proposal Power?
Put simply, proposal power is the ability to structure the terms of a possible settlement. It is distinct from receivership, which is the ability to say yes or no to any particular offer.

Perhaps surprisingly, whether you can make an offer has an enormous impact on your welfare. Imagine a seller needs at least $30 to sell and a buyer will buy for no more than $50. Clearly, a transaction should take place, as there is a $20 surplus (the buyer’s maximum minus the seller’s minimum).

If the seller can make a take-it-or-leave-it offer, then he can set the price at $50. The buyer, only having the ability to accept or reject, buys the good—it is just enough to convince her to buy.

Now imagine the buyer makes the take-it-or-leave-it offer. She can set the price at $30. The seller, only having the ability to accept or reject, sells the good—it is just enough to make him willing to part with it.

Who proposes the offer makes a major difference. If we endow the seller with the proposal power, he walks away with the entire $20 surplus. Yet if we endow the buyer with the proposal power, the exact opposite happens: she receives the entire $20 surplus. It’s night and day!
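
The arithmetic is easy to script. A minimal Python sketch, using the $30 and $50 from the example above:

```python
SELLER_MIN = 30  # the least the seller will accept
BUYER_MAX = 50   # the most the buyer will pay
# Total surplus on the table: BUYER_MAX - SELLER_MIN = $20

def take_it_or_leave_it(proposer):
    # The proposer names the price that leaves the other side just barely
    # willing to say yes, and pockets the entire surplus.
    price = BUYER_MAX if proposer == "seller" else SELLER_MIN
    return {"price": price,
            "seller gain": price - SELLER_MIN,
            "buyer gain": BUYER_MAX - price}

print(take_it_or_leave_it("seller"))  # price 50: seller captures all $20
print(take_it_or_leave_it("buyer"))   # price 30: buyer captures all $20
```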

You might think this is because of the take-it-or-leave-it bargaining protocol. It is not. If rejecting simply leads to another round of bargaining where the proposer stays the same, the result is identical: the proposer receives the entire surplus.

The lesson here is simple: without the ability to make offers or counteroffers, the deals reached will look very bad for you.

When it comes to executive orders, the POTUS’s proposal power is YUGE.

Proposal Power in Action
In many contexts, proposal power is easy to acquire. At a flea market or car dealership, for instance, nothing stops a buyer or seller from making the initial offer or responding with a counteroffer of their own. They just do it.

Government is different. Some systems have proposal structures baked into the legal system. And this has major consequences for the types of laws that are ultimately implemented.

In the United States, most of the proposal power rests with Congress. That is because only Congress can write laws; the President can merely sign or veto whatever lands on his desk. Thus, if Congress and the President both agree that a law needs fixing, the solution implemented is going to be much closer to what Congress wants than what the President wants.

There is a key exception to that rule, however, and it is what we have been seeing over the last week. Executive orders essentially flip the script—the President institutes the policy, and it is up to Congress to accept it (by doing nothing) or reject it (by creating a bill that undoes the policy, and then overriding the inevitable presidential veto).

It should be obvious that the President choosing his own policies would result in outcomes closer to his ideological preference. I suspect liberals recognized this was an issue weeks ago but only recently appreciated how bad it would be for them. Bargaining theory goes deeper than that, though. It isn’t just that the President fares better when choosing the policy than when he signs a Congressional law. It is that a strategic, forward-thinking President can set the policy so far to his liking that Congress is just barely willing to go along with it. And if the gulf between Congressional preferences and Presidential preferences is substantial, so too will be the gap between the policy outcomes associated with executive orders and those associated with standard legislation.
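
Here is a hedged sketch of that logic: a bare-bones one-dimensional "setter" model in Python with invented ideal points, treating Congress as a single actor that can only accept the order or restore the status quo (and ignoring the veto-override wrinkle).

```python
# Invented positions on a left-right line.
STATUS_QUO = 0.0
CONGRESS   = 0.2   # Congress's ideal policy
PRESIDENT  = 1.0   # President's ideal policy

def utility(ideal, policy):
    return -abs(ideal - policy)  # closer to your ideal is better

def executive_order():
    # The President picks the policy closest to his own ideal that Congress
    # still weakly prefers to the status quo, so the order survives.
    grid = [x / 100 for x in range(-200, 201)]
    acceptable = [x for x in grid
                  if utility(CONGRESS, x) >= utility(CONGRESS, STATUS_QUO)]
    return max(acceptable, key=lambda x: utility(PRESIDENT, x))

print(executive_order())  # 0.4: Congress is just barely willing to live with it,
                          # and it sits far closer to the President than 0.0 does.
```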

This logic sets expectations for what is to come for both fans and foes of Trump. Foes should continue to fear executive orders and should not be particularly worried about laws passed through Congress. Liberals see Paul Ryan as their last remaining hope. And they might find solace there—but only for standard laws. Undoing executive orders requires having a much larger share of Congress on board, so Paul Ryan does not get them very far.

For Trump supporters, it is the opposite: they will find standard laws relatively unexciting but be thrilled with the executive orders.

Strategic Bargaining Blunders in The Empire Strikes Back

After rewatching The Empire Strikes Back over the weekend, I realized that two major blunders ultimately doom the Empire. (Original trilogy spoilers below, obviously.)

Boba Fett’s Terrible Contract
After leaving the asteroid field, the Millennium Falcon’s hyperdrive fails (again). As a Star Destroyer prepares to blow it up, Han Solo turns the Falcon around and attaches it to the Destroyer.

The plan works. Imperial troops have no idea what happened to the Falcon and assume it jumped to hyperspace. Han, meanwhile, plans to sit there until the Destroyer empties its garbage and then drift away while camouflaged in the trash.

Darth Vader, apparently well-versed in comparative advantage, had commissioned bounty hunter Boba Fett (among others) to track down the Falcon.

Contract theory tells us that the reward for finding the Falcon needs to be substantial, going above and beyond basic expenses—otherwise, a bounty hunter would not have incentive to exert maximal effort in a risky and uncertain process. Furthermore, because the Empire cannot observe the effort a bounty hunter exerts, it needs to offer that large amount regardless of the actions leading up to capture.

Resolving this moral hazard problem created a second problem, however, and it was a problem Vader did not fix. If Boba Fett knew exactly what the Falcon was up to, he had incentive to conceal that information. He could wait for the Falcon to do its thing, catch it, and then claim the substantial reward designed to induce high effort despite only exerting very low effort.

And that’s exactly what happened—Boba hid in the garbage along with the Falcon, followed it through hyperspace, and immediately contacted the Empire upon arrival in Cloud City.

Vader could have solved that problem by paying the high amount even if a bounty hunter gave away the information immediately. But he didn’t. (Or a credible commitment problem stopped him, though apparently the bounty hunters are not particularly worried about this.)
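
A hedged numerical sketch of the two contract designs, with invented payoffs, shows why "pay only on personal capture" backfires once a hunter already knows where the Falcon is:

```python
# Invented numbers, for illustration only.
REWARD = 100        # large bounty needed to motivate risky, costly search
COST_CAPTURE = 30   # cost of tailing and capturing the Falcon yourself
COST_DISCLOSE = 1   # cost of simply radioing the Empire right away

def informed_hunter_options(contract):
    if contract == "pay only on capture":
        return {"conceal and capture later": REWARD - COST_CAPTURE,
                "disclose immediately":      0 - COST_DISCLOSE}
    if contract == "pay for the tip as well":
        return {"conceal and capture later": REWARD - COST_CAPTURE,
                "disclose immediately":      REWARD - COST_DISCLOSE}

for contract in ("pay only on capture", "pay for the tip as well"):
    options = informed_hunter_options(contract)
    print(contract, "->", max(options, key=options.get))
# Under the first contract, the informed hunter conceals what he knows;
# under the second, the tip goes straight to Vader.
```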

So instead of capturing the Falcon on the Empire’s terms and setting up a confrontation with Luke on a Star Destroyer, they end up in Cloud City. This led to the next blunder…

Vader Miscalculates Lando’s Reservation Value
After landing in Cloud City, Vader and Lando Calrissian engage in crisis bargaining, a subject international relations scholars have explored at length. A mutually beneficial agreement certainly existed—rejection would have led to costly conflict, and we have a nice theorem that says there is a range of settlements that both sides prefer to war under these conditions.

The existence of settlements does not guarantee peace, though. If one side faces uncertainty—perhaps Vader does not know how resolved Lando is—then it may demand more than the opponent is willing to give up. Conflict results.
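
For the curious, here is that theorem's logic as a small sketch with invented numbers. If fighting gives Vader an expected share p of the stakes minus his cost, and gives Lando the remainder minus his own cost, then every split between those two war payoffs beats fighting for both sides.

```python
# Invented parameters for a standard crisis-bargaining setup.
p = 0.7            # Vader's probability of prevailing in a fight
cost_vader = 0.1   # Vader's cost of fighting, as a share of the stakes
cost_lando = 0.2   # Lando's cost of fighting

# War payoffs: Vader expects p - cost_vader, Lando expects (1 - p) - cost_lando.
# Any deal giving Vader a share x with p - cost_vader <= x <= p + cost_lando
# leaves both sides at least as well off as fighting.
bargaining_range = (round(p - cost_vader, 2), round(p + cost_lando, 2))
print(bargaining_range)  # (0.6, 0.9): a whole range of peaceful settlements exists
```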

Something like that appears to have happened in Cloud City. Vader’s initial offer was generous, demanding Han and giving Lando autonomy otherwise. But Vader kept demanding more—including Chewbacca and Leia—eventually leading Lando to start a fight.

This proved fatal to the Empire. Lando freed Leia. Freeing Leia allowed them to recover Luke. Luke ultimately turned Vader, and Vader killed the Emperor. Meanwhile, Lando led the mission to destroy the second Death Star. All because Vader kept demanding more.

One perspective on this is that Lando’s initial concessions caused Vader to believe that he could profitably extract more. If so, Vader made a clear error: research on this subject shows that you should not increase your demands after an initial acceptance.

In short, this is because your first offer—if it was truly optimal in the first place—maximizes your payoff conditional on acceptance. So if you try to extract more, the potential gains cannot be worth the additional risk of rejection.

The Jedi are supposed to be master negotiators. I can only conclude that Anakin slept through his bargaining classes as a padawan.

Strategic Thinking on The Newsroom

On Season 1, Episode 9 of The Newsroom, Will McAvoy pitches a new structure for a potential Republican primary debate hosted by cable news channel ACN. Rather than asking questions and letting candidates speak freely, Will wants the ability to interrupt any time a candidate goes off topic or drifts away from the question. Predictably, the Republican National Committee hates the idea and doesn’t give the network a debate.

Maggie’s recap of the day’s events clearly shows where the crew went wrong:

[Screenshots: Maggie’s recap]

It was exactly as crazy as it sounded. Maggie is applying non-strategic thinking to a situation where there is clear strategic interdependence. Each network can choose what to offer the RNC, and the RNC can pick the terms most favorable to it. This gives each network an incentive to undercut the others until no one is willing to undercut any further. Standard bargaining theory tells us that basically all of the surplus will go to the RNC under these conditions.

But there is another facet of the interaction here that extends past basic bargaining theory. In standard price negotiations, if I don’t ultimately buy the good, I don’t care at all what you paid. That is not the case here. The lower the “price” a network is willing to offer, the more all the non-winners suffer—i.e., if CNN captured the debate by conceding the farm, journalists at CBS News, NBC News, and ACN are all worse off than had CNN captured the debate without compromising its integrity. So what we have here is essentially a collective action problem, which is just a prisoner’s dilemma with more than two players. Everyone is worse off in equilibrium than had all players agreed to cooperate, but individual incentives mandate that all parties defect.
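
As a hedged sketch of that undercutting, here is a toy auction in Python with made-up concession levels: each losing network can always win the debate by conceding a bit more than the current best offer, so the bidding stops only when someone has conceded everything.

```python
# Each network's "bid" is how much editorial control it concedes to the RNC,
# measured in percentage points (0 = none, 100 = everything). Made-up offers.
bids = {"ACN": 10, "CNN": 20, "CBS": 20, "NBC": 30}

# The RNC awards the debate to the most generous bid. Any network that is not
# winning outright can profitably top the leader by a point -- so the bidding
# only stops once someone has conceded everything.
while max(bids.values()) < 100:
    leader = max(bids.values())
    for network, bid in bids.items():
        if bid < leader or list(bids.values()).count(leader) > 1:
            bids[network] = min(100, leader + 1)
            break

print(bids)
# The winning "price" races to full concession: the RNC captures the surplus,
# and every newsroom is worse off than if all four had held firm -- an
# n-player prisoner's dilemma.
```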

There is some irony here. Earlier, the news team reduced Sloan Sabbath’s airtime to run stories on the Casey Anthony trial. They needed the ratings boost to make the network an attractive host for the debate. But Sloan Sabbath has an economics PhD from Duke and is generally frustrated that no one around her understands basic economics or is willing to spend any time learning. The one person who could have saved them from the situation was ignored! (Or maybe she never spoke up as a perverse way of getting her revenge…)

TL;DR: McAvoy et al do not know how the prisoner’s dilemma works.

Bargaining and Supreme Court Nominations

With the death of Justice Antonin Scalia over the weekend, the scramble has begun to make sense of the nomination process. Senate Republicans are (predictably) arguing that the seat should remain unfilled until after the 2016 election, presumably so a Republican president could potentially select the nominee. Senate Democrats and President Obama (predictably) feel differently.

Overall, it seems that people doubt that Obama and the Senate will resolve the standoff. But most of the arguments for why bargaining will break down fail to grapple with basic bargaining theory. That’s what this post is about. In sum, nominees exist that would make both parties better off than if they fail to fill the vacancy. Any legitimate argument for why the seat will remain unfilled until 2017 must address this inefficiency puzzle.

You can watch the video above for a more thorough explanation, but the basic argument is as follows. The Supreme Court has some sort of status quo ideological positioning. This factors in who the current median justice is, the average median of the lower courts (which matters because a 4-4 Supreme Court split leaves the lower court’s ruling in place), and (most importantly) expected future medians. That is, one could think about the relative likelihood of each presidential candidate winning the election and the type of nominee that president would select, and project that into this “status quo” ideology.

Confirmation of a new justice under Obama would change that ideological position. In particular, Obama’s goal is to shift the court to the left. Republicans want to minimize the shift as much as possible.

Nevertheless, failing to fill the position is costly and inefficient for the court. In other words, ideology aside, leaving the seat unfilled hurts everyone. These costs come from overworking the existing justices, wasting everyone’s time debating these issues incessantly, and generally making the federal government look bad. Due to these costs, each side is better off slightly altering the ideological position of the court in a disadvantageous way to avert the costs.

Visually, you can think of it like this:

[Figure: the court’s ideological spectrum, with the status quo and each party’s reservation point]

(If that isn’t clear, I draw it step-by-step in the video.)

Thus, any nominee to the right of the Republicans’ reservation point and to the left of the Democrats’ reservation point is mutually preferable to leaving the seat unfilled.
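
Here is a hedged sketch of that picture in Python, with made-up ideology scores and breakdown costs, computing the set of nominees both sides prefer to an unfilled seat:

```python
# Made-up ideology scores on a left-right line (negative = liberal).
DEM_IDEAL, GOP_IDEAL = -1.0, 1.0
STATUS_QUO = 0.3                 # expected court ideology if the seat stays open
DEM_COST, GOP_COST = 0.2, 0.2    # each side's cost of leaving the seat unfilled

def acceptable(ideal, cost, nominee):
    # Confirm the nominee if doing so beats an unfilled seat plus its costs.
    return -abs(ideal - nominee) >= -abs(ideal - STATUS_QUO) - cost

nominees = [x / 100 for x in range(-150, 151)]
both = [n for n in nominees
        if acceptable(DEM_IDEAL, DEM_COST, n) and acceptable(GOP_IDEAL, GOP_COST, n)]
print(min(both), max(both))
# With these numbers, every nominee between the two reservation points is
# mutually preferable to an unfilled seat. Raise GOP_COST and the window
# widens to the left; lower it and the window shrinks -- McConnell's and
# Warren's statements are claims about exactly these costs.
```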

This simple theoretical model helps make sense of the debate immediately following Scalia’s death. Senate Majority Leader Mitch McConnell says “The American people should have a voice in the selection of their next Supreme Court justice. Therefore, this vacancy should not be filled until we have a new president.” But what he’s really saying is “Our costs of bargaining breakdown are low, so you’d better nominate someone who is really moderate, otherwise we aren’t going to confirm him.”

And when Senator Elizabeth Warren says…

…what she really means is “Mitch, your costs are high, so you are actually going to confirm whomever we want.”

To be clear, the existence of mutually preferable justices does not guarantee that the parties will resolve their differences. But it does separate good explanations for bargaining breakdown from bad ones. And unfortunately, the media almost exclusively give us bad ones, essentially saying that they will not reach a compromise because compromise is not possible. Yet we know from the above that this intuition is misguided.

So what may cause the bargaining failure? One problem might be that Obama overestimates how costly the Republicans view bargaining breakdown. If Obama believed the Republicans thought it was really costly, he’d be tempted to nominate someone very liberal. But if the Republicans actually had low costs, such a nominee would be unacceptable, and we’d see a rejection. (This is an asymmetric information problem.)

A more subtle issue is that presidents have a better idea of a nominee’s true ideology than senators do. Maya Sen and I explored this issue in a recent paper. Basically, such uncertainty creates a commitment problem, where the Senate sometimes rejects apparently qualified nominees so as to discourage the president from nominating extremists. Unfortunately, this problem gets worse as the Senate and president become more ideologically opposed, and polarization is at an all-time high.

In any case, I think the nomination process highlights the omnipresence of bargaining theory. Knowing the very basics—even just a semester’s worth of topics—helps you identify arguments that do not make coherent sense. And you will be hearing a lot of such arguments in the coming months regarding Scalia’s replacement.

Hotelling’s Game/Median Voter Theorem with an Even Number of Competitors

I will assume that most readers are familiar with Hotelling’s game/the median voter theorem game. If not, the basic idea is that two ice cream vendors are on a beach that stretches along the 0-1 interval. Customers are uniformly distributed along that interval. The vendors simultaneously select a position. Customers go to the closest vendor and split themselves evenly if the vendors choose an identical position. Each vendor wants to maximize its number of customers.

(You can reframe the question as two candidates placing themselves along an ideological spectrum, with citizens voting for whichever one is closest.)

The Nash equilibrium is for both vendors to select the median location (.5); doing this guarantees each vendor half the business, but deviating to any other point generates strictly less. Full details here:

But what happens when there are more than two players? Someone posed that question to me earlier today. It turns out that the solution to this game is complicated when there is an odd number of players. But fairly simple pure strategy Nash equilibria exist for an even number of players:

Proposition. For an even number of players n, the following is a pure strategy Nash equilibrium to Hotelling’s game: exactly two players choose each of the locations 1/n, 3/n, …, (n-1)/n.

So, for example, for n = 2, two players occupy the position 1/2. (This is the median voter theorem.) For n = 4, two players occupy 1/4 and two players occupy 3/4. (No one occupies the median!) For n = 6, two players occupy 1/6, two players occupy 3/6, and two players occupy 5/6. (The median is back!) And so forth.

This has two interesting implications. First, the median voter theorem basically disappears. In half of these cases (i.e., when n/2 is itself even), no player occupies the median at all. In the other half of the cases, only two do. And even then, as n increases, the percentage of total players occupying the median goes to 0.

Second, quality of life for consumers is greatly enhanced compared to the n = 2 version of the game. Under those circumstances, some consumers would have to travel a full half-interval to reach one of the ice cream vendors. But as n increases, the vendors progressively spread out more and more. Things aren’t perfect—the vendors could spread out further (out of equilibrium) by dividing themselves uniformly—but it’s a step up if you previously thought that all of the vendors would take the median.

Proof
The proof isn’t terribly insightful, but here goes. In equilibrium, each individual earns a payoff of 1/n. This is because each position attracts a 1/n mass of customers on either side of it. That 2/n total is divided two ways between the two players occupying the position, so each earns 1/n.

The rest of the proof involves showing that there are no profitable deviations. There are three cases to consider. First, consider deviating to any other occupied position. Now three individuals are placed at the same spot. Their collective domain remains only a 2/n portion of the interval. This is now split three ways, giving the individual a payoff of 2/(3n), which is strictly less than 1/n.

Second, consider a deviation to a position beyond one of the more extreme locations—i.e., less than 1/n or greater than (n-1)/n. The argument is symmetric, so I will only consider an amount less than 1/n. Let x < 1/n be the deviation position. Consider the following hastily made but nonetheless helpful figure:

[Figure (hotelling1): a deviation to a position x < 1/n]

Because customers go to the closest location, the deviator takes all the customers between 0 and x as well as all customers to the left of the midpoint between x and 1/n, which is (x + 1/n)/2. The picture clearly shows that this is not a profitable deviation: the deviator’s share is now strictly smaller than the interval between 0 and 1/n, which is how much the deviator would have received if he stayed put. (Some algebra quickly verifies this.)
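
For completeness, that algebra: the deviator’s share runs from 0 to the midpoint (x + 1/n)/2, so it equals (x + 1/n)/2. Because x < 1/n, this is less than (1/n + 1/n)/2 = 1/n, the payoff from staying put.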

Finally, consider a deviation to a position between two locations already being occupied. Formally, we can describe this as any position between (m+1)/n and (m+3)/n, for any even integer m. For example, that could be any spot between 1/n and 3/n, 3/n and 5/n, and so forth. Letting y be the deviator’s new position, here’s another hastily made but helpful figure:

[Figure (hotelling2): a deviation to a position y between (m+1)/n and (m+3)/n]

Now the deviator’s captured customers are bounded on the left by the midpoint between (m+1)/n and y and on the right by the midpoint between y and (m+3)/n. It takes some algebra to show, but this winds up being exactly equal to 1/n. In other words, the deviation isn’t profitable.
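
For completeness, that algebra: the deviator’s captured segment runs from ((m+1)/n + y)/2 to (y + (m+3)/n)/2, so its length is (y + (m+3)/n)/2 - ((m+1)/n + y)/2 = [(m+3)/n - (m+1)/n]/2 = (2/n)/2 = 1/n.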

That exhausts all possible deviations. None are profitable, and thus this is a Nash equilibrium.
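
For readers who prefer a computational check, here is a short Python sketch that discretizes the beach and confirms, for n = 2, 4, and 6, that no single vendor can gain by moving away from the proposed locations (all numbers are approximations on a finite grid):

```python
def shares(positions, grid=2_000):
    """Split a fine grid of customers among the vendors; ties share equally."""
    payoff = [0.0] * len(positions)
    for i in range(grid):
        customer = (i + 0.5) / grid
        dists = [abs(customer - p) for p in positions]
        nearest = min(dists)
        closest = [j for j, d in enumerate(dists) if d - nearest < 1e-12]
        for j in closest:
            payoff[j] += 1 / (grid * len(closest))
    return payoff

def check_equilibrium(n):
    # Two vendors at each of 1/n, 3/n, ..., (n-1)/n.
    equilibrium = [(2 * k + 1) / n for k in range(n // 2) for _ in range(2)]
    base = shares(equilibrium)[0]                  # should be about 1/n
    candidates = [d / 100 for d in range(1, 100)]  # possible new locations
    best_deviation = max(shares([d] + equilibrium[1:])[0] for d in candidates)
    print(f"n={n}: equilibrium payoff ~{base:.3f}, best deviation ~{best_deviation:.3f}")

for n in (2, 4, 6):
    check_equilibrium(n)
# No deviation beats the equilibrium payoff of 1/n (up to grid resolution),
# matching the proof above.
```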

End
A couple of notes. First, this doesn’t extend to odd cases, which tend to have…well…odd solutions. Second, there are other Nash equilibria for an even number of players, including some weird mixing.
