PROCEEDINGS · STRATEGIC INTERACTION · VOL. VII · NO. 6 · MAY 2026
A game has three primitives: players (decision-makers), strategies (the actions each can take), and payoffs (the value each player assigns to every outcome).[1]
Two assumptions usually accompany the model: players are rational (they maximize expected payoff given beliefs) and the structure of the game is common knowledge (everyone knows the rules; everyone knows that everyone knows; ad infinitum).
Games come in flavors: simultaneous vs. sequential; complete vs. incomplete information; one-shot vs. repeated; cooperative vs. non-cooperative. The taxonomy explodes; the toolkit handles each.
A strategy is strictly dominant if it yields a higher payoff than any alternative, regardless of what others do. If one exists, the rational player picks it. If all players have one, the equilibrium is found by simple inspection.
Iterated elimination of dominated strategies (IEDS) is a solution procedure: remove any strategy that is strictly dominated, then repeat on the reduced game until nothing more can be removed. Often a unique outcome remains.
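The procedure can be sketched in a few lines. A minimal IEDS implementation for a two-player bimatrix game (the function name `ieds` and the payoff layout are illustrative, not from any library):

```python
# Iterated elimination of strictly dominated strategies (IEDS).
# payoff_a[i][j], payoff_b[i][j]: payoffs when row plays i and column plays j.
def ieds(payoff_a, payoff_b):
    rows = list(range(len(payoff_a)))
    cols = list(range(len(payoff_a[0])))
    changed = True
    while changed:
        changed = False
        # Remove a row strictly dominated against every surviving column.
        for i in rows:
            if any(all(payoff_a[k][j] > payoff_a[i][j] for j in cols)
                   for k in rows if k != i):
                rows.remove(i); changed = True; break
        # Remove a column strictly dominated against every surviving row.
        for j in cols:
            if any(all(payoff_b[i][k] > payoff_b[i][j] for i in rows)
                   for k in cols if k != j):
                cols.remove(j); changed = True; break
    return rows, cols

# Prisoner's Dilemma: strategy 0 = Cooperate, 1 = Defect (years, negated).
A = [[-1, -10], [0, -5]]   # row player's payoffs
B = [[-1, 0], [-10, -5]]   # column player's payoffs
print(ieds(A, B))          # → ([1], [1]): only Defect survives for each
```

One pass removes Cooperate for the row player, the next removes it for the column player: the unique survivor is mutual defection.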
John Nash (1950) proved that every finite game has at least one equilibrium in mixed strategies — a profile of strategies, one per player, such that no player can do better by unilaterally deviating.[2]
Nash equilibrium is a consistency condition. It need not be efficient (Prisoner's Dilemma), unique (coordination games), or credible (some NE rest on off-path threats no rational player would carry out).
Two suspects are interrogated separately. If both stay silent, each gets 1 year. If one defects while the other stays silent, the defector goes free and the silent partner gets 10 years. If both defect, each gets 5 years.[3]
| | B: Cooperate | B: Defect |
|---|---|---|
| A: Cooperate | −1, −1 (both silent) | −10, 0 (sucker / free) |
| A: Defect | 0, −10 (free / sucker) | −5, −5 (both defect) |
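The table can be checked by brute force: enumerate every cell and test whether either player gains by deviating unilaterally. A sketch (`pure_nash` is an illustrative helper, not a library call):

```python
# Brute-force search for pure-strategy Nash equilibria in a 2x2 game,
# using the Prisoner's Dilemma payoffs above (0 = Cooperate, 1 = Defect).
A = [[-1, -10], [0, -5]]   # player A's payoffs (years, negated)
B = [[-1, 0], [-10, -5]]   # player B's payoffs

def pure_nash(A, B):
    eqs = []
    for i in range(2):
        for j in range(2):
            a_best = all(A[i][j] >= A[k][j] for k in range(2))  # A can't gain by deviating
            b_best = all(B[i][j] >= B[i][k] for k in range(2))  # B can't gain by deviating
            if a_best and b_best:
                eqs.append((i, j))
    return eqs

print(pure_nash(A, B))  # → [(1, 1)]: mutual defection, though (0, 0) is better for both
```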
The PD captures the canonical tension between individual rationality and collective good. It models arms races, climate cooperation, doping in sports, and over-fishing.
Two hunters can jointly bring down a stag (high payoff, requires cooperation) or each chase a hare alone (lower but safe). Two pure-strategy NE: (Stag, Stag) and (Hare, Hare). The first is payoff-dominant; the second risk-dominant.
A couple disagrees on opera vs. football but both prefer being together to being apart. Two NE in pure strategies (both opera, both football) plus a mixed-strategy NE. Coordination is the problem.
Two cars hurtle toward each other; first to swerve loses face. Two pure NE: (Swerve, Straight) and (Straight, Swerve). The credible commitment to NOT swerve — say, throwing your steering wheel out the window — wins. Schelling's The Strategy of Conflict built much from this.
When there is no pure NE — Matching Pennies, Rock–Paper–Scissors — players must randomize. The mixed NE is the probability distribution over actions that makes the opponent indifferent between their pure strategies.
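For a 2x2 game, the indifference condition gives the mixed equilibrium in closed form. A sketch (valid only when a fully mixed equilibrium exists, i.e. the denominators are nonzero):

```python
# Mixed equilibrium of a 2x2 game: each player mixes so that the
# OPPONENT is exactly indifferent between their two pure actions.
def mixed_2x2(A, B):
    # p = P(row plays action 0), chosen to equalize column's two payoffs.
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[0][1] - B[1][0] + B[1][1])
    # q = P(col plays action 0), chosen to equalize row's two payoffs.
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[1][0] - A[0][1] + A[1][1])
    return p, q

# Matching Pennies: row wins on a match, column wins on a mismatch.
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
print(mixed_2x2(A, B))  # → (0.5, 0.5): both players randomize 50/50
```

Note the signature feature of mixed equilibrium: your own probability is pinned down by the *other* player's payoffs, not yours.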
In tennis, Walker & Wooders (2001) showed that serve direction in professional matches is statistically indistinguishable from minimax play.
When players move in sequence, we draw the extensive form as a game tree and solve by backward induction.
Subgame Perfect Equilibrium (SPE), due to Selten (1965), refines NE: a strategy profile is SPE if it forms a NE in every subgame, including off the equilibrium path. SPE rules out non-credible threats.
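Backward induction can be run mechanically on a tree. A toy entry game (the tree, names, and payoffs are hypothetical illustration values): the incumbent's threat to fight is exactly the kind of non-credible threat SPE rules out.

```python
# Backward induction on a toy extensive-form game.
# A node is either a payoff tuple (leaf) or (player, {action: subtree}).
tree = ("Entrant", {
    "Stay out": (0, 2),                       # (entrant, incumbent) payoffs
    "Enter": ("Incumbent", {
        "Fight": (-1, -1),                    # price war hurts both
        "Accommodate": (1, 1),
    }),
})

def solve(node):
    if isinstance(node[1], dict):             # internal node: someone moves
        player, actions = node
        idx = 0 if player == "Entrant" else 1
        best = max(actions, key=lambda a: solve(actions[a])[0][idx])
        payoffs, path = solve(actions[best])
        return payoffs, [(player, best)] + path
    return node, []                           # leaf: payoffs, empty path

print(solve(tree))
# → ((1, 1), [('Entrant', 'Enter'), ('Incumbent', 'Accommodate')])
# Off the path, the incumbent would accommodate, so "Fight" deters nobody.
```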
When the same game is played many times, cooperation can emerge. The Folk Theorem (Friedman 1971) states that any feasible, individually rational payoff can be sustained as a NE of an infinitely repeated game, given a high enough discount factor.
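The threshold discount factor is a one-line computation. Under grim trigger (cooperate until the opponent defects, then defect forever), a deviation gains T − R today but costs R − P in every later round, so cooperation holds iff δ ≥ (T − R)/(T − P). With the PD payoffs above:

```python
# Grim-trigger sustainability condition in the infinitely repeated PD.
# T = temptation (defect vs. cooperator), R = mutual cooperation,
# P = mutual defection; payoffs from the table above (years, negated).
T, R, P = 0, -1, -5
delta_min = (T - R) / (T - P)
print(delta_min)   # → 0.2: any delta >= 0.2 sustains cooperation
```

Even mildly patient players clear this bar, which is why the theorem delivers such a large set of sustainable outcomes.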
Robert Axelrod's 1984 tournament asked academics to submit strategies for an iterated PD. The winner: Tit-for-Tat, submitted by Anatol Rapoport. Cooperate first; thereafter copy the opponent's last move. Four properties of winning strategies: be nice (never defect first), retaliatory (punish defection immediately), forgiving (return to cooperation once the opponent does), and clear (predictable enough for the opponent to learn the pattern).
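A miniature version of the tournament is easy to reproduce (strategy names and the 200-round length are illustrative; payoffs are from the PD table above):

```python
# Iterated PD: "C" = cooperate, "D" = defect; payoffs (row, column).
PAYOFF = {("C", "C"): (-1, -1), ("C", "D"): (-10, 0),
          ("D", "C"): (0, -10), ("D", "D"): (-5, -5)}

def tit_for_tat(my_hist, their_hist):
    return their_hist[-1] if their_hist else "C"   # copy opponent's last move

def always_defect(my_hist, their_hist):
    return "D"

def play(s1, s2, rounds=200):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        a, b = s1(h1, h2), s2(h2, h1)
        p, q = PAYOFF[(a, b)]
        h1.append(a); h2.append(b); score1 += p; score2 += q
    return score1, score2

print(play(tit_for_tat, tit_for_tat))    # → (-200, -200): cooperation throughout
print(play(tit_for_tat, always_defect))  # → (-1005, -995): one loss, then mutual defection
```

Against itself Tit-for-Tat cooperates forever; against a defector it loses only the first round, illustrating why it is both nice and retaliatory.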
Von Neumann (1928) proved the minimax theorem: in any two-player zero-sum game with finite strategies, there exists a value v such that player 1 can guarantee at least v, and player 2 can guarantee paying at most v. Poker, chess, and naval pursuit are zero-sum.
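For a 2x2 zero-sum game with no saddle point, the value has a closed form that follows from the indifference conditions. A sketch (assumes a fully mixed solution, so the denominator is nonzero):

```python
# Value of a 2x2 zero-sum game with no pure-strategy saddle point.
# A holds the row player's payoffs; the column player receives -A.
def value_2x2(A):
    (a, b), (c, d) = A
    return (a * d - b * c) / (a + d - b - c)

print(value_2x2([[1, -1], [-1, 1]]))  # Matching Pennies → 0.0: a fair game
```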
Maynard Smith (1973) extended game theory to populations. An Evolutionarily Stable Strategy (ESS) is one that, once adopted by a population of incumbents, cannot be invaded by a small fraction of mutants; ESS correspond to stable rest points of the replicator dynamic.
Hawk–Dove games, the founding example, model conflict over resources. The ESS is a mix of hawkish and dovish behavior, parameterized by the cost of fighting and the value of the prize. Many real-world equilibria — animal contests, sex ratios, plant resource allocation — match the predictions.
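The mixed ESS can be found analytically (p* = V/C when fighting costs exceed the prize, C > V) or by simulating the replicator dynamic. A sketch with illustrative values V = 2, C = 4:

```python
# Replicator dynamic for Hawk-Dove: the hawk share converges to the
# ESS mix p* = V/C when the cost of a fight exceeds the prize (C > V).
V, C = 2.0, 4.0                        # prize value, cost of losing a fight

def payoffs(p):
    """Expected payoff to Hawk and to Dove when a fraction p plays Hawk."""
    hawk = p * (V - C) / 2 + (1 - p) * V
    dove = (1 - p) * V / 2             # Dove gets 0 against Hawk
    return hawk, dove

p = 0.9                                # start with a hawk-heavy population
for _ in range(2000):
    hawk, dove = payoffs(p)
    mean = p * hawk + (1 - p) * dove
    p += 0.01 * p * (hawk - mean)      # discrete replicator step
print(round(p, 3))                     # → 0.5, i.e. p* = V/C
```

Starting from almost any interior mix, the population settles at half hawks, half doves: hawks do well when rare and badly when common.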
Mechanism design, sometimes called reverse game theory (Hurwicz, Maskin, Myerson — Nobel 2007). Instead of solving a fixed game, we design the rules to elicit desired behavior from rational agents.
Vickrey auction (1961): second-price sealed-bid. Truth-telling is a dominant strategy: the winner pays the second-highest bid, a price the winner did not set, so inflating a bid can only win at a loss and shading can only lose the item at a price below one's value.
Used in: eBay (effectively), Google AdWords (modified second-price → generalized second-price), spectrum auctions in many countries. Match-making (Gale–Shapley): NRMP, school-choice algorithms in NYC and Boston.
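A Monte Carlo sketch of why bid shading cannot help in a second-price auction (uniform valuations on [0, 1] and the 20% shade are hypothetical illustration choices):

```python
# Second-price (Vickrey) auction: compare truthful bidding with shading.
import random

def vickrey_payoff(my_bid, my_value, rival_bids):
    top_rival = max(rival_bids)
    # Winner pays the highest RIVAL bid, not their own bid.
    return my_value - top_rival if my_bid > top_rival else 0.0

random.seed(1)
truthful, shaded = 0.0, 0.0
for _ in range(10000):
    value = random.uniform(0, 1)
    rivals = [random.uniform(0, 1) for _ in range(3)]
    truthful += vickrey_payoff(value, value, rivals)
    shaded += vickrey_payoff(0.8 * value, value, rivals)   # shade bid by 20%
print(truthful >= shaded)   # → True: shading never helps and sometimes hurts
```

Draw by draw, shading changes the outcome only when the top rival bid falls between the shaded and true bids, and in exactly those cases the truthful bidder would have won at a profit.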
| Auction | Truth-telling? | Revenue |
|---|---|---|
| First-price sealed | No (shade bid) | Same in expectation* |
| Second-price sealed (Vickrey) | Yes | Same* |
| English (open ascending) | Effectively yes | Same* |
| Dutch (open descending) | No (shade) | Same* |
*Revenue Equivalence Theorem · Myerson 1981 · with risk-neutral, IPV bidders.
October 1962. Soviet missiles in Cuba. Kennedy chooses naval blockade rather than airstrike. The game-theoretic reading (Allison, 1971): a sequential game where each player has incomplete information about the other's resolve.
Schelling's later analysis: the credible commitment mattered more than raw forces. By going public, Kennedy made backing down domestically costly, raising the credibility of escalation. Khrushchev faced a similar bind. The settlement — Soviet withdrawal in exchange for U.S. removing Jupiter missiles in Turkey (kept secret) — let both leaders save face.
Common knowledge of rationality is rarely complete. Behavioral game theory (Camerer 2003) models bounded-rationality players (level-k thinkers, quantal response equilibrium).
Multiple equilibria haunt coordination games. Schelling's focal points (1960) — meet in NYC, no time, no place — solve some via shared culture (Grand Central, noon).
Off-equilibrium play is empirically common. Real organizations make threats they wouldn't carry out. Studying disequilibrium is harder than studying equilibrium.
von Neumann & Morgenstern — Theory of Games and Economic Behavior (1944).
Schelling — The Strategy of Conflict (1960).
Axelrod — The Evolution of Cooperation (1984).
Camerer — Behavioral Game Theory (2003).
Dixit & Nalebuff — Thinking Strategically (1991).
Binmore — Playing for Real (2007).
Yale Open Courses — Ben Polak's Game Theory (ECON 159)
Stanford GSB — strategic decisions seminars
Khan Academy — microeconomics game theory
Y Combinator — Peter Thiel "Last Mover Advantage" lecture