Beyond the basic primer. Nash equilibrium with proof, mechanism design, auction theory, repeated games, evolutionary stability, and where the field has actually had impact.
Game theory is what happens when you take rational-choice theory and add other rational choosers. It is the formal mathematics of multi-agent decision — not a metaphor, a proof system.
The field's basic primer (the Vol. VII deck) covers strategic-form games, Nash equilibrium, the prisoner's dilemma, dominance, and the solution of zero-sum games. This deck assumes that. We pick up at the proof of Nash's theorem, take a tour of the fixed-point machinery, and then move into the directions where game theory has had genuine economic and policy impact: mechanism design, auctions, repeated interaction, and evolutionary games.
The intellectual claim is large: game theory now underpins competition policy, spectrum allocation, kidney-exchange medicine, online advertising, school assignment, and the architecture of internet protocols. That claim is also overstated in popular accounts. We try to be exact about both.
A finite strategic-form game has: a set of players N; for each player i, a finite strategy set S_i; and a payoff function u_i: S → ℝ. The product space S = S_1 × ... × S_n is the set of strategy profiles.
The classic prisoner's dilemma, in the standard parameterisation (row player's payoff listed first in each cell):

              Cooperate   Defect
  Cooperate   (3, 3)      (0, 5)
  Defect      (5, 0)      (1, 1)

Defect strictly dominates Cooperate for both players — leading to the unique (Defect, Defect) equilibrium with payoff (1, 1) — Pareto-dominated by (Cooperate, Cooperate) at (3, 3).
Solution concepts in roughly increasing strength: Nash equilibrium ⊃ trembling-hand perfect ⊃ proper equilibrium; each refinement selects a subset of the previous. Plus refinements for extensive form: subgame perfect, sequential, intuitive criterion.
A mixed strategy σ_i is a probability distribution over S_i. The set of mixed strategies Δ(S_i) is a simplex. Expected payoffs extend linearly: u_i(σ) = E_σ[u_i(s)].
Why mixed strategies matter: many games have no pure-strategy equilibrium. Matching Pennies has no pure equilibrium — for any pure strategy of player 1, player 2 has a strict best response, and vice versa. The unique equilibrium is mixed: each player randomises 50-50.
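Both claims can be verified mechanically. A minimal Python sketch (the encoding of the game is ours):

```python
# Matching Pennies: player 1 wins (+1) on a match, player 2 on a mismatch.
# Payoff table for player 1; player 2's payoff is the negation.
import itertools

U1 = {("H", "H"): 1, ("H", "T"): -1, ("T", "H"): -1, ("T", "T"): 1}

def best_response_p1(s2):
    """Player 1's pure best response to player 2's pure strategy."""
    return max(["H", "T"], key=lambda s1: U1[(s1, s2)])

def best_response_p2(s1):
    return max(["H", "T"], key=lambda s2: -U1[(s1, s2)])

# No pure profile is a mutual best response:
pure_equilibria = [
    (s1, s2)
    for s1, s2 in itertools.product("HT", repeat=2)
    if best_response_p1(s2) == s1 and best_response_p2(s1) == s2
]
assert pure_equilibria == []

# At the 50-50 mixed profile each pure strategy earns the same expected
# payoff, so neither player can gain by deviating: the mixed equilibrium.
def expected_u1(s1, q):  # q = P(player 2 plays H)
    return q * U1[(s1, "H")] + (1 - q) * U1[(s1, "T")]

assert expected_u1("H", 0.5) == expected_u1("T", 0.5) == 0.0
```

The indifference condition in the last check is the general recipe for computing mixed equilibria in small games.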
The interpretation of mixed equilibrium has been controversial. Three readings:
1. Behavioural. Players actually randomise. Plausible in repeated experimental play.
2. Bayesian. Each player has uncertainty about the other's pure strategy; the mixed equilibrium represents that belief distribution. Harsanyi's interpretation.
3. Population. The mixed equilibrium describes the distribution of pure strategies across a population. Used in evolutionary game theory.
Different applications call for different interpretations. The math is the same; the meaning is not.
Theorem (Nash, 1950). Every finite game has at least one Nash equilibrium in mixed strategies.
Proof sketch (the version using Kakutani's fixed-point theorem, the cleaner of the two Nash gave):
Define the best-response correspondence BR: Δ(S) ⇉ Δ(S), where Δ(S) = Δ(S_1) × ... × Δ(S_n), by BR(σ)_i = argmax_{σ_i'} u_i(σ_i', σ_{-i}). A Nash equilibrium is exactly a fixed point of BR.
To apply Kakutani: (1) Δ(S) is non-empty, compact, convex (a product of simplexes). (2) BR(σ) is non-empty (a continuous function on a compact set attains its maximum). (3) BR(σ) is convex (the set of maximisers of a linear function on a simplex is a face of the simplex, hence convex). (4) BR has a closed graph (the limit of best responses along a converging sequence is itself a best response, by continuity of expected payoffs).
Kakutani's theorem then guarantees a fixed point. ∎
The result is non-constructive — Kakutani gives existence but no algorithm. Computing Nash equilibria is PPAD-complete (Daskalakis, Goldberg, Papadimitriou, 2006; Chen and Deng, 2006, for the two-player case); polynomial-time algorithms almost certainly don't exist.
Many games have multiple Nash equilibria, some unreasonable. Refinements rule out the unreasonable.
Subgame perfection (Selten, 1965). In extensive-form games, an equilibrium must be Nash in every subgame. Rules out non-credible threats — "if you don't pay, I'll burn down both of our houses" is not subgame-perfect because following through is not a best response.
Trembling-hand perfection (Selten, 1975). An equilibrium robust to small probability of off-equilibrium play. Rules out equilibria sustained only by zero-probability events.
Sequential equilibrium (Kreps and Wilson, 1982). Adds belief consistency. The standard for incomplete-information extensive-form games.
The intuitive criterion (Cho and Kreps, 1987). For signalling games. Restricts off-path beliefs — if a deviation can only benefit a particular type, the receiver should believe that type sent it.
Forward induction. The idea that earlier moves carry information about future intentions. Hard to formalise cleanly; the literature is varied.
Selten and Harsanyi shared the 1994 Nobel with Nash, for refinements and incomplete-information games respectively.
Real strategic situations rarely satisfy "common knowledge of payoffs." Harsanyi (1967-68) showed how to handle this: model each player as having a private "type" drawn from a known distribution, then play the Bayesian Nash equilibrium of the type-augmented game.
Formally: a Bayesian game has a set of types Θ_i for each player, a prior p on the joint type space, and payoffs u_i(s, θ). A Bayesian Nash equilibrium is a strategy profile σ_i: Θ_i → Δ(S_i) such that each player's strategy is a best response to the others', given their own type and the conditional distribution over others' types.
This is the foundational machinery for all of asymmetric-information economics. Auction theory, contract theory, mechanism design, signalling games — all are special cases of Bayesian game analysis.
Harsanyi's reformulation made these problems tractable. Before Harsanyi, "incomplete information" was a vague label; after, it was a well-defined mathematical object.
Standard game theory: given a game, predict play. Mechanism design: given desired outcomes, design the game.
The framework: a designer chooses rules. Players have private types. Play results in an outcome. The designer wants outcomes that depend on types (e.g., the highest-valuation bidder wins) but the designer cannot observe types directly. So the rules must give players incentives to reveal types — or to behave as if they had — through their actions.
The Revelation Principle (Myerson, 1979). For any equilibrium of any mechanism, there exists a direct, truthful mechanism with the same outcomes. "Direct" means players report their types; "truthful" means truth-telling is an equilibrium.
The principle is purely formal — the indirect mechanism may be far easier to implement in practice — but it dramatically simplifies analysis. Designers can focus on direct truthful mechanisms without loss of generality.
Hurwicz, Maskin, and Myerson shared the 2007 Nobel for the foundations of mechanism design.
Mechanism design has moved from the academy to the field over the last 30 years.
FCC spectrum auctions (1994 onward). Designed by Milgrom, Wilson, McAfee, and others. Auction-based allocation of radio spectrum. Total revenue to date >$120 billion. The 2017 incentive auction (clearing TV broadcasters to make 600 MHz spectrum available for mobile) used a complex two-sided design that took five years to plan.
Kidney exchange. Roth's group, with surgeon Saidman, designed mechanisms for matching kidney donor-recipient pairs when direct donation is incompatible. The first chain (NEPKE, 2007) found 2 matches; current systems find hundreds annually. Roth shared the 2012 Nobel with Shapley for this and matching-theory work.
School choice. Boston, New York, and many other cities run student-school assignments using deferred-acceptance algorithms (variants of Gale-Shapley 1962, made strategy-proof). The Boston system was redesigned in 2005-06 explicitly to fix incentive problems in the prior mechanism.
Online advertising. Google's AdWords ran on the generalised second-price (GSP) auction, a descendant of Vickrey's design that, unlike VCG, is not truthful; Facebook's ad auction adopted VCG itself. Display advertising, programmatic ad markets, and the entire $400B+ digital advertising economy rest on auction theory.
The field is now more applied than theoretical at the frontier.
Among the most-cited mechanisms in the literature. Vickrey (1961), Clarke (1971), Groves (1973).
For an allocation problem with quasi-linear preferences (utility = value - payment): each player reports their valuation function. The mechanism allocates to maximise reported total welfare. Each player pays an externality charge — the amount their participation reduced others' total welfare.
Properties: truth-telling is a dominant strategy. Each player's optimal strategy, regardless of what others do, is to report truthfully. Allocative efficiency. The mechanism produces the allocation that maximises total realised value.
Vickrey's special case (1961) — single-item second-price auction — is the canonical introduction. The high bidder wins, pays the second-highest bid, and bidding one's true value is dominant.
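The dominance claim is easy to check by brute force. A short sketch, assuming ties are broken by a fair coin:

```python
# Single-item second-price (Vickrey) auction: the winner pays the
# second-highest bid. Utility of a bidder with value v bidding b
# against a rival bid r:
def vickrey_utility(v, b, r):
    if b > r:            # win, pay the rival's bid
        return v - r
    if b < r:            # lose
        return 0.0
    return (v - r) / 2   # tie broken by a fair coin (expected utility)

# Truthful bidding (b = v) weakly dominates every alternative: over a
# grid of values, deviations, and rival bids, the truthful payoff is
# never smaller than the deviating payoff.
grid = [x / 10 for x in range(11)]
for v in grid:
    for b in grid:
        for r in grid:
            assert vickrey_utility(v, v, r) >= vickrey_utility(v, b, r) - 1e-12
```

The same externality logic, applied allocation-by-allocation, yields the general VCG payments.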
Costs: VCG can run a deficit (collect less than it pays out); does not satisfy budget balance. In settings with multiple items and complementarities, VCG payments can be very low compared to revenue under simpler schemes — limiting practical use.
Vickrey got the 1996 Nobel for this work. He died three days after the prize was announced.
The four standard auction formats:
English (open ascending). Bidders progressively raise; auction ends when no one tops the standing bid. The traditional Christie's/Sotheby's format.
Dutch (open descending). Auctioneer announces a high price, lowers it; first bidder to accept wins. Used in flower markets, fisheries.
First-price sealed-bid. Each bidder submits one sealed bid; high bid wins, pays its bid. Standard in government procurement.
Second-price sealed-bid (Vickrey). High bid wins, pays second-highest. Less common in practice but central to theory.
Revenue equivalence theorem (Vickrey 1961, Myerson 1981, Riley-Samuelson 1981). Under independent private values, risk-neutral bidders, and equilibrium play, all four formats produce the same expected revenue. The theorem is a benchmark; departures (correlated values, risk aversion, asymmetries) explain why specific auctions favour particular formats.
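The theorem can be checked by simulation. A Monte Carlo sketch, assuming three bidders with i.i.d. uniform[0,1] private values and equilibrium play; the first-price shading formula b(v) = v(n-1)/n is the standard uniform-value equilibrium:

```python
# Revenue equivalence check: first-price (equilibrium shading) vs
# second-price (truthful) auctions, n risk-neutral bidders with
# i.i.d. uniform[0,1] values. Theory: expected revenue = (n-1)/(n+1).
import random

random.seed(0)
n, trials = 3, 200_000

fp_revenue = sp_revenue = 0.0
for _ in range(trials):
    values = sorted(random.random() for _ in range(n))
    fp_revenue += values[-1] * (n - 1) / n  # winner pays own shaded bid
    sp_revenue += values[-2]                # winner pays second-highest value

fp_revenue /= trials
sp_revenue /= trials
theory = (n - 1) / (n + 1)   # = 0.5 for n = 3
```

Both averages land within sampling error of 0.5, as the theorem predicts.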
The winner's curse. Important in common-value auctions. The winner is, by selection, the most-optimistic estimator of value — and so is biased toward overpaying. Sophisticated bidders shade their bids to compensate.
Milgrom and Wilson shared the 2020 Nobel for auction theory and design.
One-shot prisoner's dilemma: defection is dominant. Repeated prisoner's dilemma: cooperation can be sustained.
The folk theorem (so called because it was widely known before being formally written down). For a repeated game with sufficiently patient players (discount factor close to 1), any feasible payoff vector that gives each player at least their minmax value can be supported as a subgame-perfect equilibrium.
The proof idea: cooperate as long as everyone has cooperated; punish deviations with a sufficiently long stretch of minmax play. A patient deviator weighs short-term gain against long-term punishment; if patient enough, deviation doesn't pay.
The folk theorem is a double-edged result. It shows cooperation is possible — but it also shows that an enormous range of outcomes are equilibria, so the model has weak predictive power without further assumptions about which equilibrium is selected.
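The patience threshold for grim trigger can be computed directly; the stage payoffs below are the standard prisoner's-dilemma parameterisation (our choice):

```python
# Grim trigger in the repeated prisoner's dilemma with stage payoffs
# T=5 (temptation), R=3 (mutual cooperation), P=1 (mutual defection).
# Cooperating forever yields R/(1-d); deviating yields T once, then P
# forever: T + d*P/(1-d). Cooperation is sustainable iff d >= (T-R)/(T-P).
T, R, P = 5.0, 3.0, 1.0

def cooperate_value(d):
    return R / (1 - d)

def deviate_value(d):
    return T + d * P / (1 - d)

threshold = (T - R) / (T - P)   # = 0.5 for these payoffs

assert cooperate_value(0.6) > deviate_value(0.6)   # patient: cooperation holds
assert cooperate_value(0.4) < deviate_value(0.4)   # impatient: deviation pays
```
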
Tit-for-tat (Axelrod, 1980, 1984) won repeated-PD computer tournaments and became famous in popular accounts. The strategy is simple, nice, retaliatory, forgiving. It is not the unique optimum — many other strategies do well — but its simplicity and intuitive appeal made it influential.
Two parties divide a surplus. Outcomes depend on patience and outside options.
Nash bargaining (1950). Axiomatic approach: the bargaining outcome is the allocation that maximises the product of utility gains over the disagreement point. (Pareto efficiency, symmetry, scale invariance, independence of irrelevant alternatives.) Nash gave strategic foundations for the axiomatic solution in his 1953 demand-game paper.
Rubinstein bargaining (1982). Strategic approach: alternating-offer bargaining where players have time preferences. Unique subgame-perfect equilibrium gives the more-patient player a larger share. As the time between offers approaches zero, the equilibrium converges to the Nash bargaining solution.
The theoretical reconciliation is satisfying: an axiomatic prediction (Nash) and a strategic mechanism (Rubinstein) converge to the same outcome. This pattern — axiomatic and strategic foundations for the same prediction — is one of game theory's prouder results.
Empirical bargaining is messier. Real bargaining involves emotions, fairness norms, escalation dynamics, and incomplete information about each other's outside options. Behavioural game theory has chipped away at the predictive accuracy of the strategic models.
An informed party takes an observable action that conveys (or attempts to conceal) private information. Spence's job-market signalling (1973, Nobel 2001) is the canonical example — education as a costly signal of ability rather than directly productive investment.
Equilibria in signalling games come in three types:
Separating. Each type sends a different signal. Information is fully revealed.
Pooling. All types send the same signal. Information is concealed.
Semi-separating. Mixed — some types separate, some pool.
The intuitive criterion (Cho-Kreps 1987) often selects the separating equilibrium when one exists.
Signalling models have been applied to dividend policy (informed managers signal financial health), warranties (signal product quality), advertising (signal repeat-purchase confidence), conspicuous consumption, philanthropy. Some applications are robust; others are post-hoc storytelling. The field's empirical track record is mixed.
A standard critique: signalling models are flexible enough to "explain" almost anything; the discipline comes from comparative statics across institutional regimes.
Maynard Smith and Price (1973) introduced game theory to evolutionary biology. The framework: populations of organisms playing strategies; payoffs are reproductive fitness; strategy frequencies evolve under selection, later formalised as the replicator dynamics (Taylor and Jonker, 1978).
Evolutionarily stable strategy (ESS). A strategy that, when adopted by a population, cannot be invaded by any rare mutant. Maynard Smith's hawk-dove game is the canonical example.
The replicator dynamics:
ẋ_i = x_i (f_i(x) - φ(x)),  where φ(x) = Σ_j x_j f_j(x) is the average fitness.

Strategies whose payoff exceeds the population average grow; those below shrink. Equilibria are the fixed points; ESSs are asymptotically stable rest points.
The mathematical link between Nash equilibria and replicator equilibria is striking. Every ESS is a Nash equilibrium; not every Nash is an ESS. Evolutionary stability is strictly stronger.
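The dynamics can be simulated with forward-Euler steps. A sketch for hawk-dove, with illustrative parameters V = 2 (resource value) and C = 4 (fight cost); the mixed ESS has hawk share V/C:

```python
# Replicator dynamics for the hawk-dove game. Payoffs: hawk vs hawk
# earns (V-C)/2, hawk vs dove earns V, dove vs hawk earns 0, dove vs
# dove splits the resource. The unique ESS: hawk share = V/C = 0.5.
V, C = 2.0, 4.0
payoff = [[(V - C) / 2, V],   # hawk vs (hawk, dove)
          [0.0, V / 2]]       # dove vs (hawk, dove)

x = 0.1    # initial hawk share
dt = 0.01  # Euler step size
for _ in range(20_000):
    f_hawk = x * payoff[0][0] + (1 - x) * payoff[0][1]
    f_dove = x * payoff[1][0] + (1 - x) * payoff[1][1]
    phi = x * f_hawk + (1 - x) * f_dove   # population average fitness
    x += dt * x * (f_hawk - phi)          # replicator equation

assert abs(x - V / C) < 1e-3   # converges to the mixed ESS
```

Starting from any interior hawk share, the population converges to the ESS rather than to either pure strategy.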
Applications: animal contests, plant strategies, the evolution of cooperation, sex ratios (Fisher's principle), language change, cultural transmission. The field has had impact in biology and is increasingly used in economic dynamics, learning, and policy modelling.
Thomas Schelling (1921-2016, Nobel 2005) was the most influential applied game theorist of the 20th century. His distinctive contributions:
Focal points (Schelling points). When multiple equilibria exist, players coordinate on the one that is "salient." The classic experiment: "Where in New York City would you meet someone today, given no prior communication, knowing only that they are also choosing where to meet you?" A surprising fraction of subjects answer "Grand Central Station, noon." The focal-point effect is a serious challenge to game-theoretic equilibrium selection.
Credible commitment. The paradox that constraining yourself can be advantageous. The classic example: the kidnap victim who can credibly commit not to identify the kidnapper after release will be released. Without commitment, the kidnapper can't trust the victim's promise; with commitment (perhaps via mutually-implicating evidence-sharing), the deal becomes credible.
The strategy of nuclear deterrence. The Strategy of Conflict (1960) and Arms and Influence (1966). The mathematics of mutual assured destruction, brinkmanship, escalation control. Influential on US nuclear doctrine.
Tipping models. Micromotives and Macrobehavior (1978). How small individual preferences for similar neighbours can produce large-scale segregation. Among the most-cited social-science models.
Schelling wrote in plain English. Most of his game theory is mathematics-light. The intuitions transferred to readers without graduate training; this was deliberate.
How do real people actually play games? Often not the way the theory predicts.
Ultimatum game. One player proposes a split of $10; the other accepts (both keep proposed amounts) or rejects (both get $0). Subgame-perfect equilibrium: proposer offers the smallest positive amount, responder accepts. Empirical: proposers offer ~40-50%, responders reject offers below ~20-30% as "unfair." Replicates across cultures with variation but the qualitative pattern is robust.
Trust game. Investor sends part of $10 to trustee; the amount triples; trustee returns part. Subgame-perfect: investor sends nothing, trustee returns nothing. Empirical: meaningful sending and reciprocation, far from the equilibrium.
Public goods games. Prisoner's-dilemma-like. Equilibrium: free-ride. Empirical: substantial contribution, declining with repetition unless punishment is allowed (Fehr and Gächter, 2000).
Beauty contest game (Keynes 1936; experiments by Nagel 1995). Pick a number 0-100; the winner is whoever is closest to 2/3 of the average. Iterated dominance gives 0; experimental subjects pick around 20-30 in a single round.
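The iteration is easy to make concrete. A sketch of level-k choices, assuming level-0 picks 50 (a common modelling convention):

```python
# Level-k reasoning in the 2/3-of-the-average beauty contest. Level-0
# picks 50; level-k best-responds to a population of level-(k-1)
# players, giving (2/3)^k * 50.
def level_k_choice(k, level0=50.0):
    choice = level0
    for _ in range(k):
        choice *= 2 / 3
    return choice

choices = [round(level_k_choice(k), 1) for k in range(5)]
# Levels 0..4: [50.0, 33.3, 22.2, 14.8, 9.9] — the experimental modes
# near 33 and 22 correspond to one or two rounds of reasoning.
# Iterating forever drives the choice to the equilibrium at 0:
assert level_k_choice(50) < 1e-6
```
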
Camerer (Behavioral Game Theory, 2003) is the standard reference. Models like quantal response equilibrium (McKelvey-Palfrey 1995) and level-k thinking try to formalise empirical departures.
Non-cooperative game theory studies what individuals will do given incentives. Cooperative game theory (TU games — transferable utility) studies which coalitions form and how they divide payoffs.
The basic object: a characteristic function v: 2^N → ℝ assigning a value to each coalition. Solution concepts:
The core. Allocations such that no coalition has an incentive to deviate (i.e., no group can do better by leaving and forming its own coalition). The core may be empty.
Shapley value (Shapley, 1953). The unique allocation satisfying efficiency, symmetry, dummy-player, and additivity axioms. Computed as the average of marginal contributions across all orderings of player arrival. Shapley got the 2012 Nobel.
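The averaging formula can be computed by brute force for small games. A sketch for the classic three-player glove game (the encoding is ours):

```python
# Shapley value by averaging marginal contributions over all arrival
# orders. Glove game: players 0 and 1 hold left gloves, player 2 holds
# the right glove; a coalition's value is the number of matched pairs.
from itertools import permutations

players = [0, 1, 2]

def v(coalition):
    left = sum(1 for p in coalition if p in (0, 1))
    right = sum(1 for p in coalition if p == 2)
    return min(left, right)

shapley = {p: 0.0 for p in players}
orders = list(permutations(players))
for order in orders:
    arrived = set()
    for p in order:
        before = v(arrived)
        arrived.add(p)
        shapley[p] += v(arrived) - before   # marginal contribution
for p in players:
    shapley[p] /= len(orders)

# The scarce right glove captures most of the surplus (2/3 vs 1/6 each):
assert abs(shapley[2] - 2 / 3) < 1e-12
assert abs(sum(shapley.values()) - v(set(players))) < 1e-12  # efficiency
```

The same enumeration is exponential in the number of players; SHAP-style applications in machine learning rely on sampling and model-specific shortcuts.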
Nucleolus, kernel, bargaining set. Other proposed solution concepts, with varying interpretations and axiomatic support.
Cooperative game theory has had applications in cost-sharing (airport landing fees), revenue sharing in joint ventures, voting power indices (Shapley-Shubik for legislative power), and recently in machine learning interpretability (SHAP values for explaining individual predictions).
Two-sided matching markets — workers and firms, students and schools, donors and recipients — without prices. Gale and Shapley's 1962 paper "College Admissions and the Stability of Marriage" gave the foundational result.
For a marriage problem (n men, n women, with preferences over partners), the deferred-acceptance algorithm produces a stable matching: a matching with no blocking pair (a man-woman pair who both prefer each other to their assigned match).
The algorithm: each man proposes to his most-preferred woman not yet rejected; each woman provisionally accepts her most-preferred current proposer and rejects the rest. Iterate. The algorithm always terminates, and the result is a stable matching — moreover, the man-optimal stable matching among all stable matchings.
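The algorithm fits in a few lines. A sketch of proposer-optimal deferred acceptance, with hypothetical preference lists:

```python
# Gale-Shapley deferred acceptance, proposer-optimal version.
def deferred_acceptance(prop_prefs, recv_prefs):
    """prop_prefs[p] = receivers in p's preference order;
    recv_prefs[r] = proposers in r's preference order."""
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in recv_prefs.items()}
    next_choice = {p: 0 for p in prop_prefs}   # next receiver to try
    engaged = {}                               # receiver -> proposer
    free = list(prop_prefs)
    while free:
        p = free.pop()
        r = prop_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p                     # provisional acceptance
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])            # r trades up; old match freed
            engaged[r] = p
        else:
            free.append(p)                     # rejected; p tries again
    return engaged

men = {"A": ["x", "y", "z"], "B": ["y", "x", "z"], "C": ["x", "y", "z"]}
women = {"x": ["B", "A", "C"], "y": ["A", "B", "C"], "z": ["A", "B", "C"]}
match = deferred_acceptance(men, women)   # stable: A-x, B-y, C-z
```

Provisional ("deferred") acceptance is the whole trick: no match is final until every proposer has been provisionally accepted somewhere.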
The algorithm is strategy-proof for the proposing side; no stable mechanism is strategy-proof for both sides (Roth, 1982). This makes it appropriate for clearinghouses where one side announces preferences (typically applicants).
Roth's National Resident Matching Program reform (1995-98) replaced the older NRMP algorithm with a deferred-acceptance variant. School-choice mechanisms in Boston, NYC, Denver, and others have followed. The 2012 Nobel for Roth and Shapley recognised this body of work.
The intersection of game theory and computer science. Two main directions:
Computing equilibria. Daskalakis-Goldberg-Papadimitriou (2006) showed that finding a Nash equilibrium is PPAD-complete; Chen and Deng (2006) extended the result to 2-player games. Even approximate Nash is hard. This means efficient general algorithms are unlikely to exist; specific game structures may admit fast algorithms.
The price of anarchy (Koutsoupias-Papadimitriou 1999). A measure of equilibrium efficiency loss: the ratio of social cost at the worst Nash equilibrium to social cost at the optimum. Tight bounds for many problem classes (network routing, scheduling, congestion games) have been derived.
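Pigou's two-link example gives the tight bound of 4/3 for linear costs. A sketch (the grid search for the optimum is a crude stand-in for calculus):

```python
# Pigou's example: one unit of traffic routes over two parallel links,
# one with constant cost 1, one with cost equal to its own flow x.
def total_cost(x):
    """x = flow on the variable link; 1 - x takes the constant link."""
    return x * x + (1 - x) * 1.0

# Wardrop equilibrium: as long as x < 1 the variable link costs x < 1,
# so traffic keeps switching to it; the equilibrium is x = 1.
eq_cost = total_cost(1.0)

# Optimum: minimise x^2 + (1 - x); the true minimiser is x = 1/2.
opt_cost = min(total_cost(i / 1000) for i in range(1001))

price_of_anarchy = eq_cost / opt_cost
assert abs(price_of_anarchy - 4 / 3) < 1e-9
```
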
Online auction design. Search and display ad auctions run billions of times per day. Practical mechanism design under computational constraints, with bidder learning and adaptive strategies, drives a large empirical and theoretical literature.
Multi-agent reinforcement learning. AI systems learning in games. AlphaGo (2016, beat Lee Sedol). AlphaZero (2017, generalised to chess and shogi). Pluribus (2019, beat top humans at six-player no-limit Texas hold'em — a notable game-theoretic milestone). Cicero (2022, performed at human level in Diplomacy, requiring natural-language negotiation).
The frontier increasingly blends game theory, optimisation, and machine learning.
When players' payoffs depend on a network structure (who is connected to whom), the analysis becomes richer.
Network formation games. Jackson and Wolinsky (1996) — players form bilateral connections at cost; payoffs depend on the resulting network. Trade-offs between efficiency and stability are common; many natural networks are stable but inefficient or efficient but unstable.
Public goods on networks. Bramoullé and Kranton (2007). When neighbours' contributions substitute for one's own, networks select for free-riders; positions in the network determine equilibrium contribution.
Influence maximisation (Kempe, Kleinberg, Tardos 2003). Choose k seed nodes to maximise contagion through a network. NP-hard in general, but submodular structure allows greedy approximation. Applications to viral marketing, vaccination strategy.
Network congestion games. Routing where players choose paths and per-edge cost depends on traffic. Wardrop equilibria (1952), price-of-anarchy bounds (Roughgarden 2002), Pigouvian tolls.
The internet's BGP routing, electricity grid flow, transportation networks, and online social platforms are all network games at scale. Most of the field's recent applied progress is in this area.
The field's reach is real but bounded. Honest constraints:
Multiple equilibria, weak selection. Many games have many equilibria. The folk theorem alone shows that with patient repeated players, almost anything goes. Without strong selection criteria — focal points, learning dynamics, evolutionary stability, social conventions — the theory often gives a set of possibilities, not a prediction.
Common knowledge of rationality. Standard equilibrium concepts assume players are rational and that this is common knowledge — they know they're rational, know others are, know others know they are, and so on. Empirically, common-knowledge assumptions fail. Behavioural and bounded-rationality models help but don't have the same axiomatic clarity.
Computational tractability. Real strategic situations rarely have analytical solutions. Computing equilibria is hard. Approximation requires problem-specific structure.
Empirical model selection. Real-world institutions usually fit several game-theoretic models. Identifying which model captures the relevant strategic logic is hard; the literature has substantial post-hoc fitting.
The honest claim is not "game theory predicts outcomes" but "game theory provides a vocabulary for analysing strategic situations, identifying their key features, and designing institutions with predictable strategic properties." That is more modest, and accurate.
The 2020s have seen game theory return to AI through two channels.
Multi-agent training. Self-play, fictitious play, counterfactual regret minimisation (CFR — the algorithm behind Pluribus poker). Multi-agent reinforcement learning is increasingly used to train language models for cooperation, debate, persuasion. The 2022 Cicero (Meta) performed at human level in Diplomacy, a game requiring sustained negotiation.
Mechanism design for AI. When AI systems become economic agents, mechanism design's "design rules to elicit truthful behaviour" reframes naturally as "design markets that aligned AIs will participate in usefully." The growing field of AI economic alignment uses mechanism-design tools to think about how AI agents should be structured for honest interaction.
The 2024-25 surge in agentic AI has revived interest in old questions. How do we design markets where many AI agents interact? Are existing mechanisms manipulable by AI in new ways? Iterated negotiation by language models, automated bargaining, and cooperative AI all sit at the intersection.
This is one of the more fertile current research directions. Whether it pays off in deployable systems is open.
Five live questions.
1. Equilibrium selection. When multiple equilibria exist, what selects? The folk theorem, under which almost any outcome is an equilibrium, is the canonical example of prediction-emptiness. Schelling's focal points, evolutionary dynamics, learning, and convention all give partial answers. None is universal.
2. Bounded rationality with foundations. Behavioural game theory has cataloged departures from equilibrium play. A theory predicting which departures will occur in which contexts, with axiomatic foundations comparable to Nash, remains aspirational.
3. Learning and convergence. When agents learn over time, do they converge to Nash equilibria? Sometimes (fictitious play in zero-sum games), sometimes not (chaotic dynamics in many games). General theorems are scarce.
4. Mechanism design with computational constraints. Designing mechanisms that are tractable, robust to misspecification, and approximately optimal — rather than provably optimal under unrealistic assumptions — is the modern frontier.
5. AI and game theory. When agents are sophisticated learners (large models), do classical equilibrium predictions hold? Open empirically and theoretically.
Three paths.
For undergraduates / starting graduate. Tadelis's Game Theory: An Introduction is the gentlest standard. Osborne and Rubinstein's A Course in Game Theory is more rigorous and concise. Watson's Strategy for an applied flavour.
For PhD-level depth. Fudenberg and Tirole's Game Theory is the canonical graduate text. Mas-Colell-Whinston-Green Chapters 7-9 for the standard microeconomic-theory treatment. Myerson's Game Theory: Analysis of Conflict for the philosophical care.
For mechanism design. Milgrom's Putting Auction Theory to Work is the practitioner's guide. Vohra's Mechanism Design: A Linear Programming Approach for the rigorous theoretical treatment.
For algorithmic game theory. Nisan, Roughgarden, Tardos, Vazirani's Algorithmic Game Theory (downloadable from Cambridge UP). Roughgarden's Twenty Lectures on Algorithmic Game Theory.
Online. Ben Polak's Open Yale Courses game-theory lectures. The Stanford Coursera "Game Theory" by Shoham, Leyton-Brown, Jackson. Roughgarden's CS364A lectures (online video).
Three claims.
It is the central language of strategic interaction. Economics, political science, evolutionary biology, computer science, and military strategy all use the framework. Whatever the theory's predictive limitations, the vocabulary and the proof techniques are now part of the basic toolkit of social science and theoretical computer science.
It has produced practical mechanisms. Spectrum auctions, kidney exchange, school choice, ad auctions — billions of dollars of economic value, and tens of thousands of lives saved. The applied edge of mechanism design and matching theory has earned its keep.
The frontier remains open. Computational tractability, behavioural foundations, multi-agent learning, AI mechanism design — these are not solved problems. The next decade of work is at the intersection of game theory and machine learning, where many of the basic questions remain genuinely open.
The popular reception of game theory tends to either overstate it (the field is not "the master science" of social action) or dismiss it (the field is not just decorative formalism). Between these extremes is the actual record: a useful applied science with real reach, real limits, and a productive research frontier.
Four directions worth watching.
AI mechanism design. When agents are sophisticated learners, the design problem changes. How do you set up markets where AI agents — including ones designed adversarially — interact in ways aligned with human interests? Active research, no settled answers.
Empirical mechanism design. Increasingly tested in field experiments rather than only proven theoretically. School-choice, kidney-exchange, and online platform redesigns are now common. The theory is being disciplined by deployment.
Algorithmic and computational frontier. CFR-based algorithms have solved poker; multi-agent reinforcement learning is making progress on richer games. The PPAD-completeness result limits worst-case complexity but specific structures are tractable.
Behavioural foundations. Bounded rationality, learning, and reference-point models continue to be developed. The deepest open question — what selects among multiple equilibria — has shifted from pure theory to a hybrid theoretical-empirical-computational pursuit.
Game theory in 2026 is not a finished field. It is a productive applied science with a vibrant theoretical core. The 13 Nobels (1994 to 2020) are not its endpoint.
Game Theory · Deep — Volume XIII, Deck 11 of The Deck Catalog. Set in Crimson Pro with monospace metadata and matrix display. Cream paper #f7f4ee; coral and ink-blue accents.
Twenty-eight leaves on the mathematics of strategic interaction. The first deck (Vol. VII) was the introduction; this deck is the working theory.