Frege's Sinn und Bedeutung, Russell's descriptions, the Tractatus and the later Wittgenstein, Austin's speech acts, Quine's indeterminacy, Kripke's rigid designation, Davidson's truth-conditional semantics. The hundred-year argument about how words mean.
The philosophy of language is the philosophical investigation of meaning, reference, truth, and the relation of language to thought and world. As an organised research programme, it is the central achievement of analytic philosophy — the discipline that, in the early 20th century, redirected philosophical attention from the structure of consciousness to the structure of language.
The discipline begins with Gottlob Frege's Begriffsschrift (1879), reaches its first synthesis in Russell's Principia Mathematica (1910-13) and Wittgenstein's Tractatus (1921), and is reshaped by the ordinary-language Wittgenstein (the Investigations, 1953) and Austin's speech-act theory (1962). The post-1960 period — Quine, Davidson, Kripke, Putnam, Lewis — produces the framework still operative in analytic philosophy.
This deck covers each major figure, the central technical problems (sense and reference, definite descriptions, the picture theory, the meaning of "meaning"), and the way the field has reshaped neighbouring disciplines (semantics in linguistics, programming-language theory, large-language-model interpretation).
Gottlob Frege (1848–1925), professor of mathematics at Jena, is the founder of modern logic and the founder of analytic philosophy of language. Three of his works are central:
The Begriffsschrift ("Concept-Script," 1879) introduced quantificational logic — the first system in which one could express "for every x there exists a y such that…" cleanly. The advance over Aristotelian syllogistic was so substantial that all subsequent logic, mathematics, and computer-science theory descends from this short book.
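A standard illustration of what the new notation buys (the example is mine, not Frege's): the two readings of "everyone loves someone" come apart as

∀x ∃y Loves(x, y)   (for each person, there is someone or other they love)
∃y ∀x Loves(x, y)   (there is one person whom everyone loves)

a distinction of quantifier scope that subject-predicate syllogistic has no way to register.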
The Grundlagen der Arithmetik ("Foundations of Arithmetic," 1884) developed Frege's logicist program — the attempt to derive arithmetic from pure logic. The book is also a sustained methodological essay on how to do philosophy: in particular, the famous context principle ("never to ask for the meaning of a word in isolation, but only in the context of a proposition").
"Über Sinn und Bedeutung" ("On Sense and Reference," 1892) introduced the distinction that defines modern semantics. Frege's puzzle:
Compare "the morning star is the morning star" with "the morning star is the evening star." If the meaning of a name is just its referent, the two identity statements are the same statement. But they are not: one is a triviality, the other reports a genuine astronomical discovery. Therefore meaning is not just reference.
Frege's solution: every meaningful expression has both a reference (Bedeutung — the object the expression picks out) and a sense (Sinn — the mode of presentation under which the object is given). "The morning star" and "the evening star" share a reference (Venus) but differ in sense (mode of presentation as morning-visible vs evening-visible). The identity is informative because it links two senses to one reference.
Bertrand Russell (1872-1970) inherited Frege's framework and reshaped it. His 1905 paper "On Denoting" — published in Mind — is "a paradigm of philosophy" (Frank Ramsey's phrase) and among the most-cited papers of the analytic tradition's first hundred years.
The puzzle Russell addressed: what does a sentence like "the present King of France is bald" mean? There is no present King of France. The sentence seems to talk about something that doesn't exist. How can it have meaning at all? And — Russell's specific worry — how do we explain that "the present King of France is bald" and "the present King of France is not bald" both seem false, in violation of the law of excluded middle?
Russell's theory of descriptions: definite descriptions ("the such-and-such") are not, despite appearances, names. The surface grammar of "the F is G" is misleading. The logical form is: there is exactly one F, and it is G; in symbols, ∃x(Fx ∧ ∀y(Fy → y = x) ∧ Gx).
"The present King of France is bald" is therefore false (because there is no F). "It is not the case that the present King of France is bald" is true (negating a false proposition). The law of excluded middle is preserved; the apparent reference-to-nothing dissolves; we can say true or false things using definite descriptions without any ontological commitment to their referents.
Russell's framework also dissolved Meinong's "round square." Alexius Meinong had argued that the sentence "the round square does not exist" requires the round square to have some kind of existence as object of reference. Russell's analysis: "the round square does not exist" means "it is not the case that there is exactly one F (where F = round-and-square)," which is straightforwardly true and quantifies over no impossible objects.
The theory of types (Russell, with Whitehead, in Principia Mathematica, 1910-13) was Russell's solution to the paradox he had identified in Frege's logicism — the paradox of the set of all sets that don't contain themselves. The technical machinery is now standard in computer science (typed lambda calculus) and theoretical mathematics.
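In modern notation (not Principia's own symbolism), the paradox runs: let R = {x : x ∉ x}; then R ∈ R if and only if R ∉ R, a contradiction. The type hierarchy blocks the construction by making the condition "x ∉ x" ill-formed: a set can only take members of the type below its own.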
Ludwig Wittgenstein's Tractatus Logico-Philosophicus was written in the trenches of the First World War (1914-1918) and published in 1921 (German) and 1922 (English, Kegan Paul). It is one of the strangest and most-influential books of 20th-century philosophy.
The structure is austere: seven numbered propositions, each except the last elaborated by decimal-numbered comments, running from "The world is all that is the case" (1) to the closing injunction to silence (7).
The book proposes a picture theory of meaning. A meaningful proposition is a logical picture of a possible state of affairs in the world. The proposition shares logical form with what it pictures; this shared form is what makes representation possible. True propositions correspond to actual states of affairs; false propositions to merely possible ones.
The startling consequence: most traditional philosophy is meaningless. Ethical, aesthetic, and religious sentences cannot be pictures of states of affairs in the empirical world. They are not false; they are not contentful at all in the picture-theoretic sense. The famous closing — "Whereof one cannot speak, thereof one must be silent" — is the conclusion.
The Tractatus ends with the auto-deconstruction: its own propositions, by its own theory, are nonsense. They are a "ladder" the reader must throw away after climbing.
The Tractatus was taken up by the Vienna Circle (Wiener Kreis), the group of philosophers and scientists led by Moritz Schlick that met in Vienna from the 1920s. Schlick, Carnap, Neurath, Hahn, Feigl, Gödel, and others read the Tractatus systematically — though Wittgenstein himself thought they had misunderstood it.
The Circle's logical positivism (or logical empiricism) was the dominant philosophy of science of the 1930s. Its central thesis: a sentence is meaningful only if it is either analytic (true by definition) or empirically verifiable. Metaphysics, theology, and most of traditional philosophy fail both tests and are therefore literally meaningless — not false, but cognitively empty.
The verification principle was the criterion. Various formulations: a sentence is meaningful iff there is some possible observation that would confirm or disconfirm it; iff it can be reduced to observation reports; iff its truth conditions are specifiable in terms of sense data.
Carnap's The Logical Structure of the World (Aufbau, 1928) attempted to derive all knowledge from a base of phenomenal experience using the new logic. The project failed in its strict form (Goodman's The Structure of Appearance shows where), but the methodological influence was immense.
The Circle dispersed under the rise of Nazism. Schlick was murdered in 1936. Carnap moved to Chicago, then UCLA. Hempel went to Princeton. Reichenbach went to UCLA. The American post-war analytic tradition was substantially built by these emigrants.
By the early 1950s, the verification principle had collapsed under self-application (the principle itself is neither analytic nor empirically verifiable) and under Quine's attack on the analytic-synthetic distinction. The linguistic turn survived; logical positivism specifically did not.
Wittgenstein returned to philosophy in 1929. By the early 1930s he had repudiated the Tractatus's picture theory and was developing what would become the Philosophical Investigations (written 1936-1949, published posthumously 1953).
The Investigations opens by quoting Augustine on language acquisition: a child watches an adult name an object, and learns the meaning of the name from the act of naming. Wittgenstein's response: this picture is at best partial, and treating it as the model of meaning produces deep philosophical confusion.
The new framework is not a theory of meaning but a method. Three central concepts:
Language-games. Language is not a single uniform activity but a vast collection of overlapping practices — giving orders, describing objects, telling jokes, making lists, asking questions, praying, swearing. Each language-game has its own internal rules, its own kind of meaningful move. Asking "what is the meaning of X" outside a specific language-game is asking the wrong question.
Family resemblance. The members of a category — games, for instance — share no single defining feature. Some are competitive; some are not. Some have winners; some don't. Some are amusing; some are deadly serious. What unifies them is a web of overlapping similarities, like members of a family who resemble one another in various ways without any single feature common to them all.
Meaning is use. The meaning of a word is not an inner object the word labels; it is the role the word plays in actual practice. To know what a word means is to know how to use it correctly in the relevant language-games.
The private-language argument (Investigations §243-§315) is one of the most-discussed passages in 20th-century philosophy. The conclusion: a strictly private language — one whose terms refer to inner sensations only the speaker can identify — is impossible. Meaning requires public criteria of correctness; without them, "I am following the rule" and "I think I am following the rule" become indistinguishable.
The implications for philosophy of mind, philosophy of language, and the foundations of mathematics are deep and still actively debated.
J. L. Austin (1911-1960, Oxford), the central figure of postwar Oxford ordinary-language philosophy, gave the William James Lectures at Harvard in 1955. They were published posthumously as How to Do Things with Words (1962) and founded speech-act theory.
Austin's starting move: not all utterances are descriptions. The standard analytic philosophy of language had treated the descriptive (constative) utterance as the basic case — a sentence is meaningful when it states a fact that can be true or false. Austin observed that vast numbers of utterances are not descriptions but actions.
"I name this ship the Queen Elizabeth" — said by the right person, at the right ceremony, with the right bottle — does not describe the naming; it is the naming. "I promise to call you tomorrow" does not describe a promise; it makes one. "I now pronounce you husband and wife" is the marriage, not a report of it. These performative utterances are not true or false; they are felicitous (successful) or infelicitous.
Austin's mature framework distinguished three dimensions of every speech act:
Locutionary act — what is said; the literal content of the utterance.
Illocutionary act — what is done in saying it (asserting, promising, commanding, warning, christening).
Perlocutionary act — what is brought about by saying it (convincing, persuading, frightening, calming).
The framework was extended by John Searle in Speech Acts (1969), which gave it analytic precision. Searle's later taxonomy of illocutionary acts (assertives, directives, commissives, expressives, declarations) is now standard.
The downstream influence has been wide. Speech-act theory shaped pragmatics in linguistics; legal philosophy (especially around how legal performatives like contracts and verdicts work); philosophy of fiction; and recent work on hate speech, pornography, and political speech, where the question of what is being done in speech (vs merely said) is central.
Paul Grice (1913-1988, Oxford then Berkeley) developed the framework that turned speech-act theory into a systematic theory of communication. His 1967 William James Lectures (published as Studies in the Way of Words, 1989) introduced conversational implicature.
The puzzle: speakers regularly mean more than they say. "Some of the students passed the exam" technically allows that all students passed (since "all" entails "some"); but the speaker is normally taken to mean that not all passed. How is this extra meaning conveyed without being said?
Grice's answer: communication is governed by the Cooperative Principle — make your contribution such as is required, at the stage at which it occurs, by the accepted purpose or direction of the talk exchange. The principle decomposes into four maxims:
Quantity. Make your contribution as informative as required, but no more so.
Quality. Do not say what you believe to be false; do not say that for which you lack evidence.
Relation. Be relevant.
Manner. Be perspicuous — clear, brief, orderly, unambiguous.
The maxims are not always observed; what is interesting is what happens when a speaker visibly flouts a maxim. The hearer infers an additional meaning that would restore consistency with the Cooperative Principle. "Some students passed" implicates "not all passed" because if the speaker knew all had, the maxim of Quantity would have required the stronger claim.
The framework explains a vast range of pragmatic phenomena: irony (flouting Quality — saying what is plainly false to convey the opposite), metaphor, indirect requests ("can you pass the salt?" works as a request because as a literal yes/no question it would flout Relation), polite circumlocution, hint and innuendo.
Gricean pragmatics is now standard in linguistics and central to current work in philosophy of language. The recent computational work on language understanding — Rational Speech Act models, the pragmatic interpretation of LLM outputs — sits squarely in the Gricean lineage.
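A minimal Rational Speech Act sketch of the "some → not all" inference, in Python (the states, priors, and rationality parameter are illustrative choices of mine, not drawn from Grice or from any particular paper):

```python
# Literal semantics: "some" is true of both states, "all" only of the all-state.
states = ["some-but-not-all", "all"]
utterances = ["some", "all"]
prior = {"some-but-not-all": 0.5, "all": 0.5}

def literal(u, s):
    return 1.0 if (u == "some" or s == "all") else 0.0

def normalise(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

def L0(u):                        # literal listener: semantics times prior
    return normalise({s: literal(u, s) * prior[s] for s in states})

def S1(s, alpha=4.0):             # pragmatic speaker: prefers informative utterances
    return normalise({u: L0(u)[s] ** alpha for u in utterances})

def L1(u):                        # pragmatic listener: reasons about the speaker
    return normalise({s: S1(s)[u] * prior[s] for s in states})

print(L0("some"))   # literal: half the probability on each state
print(L1("some"))   # pragmatic: most of the probability on "some-but-not-all"
```

The literal listener treats "some" as compatible with "all"; the pragmatic listener, reasoning about what an informative speaker would have said, concentrates its probability on "some but not all", which is the Gricean derivation run as Bayesian inference.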
Willard Van Orman Quine (1908-2000, Harvard) is the central American philosopher of the post-war period. His 1951 paper "Two Dogmas of Empiricism" (Philosophical Review) is among the most consequential papers in 20th-century philosophy.
Quine's targets are the two foundational commitments of logical empiricism:
The analytic-synthetic distinction. The doctrine that sentences divide into those true by virtue of meaning alone (analytic — "all bachelors are unmarried") and those true by virtue of empirical fact (synthetic). Quine argues, through a sustained analysis of synonymy, definition, and substitution, that no defensible criterion separates the two; the supposed distinction collapses on close inspection.
Reductionism. The doctrine that every meaningful statement reduces to a logical construction from sense-experience reports. Quine argues that no isolated sentence faces experience alone; sentences confront experience only as parts of larger theoretical wholes.
The constructive proposal — sometimes called confirmation holism or the Duhem-Quine thesis — is that the unit of empirical significance is not the individual statement but the whole of science. Any statement can be held true in the face of recalcitrant experience, by adjusting other beliefs; any statement can be revised, including the laws of logic, in service of overall theoretical fit.
Quine's later Word and Object (1960) extended the program. The famous indeterminacy of translation thesis: the empirical evidence available to a field linguist (the foreign speaker's behaviour, including their assents and dissents to sentences in their own language under various stimuli) is in principle compatible with multiple, mutually-incompatible translation manuals. There is no fact of the matter, beyond the empirical evidence, about which translation is correct.
The implication is severe. If translation is indeterminate at the level of behavioural evidence, then the meanings the translation purports to capture are themselves indeterminate. There is no Fregean Sinn that the translator is trying to recover; there are only the patterns of speech-disposition the linguist can observe.
Donald Davidson (1917-2003, Stanford then Berkeley), deeply influenced by Quine and his longtime interlocutor, developed the most influential post-Quinean theory of meaning. The 1967 paper "Truth and Meaning" is the foundational statement.
Davidson's proposal: a theory of meaning for a natural language should take the form of a truth theory — specifically, an axiomatic theory in the style of Tarski's 1933 truth definition for formal languages. The theory delivers, for each sentence S of the language, a theorem of the form:
The famous T-sentence: "Snow is white" is true if and only if snow is white. The truth condition is given by translating (or in the simple case, disquoting) the sentence on the right.
Davidson's claim: knowing the meaning of a sentence is knowing its truth-conditions; a finitely-axiomatised truth theory generates infinitely many T-sentences, mirroring our ability to understand infinitely many novel sentences from finite linguistic resources. The theory is therefore a candidate for what speakers tacitly know.
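A toy illustration of the finite-axioms-infinite-theorems point (the object language and model are invented; real Davidsonian theories target natural languages, not Python tuples):

```python
# A recursive truth definition in the Tarski/Davidson style: a few clauses
# fix truth-conditions for every sentence the connectives can generate.
model = {"snow": "snow", "grass": "grass",
         "white": {"snow"}, "green": {"grass"}}

def true_in(sentence, m=model):
    op = sentence[0]
    if op == "atom":                              # ("atom", "snow", "white")
        _, term, pred = sentence
        return m[term] in m[pred]
    if op == "not":
        return not true_in(sentence[1], m)
    if op == "and":
        return true_in(sentence[1], m) and true_in(sentence[2], m)
    raise ValueError(f"unknown operator {op!r}")

# "Snow is white" is true (in the model) iff snow is white:
print(true_in(("atom", "snow", "white")))                         # True
print(true_in(("and", ("atom", "snow", "white"),
                       ("not", ("atom", "grass", "white")))))     # True
```

Three recursive clauses deliver truth-conditions for arbitrarily long sentences, which is the structural point behind the compositionality claim.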
The companion programme is radical interpretation — the project of constructing a truth-theoretic interpretation of a speaker from no prior knowledge of their language, beginning only from observable assents and dissents in observable circumstances. Davidson's principle of charity requires that the interpretation maximise the rational coherence of the speaker's attitudes — speakers are presumed to be largely right, largely consistent, largely interpretable.
The Davidsonian framework dominated formal semantics from the 1970s through the 1990s. The Montague-grammar tradition in linguistics extended it with model-theoretic apparatus (David Lewis's General Semantics; Richard Montague's The Proper Treatment of Quantification in Ordinary English, 1973). The combined Davidson-Montague programme produced what is now called formal semantics, a major branch of contemporary linguistics.
Saul Kripke (1940-2022, Princeton) gave three lectures at Princeton in January 1970. The transcript was published as Naming and Necessity (1980) and is arguably the most influential single text in the philosophy of language since the Tractatus.
Kripke's target was the descriptivist theory of proper names — the orthodox view, attributable to Frege and especially Russell, that a proper name like "Aristotle" abbreviates a description (or cluster of descriptions) like "the philosopher who taught Alexander the Great" or "the author of the Nicomachean Ethics."
Kripke's counter-arguments are concise and devastating:
The modal argument. "Aristotle might not have taught Alexander the Great" is true. But on the descriptivist account, "Aristotle" means "the philosopher who taught Alexander," so the sentence becomes "The philosopher who taught Alexander might not have taught Alexander," which is contradictory.
The epistemic argument. Most speakers who use the name "Aristotle" successfully don't know specific descriptions that uniquely identify him. Yet they refer.
The semantic argument. Even if everyone used a single description, they would refer to whoever the name was originally given to, not to whoever fits the description. If it turned out the historical Aristotle did not, in fact, teach Alexander, the descriptivist would have to say "Aristotle taught Alexander" was true (because someone else, who fits the description, taught him); the intuitive answer is that the sentence is false.
Kripke's positive proposal: proper names are rigid designators. They refer to the same individual in every possible world in which that individual exists. Reference is fixed by an initial baptism (a "dubbing") and transmitted through a causal chain of name-uses linking later speakers to the original act of naming.
The framework extends to natural-kind terms (gold, water, tigers): they too are rigid designators referring to a kind whose identity is fixed by the underlying nature, not by surface descriptions. "Water is H₂O" is necessary if true, even though it is not knowable a priori. The category of necessary a posteriori truths — necessary, but discovered by empirical investigation — is Kripke's signature philosophical contribution.
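A possible-worlds sketch of the contrast between a rigid name and a non-rigid description (the worlds, and the counterfactual teacher, are invented for illustration):

```python
# Each world records who, at that world, taught Alexander.
worlds = {
    "actual":         {"teacher_of_alexander": "Aristotle"},
    "counterfactual": {"teacher_of_alexander": "Speusippus"},
}

def aristotle(world):                     # rigid: the same individual at every world
    return "Aristotle"

def the_teacher_of_alexander(world):      # non-rigid: whoever fits the description
    return worlds[world]["teacher_of_alexander"]

for w in worlds:
    print(w, aristotle(w), the_teacher_of_alexander(w))

# "Aristotle might not have taught Alexander" is true: at the counterfactual
# world the name still picks out Aristotle, but the description picks out
# someone else -- so the name cannot simply abbreviate the description.
```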
Hilary Putnam (1926-2016, Harvard) developed the externalist theory of meaning in parallel with Kripke. His 1975 paper "The Meaning of 'Meaning'" introduced the Twin Earth thought experiment.
Imagine a planet, Twin Earth, identical to ours in every observable respect, except that the substance that fills the lakes and rivers, that people call "water," that is colourless, odourless, and drinkable — is not H₂O but a different chemical, XYZ, with all the same superficial properties.
Question: when an Earthling says "water is wet" and a Twin Earthling says "water is wet," do they mean the same thing?
Putnam's claim: no. The Earthling's "water" refers to H₂O; the Twin Earthling's "water" refers to XYZ. The two utterances have different meanings even though the speakers' internal psychological states are, by stipulation, identical. Therefore: "Meanings just ain't in the head."
The implication is that semantic content depends on environmental and social facts external to the speaker's mind. Putnam adds the linguistic division of labour: most ordinary speakers don't know the chemical structure of water, but they refer to H₂O nevertheless because the linguistic community includes experts whose use of the term is authoritative, and ordinary speakers defer to expert use.
Externalism became the dominant view in philosophy of mind and language from the 1980s onward. Tyler Burge's parallel work (the "arthritis" thought experiment, 1979) extended the argument from natural-kind terms to broad-content thoughts more generally. The view's implications for the philosophy of mind — that mental content is partly constituted by environmental facts — are still being worked out.
Putnam's later work, increasingly heterodox, eventually rejected the metaphysical realism that had originally motivated the externalist framework. Realism with a Human Face (1990) and The Threefold Cord (1999) develop a position closer to Wittgenstein's — meaning grounded in human practice, with metaphysical realism set aside.
David Lewis (1941-2001, Princeton) was the most systematic philosopher of his generation. His contributions to philosophy of language run through three works.
Convention (1969), based on Lewis's doctoral dissertation, gave a game-theoretic analysis of language as a coordination convention. A convention is a regularity in behaviour that solves a recurring coordination problem and is sustained by the participants' mutual expectation that everyone will conform. Language — particular words mapping to particular meanings — fits this structure precisely. The framework extended Hume's brief remarks on convention into a full Schelling-style coordination theory.
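A minimal sketch of the coordination structure (the states, signals, and payoff rule are invented; Lewis's own treatment is far richer):

```python
# A toy Lewis signalling game: two signal-to-meaning mappings solve the
# coordination problem equally well, so which one a community uses is
# a matter of convention rather than of uniquely rational choice.
import itertools

states  = ["fire", "flood"]
signals = ["A", "B"]

def payoff(sender_map, receiver_map):
    """One point per state in which the receiver recovers the sender's state."""
    return sum(1 for s in states if receiver_map[sender_map[s]] == s)

sender_maps   = [dict(zip(states, p)) for p in itertools.permutations(signals)]
receiver_maps = [dict(zip(signals, p)) for p in itertools.permutations(states)]

for sm in sender_maps:
    for rm in receiver_maps:
        print(sm, rm, payoff(sm, rm))
# Exactly two pairings score 2 (perfect coordination); nothing in the payoffs
# selects between them -- a "language" is whichever equilibrium sticks.
```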
"Languages and Language" (1975) distinguished language as a formal object (a set of sentence-meaning pairs) from language as a social practice (the use of a particular formal language by a particular community). The two questions — what is the structure of English as a formal object? what is the relation between speakers and their formal language? — are different and need different theories.
Counterfactuals (1973) and the wider possible-worlds framework provided the modal apparatus that has been essential to subsequent semantics. Lewis's modal realism — the view that all possible worlds genuinely exist, on a par with the actual world — was philosophically extreme but technically powerful. Most working semanticists use Lewis's possible-worlds machinery without endorsing the modal-realist metaphysics.
Two-dimensional semantics (developed by David Kaplan and Robert Stalnaker, and put to work by Lewis) gives a formal apparatus for the Kripke-Putnam necessity-distinctions: each sentence has both a character (a function from contexts to contents) and a content (a function from worlds to truth-values). The framework reconciles a posteriori necessities with a priori knowability of meaning-conditions.
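A toy rendering of the character/content distinction (the contexts, worlds, and locations are invented for illustration): the character of "I am here" delivers a truth at every context of utterance, yet the content fixed at a context is usually contingent across worlds.

```python
# Contexts are "proper": the speaker really is at the place, at the context's world.
contexts = [
    {"speaker": "Ann", "place": "Oslo", "world": "w1"},
    {"speaker": "Bo",  "place": "Lima", "world": "w2"},
]
# Where each person is located, at each world of evaluation:
location = {"w1": {"Ann": "Oslo", "Bo": "Lima"},
            "w2": {"Ann": "Lima", "Bo": "Lima"}}

def character_I_am_here(context):
    """Character: maps a context of utterance to a content (world -> truth-value)."""
    speaker, place = context["speaker"], context["place"]
    return lambda world: location[world][speaker] == place

for c in contexts:
    content = character_I_am_here(c)
    # True at the context's own world, yet not true at every world of evaluation:
    print(content(c["world"]), [content(w) for w in ("w1", "w2")])
```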
Lewis's reach extended into philosophy of mind, metaphysics, decision theory, and ethics. Within philosophy of language specifically, his synthesis of Davidson-Montague semantics with Kripkean modality is the standard backdrop for contemporary work.
The philosophy of language has been in continuous dialogue with linguistics, particularly with the generative tradition founded by Noam Chomsky.
Chomsky's Syntactic Structures (1957) and the subsequent Aspects of the Theory of Syntax (1965) framework reframed linguistics around generative grammar — a finite system of rules that can produce the infinitely many grammatical sentences of a language and exclude the ungrammatical ones.
The deep philosophical claim: speakers tacitly know a generative grammar of their language. This linguistic competence is realised in the brain and is acquired by children too rapidly and uniformly to be learned from observed input alone. The conclusion — the poverty of the stimulus argument — is that humans are born with substantial linguistic structure already in place. Chomsky's universal grammar is the innate substrate.
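A toy generative grammar in code (the fragment is made up, not Chomsky's): a handful of rules, one of them recursive, generates sentences of unbounded length from a finite lexicon.

```python
import random

grammar = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "that", "VP"]],   # recursive rule
    "VP": [["V", "NP"], ["sleeps"]],
    "N":  [["cat"], ["dog"], ["linguist"]],
    "V":  [["chases"], ["sees"]],
}

def generate(symbol="S"):
    if symbol not in grammar:                  # terminal word
        return [symbol]
    expansion = random.choice(grammar[symbol])
    return [word for part in expansion for word in generate(part)]

for _ in range(3):
    print(" ".join(generate()))
# e.g. "the dog that sees the cat chases the linguist" -- finite rules,
# no upper bound on sentence length.
```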
The position is controversial. Quine rejected it on philosophical grounds (linguistic structure cannot exceed what is empirically determinable). Connectionists and statistical-learning theorists rejected it on cognitive grounds. The empirical literature is mixed.
The contemporary situation is interesting. The 2017-25 success of large language models — GPT-style systems learning rich linguistic structure from raw text without explicit grammar rules — challenges the strong nativist position. The systems learn syntax, semantics, and substantial pragmatics from statistical exposure to text, with no built-in grammatical scaffolding. This counts as evidence against extreme poverty-of-stimulus arguments and as data the philosophy of language is still absorbing.
Chomsky himself remains skeptical that LLMs are doing anything like what human language users do. The dispute will continue. What is clear is that the questions Chomsky put on the agenda — what is linguistic competence, how is it acquired, what is its biological basis — remain central to both linguistics and philosophy of language.
Robert Brandom (Pittsburgh) developed the most ambitious post-Davidsonian theory of meaning. Making It Explicit (1994) is the foundational work; Articulating Reasons (2000) is the accessible introduction.
Brandom's inferentialism: meaning is constituted not by reference but by inferential role. To know the meaning of "dog" is to know what inferences are licensed by claims involving "dog" — that dogs are mammals, that they have hearts, that they bark. The semantic value of an expression is exhausted by its inferential connections.
The framework descends from Wilfrid Sellars and the Pittsburgh school's reading of Hegel. Brandom is unusually explicit about the lineage: Frege, Sellars, Quine, Davidson on the analytic side; Hegel and Wittgenstein on the continental side.
The technical apparatus distinguishes:
Material inferences — inferences whose validity depends on the content of the concepts involved (from "Pittsburgh is east of San Francisco" to "San Francisco is west of Pittsburgh") — vs formal inferences (from "p and q" to "p"). Brandom argues material inference is conceptually prior; formal inference is a special case.
Score-keeping practices — speakers track each other's commitments and entitlements; a successful claim adds to a speaker's score in ways that license further moves; speakers can challenge each other's entitlements. The pragmatic structure of the discourse is the substrate that makes meaning possible.
Brandom's framework has been less widely adopted than Davidson's or Kripke's but is influential in continental-analytic dialogue (especially around Hegel-readings) and in social epistemology, where the score-keeping framework illuminates how communities track and revise their collective commitments.
The sorites paradox — also called the heap paradox — is one of the oldest unsolved problems in philosophy. The original Greek version: a single grain of sand is not a heap. Adding one grain to a non-heap does not make it a heap. By induction, no number of grains is ever a heap. But ten million grains plainly is.
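The argument's structure, with H(n) read as "n grains make a heap" (a standard modern formalisation, not the ancient wording):

¬H(1)
∀n (¬H(n) → ¬H(n+1))
therefore ¬H(10,000,000)

Each premise is individually compelling, the reasoning is classically valid, and the conclusion is absurd.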
The same structure applies to most natural-language predicates: bald, tall, red, rich, old. They admit borderline cases; small differences don't seem to flip the predicate; sustained accumulation of such differences must, on pain of contradiction, eventually flip the predicate.
The major theoretical responses:
Epistemicism (Timothy Williamson, Vagueness, 1994). There is a sharp boundary between heap and non-heap; we just don't know where it is. The vagueness is in our knowledge, not in the world.
Supervaluationism (Kit Fine, David Lewis). Vague terms have multiple admissible precisifications. A sentence is true if it is true on every admissible precisification, false if false on every one, and indeterminate if the precisifications disagree.
Many-valued logic / fuzzy logic. Replace bivalent truth values with a continuum (or graded values). "X is bald" can be 0.7-true. The semantics requires non-classical logic.
Contextualism (Stewart Shapiro, Diana Raffman). Vague terms have context-dependent extensions. "Heap" picks out different sets in different conversational contexts; the paradox arises from illegitimately fixing the context across the inductive steps.
The problem is unsolved. Each response has costs; the field's textbooks present the menu of options without endorsing one. The deeper puzzle — that natural language is shot through with predicates that admit sorites pressure, and yet language works — remains a productive constraint on theories of meaning.
Robert Stalnaker (MIT) developed the most influential theory of conversational dynamics. The essay "Assertion" (1978), Inquiry (1984), and Context (2014) are the foundational works.
Stalnaker's framework treats a conversation as an ongoing project of narrowing down which possible world is actual. At any moment, the participants share a context set — the set of possible worlds compatible with what has been mutually accepted so far. An assertion proposes to update the context set by eliminating worlds in which the asserted content is false.
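A minimal sketch of the update mechanism (the worlds, facts, and accommodation step are invented for illustration):

```python
# A Stalnakerian context set: worlds are dicts of facts, assertion is
# intersection, presupposition is a felicity condition on the context.
worlds = [
    {"john_smoked": True,  "john_smokes_now": False},
    {"john_smoked": True,  "john_smokes_now": True},
    {"john_smoked": False, "john_smokes_now": False},
]
context_set = list(worlds)                      # nothing mutually accepted yet

def assert_prop(context, prop):
    """An assertion eliminates the worlds in which its content is false."""
    return [w for w in context if prop(w)]

def presupposition_met(context, presup):
    """Felicity check: the presupposition must hold throughout the context."""
    return all(presup(w) for w in context)

# "John has stopped smoking" presupposes he smoked and asserts he doesn't now.
presup = lambda w: w["john_smoked"]
content = lambda w: not w["john_smokes_now"]

print(presupposition_met(context_set, presup))          # False: not yet common ground
context_set = assert_prop(context_set, presup)          # accommodate the presupposition
print(presupposition_met(context_set, presup))          # True
context_set = assert_prop(context_set, content)         # update with the assertion
print(context_set)                                      # one world left
```

Assertion is intersection: each accepted assertion shrinks the set of live possibilities.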
The framework gives a clean account of several pragmatic phenomena:
Presupposition. A sentence presupposes p if it can only be felicitously asserted in contexts where p is already in the common ground. "John has stopped smoking" presupposes that John smoked. The context-set framework explains presupposition as a constraint on the context sets in which the sentence updates appropriately.
Conversational implicature (the Gricean phenomena). Implicatures are inferred via reasoning about what would update the context appropriately given the cooperative principle.
Indexical and demonstrative reference. Indexicals (I, here, now) get their content from features of the conversational context — the speaker, location, time of utterance.
The Stalnakerian framework provides the formal apparatus that pragmatics now uses. The lineage running through Stalnaker, Roberts, Beaver, Geurts, von Fintel, and Heim has produced a body of formal-semantic / formal-pragmatic work that integrates the Davidson-Montague programme with the dynamic-context machinery.
Stalnaker's broader philosophical position — that thought is fundamentally about narrowing down the actual world among the possible ones — has consequences for philosophy of mind (mental content as propositions, propositions as sets of possible worlds) and metaphysics (modal realism vs ersatzism).
The Grice-Stalnaker tradition has matured into a substantial empirical-formal programme. The major contemporary developments:
Relevance Theory (Sperber and Wilson, 1986/1995). A neo-Gricean framework that replaces the four maxims with a single principle of optimal relevance — the assumption that an utterance carries adequate cognitive effect for minimal processing cost. The framework has been particularly fruitful in the cognitive-pragmatic study of metaphor, irony, and ad hoc concept construction.
Game-Theoretic Pragmatics. The Lewis-style coordination framework extended with explicit game-theoretic models. The Rational Speech Act (RSA) model (Frank and Goodman, 2012) offers a Bayesian account of how listeners infer speaker intent and how speakers anticipate listener inference (a minimal version is sketched in code in the Grice section above). RSA models have been productive in computational linguistics, especially for explaining pragmatic phenomena in LLM outputs.
Lexical Pragmatics. Words don't have sharp, context-independent meanings; their contributions to utterance content are negotiated in context. The "cup" in "I drank a cup of coffee" (a measured quantity of liquid) is not quite the "cup" in "I broke a cup" (a piece of crockery). Modern lexical pragmatics studies how concepts are dynamically tightened and loosened in use.
Experimental Pragmatics. The empirical study of how human listeners actually interpret speech — using methods from psycholinguistics (eye-tracking, self-paced reading, comprehension-time experiments). The findings sometimes match philosophical intuitions and sometimes don't; the field is now a research programme distinct from armchair pragmatics.
The relationship between philosophy of language and these empirical fields is now bidirectional. Philosophical theories suggest experimental predictions; empirical findings constrain philosophical theorising. The siloed philosophical pragmatics of the 1980s has become an interdisciplinary research culture.
The 2017 transformer paper (Vaswani et al., "Attention Is All You Need") and the subsequent rapid progress in large language models have produced new philosophy-of-language questions and revived old ones.
The questions:
Do LLMs understand? The Bender-Koller "octopus paper" (2020) and the "stochastic parrots" paper (Bender, Gebru, et al., 2021) argued that LLMs trained only on textual form cannot acquire grounded meaning — they manipulate signs without understanding. The opposing view (Mahowald, Ivanova, et al.; the "language network" research programme): LLMs may not have human-style grounded understanding but they have something importantly meaning-like, sufficient for many practical purposes and theoretically interesting in its own right.
What kind of cognitive system is an LLM? The system has rich linguistic competence in Chomsky's sense but no embodiment, no perception, no biological history. The philosophical question is whether linguistic competence in this disembodied form is meaningfully continuous with human linguistic competence or a different (perhaps category-distinct) thing.
Do LLMs have semantic content? The Searle Chinese Room argument (1980) gets re-purposed for the LLM case. The classic externalist response (Putnam, Burge) suggests that meaning depends on causal-historical connections that LLMs may or may not have. The internalist response suggests that LLMs may have semantic content of a more limited but still semantic kind.
What does LLM success tell us about language? If statistical pattern-matching over text can produce most of the apparent surface phenomena of meaning, this constrains the kinds of theory that can be true of how human language works. Either humans are doing something importantly different (and the LLM is a parlour trick), or the long Chomskyan tradition has overstated the degree to which language requires built-in grammatical structure.
The honest assessment in 2026: nobody knows. The empirical case for some kind of LLM linguistic competence is strong; the philosophical case for what to make of it is unsettled.
Alfred Tarski (1901-1983, Lwów school then Berkeley) gave philosophy of language its most consequential technical achievement. His 1933 paper "The Concept of Truth in Formalized Languages" defined truth for formal languages in a way that escaped the classical paradoxes (the Liar) and gave a model for what a theory of truth could look like.
The Convention T requirement: any adequate theory of truth for a language L must entail, for every sentence S of L, an instance of the schema "S is true in L iff p," where p is a translation of S into the metalanguage. The schema is famously simple — "snow is white" is true iff snow is white — but the technical work of defining truth recursively for the entire infinite language of arithmetic was Tarski's achievement.
The hierarchy Tarski insisted on: truth for an object language must be defined in a richer metalanguage. The Liar paradox ("This sentence is false") requires a language to refer to its own truth; Tarski's solution forbids this by separating the level of the language from the level of the truth-predicate.
The Tarskian framework is now the substrate of formal semantics, model theory, and most subsequent philosophical theories of truth. Davidson built his theory of meaning on it. The "deflationist" theories of truth (Horwich, Field) take Tarski's biconditionals as the whole content of the truth-concept. The "correspondence" theories use Tarski's apparatus to give technical form to a much older intuition.
The unresolved philosophical question is what to make of natural language. Tarski himself thought natural language was inherently inconsistent (because it tolerates self-reference) and not amenable to a Tarskian treatment. Davidson and his successors disagreed; the resolution remains a project rather than an achievement.
Michael Dummett (1925-2011, Oxford) was the most ambitious British philosopher of language of the post-war period. His Frege: Philosophy of Language (1973) is one of the great works of philosophical exegesis; The Logical Basis of Metaphysics (1991) develops his own framework.
Dummett's central methodological claim: a theory of meaning is the foundation of metaphysics. The choice between realism and anti-realism — about the past, about mathematics, about the external world — turns on the choice between truth-conditional semantics (which sustains realism) and verification-conditional or assertibility-conditional semantics (which sustains anti-realism).
His own preferred view leans anti-realist. The meaning of a sentence, Dummett argued, must be something speakers can in principle recognise when they encounter it. Truth-conditions for sentences about the unobservable past or about Cantorian uncountable infinities exceed the recognitional capacities of any actual speaker; therefore those truth-conditions cannot be the meanings the speaker grasps. The Brouwerian intuitionist tradition in philosophy of mathematics, which Dummett defended, generalises to other domains.
The framework strikes most American analytic philosophers (Davidson explicitly among them) as alien: they accept truth-conditional semantics and find Dummett's anti-realism strained. The dispute between Dummett and Davidson about whether truth or assertibility is the more basic semantic concept ran through the 1970s and 80s.
The contribution Dummett made even to those who reject his anti-realism: a clear articulation of what a theory of meaning is for, and a careful analysis of what features such a theory must have to be adequate. The "manifestation" requirement (semantic content must be manifestable in linguistic behaviour) is now part of the standard apparatus, even when its anti-realist implications are resisted.
One of the most active research areas in contemporary philosophy of language is conceptual engineering — the philosophical study of how, when, and why we should revise the concepts we use, rather than just analyse them.
The framework draws on Carnap's notion of explication (replacing a vague concept with a precise one for theoretical purposes) and on Sally Haslanger's ameliorative analysis (asking what concept we should adopt given our purposes, rather than what concept we currently have). Herman Cappelen's Fixing Language (2018) gave the field a systematic statement; David Plunkett and others have extended it.
The motivating cases are political. Haslanger's analysis of "woman" and "race" (2000, 2012) proposed that the philosophical task is not to find the actual extension of these terms — that is the empirical sociologist's job — but to ask what concepts of "woman" and "race" we should use given our political commitments to gender and racial justice.
The framework raises hard questions. If we engineer a concept, are we still talking about the thing we were before? When does conceptual revision succeed and when does it produce talking-past-each-other? Cappelen's "lexical effects" framework gives an empirical handle: a concept-revision succeeds insofar as the new concept does the work the old concept used to do, plus the additional work the revision was meant to enable.
The contemporary debate is active. Sarah-Jane Leslie on generics, Esa Díaz-León on social kinds, Catarina Dutilh Novaes on argumentation — there is now a substantial literature on how concepts function in social-political life and how they can be deliberately reshaped.
Conceptual engineering is, in a sense, where the philosophy of language re-engages with the political philosophy from which it had largely separated itself in the analytic tradition's middle decades.
Timothy Williamson (Oxford) is among the most prolific contemporary analytic philosophers and one of the central figures in post-Quinean philosophy of language. His Vagueness (1994), Knowledge and Its Limits (2000), and The Philosophy of Philosophy (2007) cover his major positions.
Williamson's distinctive moves:
Epistemicism about vagueness. The "tall" boundary, the "bald" boundary, the "heap" boundary — they are all sharp, even if we do not and perhaps cannot know where they fall. The argument: bivalence and classical logic are too well entrenched to give up; supervaluationism creates technical mess; many-valued logic is unmotivated; epistemicism is the simplest way of keeping classical logic intact.
Knowledge-first epistemology. The relevant primitive is not justified true belief but knowledge itself. Knowledge is not a complex property to be analysed in terms of belief, justification, and truth; it is a basic mental state. The framework reverses much of the post-Gettier epistemological literature.
Modal logic and metaphysics. Williamson's Modal Logic as Metaphysics (2013) defends a striking position: necessitism, the view that everything that exists exists necessarily. The argument is technical (roughly: the simplest and strongest quantified modal logic, S5 with the Barcan formulas, vindicates necessitism), and the conclusion is philosophically extreme.
Williamson's work is unusual for its technical density combined with its breadth. He treats philosophy of language, philosophy of mind, epistemology, and metaphysics as a single integrated field. The "post-Quinean" label fits in that he, like Quine, treats philosophical theorising as continuous with scientific theorising — though Williamson is more willing than Quine to defend strong metaphysical theses on technical-logical grounds.
The contemporary student of philosophy of language has to engage with Williamson; even his critics typically find that engagement clarifies their own positions.
One of the most active applied debates in philosophy of language is over slurs — the linguistic items that derogate members of social groups. The debate sits at the intersection of semantics, pragmatics, and ethics.
The central puzzle: a slur and its neutral counterpart (e.g. an ethnic slur and the corresponding ethnonym) seem to refer to the same group of people. Yet the two expressions differ profoundly. Using a slur is not just descriptively neutral; it conveys derogation, expresses contempt, and can constitute a form of speech-act-level injury.
The competing analyses:
Semantic accounts (Hom, May): the derogation is part of the slur's literal meaning. The slur and the neutral term have different truth-conditions — sentences with slurs may be systematically false because no one fits the slur's full descriptive content.
Pragmatic and expressivist accounts (Potts, Williamson): the derogation is conveyed through conventional implicature or expressive content, not as part of the asserted truth-conditional content. The slur and the neutral term refer to the same group; the slur additionally expresses derogation. A deflationary variant (Anderson and Lepore) holds that slurs offend simply because they are prohibited words, not because of any special content they carry.
Speech-act / illocutionary accounts (Maitra, Langton, Saul): the harm of slurs is in the speech act they perform — subordinating the target group, conferring authority on derogating views, contributing to a conversational and social context in which derogation is normalised.
The framework choice has practical consequences for how to think about the harms of slurs (free speech debates, hate-speech regulation, university policy), about reclamation (when an in-group uses a slur to neutralise its derogating force), and about appropriation (when out-group members use a reclaimed slur).
The debate is one of the few places where philosophy of language directly engages with applied ethics, and it has been productive. The technical apparatus of conventional implicature, expressive content, presupposition, and speech-act theory all bear on the analysis. The political stakes give the technical work consequence.
Metaphor has been a central problem for philosophy of language since at least Aristotle. The 20th-century debate is structured around two competing analyses.
The substitution / comparison view (the orthodox view from Aristotle through the early modern period). A metaphor is an indirect way of saying something that could be said literally. "Juliet is the sun" means something like "Juliet is bright, central, sustaining." The metaphor is a stylistic ornament; the propositional content can be paraphrased.
The interaction view (I. A. Richards, 1936; Max Black, 1955). A metaphor sets up an interaction between two semantic fields. The "tenor" (Juliet) and "vehicle" (sun) are made to interact such that features of the vehicle reorganise our perception of the tenor and vice versa. The cognitive effect cannot be paraphrased; the metaphor is irreducibly creative.
Davidson's no-content view ("What Metaphors Mean," 1978). Metaphors literally have only their literal meaning — and the literal meaning of "Juliet is the sun" is false. Metaphors don't have a special metaphorical meaning; they have their literal meaning, which is used to cause a particular cognitive response in the hearer. The cognitive effect is real; it is not a kind of meaning.
Conceptual metaphor theory (Lakoff and Johnson, 1980). Metaphors are not surface phenomena of language but expressions of underlying conceptual mappings. ARGUMENT IS WAR, TIME IS MONEY, LIFE IS A JOURNEY — these conceptual metaphors structure thought, not just speech, and they are evidenced by patterns of metaphorical expression across many particular utterances.
The cognitive-linguistics tradition that grew out of Lakoff-Johnson has produced empirical work on metaphor in everything from political rhetoric to medical communication to scientific theorising (Boyd, Hesse). Whether conceptual metaphor is a deep cognitive structure or a surface descriptive convenience remains contested. The frameworks are competing on largely separate evidence streams.
An accessible orientation to the field's central questions: how words mean, the Frege-Russell tradition, the Wittgenstein turn, the post-1960 framework. A useful first stop before tackling the primary literature.
· Wittgenstein's Language-Games Made Easy — the later Wittgenstein in compressed form: family resemblance, language-games, meaning as use, the private-language argument. The core of Philosophical Investigations §1-§250.
· Searle's Speech Act Theory in 3 Minutes — Searle's systematisation of Austin: the five categories of illocutionary act, felicity conditions, the role of intention. The starting point for any work on what speech does.
Read: Frege's "On Sense and Reference" first (the founding ten pages); Russell's "On Denoting" second; the Tractatus if you can stand it; the Investigations §1-§250; Quine's "Two Dogmas"; Davidson's "Truth and Meaning"; Kripke's Naming and Necessity. That sequence is the canon. After it, you can read any contemporary paper in philosophy of language and follow what is going on.
After nearly a century and a half of sustained work, philosophy of language has converged on a few claims and remained productively unsettled on others.
The convergent claims:
1. Meaning is not just reference. The Frege puzzle is real; identity statements can be informative; some account of mode-of-presentation, narrow content, or two-dimensional structure is needed.
2. The unit of analysis is at least the sentence, not the word. Frege's context principle. Most philosophical confusion at the word level dissolves at the sentence level.
3. Language is many things. Wittgenstein's language-games and Austin's speech acts both make this point. There is no single function of language; meaning theories have to handle the variety.
4. Pragmatic phenomena are real and structured. Grice's implicatures, Stalnaker's context dynamics, Sperber-Wilson's relevance — pragmatics is a respectable formal-empirical discipline.
5. Externalism about meaning has substantial support. Kripke and Putnam's arguments hold up. What words mean depends on the world and the linguistic community, not just on the speaker's head.
The unsettled questions:
1. What grounds linguistic competence — innate structure (Chomsky), social practice (Wittgenstein, Brandom), statistical pattern (the LLM challenge)?
2. What is the relation between meaning and truth — truth-conditional semantics (Davidson) vs inferential-role semantics (Brandom) vs proof-theoretic semantics (Dummett)?
3. How should we model the dynamics of conversational context, the persistent vagueness of natural language, and the apparent indeterminacy of much actual usage?
The field is more empirical and more interdisciplinary than at any point in its history. Questions that began as armchair puzzles now run through linguistics, computer science, cognitive science, and AI. The work continues.
Volume III, Deck 15. From Frege's Begriffsschrift in 1879 to the philosophy of LLMs in the 2020s. The hundred-and-fifty-year argument about how words attach to the world.
The sequence — Frege, Russell, Tractatus, Vienna, Investigations, Austin, Quine, Davidson, Kripke — is the canon. Anyone who has worked through it carefully has been formed by the most rigorous philosophy of the 20th century.
Set in Computer Modern Serif and Sans. Drafted in May 2026.