The mind treated as an information-processing system — what it perceives, how it remembers, why it errs, and what seventy years of laboratory work have taught us about thought itself.
Discipline · Cognitive Sci.
Founded · 1956 / 1967
Leaves · 30
Cognitive Psychology · Lede · ii
A first word · The mind, examined in detail.
Cognitive psychology is the experimental study of mental processes — perception, attention, memory, language, reasoning, decision-making — treated as steps in an information-handling system that can be measured in milliseconds and modelled in flowcharts.
It replaced behaviourism in the late 1950s by simply ignoring its main rule. Behaviourists held that internal states were unobservable and therefore off-limits. The cognitivists pointed at the digital computer, recently invented, and observed that it too had unobservable internal states yet was clearly doing something. By 1967, when Ulric Neisser gave the field its name, the door had been kicked open.
This deck is a thirty-leaf survey of the discipline as it now stands: its founders, its core findings, the studies that survived replication and the ones that did not, and the books a serious reader should know.
The Deck Catalog · Vol. XII — ii —
Definition · What it is · iii
Chapter I · What cognitive psychology is.
Cognitive psychology is one of the four large branches of academic psychology — alongside social, developmental, and clinical — and it is the branch that took the computer revolution most seriously. Its working assumption is that the mind is a system that takes input (sensation), transforms it (encoding, attention, memory), and produces output (action, speech, decisions), and that each transformation can be studied with experiments measured in reaction time, accuracy, and error patterns.
Five family resemblances
Internal states are real. Memories, intentions, mental images count as legitimate objects of study.
Mind as information processor. The computer is the working metaphor — though increasingly contested.
Reaction time is data. A 50-millisecond difference can falsify a theory.
Error reveals structure. Illusions, slips, and biases are how we infer the underlying machinery.
Replicability matters. The 2010s replication crisis hit cognitive priming research especially hard.
What it is not: psychotherapy, personality assessment, neuropsychiatry, or armchair speculation about consciousness. Those are cousins, not the discipline itself.
Pre-history · The behaviourist age · iv
Chapter II · What it broke from.
From John B. Watson's 1913 manifesto until roughly the late 1950s, American academic psychology was officially behaviourist. The doctrine held that since mental events could not be directly observed, science should restrict itself to stimuli and responses. B. F. Skinner pushed the programme furthest in his Verbal Behavior (1957), which proposed to explain language as a chain of reinforced operants.
Three things ended it. The first was Noam Chomsky's 1959 review of Skinner, which argued, devastatingly, that no reinforcement schedule could explain how a child produces grammatical sentences she has never heard. The second was the rise of the digital computer, which provided a working example of an unobservable internal process producing structured output. The third was the simple accumulation of experimental results — on attention, memory, imagery — that behaviourism had no vocabulary to describe.
By 1967 the revolution was won. But behaviourism left a useful inheritance: a commitment to operational definitions and quantitative measurement that cognitive psychology kept.
Miller · 1956 · v
Chapter III · Miller & the magical seven.
In 1956, the Harvard psychologist George A. Miller published The Magical Number Seven, Plus or Minus Two in Psychological Review. Reviewing a wide range of laboratory work — on tone discrimination, span of immediate memory, absolute judgement — he noticed that human performance kept landing on a similar ceiling: about seven items, with a spread of two on either side.
Finding 5.1
Short-term memory has a fixed capacity of roughly 7±2 chunks, where a chunk is whatever the mind can treat as a single unit. The figure is a feature of the storage system, not of the materials.
Miller's deeper move was the concept of chunking. The number 4-9-1-7-7-6 is six items; "1492 1776" is two. By recoding raw input into larger meaningful units, the mind smuggles much more information past its narrow gate. Expertise, on this view, is largely a library of chunks.
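The arithmetic can be made concrete. In the Python toy below, the chunk library and the greedy recoder are illustrative inventions, not anything from Miller's paper:

```python
# Toy sketch of chunking: the store counts chunks, not characters.
# FAMILIAR is a hypothetical chunk library; real expertise supplies thousands.
FAMILIAR = {"1492", "1776", "007", "911"}

def chunks(digits: str) -> list[str]:
    """Greedily recode a digit string into familiar units, else single digits."""
    out, i = [], 0
    while i < len(digits):
        for size in (4, 3):                      # try longer units first
            if digits[i:i + size] in FAMILIAR:
                out.append(digits[i:i + size])
                i += size
                break
        else:                                    # no familiar unit: 1 digit = 1 chunk
            out.append(digits[i])
            i += 1
    return out

print(chunks("14921776"))        # → ['1492', '1776']: eight digits, two chunks
print(len(chunks("8305264")))    # → 7: no familiar units, seven chunks
```

Eight digits slip past the narrow gate as two chunks; an unfamiliar string of seven costs seven.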
Later work refined the number downward — Nelson Cowan in 2001 argued that pure short-term capacity is closer to four — but Miller's frame survived. He went on to co-found the Center for Cognitive Studies at Harvard with Jerome Bruner in 1960, the institutional birthplace of the field.
Neisser · 1967 · vi
Chapter IV · Neisser names the field.
Ulric Neisser's Cognitive Psychology, published in 1967, was a synthesis aimed at advanced students. It collected a decade of scattered work — on iconic memory (Sperling 1960), pattern recognition, attention, language — under one banner. The title stuck. From 1967 onward the field had its name and its first textbook.
Neisser's framing was that perception itself is a constructive act. The brain does not record the world; it generates a hypothesis and tests it against incoming data. This anticipated by decades what is now called the predictive processing account.
What is striking, in retrospect, is how quickly Neisser became a critic of the very paradigm he had launched. His 1976 Cognition and Reality argued that the laboratory studies of the previous decade had cut perception off from the messy, embodied world it actually operates in. He spent the rest of his career calling for a more ecological psychology — an unfinished argument the field is still having.
Attention · The cocktail party · vii
Chapter V · How attention works.
Donald Broadbent's Perception and Communication (1958) proposed the first influential model: a sensory buffer holds incoming streams, and a single filter, set by the listener, lets only one through to deeper processing. Anne Treisman's revision (1960, 1964) replaced the all-or-nothing filter with a graded attenuator: unattended streams are dampened but not silenced, which is why your name still pierces the conversation across the room.
Treisman's later feature integration theory (1980, with Garry Gelade) showed that attention is also what binds visual features — colour, shape, motion — into a single perceived object. Without it, one sees the parts but not the whole; with too little, one gets illusory conjunctions, a red square misperceived because a red triangle and a blue square were nearby.
Inattentional blindness Simons & Chabris, 1999
The most famous demonstration: viewers asked to count basketball passes routinely fail to see a person in a gorilla suit walk through the scene and beat their chest. Half of subjects miss it entirely. Attention is not a spotlight that brightens what it lands on; it is what makes things visible at all.
Working Memory · Baddeley & Hitch · viii
Chapter VI · Working memory.
In 1974 Alan Baddeley and Graham Hitch replaced the older notion of "short-term memory" with working memory — a system not just for storage but for active manipulation. Their model has three components, plus a coordinator added later.
central executive — allocates attention, switches between tasks
phonological loop — speech-based store with sub-vocal rehearsal
visuospatial sketchpad — brief visual / spatial store
episodic buffer (added 2000) — integrates across modalities and with long-term memory
The phonological loop's existence is shown by the word-length effect: memory span is shorter for lists of long words than of short ones, because rehearsal takes longer. The sketchpad is shown by selective interference: doing a spatial tracking task while trying to remember a visual layout disrupts memory; doing a verbal task does not.
Working memory is probably the single best psychological predictor of fluid intelligence, of reading comprehension, and of academic outcomes — though disentangling cause from correlation is genuinely hard.
Long-term Memory · Tulving · ix
Chapter VII · Long-term memory.
Endel Tulving's distinction between episodic and semantic memory cut what had been treated as one system into two. Episodic memory is for events — what you ate yesterday, where you parked, the smell of your grandmother's kitchen — and is bound to time and place. Semantic memory is for facts — the capital of France, the meaning of oblique — and is contextless.
A third division, procedural memory (Cohen & Squire, 1980), covers skills: riding a bicycle, touch-typing, the rolled r of a second language. The patient HM, after a 1953 surgery removed most of both hippocampi, lost the ability to form new episodic memories but kept his procedural ones — he could learn the mirror-drawing task at a normal rate while denying every day that he had ever attempted it. HM's case, studied for half a century by Brenda Milner and Suzanne Corkin, is the founding case of cognitive neuropsychology.
Encoding and retrieval
Tulving's encoding specificity principle (1973): a cue retrieves a memory only insofar as the cue's features overlap with how the memory was encoded. This is why the smell of chlorine returns one to a childhood pool, and why studying for a test in the room you'll be tested in modestly helps.
Forgetting · Ebbinghaus · x
Chapter VIII · The shape of forgetting.
Hermann Ebbinghaus, working alone in Berlin in the 1880s, invented the experimental study of memory by treating himself as both subject and experimenter. He memorised lists of consonant-vowel-consonant nonsense syllables (CVC), then measured how long re-learning took at delays of 20 minutes, an hour, a day, a month.
The result — the forgetting curve, published in Über das Gedächtnis (1885) — shows steep loss in the first hour, flattening into a long shallow tail. Most of what we forget, we forget quickly; what survives the first day tends to survive far longer. The curve has been replicated across more than a century and across vastly different materials.
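The curve's shape is easy to reproduce. Ebbinghaus's own fitted equation used a logarithmic form; the exponential decay below is a common modern approximation, with an illustrative stability parameter rather than a fitted one:

```python
import math

def retention(t_hours: float, stability: float = 24.0) -> float:
    """Fraction retained after t hours under the exponential approximation
    R = exp(-t / S). S is a free 'stability' parameter; 24 h is illustrative,
    not a value fitted to Ebbinghaus's data."""
    return math.exp(-t_hours / stability)

# Steep loss in the first hours, long shallow tail thereafter.
for label, t in [("20 min", 1 / 3), ("1 hour", 1.0),
                 ("1 day", 24.0), ("1 month", 720.0)]:
    print(f"{label:>8}: {retention(t):.0%} retained")
```

Whatever the exact form, the qualitative shape is the replicated finding: the first day does most of the forgetting.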
Why we forget
Three classes of explanation, none mutually exclusive: decay (traces fade with time), interference (proactive: old material disrupts new; retroactive: new material disrupts old), and retrieval failure (the trace persists but cannot be reached without the right cue). The third explains the tip-of-the-tongue state: a word is clearly stored — one knows its first letter, its syllable count — yet refuses to come.
The spacing effect, also Ebbinghaus's: distributed practice reliably beats massed practice. Every successful flashcard app, from SuperMemo to Anki, is a re-implementation of this finding.
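The expanding-interval idea behind those apps fits in a few lines; the sketch below captures the core move of SM-2-style schedulers, not any app's actual algorithm:

```python
def next_review(last_interval_days: float, recalled: bool,
                ease: float = 2.5) -> float:
    """Minimal expanding-interval rule: a successful recall multiplies the
    interval by an ease factor; a failure resets it to one day. The 2.5
    default echoes SM-2's starting ease, but this is a sketch, not SM-2."""
    return last_interval_days * ease if recalled else 1.0

interval, schedule = 1.0, []
for _ in range(4):                        # four consecutive successful recalls
    interval = next_review(interval, recalled=True)
    schedule.append(interval)

print(schedule)   # → [2.5, 6.25, 15.625, 39.0625]: reviews spread out fast
```

Each review lands just as the curve predicts the memory is fading, which is where a repetition buys the most.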
Bartlett · 1932 · xi
Chapter IX · Memory as reconstruction.
Frederic Bartlett's Remembering (1932) is the founding study of constructive memory. Bartlett gave his Cambridge subjects an unfamiliar Chinook folk tale, then asked them to retell it at intervals. Each retelling drifted further from the original — not randomly, but predictably. Unfamiliar elements ("something black came out of his mouth") were replaced with familiar ones; structure was tightened; the strange was domesticated.
Bartlett's word for the underlying cognitive structures was schema: a generalised expectation about what stories, scenes, and situations are like. A schema lets us understand the new quickly, but it also distorts the new toward the familiar. Memory is not a recording; it is a reconstruction, and the schema is what fills the gaps.
The implication, which took the legal system another fifty years to absorb, is that confidence in a memory is no guide to its accuracy. The most certain witness can be the most reconstructed.
Loftus · False memory · xii
Chapter X · Loftus and the false memory.
Elizabeth Loftus has spent fifty years showing how easily memory can be edited, distorted, and outright fabricated by post-event information — and how confidently subjects then defend their false versions. The implications for eyewitness testimony, for the recovered-memory therapy boom of the 1980s, and for police interrogation have been enormous.
Her Lost in the Mall study (Loftus & Pickrell, 1995) showed that about a quarter of adult subjects, told by a trusted family member that they had once been lost in a shopping mall as a child, came to remember the event in detail — sometimes adding their own touches, like the colour of the kindly stranger's flannel shirt. None of it had happened.
Loftus's expert testimony in court has cut both ways — for innocent defendants identified by mistaken eyewitnesses, and, controversially, for defendants accused on the basis of recovered memories of childhood abuse. Her position has earned her death threats. The science, however, is robust: memory is reconstructive, suggestion is potent, confidence is not accuracy.
Schacter · Seven Sins · xiii
Chapter XI · The seven sins of memory.
Daniel Schacter's The Seven Sins of Memory (2001; revised 2021) offers the most useful functional taxonomy of memory failure. The first three sins are sins of omission: transience (the basic forgetting curve), absent-mindedness (failures of encoding because attention was elsewhere), and blocking (the tip-of-the-tongue state).
The next three are sins of commission. Misattribution — remembering the gist correctly but the source wrongly — is what makes plagiarism so often unintentional. Suggestibility is Loftus's territory: the planted memory. Bias is the shaping of the past by the present, including hindsight bias (the past seems more predictable than it was) and consistency bias (we underestimate how much our views have changed).
The seventh, persistence, is the only sin one might want eliminated outright: the intrusive, traumatic memory that returns unbidden. Schacter's broader argument is that the first six are the price of a memory system designed to extract gist and update flexibly — a system optimised for the future, not for a faithful record of the past.
Perception · Two views · xiv
Chapter XII · Bottom-up and top-down.
James J. Gibson's The Ecological Approach to Visual Perception (1979) argued that the optical array reaching the eye, sampled across a moving body, contains all the information needed for perception — no inference required. His concept of affordances — the action possibilities a surface offers to a particular animal — remains influential in design and robotics.
Richard Gregory took the opposite view: visual input is ambiguous, and the brain resolves it by unconscious inference, generating perceptual hypotheses and testing them against the data. Visual illusions — the Müller-Lyer arrows, the Ames room, the hollow-face illusion — are evidence: when the inference is wrong, the percept is wrong.
The McGurk effect McGurk & MacDonald, 1976
A face mouths ga, audio plays ba. Most listeners hear da — a phoneme in neither stream. Perception fuses across modalities below the level of awareness, and you cannot un-hear it once you know the trick.
The contemporary predictive processing framework (Friston, Clark) absorbs both views: the brain is a hierarchical hypothesis machine, and what we perceive is the brain's best guess, constantly updated by sensory error.
Language · Chomsky to Pinker · xv
Chapter XIII · The language faculty.
Chomsky's argument from the poverty of the stimulus: a child hears a fragmentary, ungrammatical, error-strewn sample of speech, yet by age four produces grammatical sentences she has never heard. Such generalisation requires that the child bring something to the input — an innate language faculty with a pre-set range of possible grammars.
The critical period hypothesis (Lenneberg, 1967) holds that this faculty is most plastic before puberty; second languages learned later rarely reach native fluency, and feral children who miss early exposure (Genie, the most-studied case) never recover full grammatical competence.
Pinker's synthesis
Steven Pinker's The Language Instinct (1994) is the best popular argument for the Chomskyan picture, drawing in evidence from linguistics, neuroscience, and child language. Words and Rules (1999) is more technical and, on regular vs. irregular morphology, probably more important. The current debate — whether language is a domain-specific faculty or an emergent product of more general learning, whether recursion is uniquely human, whether large language models force a re-think — is unsettled and lively.
Reasoning · Wason · xvi
Chapter XIV · How people reason.
Peter Wason's selection task (1966) is the most famous demonstration of human deviation from formal logic. Shown four cards and a rule such as "if a card has a vowel on one side, it has an even number on the other," subjects must choose which cards to turn over to test the rule; fewer than 10% of educated adults pick the logically correct pair (the vowel and the odd number). Yet recast the same logical structure in social terms — "if you drink alcohol, you must be over 18" — and most subjects suddenly perform near-perfectly. The deductive machinery is intact; the abstract version simply does not engage it.
Leda Cosmides and John Tooby drew the evolutionary moral in 1989: the mind has dedicated reasoning modules for cheater detection in social exchange, but no general-purpose abstract logic engine. Whether that conclusion holds — the dual-process theorists prefer two systems, neither modular — remains debated.
Confirmation bias
Wason also showed (1960) that people generating their own hypotheses overwhelmingly try to confirm rather than falsify. Shown the triple 2, 4, 6 and asked to discover the experimenter's rule (in fact "any ascending sequence"), subjects would propose 6, 8, 10, get a yes, and stop, missing that 1, 19, 7,000 also fits. Confirmation bias is not a moral failing; it is the default of a mind built to find regularities, not to test them.
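Why confirming tests are uninformative here can be shown directly. The few lines below illustrate the task's logic; they are not a simulation from Wason's paper:

```python
def true_rule(triple):            # the experimenter's actual rule
    a, b, c = triple
    return a < b < c              # any ascending sequence

def hypothesis(triple):           # a typical guess after seeing 2, 4, 6
    a, b, c = triple
    return b == a + 2 and c == b + 2

confirming = [(6, 8, 10), (20, 22, 24), (1, 3, 5)]
for t in confirming:
    # Both rules say yes, so the test carries no information.
    assert true_rule(t) and hypothesis(t)

probe = (1, 19, 7000)             # fits the true rule, breaks the hypothesis
assert true_rule(probe) and not hypothesis(probe)
print("only a disconfirming probe separates the two rules")
```

A triple that both rules accept can never tell the subject which rule is in force; only the probe that risks a "no" does.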
Heuristics & Biases · Tversky & Kahneman · xvii
Chapter XV · Heuristics and biases.
Amos Tversky and Daniel Kahneman, working together from the late 1960s until Tversky's death in 1996, identified a small set of mental shortcuts that the mind uses in place of statistical reasoning — and the systematic errors each shortcut produces.
Three core heuristics
Availability: we estimate frequency by ease of recall. Plane crashes feel more common than they are because they are vivid.
Representativeness: we judge category membership by similarity to the stereotype. The "Linda problem" — subjects rate "Linda is a feminist bank teller" as more likely than "Linda is a bank teller," violating basic probability — remains the cleanest demonstration.
Anchoring: arbitrary starting numbers contaminate subsequent estimates. Subjects asked whether Gandhi died before or after age 9 vs. 140 give different median estimates afterwards, even though both anchors are obviously absurd.
The programme reset psychology's relationship with rationality. The economist's homo economicus, perfectly Bayesian, was replaced by an organism that deploys cheap, often-wrong shortcuts — but cheap, often-wrong shortcuts that are good enough for most decisions made under time pressure.
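The probability identity the Linda problem violates takes three lines to state; the numbers below are hypothetical, chosen only to show that the conjunction can never exceed its conjunct:

```python
# Conjunction rule: P(A and B) = P(A) * P(B | A) <= P(A), since P(B | A) <= 1.
p_teller = 0.05                  # hypothetical: Linda is a bank teller
p_feminist_given_teller = 0.90   # even if nearly all such tellers are feminists

p_feminist_teller = p_teller * p_feminist_given_teller
assert p_feminist_teller <= p_teller   # holds for ANY choice of probabilities
print(f"P(teller) = {p_teller}, P(feminist teller) = {p_feminist_teller:.3f}")
```

Representativeness wins over this arithmetic in subjects' heads because "feminist bank teller" resembles Linda better than "bank teller" does.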
Two Systems · Kahneman 2011 · xviii
Chapter XVI · Thinking, fast and slow.
Kahneman's Thinking, Fast and Slow (2011) packages forty years of his work, and a fair amount of related literature, around a two-systems frame borrowed from Keith Stanovich and Richard West. System 1 produces the answers that the heuristics-and-biases programme studies; System 2 monitors them, sometimes catches them, and is generally too lazy to do so.
The book is best on the small phenomena: the bat-and-ball problem, the cognitive ease of repeated statements, the planning fallacy, the focusing illusion. It is weakest where it relies on social-priming research that has since failed to replicate — Kahneman has himself acknowledged the chapter on priming should be revised.
Prospect theory 1979
The most consequential single paper Kahneman and Tversky wrote — the founding text of behavioural economics. People do not maximise expected utility. They evaluate outcomes as gains and losses relative to a reference point; losses loom roughly twice as large as equivalent gains; small probabilities are over-weighted, moderate ones under-weighted. Kahneman won the Nobel in Economics in 2002 for it; Tversky had died in 1996 and could not be honoured.
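The value function's shape can be written out. The functional form and parameters below come from Tversky and Kahneman's later cumulative version (1992), not the 1979 paper, and are used here only to illustrate loss aversion:

```python
def value(x: float, alpha: float = 0.88, lam: float = 2.25) -> float:
    """Prospect-theory value function: concave for gains, convex and steeper
    for losses. alpha and lam are the oft-quoted 1992 median estimates."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

print(round(value(100), 1))       # ≈ 57.5: a $100 gain, subjectively
print(round(value(-100), 1))      # ≈ -129.5: the matching loss looms larger
print(round(abs(value(-100)) / value(100), 2))   # → 2.25, the loss/gain ratio
```

Everything is measured from the reference point: the same $100 is weighed differently depending on whether it arrives or departs.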
Decision-Making · Bounded rationality · xix
Chapter XVII · Decisions under constraint.
The standard economic model treats the chooser as a flawless utility-maximiser with infinite computation. Cognitive psychology has spent seventy years documenting that real choosers are nothing of the kind. They have limited working memory, limited time, limited information — and yet, often, they choose well.
Gerd Gigerenzer's fast and frugal heuristics programme (Berlin, 1990s onwards) is the major counter-tradition to Kahneman. Where Kahneman emphasises bias, Gigerenzer emphasises that simple heuristics — "take the best," "recognition" — often outperform complex statistical models in real environments. The two are less opposed than they sound: both agree the mind is not a Bayesian calculator; they disagree on whether to celebrate or correct that fact.
Choice overload
Iyengar and Lepper's 2000 jam-tasting study — six varieties drew more buyers than twenty-four — launched a research programme on the costs of too much choice. The effect is real but smaller and more context-dependent than the original suggested; subsequent meta-analyses (Scheibehenne et al., 2010) put the average effect near zero. The case is a useful instance of how psychology's sharp, telegenic findings often shrink under replication.
Problem Solving · Newell & Simon · xx
Chapter XVIII · Insight and search.
Allen Newell and Herbert Simon, at Carnegie Mellon, treated cognition and AI as a single subject. Their framework — a problem is a state space, a solution is a path through it, the chooser uses heuristics to prune branches — dominated the field for two decades. Chess research, pioneered by Adriaan de Groot and extended by Chase and Simon, showed that grandmaster skill is built mainly of chunks: tens of thousands of recognised position-fragments, the same Miller mechanism scaled up.
Against the search account, the Gestalt tradition (Wolfgang Köhler, Karl Duncker) emphasised insight — the sudden restructuring that solves a problem the search had been thrashing on. The classic Duncker (1945) candle problem: given a candle, a box of tacks, and matches, mount the candle to a wall. Subjects who see the box as a container are stuck; those who restructure it as a platform solve quickly.
Functional fixedness — the difficulty of seeing an object outside its usual role — is a robust and pedagogically useful demonstration of how representations channel and constrain thought.
Mental Imagery · Shepard & Metzler · xxi
Chapter XIX · Imagery in the mind.
Behaviourism could not even talk about mental imagery; the cognitive revolution made it measurable. Roger Shepard and Jacqueline Metzler's 1971 mental rotation experiment — one of the most cited papers in psychology — showed that the time to decide whether two objects are the same shape is a linear function of the angle between them, at a rate of roughly 60 degrees per second. The subjective report ("I rotated it in my head") matches the chronometric data remarkably well.
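The chronometric claim is just a linear model. In the sketch below the 60-degrees-per-second rate is the paper's; the one-second baseline is an illustrative assumption:

```python
def predicted_rt(angle_deg: float, base_s: float = 1.0,
                 rate_deg_per_s: float = 60.0) -> float:
    """Decision time = baseline + rotation time at a fixed angular rate.
    The ~60 deg/s rate is Shepard & Metzler's; the baseline is assumed."""
    return base_s + angle_deg / rate_deg_per_s

for angle in (0, 60, 120, 180):
    print(f"{angle:3d} deg -> {predicted_rt(angle):.1f} s")
```

The linearity is the point: each extra degree of misalignment costs a fixed slice of time, exactly as if something were physically turning.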
Stephen Kosslyn's later work on visual mental imagery — Image and Mind (1980), Ghosts in the Mind's Machine (1983) — showed similar effects for scanning across an imagined map: time to "look" from one location to another scales with distance.
The imagery debate — whether imagery shares format with perception (Kosslyn) or is propositional all the way down (Pylyshyn) — ran for thirty years. Functional imaging (Le Bihan, Kosslyn) eventually showed that visual imagery activates much of the same cortex as visual perception. The debate is largely over; the picture-in-the-head metaphor turns out to be more literal than was once thought.
Categories · Rosch · xxii
Chapter XX · Concepts and categories.
The classical view, going back to Aristotle, held that a concept is defined by necessary and sufficient features: bachelor = unmarried + man + adult + human. Eleanor Rosch's work in the 1970s broke that view. Categories, she showed, are organised around prototypes — the best example — with graded membership shading off toward the edges. Subjects are faster and more accurate at classifying typical members and less consistent at the periphery. There is a basic level (chair, dog, hammer) at which categorisation is fastest and at which children learn nouns first.
Lakoff's Women, Fire, and Dangerous Things (1987) extended Rosch's work into linguistics and philosophy, arguing that even our most abstract categories — including those of mathematics — are shaped by embodied experience and metaphor. The embodied cognition turn that this provoked — the claim that abstract thought is grounded in sensorimotor systems — is one of the field's live debates. The strong version is contested; some weaker version is now broadly accepted.
Neuro Interface · The 1990s turn · xxiii
Chapter XXI · Cognition meets the brain.
For most of cognitive psychology's history, the brain was a black box and the mind was the object of study. Functional imaging changed that. By the late 1990s, papers in flagship journals were as likely to involve scanners as paper-and-pencil tasks. Memory had a hippocampus, language had Broca's and Wernicke's areas, attention had parietal-frontal networks, and the gap between psychology and neuroscience narrowed sharply.
The cognitive-neuroscience integration has been productive but uneven. The early decade of fMRI was marked by under-powered studies and over-eager interpretations; the field has since absorbed Russell Poldrack's caution about reverse inference (you cannot read a mental state off a brain area), and the standards for sample size and pre-registration have tightened.
The work of Daniel Schacter and Anthony Wagner on the hippocampus and prefrontal cortex during memory encoding and retrieval is a model of the modern hybrid: precise behavioural paradigms, careful imaging, and theoretical claims modest enough to survive replication.
Implicit · Priming & the crisis · xxiv
Chapter XXII · Priming and what it survived.
Priming, in its narrow lexical sense, is among the most replicable findings in cognitive psychology: shown doctor, you recognise nurse faster than butter. The activation spreads through semantic memory.
In the early 2000s a wider behavioural priming literature claimed much more: subjects exposed to old-age words walked more slowly down corridors (Bargh, Chen, Burrows, 1996); subjects exposed to professor stereotypes solved more trivia. These flagship studies failed to replicate when other labs ran them — cleanly and at higher power — in the 2010s. The Doyen et al. (2012) replication of Bargh failed unless the experimenter knew the hypothesis, which strongly implicates experimenter expectancy.
What survived
Lexical priming, conceptual priming within a session, perceptual priming, the basic phenomena of implicit memory (Schacter): all robust. Long-distance behavioural priming, social priming claims, ego depletion (Baumeister): much weaker than originally claimed, sometimes vanishing entirely. The lesson is that the cognitive-laboratory tradition produced solid results; the social-cognition extensions of the 2000s often did not.
Replication Crisis · 2015– · xxv
Chapter XXIII · The crisis and the response.
The mid-2010s replication crisis hit social and personality psychology hardest, but cognitive psychology was not spared. Ego depletion, the facial-feedback hypothesis, power posing, growth mindset (in part), and large parts of the behavioural-priming literature did not replicate at the original effect sizes. Brian Nosek and the teams behind the Many Labs projects and the Reproducibility Project — Open Science Collaboration's Estimating the reproducibility of psychological science (2015, Science) — documented the scale; Daniel Gilbert and colleagues contested the interpretation.
What was wrong
The diagnosis is now mainstream: low statistical power, undisclosed analytic flexibility ("p-hacking"), publication bias toward positive results, and a system of academic incentives that rewarded novelty over rigour. The fix has been a methodological tightening: pre-registration, larger samples, registered reports, open data, multi-lab collaborations, and a quieter scepticism about single-study results — especially flashy ones.
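One of those diagnoses, low power, corrupts a literature all by itself, as a back-of-envelope positive-predictive-value calculation shows. The prior and power figures below are illustrative, not estimates for any field:

```python
def ppv(power: float, alpha: float = 0.05, prior: float = 0.10) -> float:
    """Chance a 'significant' result reflects a true effect, given test power,
    false-positive rate alpha, and the prior fraction of true hypotheses."""
    true_pos = power * prior
    false_pos = alpha * (1 - prior)
    return true_pos / (true_pos + false_pos)

print(f"{ppv(0.80):.0%}")   # 64%: a well-powered literature
print(f"{ppv(0.20):.0%}")   # 31%: under-powered, most hits are false positives
```

Larger samples raise power, which is why they sit at the centre of the methodological fix.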
Cognitive psychology's relatively well-controlled paradigms have weathered the crisis better than its social-psychological neighbours. But the field has a new humility: the question "did this replicate?" now precedes the question "is it interesting?"
Applications · What it changed · xxvi
Chapter XXIV · Findings that left the lab.
Education. Distributed practice, retrieval practice (the testing effect, Roediger & Karpicke 2006), interleaving, and worked-example fading are now central to evidence-based teaching. Peter Brown, Henry Roediger, and Mark McDaniel's Make It Stick (2014) is the best popular synthesis. The cognitive-load tradition (John Sweller) has reshaped instructional design.
Eyewitness reform. The Innocence Project's DNA exonerations — over 70% involved mistaken eyewitness identification — finally moved police procedure. Sequential rather than simultaneous lineups; double-blind administration; explicit instructions that the suspect may not be present; recording of confidence statements at the moment of identification. Loftus, Gary Wells, and the National Academy of Sciences (2014) drove the change.
Interface design. Don Norman's The Design of Everyday Things (1988, revised 2013) translated affordances, feedback, mapping, and constraints into the working vocabulary of design. Nielsen Norman Group still runs on it.
Public policy. Behavioural insights units inside the British, American, and OECD governments apply Kahneman-Tversky-style findings to default settings, framing, and disclosure rules — with mixed but real results.
Open Problems · What remains · xxvii
Chapter XXV · What we still do not know.
For all its successes, cognitive psychology has explained the easy parts of the mind better than the hard ones. We have a decent account of how people see lines, parse sentences, and remember word lists. We have far weaker accounts of how an idea becomes a plan, how analogy works, how understanding happens, and what consciousness is for.
The hard problem of consciousness — David Chalmers's term for the gap between any functional description and the felt quality of experience — is not within cognitive psychology's mandate to solve, but it is a persistent reminder of what the information-processing metaphor leaves out.
The arrival of large language models has reopened questions thought settled. If a transformer trained on text can produce grammatical, contextually appropriate sentences without anything like Chomsky's innate grammar, what does that say about how children do it? The honest answer, for now, is: we do not yet know. The field is recalibrating.
Reading List · The shelf · xxviii
Chapter XXVI · Books and landmark papers.
1885 · Über das Gedächtnis · Hermann Ebbinghaus
1932 · Remembering · F. C. Bartlett
1956 · The Magical Number Seven (paper) · G. A. Miller
1957 · Syntactic Structures · Noam Chomsky
1967 · Cognitive Psychology · Ulric Neisser
1972 · Human Problem Solving · Newell & Simon
1974 · Judgment under Uncertainty (Science) · Tversky & Kahneman
1979 · The Ecological Approach to Visual Perception · J. J. Gibson
1980 · Image and Mind · Stephen Kosslyn
1988 · The Design of Everyday Things · Don Norman
1994 · The Language Instinct · Steven Pinker
1995 · Memory's Ghost · Philip Hilts
1996 · Searching for Memory · Daniel Schacter
1997 · How the Mind Works · Steven Pinker
2001 · The Seven Sins of Memory · Daniel Schacter
2002 · Heuristics and Biases (ed.) · Gilovich, Griffin, Kahneman
2008 · Memory: A Very Short Introduction · Jonathan Foster
2024 · Cognitive Psychology (textbook, 6th) · E. Bruce Goldstein
Watch & Read · The visual record · xxix
Chapter XXVII · Watch and read.
Fig. 6 · Kahneman, Talks at Google, 2011
Two more
Alan Baddeley's working-memory model is a good place to test how well lectures translate the journal literature: Working Memory · Baddeley & Hitch 1974.
Elizabeth Loftus's TED talk on the malleability of memory is the most accessible route into her decades of work on eyewitness error and false memory: Loftus · How Reliable Is Your Memory?
And read, in this order
Kahneman's Thinking, Fast and Slow (2011) for the systems frame. Schacter's Seven Sins (2001) for memory. Pinker's The Language Instinct (1994) for language. Eysenck & Keane's textbook for the field as a whole. Then read Bartlett (1932): a slim, strange, foundational book that will make all the rest legible.
Colophon
Cognitive Psychology is Deck 01 of Volume XII of The Deck Catalog, an opinionated reading-and-watching guide.
Set in Inter and Charter, on a 28-pixel grid — the laboratory notebook left undisguised. Headings are sans-serif, body is serif, captions and tags are monospace. Findings are framed in lab blue (#2b6cb0); the rust accent (#a4502e) marks pulled quotes.
Compiled at the desk of an attentive amateur, in May 2026.