A COGNITIVE-SCIENCE NOTEBOOK · 13 SLIDES

PHILOSOPHY OF MIND
The puzzle of the inner life.

What is consciousness, why does it feel like anything, and could a machine ever have it? A short tour through the puzzles, positions, and thought experiments that won't sit still.

I. CONSCIOUSNESS II. THE HARD PROBLEM III. AI & INNER EXPERIENCE
02 · THE HARD PROBLEM

Why does any of this feel like anything?

David Chalmers (1995) split the problems of consciousness in two. The easy problems — attention, memory, discrimination, verbal report — are tractable in principle: explain the function and you've explained the phenomenon.

The hard problem is different. Even after we've fully described the neural and computational processes, a question remains: why is there something it is like to undergo them? Why is there an inside?

  • Easy: how does the brain integrate information?
  • Easy: how does it produce verbal report?
  • Hard: why is any of it experienced?
"There is something it is like to be a conscious organism. That subjective character of experience is what the hard problem is about." — D. Chalmers, Facing Up to the Problem of Consciousness, 1995
KEY MOVE
The explanatory gap

No matter how fine-grained the physical story, it seems we can always ask: but why does it feel like this? Function alone doesn't appear to entail phenomenology.

03 · DUALISM

Descartes: two substances, body and mind.

In the Meditations (1641), Descartes argued he could doubt the existence of his body but not of his thinking. Mind (res cogitans) and matter (res extensa) must therefore be distinct kinds of stuff.

Modern philosophy of mind largely begins with this picture — and largely as an attempt to escape it. The trouble: if mind and body are utterly different, how do they interact? Descartes' guess (the pineal gland) satisfied no one.

  • Substance dualism — two basic kinds of stuff
  • Property dualism — one stuff, two kinds of properties
  • Interaction problem — the recurring objection
BODY (res extensa: extended in space, measurable, divisible) ↔ ??? interaction ??? ↔ MIND (res cogitans: unextended, thinking, indivisible) · via the pineal?
FIG. 1 — Cartesian dualism & the interaction problem
04 · BEHAVIORISM

The mind as a set of dispositions.

Tired of Cartesian ghosts, mid-20th-century thinkers proposed: drop the inner theater. To have a mental state just is to behave (or be disposed to behave) in characteristic ways.

Gilbert Ryle called Descartes' picture "the dogma of the Ghost in the Machine." B. F. Skinner pushed the program empirically: predict and shape behavior; the rest is folklore.

  • Pain = a disposition to wince, withdraw, complain
  • Belief = a disposition to assent and to act accordingly
  • Inner states are bracketed as scientifically unusable
OBJECTION
The super-actor

Surely a brilliant actor could mimic every pain-behavior without feeling anything. And surely a stoic could feel pain without showing it. Behavior underdetermines experience.

LEGACY
A useful scaffold

Behaviorism failed as a complete theory of mind, but its insistence on operational tests and observable evidence shaped cognitive science and (later) AI evaluation.

Ryle 1949 · Skinner 1953
05 · IDENTITY THEORY

Mental states are brain states.

Place (1956) and Smart (1959) proposed a clean materialism: the relation between mind and brain is identity, just as water = H₂O or lightning = electrical discharge.

Pain isn't merely correlated with C-fiber firing — pain is C-fiber firing, full stop. Inner life is preserved (no behaviorist dodge), but it lives entirely in the head.

  • Type identity: every kind of mental state = some kind of brain state
  • Token identity: every instance = some neural instance
  • Reductive but ontologically tidy
"Sensations are nothing over and above brain processes." — J. J. C. Smart, 1959
PROBLEM AHEAD
What about an octopus?

If pain just is C-fiber firing, then a creature without C-fibers can't feel pain. That seems wrong — and it sets the stage for functionalism.

06 · FUNCTIONALISM

Mind as functional organization.

Hilary Putnam's move (1967): a mental state is defined by its causal role — what causes it, what it tends to cause, and how it interacts with other states — not by its physical realizer.

A clock is a clock whether it ticks with gears, quartz, or atomic transitions. A belief is a belief whether it's stored in neurons, transistors, or, in principle, anything that plays the right role.

  • Inputs → internal state transitions → outputs
  • Software/hardware analogy: mind ≈ program
  • Opens the door to AI as candidate mind
INPUT (stimulus) → FUNCTIONAL STATE → OUTPUT (behavior) · SUBSTRATE-NEUTRAL
FIG. 2 — A mental state, defined by its role
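The role-first picture can be made concrete with a toy state machine (hypothetical names, no claim about real pain): the "mental state" is exhausted by its transition table, so anything that implements the table counts as being in that state.

```python
# A "mental state" defined purely by causal role: what inputs put the
# system into it, and what outputs and transitions it produces in turn.
# The table below IS the role; any realizer of the table qualifies.

TRANSITIONS = {
    # (state, input)           -> (next_state, output)
    ("calm", "tissue_damage"): ("pain", "wince"),
    ("pain", "tissue_damage"): ("pain", "complain"),
    ("pain", "aspirin"):       ("calm", "relax"),
    ("calm", "aspirin"):       ("calm", None),
}

class Realizer:
    """Any substrate implementing TRANSITIONS plays the same roles."""
    def __init__(self, state="calm"):
        self.state = state

    def step(self, stimulus):
        self.state, output = TRANSITIONS[(self.state, stimulus)]
        return output

# Two instances stand in for two substrates (neurons, silicon):
# functionally identical because the table is identical.
carbon, silicon = Realizer(), Realizer()
for system in (carbon, silicon):
    assert system.step("tissue_damage") == "wince"  # enters "pain"
    assert system.step("aspirin") == "relax"        # back to "calm"
```

The point of the sketch is only that nothing in the table mentions what the states are made of — which is exactly the functionalist claim.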
07 · MULTIPLE REALIZABILITY

Silicon, slime, or neurons.

If functionalism is right, the same mental state could be implemented by radically different physical systems — as long as the causal organization is preserved.

This is the core argument against identity theory. Pain in a human, an octopus, and (perhaps) a Martian with hydraulic neurons may all count as pain, despite sharing nothing chemically.

  • Same function, many substrates
  • Why mind is not identical to any one physical kind
  • Foundational assumption of much AI optimism
Putnam 1967 · Fodor 1974
SUBSTRATE A — CARBON
~86 billion neurons

Action potentials, neurotransmitters, glia. The only kind of mind we know to exist, so far.

SUBSTRATE B — SILICON
Logic gates & tensors

Could the right software running on this hardware play the same causal role? Functionalists: yes, in principle.

SUBSTRATE C — SLIME?
A philosopher's hypothetical

If a hydraulic Martian behaves like you, believes like you, suffers like you — on what grounds do we deny it has a mind?

08 · QUALIA

The redness of red, the painfulness of pain.

Qualia (singular: quale) are the felt qualities of experience — the warm orange of a sunset, the sharp ache of a stubbed toe, the strange tang of cilantro to some palates.

They are the part of mind that seems to slip out of every functional description. You can fully describe what red does; the worry is that you've still left out what red is like.

  • Intrinsic — properties of the experience itself
  • Ineffable — hard to convey by description alone
  • Subjective — accessible only "from the inside"
PHYSICAL EVENT (neural spike) → ?? → FELT EXPERIENCE (the red quale)
FIG. 3 — From neural firing to felt redness: the gap
09 · THOUGHT EXPERIMENT · MARY'S ROOM

Does Mary learn something new?

Frank Jackson (1982): imagine Mary, a brilliant scientist who knows every physical fact about color vision but has lived her entire life inside a black-and-white room.

One day she steps outside and sees a ripe tomato. Does she learn anything new? Most people say yes — she now knows what red looks like. If so, there are facts about experience that escape the physical story.

  • Intended as a knock-down case for property dualism
  • Physicalist replies: she gains an ability, not a fact
  • Or: she gains an old fact under a new mode of presentation
"It seems just obvious that she will learn something about the world and our visual experience of it." — F. Jackson, Epiphenomenal Qualia, 1982
UPSHOT
The knowledge argument

If complete physical knowledge isn't complete knowledge, then physicalism — the view that everything is physical — leaves something out. The "something" looks suspiciously like qualia.

10 · THOUGHT EXPERIMENT · CHINESE ROOM

Searle: syntax without semantics.

John Searle (1980): a man who speaks no Chinese sits in a room with a vast rulebook. Slips of Chinese characters come in; he looks them up, copies the prescribed responses, and slides them back out.

From outside, the room appears to converse fluently in Chinese. But the man inside understands nothing. By analogy, says Searle, a digital computer manipulating symbols can never have understanding — only the appearance of it.

  • Aimed at "strong AI": that the right program just is a mind
  • Implementation matters: brains do something computers don't
  • Replies: the systems reply (the room as a whole understands), the robot reply, the brain-simulator reply
THE ROOM: questions in → operator + rulebook → answers out · SYMBOL MANIPULATION ≠ UNDERSTANDING?
FIG. 4 — Searle's Chinese Room (1980)
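Searle's setup is, in effect, a pure lookup table. A minimal sketch (with two placeholder rules standing in for the vast rulebook): the program maps character strings to character strings and nowhere represents what they mean.

```python
# The operator's rulebook: shapes in, shapes out. The program never
# associates the characters with meanings, only with each other.
RULEBOOK = {
    "你好吗?": "我很好。",        # "How are you?" -> "I'm fine."
    "天空是什么颜色?": "蓝色。",  # "What colour is the sky?" -> "Blue."
}

def room(slip: str) -> str:
    """Look the input up; copy out the prescribed response."""
    return RULEBOOK.get(slip, "请再说一遍。")  # "Please say that again."

# From outside, the exchange can look like fluent conversation,
# yet the operator needs no Chinese to produce it:
print(room("你好吗?"))  # 我很好。
```

Whether scaling this up (to a program that genuinely converses) would ever cross from syntax to semantics is precisely what the replies to Searle dispute.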
11 · INTEGRATED INFORMATION THEORY

Consciousness as Φ (phi).

Giulio Tononi (2004–) takes a different tack. Rather than asking which physical things produce consciousness, IIT starts from the felt structure of experience and asks what kind of system could realize it.

The answer: a system whose parts are causally bound together so tightly that the whole carries information no decomposition into parts can capture. That irreducible, integrated information is measured as Φ.

  • Consciousness is identical to integrated information
  • Φ > 0 ⇒ some inner life; higher Φ ⇒ richer experience
  • Predictions about anesthesia, sleep, split brains
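The "whole beyond its parts" idea can be illustrated with a much simpler quantity than Φ: multi-information (total correlation), the sum of the parts' entropies minus the whole's entropy. This is emphatically not Tononi's Φ — real Φ is defined over cause-effect structure and minimum-information partitions, and is intractable in general — but it shows the flavor: a tightly coupled pair carries structure that no independent description of its parts captures.

```python
# Multi-information: sum of part entropies minus whole entropy (bits).
# A crude cousin of "integration", NOT Tononi's Phi, used here only to
# show that a whole can exceed any decomposition into independent parts.
from collections import Counter
from math import log2

def entropy(samples):
    """Shannon entropy, in bits, of an empirical distribution."""
    counts = Counter(samples)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

def multi_information(states):
    """Sum of per-node entropies minus the joint entropy."""
    parts = zip(*states)  # marginal sequence for each node
    return sum(entropy(p) for p in parts) - entropy(states)

# Two coupled nodes that always disagree: each part alone is maximally
# uncertain (1 bit), but the whole is tightly constrained (1 bit total).
coupled     = [(0, 1), (1, 0), (0, 1), (1, 0)]
# Two independent nodes: the whole is just the sum of its parts.
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]

print(multi_information(coupled))      # 1.0
print(multi_information(independent))  # 0.0
```

Computing actual Φ for even small systems requires dedicated tooling (e.g. the PyPhi package); the toy above only motivates why "integrated" information is information lost under decomposition.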
PROVOCATION
A surprising verdict on AI

On IIT, a feed-forward network — even one that perfectly imitates a conscious system's behavior — has Φ ≈ 0. So a digital model of a brain might produce all the right outputs while being, in itself, dark inside.

CONTROVERSY
Open empirical questions

Critics point out that exact Φ is intractable to compute and that IIT may license panpsychism. Defenders see it as the most principled bridge yet between phenomenology and physics.

12 · AI & INNER EXPERIENCE

When (if ever) does a system have an inside?

Large language models can converse, reason, even claim to feel. None of this settles the question. Behavior alone never has — that was the lesson of behaviorism's failure.

The same theories that divided us about brains now divide us about machines. Functionalists are open to silicon minds. IIT theorists worry that the wrong architecture is dark. Searle insists syntax never reaches semantics. Dualists may say the question is malformed.

  • The other-minds problem, sharpened by novel substrates
  • Moral stakes: if systems can suffer, we owe them something
  • Methodological stakes: how would we even test for it?
OPEN · perhaps the most consequential question of the century
"We do not know how anything physical could be conscious. Nobody has the slightest idea." — J. Fodor, 1992
13 · REFERENCES & FURTHER VIEWING

Where to read & watch.

A few starting points. The literature is vast; these are the doors most people walk through first.

Chalmers · The Conscious Mind (1996)
Dennett · Consciousness Explained (1991)
Nagel · "What Is It Like to Be a Bat?" (1974)
Jackson · "Epiphenomenal Qualia" (1982)
Searle · "Minds, Brains, Programs" (1980)
Putnam · "Psychological Predicates" (1967)
Ryle · The Concept of Mind (1949)
Tononi · Phi (2012)
SEP · plato.stanford.edu/entries/consciousness/
Block · "On a Confusion about a Function of Consciousness" (1995)
YOUTUBE · SEARCH
The hard problem of consciousness

youtube.com/results?search_query=hard+problem+of+consciousness

YOUTUBE · SEARCH
Searle's Chinese Room

youtube.com/results?search_query=chinese+room+searle

END · 13/13 A NOTEBOOK ON MIND