06.09 · SINGULARITY · Cyberpunk · v.12026
Volume 6 · File 09

The Singularity.

A survey of the strange idea that intelligence will, at some point, accelerate itself out of our cognitive frame — and an attempt to separate which parts are math, which are extrapolation, and which are prophecy.

§1 · Origins

Four people, four founding texts

I.J. Good · 1965 · "Speculations Concerning the First Ultraintelligent Machine." The seed: a machine that designs better machines, recursively.
Vernor Vinge · 1993 · "The Coming Technological Singularity," the NASA Vision-21 paper that coined the contemporary usage. Predicted superhuman intelligence within thirty years.
Ray Kurzweil · 2005, 2024 · The Singularity Is Near; The Singularity Is Nearer. Specific date: 2045. Specific mechanism: a nanotech-enabled brain-machine merge.
Hans Moravec · 1988 · Mind Children. The substrate-independence argument; mind-uploading as continuation of the person.

All four made specific, falsifiable claims. Two are dead. Vinge died in 2024 still believing in it.

§2 · The argument · diagram

What "recursive self-improvement" means

FIG · TWO TAKEOFF SHAPES · capability vs. time. Fast takeoff: after RSI begins at t = 0, capability self-improves and crosses the "Vinge horizon" almost vertically. Slow takeoff: a smooth, decade-scale climb.
After Bostrom 2014 (fast/slow/moderate); Christiano 2018 (slow); Yudkowsky 2008 (fast).

The fast-takeoff hypothesis (Yudkowsky, MIRI) says recursive self-improvement happens on hours-to-days timescales: a system smart enough to redesign itself quickly leaves the human reference frame. The slow-takeoff hypothesis (Christiano, Karnofsky) says the curve is steep but still measurable in years, with warning signs.
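The difference between the two camps can be caricatured as two growth laws (a toy sketch with invented constants, not a forecast): slow takeoff as dC/dt = kC, which stays exponential and finite forever, and fast takeoff as dC/dt = kC², whose solution C(t) = C₀/(1 − kC₀t) diverges at a finite time, the point where the curve leaves any fixed reference frame.

```python
# Toy comparison of takeoff shapes, integrated with forward Euler.
# Constants are arbitrary; this illustrates curve shape, not a prediction.

def slow(c0, k, t, steps=100_000):
    """Slow takeoff: dC/dt = k*C (exponential, finite for all t)."""
    dt = t / steps
    c = c0
    for _ in range(steps):
        c += k * c * dt
    return c

def fast(c0, k, t, steps=100_000):
    """Fast takeoff: dC/dt = k*C^2 (blows up at t* = 1/(k*c0))."""
    dt = t / steps
    c = c0
    for _ in range(steps):
        c += k * c * c * dt
    return c

c0, k = 1.0, 0.5  # with these values the fast curve diverges at t* = 2
for t in (1.0, 1.5, 1.9):
    print(f"t={t}: slow={slow(c0, k, t):8.2f}  fast={fast(c0, k, t):10.2f}")
```

The two curves are close at first and then separate violently as the fast curve approaches its singularity, which is the whole disagreement in one picture: is the improvement rate proportional to capability, or to capability compounding on itself?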

§3 · Kurzweil's specific bet

2045 · Why this date

Kurzweil's Law of Accelerating Returns extrapolates Moore's-Law-class doublings across many technologies (transistor density, MIPS/$, sequencing $/bp, brain-imaging resolution). At his projected exponential rates, by 2029 a $1,000 computer matches one human brain (~10¹⁶ ops/s); by 2045, it matches all human brains combined.
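The 2029-to-2045 step is checkable arithmetic. Assuming his 10¹⁶ ops/s per brain and a round ~8×10⁹ brains (our population figure, not his), the implied doubling time falls out directly:

```python
import math

# Kurzweil's step from "one brain" (2029) to "all brains" (2045).
one_brain = 1e16                      # ops/s, Kurzweil's estimate
all_brains = 8e9 * one_brain          # ~8e25 ops/s for ~8 billion brains
years = 2045 - 2029                   # 16 years between the two milestones

doublings = math.log2(all_brains / one_brain)    # doublings needed
doubling_time_months = years * 12 / doublings    # implied cadence

print(f"doublings needed: {doublings:.1f}")
print(f"implied doubling time: {doubling_time_months:.1f} months")
```

Roughly 33 doublings in 16 years, i.e. a doubling time under six months: far faster than classic Moore's Law (~24 months), though not far from the 2010–2022 training-compute trend that Sevilla et al. report.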

Where he has been right, where wrong · 2024 audit

Kurzweil's specific 2045 prediction may verify in spirit (transformative AI by mid-century) and fail in detail (no nano-merge).

§4 · Compute · figure

Compute & what it bought

FIG · TRAINING COMPUTE OF THE LEADING ML SYSTEM · log FLOPs, 10² to 10²⁶, 1956–2030?. Marked: AlphaGo (2016), GPT-3 (2020), GPT-4 (2023); reference line at Kurzweil's human-brain compute estimate.
After Sevilla et al. 2022, Epoch AI 2024 trends. Schematic.

§5 · The main critiques

The skeptic's case · five points

  1. Curves don't extrapolate forever. Every exponential is a sigmoid in disguise. See Theodore Modis's 2002 critique of Kurzweil and Ramez Naam's later updates.
  2. Compute ≠ intelligence. Brain compute estimates are off by 3+ orders of magnitude depending on model assumptions. Joseph Carlsmith's 2020 OpenPhil report.
  3. Embodiment & world-model. Yann LeCun: autoregressive prediction does not produce understanding.
  4. Coordination & deployment. Even if a system is "smart enough", the real-world causal chains required for fast takeoff (chip fab control, robotics, infrastructure) take time.
  5. The threshold isn't where you think. Maybe AGI is just "a really good intern" for years; the singularity rhetoric may simply not match what unfolds.
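Point 1 is easy to demonstrate numerically: a logistic curve with a ceiling (invented parameters below) tracks a pure exponential almost perfectly early on, then falls away near the ceiling, and no early sample tells you which curve you are on.

```python
import math

# A logistic ("sigmoid") curve with ceiling L is indistinguishable from an
# exponential while the value is far below L. Parameters are made up.
L, r = 1000.0, 1.0   # ceiling and growth rate, arbitrary units

def logistic(t, x0=1.0):
    return L / (1 + (L / x0 - 1) * math.exp(-r * t))

def exponential(t, x0=1.0):
    return x0 * math.exp(r * t)

for t in (1, 3, 5, 7, 9):
    lg, ex = logistic(t), exponential(t)
    print(f"t={t}: logistic={lg:9.2f} exponential={ex:9.2f} ratio={lg/ex:.3f}")
```

Early on the ratio is effectively 1; by the time the logistic nears its ceiling the exponential has overshot it severalfold. The extrapolation problem is that you only learn the ceiling exists after the curves diverge.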
Cyberpunk visuals · the genre that made the singularity legible.

§6 · Voices

The argument map

Believers · fast
Yudkowsky · Bostrom · Soares
MIRI lineage. Recursive self-improvement; alignment unsolvable on a deadline.
Believers · slow
Christiano · Karnofsky · Cotra
Open Philanthropy / Anthropic. Smooth takeoff; alignable with effort.
Trajectory engineers
Hassabis · Amodei · Sutskever
Frontier-lab leaders. Powerful AI 2027–35; transformative, not necessarily singular.
Skeptics · technical
LeCun · Marcus · Mitchell
Architecture-bound critique; current paradigm has a ceiling.
Skeptics · cultural
Bender · Hanna · Tafani
The singularity is a religious narrative; treat it as such.
Original prophets
Vinge · Kurzweil · Moravec
The 1990s–2000s formulation; specific dates & mechanisms.
§7 · Quote

"Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended."
— Vernor Vinge, NASA Vision-21 paper, 1993

"Most of what people call the singularity is just exponential curves projected onto religious priors."
— a paraphrase of every careful skeptic since 1995

§8 · Theological residue

Why it sounds religious

The structure — a transformative event after which the world is unrecognizable, a chosen group of believers, an emphasis on uploading as a kind of immortality — maps onto eschatology. Critics from John Searle to Meghan O'Gieblyn (God, Human, Animal, Machine, 2021) note this. Believers tend to respond: so what? Truth is independent of structural similarity to other belief systems.

Robin Hanson's Age of Em (2016) is the most rigorous adjacent work: a detailed economic model of an emulated-mind civilization. Even if you don't believe in the singularity, Hanson's framework helps you think about what near-AGI economies might look like.

§9 · Connectomics · diagram

The mind-uploading bottleneck

FIG · CONNECTOME COMPLETION TIMELINE (log scale). C. elegans, 302 neurons, 1986 → Drosophila, ~140k neurons, 2024 (FlyWire) → mouse, ~70M neurons, ~2030? → macaque, ~6B neurons, ~2040? → human, ~86B neurons, ~2055+??. Roughly five orders of magnitude of compute & storage separate fly from human. Notes: whole-brain emulation requires both the connectome AND molecular-state capture; scanning is currently destructive, so uploading is one-shot.
After Sandberg & Bostrom WBE roadmap (2008); FlyEM hemibrain (2020); FlyWire (2024).
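Taking the timeline's neuron counts at face value, and neuron count as a crude proxy for scanning effort (the real cost also scales with synapse counts and molecular state, which this ignores), the gaps fall out directly:

```python
import math

# Neuron counts from the connectome timeline above.
neurons = {
    "C. elegans (1986)":          302,
    "Drosophila (2024, FlyWire)": 1.4e5,
    "Mouse (~2030?)":             7e7,
    "Macaque (~2040?)":           6e9,
    "Human (~2055+??)":           8.6e10,
}

human = neurons["Human (~2055+??)"]
for name, n in neurons.items():
    gap = math.log10(human / n)   # orders of magnitude short of human scale
    print(f"{name:28s} {n:12.3g} neurons · 10^{gap:.1f} short of human")
```

Fly to human is about 10^5.8, matching the figure's "five orders of magnitude"; worm to human is over eight orders, which is why the 1986 milestone bought so little predictive traction.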

§10 · Watch

Recommended source

Isaac Arthur · "Post-Scarcity Civilizations" series

Long-form treatments of singularity-adjacent civilizations. Calmer than the source material, more rigorous than fiction.

youtube.com/@isaacarthurSFIA →

Lex Fridman × Eliezer Yudkowsky

The fast-takeoff case in its most concentrated form. Exhausting and clarifying.

youtube.com/@lexfridman →

§11 · Three scenarios

Three 2045s

A · Curve continues
Transformative AI
Compute & capability scale roughly as Kurzweil bet; civilization restructures. · scenario
B · Plateau
Useful, not singular
AI is a permanent productivity multiplier; transformation, not transcendence. · forecast
C · Misalignment
Hard takeoff bad
A misaligned RSI loop produces an outcome humans neither planned nor chose. · x-risk
§12 · What to watch

Indicators · 2026–2030
