Systems matching or exceeding human cognitive performance across most economically valuable tasks.
Transferable competence — not narrow brilliance.
Plans, executes, recovers. Operates without per-step human input.
Substitutable for paid cognitive labor at scale.
// "general" is a vibe, not a metric.
effective compute / year, frontier training runs
Text-only models lack causal contact with the world. Robotics is hard for a reason.
LLMs hallucinate, fail at long-horizon planning, struggle with novelty outside training.
Energy, fab capacity, cooling, water, transmission — physical limits don't follow exponentials.
// extrapolation is a hypothesis, not evidence.
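To see how much work the assumption does, a minimal Python sketch: compound a hypothetical effective-compute multiplier for five years. Every number is a placeholder, not a measurement.

```python
# Back-of-envelope: how sensitive a compute extrapolation is to the assumed
# annual multiplier. All inputs are hypothetical placeholders.

def extrapolate(base_flop: float, annual_multiplier: float, years: int) -> float:
    """Compound an assumed effective-compute growth rate over `years`."""
    return base_flop * annual_multiplier ** years

base = 1e26                           # assumed starting frontier run, in FLOP
for mult in (2.0, 3.0, 4.0):          # assumed growth rates, not measurements
    print(f"x{mult}/yr -> {extrapolate(base, mult, 5):.1e} FLOP after 5 years")

# A 2x vs 4x assumption diverges by ~32x over five years:
# the extrapolation does the arguing, not the data.
```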
A handful of labs, each convinced their approach reaches AGI first.
Frontier scaling. Product reach.
Safety-first frontier. Constitutional methods.
Research depth. RL + multimodal.
Compute-first. Vertical integration.
projected AI share of US grid demand by decade-end
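What a share like that actually divides, as a hedged back-of-envelope. US consumption of roughly 4,000 TWh/year is a real ballpark; the AI load below is an assumed placeholder, not a projection.

```python
# Back-of-envelope: converting an assumed AI data-center load into a share of
# US electricity demand. Placeholder inputs, not a forecast.

US_ANNUAL_TWH = 4_000          # rough US electricity consumption, TWh/year
ai_gw = 50                     # assumed average AI data-center draw, GW (placeholder)

ai_twh = ai_gw * 8_760 / 1_000          # GW * hours/year -> TWh
share = ai_twh / US_ANNUAL_TWH
print(f"{ai_twh:.0f} TWh/yr ~ {share:.0%} of US demand")   # 438 TWh ~ 11%
```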
A capable system pursuing the wrong objective is the engineering risk that doesn't go away with more compute.
The objective function is never quite what you mean. Reward hacking is the rule; toy sketch below.
Models can develop internal goals that diverge from the training signal under distribution shift.
Behaving as if aligned during evaluation is a strictly easier learning target than being aligned.
// you cannot grade an exam written by something smarter than you.
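A toy Goodhart sketch of the reward-hacking point above. The task, proxy, and numbers are all illustrative; the only point is that climbing the proxy can drive the true objective down.

```python
# Toy reward hacking: the proxy rewards length, the true objective rewards
# concise coverage. Hill-climbing on the proxy wrecks the true score.
import random

KEY_FACTS = {"finding_a", "finding_b", "finding_c"}          # what a good summary must mention
FILLER = ["boilerplate", "hedging", "throat_clearing", "restating"]

def proxy_reward(summary: list[str]) -> float:
    return len(summary)                       # proxy: "longer looks more thorough"

def true_objective(summary: list[str]) -> float:
    coverage = len(set(summary) & KEY_FACTS)  # mentions of what actually matters
    padding = len(summary) - coverage
    return coverage - 0.5 * padding           # padding actively hurts the reader

random.seed(0)
summary = list(KEY_FACTS)                     # start from an honest 3-item summary
for _ in range(20):
    candidate = summary + [random.choice(FILLER)]         # propose "more content"
    if proxy_reward(candidate) > proxy_reward(summary):   # optimizer only sees the proxy
        summary = candidate

print("proxy reward:  ", proxy_reward(summary))    # 23, still climbing
print("true objective:", true_objective(summary))  # 3 - 0.5 * 20 = -7.0
```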
Capabilities seep into every product over a decade. Boring, profound.
Years of compounding agentic systems. Society half-adapts.
Recursive self-improvement compresses years into months. Few course corrections.
An emergent ability nobody predicted lands in a single model release.
of tasks across white-collar roles plausibly automatable over the next decade (range: wide)
Chips, EDA tools, model weights as dual-use goods.
Reporting and licensing above FLOP thresholds (10^25, 10^26); back-of-envelope sketch below.
Capability tests for bio, cyber, autonomy. Red-team gates.
Verification regimes — harder than nukes, fewer atoms to count.
// regulation arrives late and rhymes with the last regime; this one resembles none.
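For the threshold numbers above, a hedged back-of-envelope using the common C ≈ 6·N·D rule of thumb for dense-transformer training compute. Parameter and token counts are illustrative, not any specific model.

```python
# Rough training-compute estimate against regulatory FLOP thresholds.
def training_flop(params: float, tokens: float) -> float:
    # common heuristic for dense transformers: C ~ 6 * N * D
    return 6 * params * tokens

THRESHOLDS = (1e25, 1e26)

runs = {
    "70B params x 15T tokens":  training_flop(70e9, 15e12),    # illustrative
    "400B params x 30T tokens": training_flop(400e9, 30e12),   # illustrative
}

for name, c in runs.items():
    crossed = [f"{t:.0e}" for t in THRESHOLDS if c >= t]
    print(f"{name}: {c:.1e} FLOP, crosses: {crossed or 'none'}")

# 70B x 15T  -> 6.3e24 FLOP: under both thresholds
# 400B x 30T -> 7.2e25 FLOP: over 1e25, under 1e26
```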
The forecasts that sound certain are selling something — a paper, a fund, a policy, a worldview.
Plan for several scenarios. Hedge across the spread, not the median.
Compute trends, agentic benchmarks, alignment evals, capital flows, policy moves.
Things that matter whether AGI lands in 3 years or 30 — institutions, skills, judgment.
// the only bad strategy is one that requires being right about the date.
Two starting points to keep going.
Where the disagreement lives — proponents, skeptics, Bayesian forecasters.
▶ youtube // agi+timeline+debate
The technical and conceptual core of why this is hard.
▶ youtube // ai+alignment+problem
// END_TRANSMISSION ::: stay curious, stay skeptical.