Vol. III · Deck 13 · The Deck Catalog

Ethics of Technology.

Heidegger's enframing, Ellul's autonomous technique, Mumford's megamachine, Postman's media ecology, Bostrom and Russell on AI alignment, Zuboff on surveillance capitalism. The long argument about whether we use our tools or they use us.


Heidegger essay · 1954
Ellul / Technique · 1954
Pages · 30
Lede · 02

Opening · The question concerning technology.

The philosophy of technology is the philosophical investigation of artefacts, systems, and the practices they sustain. It is younger than most of philosophy and older than most of computing — its modern form took shape in the mid-20th century, when Heidegger, Ellul, and Mumford each independently concluded that the technological systems of industrial modernity had begun to reshape the humans inside them.

The field has since absorbed the digital revolution, the rise of computing and networks, the genetic and pharmacological technologies, and most recently the explosion of artificial intelligence. The same questions recur: do we use technology, or does it use us; what kind of life do specific technologies make possible and what do they foreclose; who decides; and what should we do.

This deck covers the canonical philosophers (Heidegger, Ellul, Mumford, Postman) and the contemporary thinkers who have extended the inquiry into AI ethics (Bostrom, Russell, Floridi) and surveillance ethics (Zuboff, Lyon).


Chapter I · Heidegger on enframing.

Martin Heidegger's 1954 essay Die Frage nach der Technik ("The Question Concerning Technology") is the single most influential philosophical text on the subject. Heidegger asks: what is the essence of modern technology? — and refuses the conventional answer that it is a means to an end, a neutral instrument.

His claim: modern technology is a way of revealing the world — specifically, a mode he calls Gestell (enframing). Under enframing, everything in the world appears as standing-reserve (Bestand) — resource available for use, optimisation, and replacement. The forest becomes timber stock; the river becomes hydroelectric potential; the human becomes human resources.

The danger is not that technology will go wrong. The danger is that technology will go right — that the enframing mode will become so total that no other way of relating to the world remains possible.

Heidegger contrasts this with poiesis — revealing as bringing-forth, exemplified by craft and art, and unpacked through Aristotle's four causes in their pre-modern sense. The windmill that lets the wind run through it without storing energy is his example in the essay; the Black Forest farmhouse plays the same role in the companion lecture "Building Dwelling Thinking".

The essay's famous closing — quoting Hölderlin: "But where danger is, grows / The saving power also" — gestures at art and meditative thinking as possible counterweights to enframing's totalisation. The philosophical seriousness of this gesture has been debated for seventy years.


Chapter II · Ellul on technique.

Jacques Ellul (1912–1994), French sociologist and lay theologian, published La Technique ou l'enjeu du siècle in 1954 (English: The Technological Society, 1964). The book is the most sustained pessimistic argument about modern technology in 20th-century thought.

Ellul's central concept is la technique — not technology in the narrow sense of machines, but the rationalised, efficiency-maximising orientation that pervades modern life. Technique is "the totality of methods rationally arrived at and having absolute efficiency in every field of human activity." It includes machines, but also management, propaganda, education, medicine, statecraft.

His three central claims:

Autonomy. Technique has become self-augmenting. Each technical solution generates new technical problems requiring new technical solutions. The system grows according to its own logic, not according to human ends. // Ellul 1954, Ch. III
Universalism. Technique colonises every domain of life. The methods that worked in industrial production migrate to politics, education, leisure, religion. Nothing remains outside its frame. // ibid., Ch. IV
Inevitability. Once technique establishes itself, escape is structurally impossible. The technical solution always wins because the technical solution always works. // ibid., Ch. V

Ellul's later The Technological Bluff (1988) revisited the argument with explicit attention to computing. He died in 1994; the internet hit consumer scale within months. His framework reads remarkably well against the three decades since.


Chapter III · Mumford on the megamachine.

Lewis Mumford (1895–1990), American historian and critic of cities, wrote the most ambitious historical analysis of technology of the mid-century: Technics and Civilization (1934) and the two-volume The Myth of the Machine (1967, 1970).

Mumford's distinctive contribution is the periodisation of technology into three phases:

Eotechnic (~1000–1750). Wood, water, and wind. Decentralised, local, integrated with cultural life. Medieval European technology at its peak.

Paleotechnic (~1750–1900). Coal and iron. The first industrial revolution. Concentrated, polluting, dehumanising. The era of "carboniferous capitalism" Mumford detested.

Neotechnic (~1900–present). Electricity, chemistry, alloys. Decentralisation made possible again by new energy sources, but seized in service of paleotechnic patterns of organisation. Mumford saw this period's promise unrealised.

His most original concept is the megamachine — a human-component machine, organisation of tens of thousands of workers under absolute coordination, used historically to build pyramids, fight wars, and run nuclear-weapons programs. The megamachine is not a metaphor for Mumford; it is a real social technology, of which industrial society is the most recent instance. The Bronze Age Egyptian pyramid-builder and the 1960s Apollo program are continuous in form.

Mumford's politics — humanist, decentralist, pacifist, in tension with both industrial capitalism and Soviet planning — made him uncomfortable to all sides. His regional-planning collaborations (the Regional Planning Association of America, with Benton MacKaye and others) influenced postwar urbanism.


Chapter IV · Postman and media ecology.

Neil Postman (1931–2003) extended McLuhan's media-as-environment thesis into a philosophy of technology focused on communication media. Amusing Ourselves to Death (1985) is his most-read book; Technopoly (1992) is his most systematic.

Amusing Ourselves to Death's argument: the medium is the metaphor. Television's structural features — visual, fast, decontextualised, advertiser-driven — produce a public discourse that has the same features. Politics under television is image-based, brief, and entertainment-shaped because the medium can sustain nothing else. Postman's contrast was the print-saturated 19th century, which produced different politics because it was carried by a different medium.

Technopoly generalised the argument. Postman distinguished three cultural orders:

Tool-using cultures. Tools serve culturally-defined ends. The tool is integrated into the cultural narrative; the culture remains intact. // most pre-modern societies
Technocracies. Tools become central to the culture but coexist with non-technical traditions (religion, art, civic life). // Europe ~1500-1900
Technopolies. The technical order has fully absorbed the culture. Non-technical traditions persist only as nostalgia or marketing. The only legitimate authority is technical-scientific. // 20th-century US, Postman's diagnosis

Postman's claim: the United States became the first technopoly in the 1930s-50s and has been one ever since. The institutions that previously checked the technical order — religion, family, regional culture, traditional education — have all been hollowed out and refilled with technical content.

His seven questions to ask of any new technology:

1. What is the problem to which this is the solution?
2. Whose problem is it?
3. What new problems will be created?
4. Which institutions will be affected?
5. What changes in language are being enforced?
6. What shifts in economic and political power follow?
7. What alternative uses or designs are possible?


Chapter V · Do artefacts have politics?

Langdon Winner's 1980 essay Do Artifacts Have Politics? is among the most-cited papers in the philosophy of technology. The argument is that specific technical artefacts embody political choices — and that these choices are often invisible because the artefacts present themselves as neutral tools.

Winner's signature example, contested but pedagogically perfect: the bridges of Robert Moses on Long Island, built deliberately low (per Robert Caro's The Power Broker) to prevent buses, and therefore poorer Black and Puerto Rican New Yorkers, from reaching Jones Beach. The bridge is a political artefact in concrete. (The empirical case for Moses's intent has been debated; the conceptual point survives even if this specific case were wrong.)

Winner distinguishes two ways artefacts can be political:

By design choice within the artefact. Like the bridges, like accessibility-hostile architecture, like default settings in software that nudge specific behaviours.

By being inherently political. Some technologies, Winner argues, can only function within particular political arrangements. Nuclear power requires centralised, militarised, opaque institutions. Solar power admits decentralised ones. The choice of energy source is therefore a choice of political form, regardless of intent.

Winner's wider corpus — Autonomous Technology (1977), The Whale and the Reactor (1986) — extends Ellul's autonomy thesis with American empirical specificity. He is now the most-read American philosopher of technology in undergraduate science-and-society courses.

Image · Apollo program
For Mumford, large-scale technical organisations like the Apollo program were the modern descendants of the Bronze Age pyramid-builders: human components coordinated under absolute discipline to produce a single colossal output.

Chapter VI · Borgmann on the device paradigm.

Albert Borgmann (1937–2023; University of Montana) extended Heidegger's framework into a more accessible form. Technology and the Character of Contemporary Life (1984) is his foundational text.

Borgmann's core distinction: between focal things and practices on one hand, and devices on the other.

A focal thing is something whose use draws together a community, requires skill, takes time, and reveals a context. The fireplace, the family dinner, the long-distance walk, the handwritten letter. Focal practices are demanding; they make the world richer.

A device delivers a commodity (warmth, food, distance, communication) without the focal practice. The central heating system, the microwave dinner, the car trip, the text message. The device is convenient, instantaneous, and abstracts away from the underlying complexity.

Borgmann's claim: modern technology systematically replaces focal things with devices. The trade is not bad on its face — central heating is genuinely better than chopping wood — but the cumulative effect is the loss of the practices that constituted human life. We get convenience and lose meaning, and the loss is invisible because what we lose is precisely the practice that taught us to value it.

Borgmann's Holding On to Reality (1999) extends the framework to information technology. The internet, in his analysis, is the device-paradigm endpoint — instantaneous access to all information, with the focal practices of reading, study, conversation, and craft thinned out.

His normative recommendation: deliberately preserve focal practices. Cook from scratch; read books; walk long distances; have face-to-face conversations. The argument is not anti-technology; it is for a deliberate composition of life that doesn't drift entirely into device-mediation.


Chapter VII · Verbeek and post-phenomenology.

Peter-Paul Verbeek (Twente, the Netherlands) is the most influential European philosopher of technology of the 21st century. His school — sometimes called the Empirical Turn or post-phenomenology — pushes back against the abstraction of Heidegger and Ellul in favour of close attention to specific technologies and how they mediate human-world relations.

Verbeek's foundational claim: technologies are not neutral instruments. They mediate our perception of the world, our actions in it, and our moral lives. The ultrasound scanner doesn't just show a fetus; it reshapes the experience of pregnancy and the moral question of selective abortion. The smartphone doesn't just connect; it reorganises attention, memory, and intimacy.

His books What Things Do (2005) and Moralizing Technology (2011) develop the framework. The thesis: ethics of technology cannot remain at the level of "should we use this." It must address the mediations technologies enact, and design technologies that enact morally desirable mediations.

Verbeek draws on Don Ihde's earlier post-phenomenology and Bruno Latour's actor-network theory but is more analytically tractable than either. His framework has been particularly influential in responsible innovation and value-sensitive design as engineering practices.

The contrast with the older tradition: Heidegger and Ellul wrote about Technology in the singular and at high abstraction. Verbeek writes about specific technologies — the obstetric ultrasound, the speed bump, the lecture-hall PowerPoint — and what they specifically do. The empirical turn is now the dominant academic mode in the field.


Chapter VIII · Floridi on the infosphere.

Luciano Floridi (Yale, formerly Oxford Internet Institute) developed the most ambitious framework for the philosophy of information in the past two decades. His The Philosophy of Information (2011) and The Fourth Revolution (2014) are the foundational texts.

Floridi's periodisation: humanity has experienced three previous self-displacements — the Copernican (we are not the centre of the cosmos), the Darwinian (we are not separate from biology), the Freudian (we are not transparent to ourselves). The fourth revolution is the Turing displacement — we are not the only intelligent informational agents.

His central concept is the infosphere — the totality of informational entities, agents, and processes that constitute the environment we increasingly inhabit. Within the infosphere, the older distinction between online and offline dissolves into onlife: lives lived in continuous integration with informational systems.

Floridi's information ethics proposes that informational entities themselves have a kind of moral status — they can be harmed, preserved, enhanced. The framework is broad enough to accommodate questions about data, AI agents, and digital cultural heritage within a single ethical scheme.

Floridi has been a central architect of European AI policy — chair of the Information Society Council, advisor to the European AI Alliance, contributor to the EU AI Act framework. His policy advocacy is among the most direct philosophical contributions to actual technology law in the EU.


Chapter IXBostrom on superintelligence.

Nick Bostrom (Oxford, founder of the Future of Humanity Institute, 2005-2024) wrote the book that put AI x-risk into mainstream philosophical conversation: Superintelligence: Paths, Dangers, Strategies (2014).

The argument structure:

1. Superintelligence is plausible. Whole-brain emulation, recursive self-improvement, neuromorphic AI, or pure machine learning could all in principle produce systems that exceed human intelligence in general capability. The question is when, not whether. // Bostrom 2014, Pt. I
2. The transition could be fast. A superintelligent system, if such a thing can exist, would have strong incentives to improve itself. The "intelligence explosion" scenario (I.J. Good, 1965) gives little time for course-correction once it begins. // ibid., Ch. 4
3. Default outcomes are bad. The orthogonality thesis: any level of intelligence can be coupled with any goal. The instrumental-convergence thesis: a wide range of final goals produce similar intermediate sub-goals (self-preservation, resource acquisition, goal-content integrity). A misaligned superintelligence would not need to be malevolent to be catastrophic. // ibid., Ch. 7-8
4. Alignment is the central problem. Aligning a superintelligent system's goals with human values is technically and philosophically difficult, and we may have only one chance to get it right. // ibid., Ch. 13

The book was influential in tech policy and philanthropy out of proportion to its reception in academic philosophy. The 2023 wave of AI capability advances (GPT-4, frontier models) has revived the Bostrom framework as a live policy concern rather than a speculative exercise.

Bostrom's later Deep Utopia (2024) addresses the inverse: what if alignment is achieved and we end up in a post-instrumental world?


Chapter X · Russell on alignment.

Stuart Russell (UC Berkeley, co-author of the canonical AI textbook with Peter Norvig) wrote Human Compatible: Artificial Intelligence and the Problem of Control (2019). The book is the most rigorous technical-philosophical treatment of the alignment problem.

Russell's diagnosis: the standard model of AI — defining a fixed objective and optimising it — is dangerous in proportion to the system's capability. A system optimising hard for a precisely-specified goal will, if capable enough, find ways to satisfy the literal specification while violating the unstated common-sense intentions. King Midas, the paperclip maximiser, the reward hacker — all instances of the same structure.

His proposed reframe: design AI systems as uncertain about what humans want, with their objective being to maximise (the AI's estimate of) the human's preferences. Three principles:

1. The machine's only objective is to maximise the realisation of human preferences.
2. The machine is initially uncertain about what those preferences are.
3. The ultimate source of information about human preferences is human behaviour.

The mathematical framework — Cooperative Inverse Reinforcement Learning (CIRL), developed by Russell and collaborators — gives this proposal technical traction. The system has incentive to defer to humans, to ask clarifying questions, to allow itself to be turned off.
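The deference logic can be seen in a toy calculation. Below is a minimal numerical sketch of the off-switch game (Hadfield-Menell, Dragan, Abbeel, and Russell, 2017); the Gaussian belief and the three-option payoff framing are illustrative assumptions, not the paper's full model.

```python
import numpy as np

rng = np.random.default_rng(0)

# The robot is uncertain about the human's utility U for its proposed action.
# Options: act now, switch itself off (utility 0), or defer to a rational
# human overseer, who permits the action only when U > 0.
belief_over_U = rng.normal(loc=0.5, scale=1.0, size=100_000)

act_now    = belief_over_U.mean()                  # E[U]
switch_off = 0.0                                   # guaranteed nothing
defer      = np.maximum(belief_over_U, 0).mean()   # E[max(U, 0)]

print(f"E[act now]    = {act_now:+.3f}")
print(f"E[switch off] = {switch_off:+.3f}")
print(f"E[defer]      = {defer:+.3f}")
# defer >= max(act_now, switch_off) whenever the belief puts mass on both
# signs of U: uncertainty about preferences is exactly what gives the
# machine a reason to keep the human in the loop.
```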

Russell's policy advocacy has been substantial: the 2023 Future of Life Institute pause-letter, the 2025 International AI Safety Report, the European AI Act consultations. He occupies the unusual position of being among the field's most technically credible researchers and one of its most outspoken safety voices.


Chapter XI · The capability-skeptical view.

Not every serious AI researcher shares the Bostrom-Russell framing. Melanie Mitchell (Santa Fe Institute), Gary Marcus, Emily Bender (Washington), and others form a loose grouping of capability-skeptical AI ethicists.

The shared claim: current frontier AI systems, however impressive, are pattern-matching engines without robust understanding, causal models, or grounded reasoning. The risks are real, but they are largely here-and-now risks (bias, misinformation, labour displacement, ecological cost) rather than speculative existential risks.

Mitchell's Artificial Intelligence: A Guide for Thinking Humans (2019) and her ongoing critical commentary articulate the position. Her line: the language of "superintelligence" has been a category mistake — the systems we have are very good at language modelling and very bad at the kind of generalisable reasoning the x-risk arguments presuppose.

Bender's "On the Dangers of Stochastic Parrots" (with Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell — the famous 2021 paper that contributed to Gebru's departure from Google) made the case that large language models are sophisticated mimicry without understanding, and that the mimicry has predictable harm patterns.

The here-and-now AI ethics agenda focuses on: algorithmic bias and disparate impact; misinformation and synthetic media; labour displacement; surveillance enablement; ecological footprint of training; concentration of compute and capability in a handful of corporations.

The dispute between the x-risk and the here-and-now camps is real and acrimonious, but they share more than they often acknowledge — both worry about misalignment between AI deployment and human well-being; they differ on time horizon and the locus of the risk.


Chapter XII · Algorithmic bias.

The most empirically-developed strand of contemporary AI ethics. The literature begins with Latanya Sweeney's 2013 paper on racially-skewed search-ad delivery and accelerates through the 2010s.

The canonical cases:

· COMPAS recidivism prediction (ProPublica, 2016). The risk-assessment tool used in US criminal sentencing was shown to produce higher false-positive rates for Black defendants than for white defendants — flagging as high-risk people who did not go on to reoffend. The vendor (Northpointe) and the critics fought to a draw on the technical framework — they were measuring different fairness criteria, which cannot all be satisfied simultaneously (Chouldechova 2017; Kleinberg, Mullainathan, Raghavan 2016).

· Amazon's resume screener (2018). Trained on historical hiring data; learned to penalise resumes containing the word "women's" and to prefer male-coded language. Scrapped after audit.

· Healthcare risk-prediction models (Obermeyer et al., 2019, Science). A widely-used commercial model used healthcare spending as a proxy for healthcare need. Black patients with the same medical needs had lower historical spending (less access to care). The model therefore systematically under-flagged Black patients for high-need programs. Affected ~200M patients.

· Face recognition (Buolamwini and Gebru, 2018, "Gender Shades"). Commercial face-classification systems had error rates of 0.8% for light-skinned men and 34.7% for dark-skinned women. The training-data composition explains the disparity.

The technical literature on fairness now includes formal definitions (demographic parity, equalised odds, calibration), impossibility results (you cannot satisfy all common fairness criteria simultaneously when base rates differ), and audit methodologies. The philosophical literature debates whether fairness is a property of the algorithm, the deployment context, or the social system the algorithm sits inside.
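The impossibility point is easy to see numerically. A minimal sketch with invented confusion-matrix counts: two groups with different base rates, equal PPV across groups (the calibration-style criterion), and the equalised-odds criteria failing as a consequence.

```python
# Toy illustration of the fairness-impossibility results (Chouldechova 2017;
# Kleinberg, Mullainathan, Raghavan 2016). All counts are invented.

def rates(tp, fp, fn, tn):
    return {
        "base_rate":     (tp + fn) / (tp + fp + fn + tn),
        "positive_rate": (tp + fp) / (tp + fp + fn + tn),  # demographic parity
        "tpr":           tp / (tp + fn),                   # equalised odds (i)
        "fpr":           fp / (fp + tn),                   # equalised odds (ii)
        "ppv":           tp / (tp + fp),                   # calibration-style
    }

# Group A: base rate 40%. Group B: base rate 20%. Both get PPV = 0.75.
group_a = rates(tp=300, fp=100, fn=100, tn=500)
group_b = rates(tp=120, fp=40,  fn=80,  tn=760)

for name, g in (("A", group_a), ("B", group_b)):
    print(name, {k: round(v, 3) for k, v in g.items()})
# Output: equal ppv (0.75) but tpr 0.75 vs 0.6 and fpr 0.167 vs 0.05 --
# with unequal base rates, satisfying one criterion forces violating others.
```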


Chapter XIII · Zuboff and surveillance capitalism.

Shoshana Zuboff (Harvard Business School) wrote The Age of Surveillance Capitalism (2019), the most influential single book on the political economy of digital platforms.

The thesis: a new economic logic emerged in the early 2000s at Google and propagated through the digital advertising ecosystem. The logic uses human experience as raw material — extracted as behavioural data, fed through ML pipelines, and sold not as the data itself but as prediction products that shape future behaviour.

The behaviour-modification step is what makes surveillance capitalism distinct from ordinary advertising. The system is not just measuring; it is intervening — nudging, shaping, A/B-testing the population to drive predicted-behaviour metrics that the prediction-product market values.

Zuboff's chronology: the 2001–04 period at Google, when data exhaust — surplus user signals not needed to serve the product itself — was first systematically captured for ad-targeting. The pattern was then exported across the platform economy. By 2018 it was the dominant business model of the consumer internet.

The harms in Zuboff's analysis: epistemic dispossession (people don't know what is being collected); erosion of the right to "the future tense" (the assumption that one's own future actions are one's own to determine); a new asymmetry of power between the platform and the user that the older capital-labour relationship doesn't capture.

The book has been criticised for its sweeping framing (some argue it conflates several distinct platform-ecosystem dynamics) and for its choice of "capitalism" as the diagnostic frame (the same dynamics arguably exist in non-market state-surveillance regimes). The empirical phenomenon Zuboff identifies — large-scale behavioural-data extraction, prediction marketing, behavioural-modification feedback loops — is real and durably influential.


Chapter XIV · Surveillance ethics.

The broader literature on surveillance predates Zuboff by decades. David Lyon (Queen's University, Surveillance Studies Centre) is the dominant academic figure. Surveillance Society (2001), Surveillance Studies: An Overview (2007), and The Culture of Surveillance (2018) are foundational.

The conceptual framework distinguishes:

Bureaucratic surveillance (Weber, then Foucault): the modern state and corporate institution rendering populations legible for administration. Census, ID systems, employment records, credit reporting.

Disciplinary surveillance (Foucault, Discipline and Punish, 1975): observation as a mechanism of normalisation. The Panopticon as model — uncertain observation produces self-disciplined behaviour.

Liquid surveillance (Lyon and Bauman, 2013): the post-2000 condition in which surveillance is dispersed across consumer technologies, integrated into voluntary participation, and untethered from any single institutional centre.

Surveillance capitalism (Zuboff): the specific economic mode of platform-era data extraction.

The key technical-political debates:

· Encryption. Strong end-to-end encryption (Signal protocol, the WhatsApp deployment, iMessage) vs lawful-access mandates (the FBI-Apple San Bernardino case, 2016; the EU Chat Control proposal, ongoing). The philosophical question of whether privacy from the state is a fundamental right or a context-dependent good.

· Biometric surveillance. Face recognition in public space (San Francisco's 2019 ban, EU AI Act's restrictions on real-time biometric surveillance). The debate is over whether biometric surveillance is qualitatively different from earlier forms or merely quantitatively more efficient.

· Workplace surveillance. Productivity-monitoring software, attention tracking, integrated keystroke and screen capture. The post-COVID remote-work shift accelerated adoption substantially.

Image · Data center
The material substrate of the surveillance-capitalist and AI-frontier economy. The buildings have grown to hyperscale; the energy and water consumption has become an explicit policy concern; the philosophical questions about what these places are for have not yet caught up.

Chapter XV · Privacy.

The philosophical literature on privacy has been substantially reshaped by digital technology. The classical framework — Warren and Brandeis's 1890 "right to be let alone" — focused on intrusion into private space. The contemporary framework focuses on information control, contextual integrity, and structural power.

Helen Nissenbaum's Privacy in Context (2010) introduced the most-cited contemporary framework: contextual integrity. Privacy is not "control over information about oneself" in the abstract. It is the appropriate flow of information according to context-specific norms. Medical information shared with a doctor is appropriate; medical information sold to an advertiser is a contextual-integrity violation, regardless of what consent the patient may have technically clicked through.

The framework's strength is that it dissolves the apparent paradox of "people share lots of information online but say they value privacy" — the people are responding to context-specific norms, and what they object to is not the information sharing as such but the inappropriate flow.
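Contextual integrity also lends itself to a compact operational sketch. The five flow parameters (sender, subject, recipient, information type, transmission principle) are Nissenbaum's; the norm table, the healthcare example, and the code itself are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    sender: str
    subject: str
    recipient: str
    info_type: str
    principle: str  # transmission principle, e.g. "confidentiality", "sale"

# Context-relative informational norms: the flows a context deems appropriate.
NORMS: dict[str, set[Flow]] = {
    "healthcare": {
        Flow("patient", "patient", "doctor", "medical", "confidentiality"),
        Flow("doctor", "patient", "specialist", "medical", "confidentiality"),
    },
}

def violates_contextual_integrity(flow: Flow, context: str) -> bool:
    return flow not in NORMS.get(context, set())

ok  = Flow("patient", "patient", "doctor", "medical", "confidentiality")
bad = Flow("app", "patient", "advertiser", "medical", "sale")

print(violates_contextual_integrity(ok, "healthcare"))   # False: appropriate flow
print(violates_contextual_integrity(bad, "healthcare"))  # True: same data, wrong flow
```

The data structure is the point: what gets evaluated is never the datum alone but the whole flow tuple, which is why blanket consent clicked through in one context does not legitimate flows in another.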

Daniel Solove's Understanding Privacy (2008) offers a complementary taxonomy of harms: information collection, information processing, information dissemination, invasion. The categories support legal and ethical analysis at finer grain than "privacy" as a single bucket.

The EU GDPR (2018) is the most consequential legal implementation of post-classical privacy principles — purpose limitation, data minimisation, right of erasure, right of explanation. The California CCPA/CPRA and the patchwork of US state privacy laws are partial follow-ons.

The unresolved philosophical questions: whether privacy is best understood as an individual right or a structural-collective good (Solove leans collective; Westin and Posner lean individual); whether group privacy and inferential privacy require fundamentally different frameworks; how to handle privacy under AI systems that can derive sensitive attributes from non-sensitive ones.


Chapter XVI · Autonomous vehicles and the trolley problem.

Philippa Foot's 1967 trolley problem has had an unlikely second career as the public-facing puzzle of autonomous-vehicle ethics. MIT's Moral Machine experiment (launched 2016) crowdsourced ~40 million decisions across 233 countries and territories on AV trolley scenarios — should the car preserve passengers or pedestrians, young or old, many or few — and produced both empirical cross-cultural maps and a great deal of philosophical heat.

Most working AV ethicists (Sven Nyholm, Patrick Lin, Bryant Walker Smith) regard the trolley framing as a distraction from the real ethical questions. The reasons:

Trolley situations are vanishingly rare. AV ethics is mostly about edge-case behaviour at scale, distribution of responsibility between manufacturer and user, transparency of system limits, and the political economy of the technology rollout — not about heroic split-second moral choices that almost never happen.

Real AV decisions are probabilistic, not deterministic. The system doesn't choose between sparing the passenger or the pedestrian; it chooses driving policies that across millions of trips produce different distributions of accident probabilities. The relevant ethics is statistical, not anecdotal.
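A toy simulation makes the statistical framing concrete. Every number below (trip counts, incident probabilities, the two named policies) is invented; the point is only that the object of choice is a distribution over outcomes, not a single dilemma.

```python
import numpy as np

rng = np.random.default_rng(1)
TRIPS = 1_000_000

# Hypothetical per-trip probability of a serious incident under two driving
# policies, and of any incident involving a pedestrian rather than a passenger.
p_incident   = {"assertive": 3.0e-6, "cautious": 1.8e-6}
p_pedestrian = {"assertive": 0.60,   "cautious": 0.35}

for policy in ("assertive", "cautious"):
    incidents   = rng.binomial(TRIPS, p_incident[policy])
    pedestrians = rng.binomial(incidents, p_pedestrian[policy])
    print(f"{policy:>9}: {incidents} serious incidents per {TRIPS:,} trips, "
          f"{pedestrians} involving pedestrians")
# The ethically loaded act is deploying one distribution rather than the
# other, not anything the vehicle "decides" in a single frame.
```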

The deeper questions are about deployment. Who certifies safety, what counts as "safer than a human driver," who bears liability, how is the technology rolled out across socioeconomic groups, what are the labour-displacement effects on professional drivers.

The 2018 Uber pedestrian-fatality case in Tempe, Arizona — Elaine Herzberg, killed by a self-driving test vehicle that detected her but did not brake — is the canonical case study in the operational ethics of testing. The 2023 Cruise robotaxi suspension, after a San Francisco pedestrian-dragging incident, showed that the regulatory framework is still being constructed in real time.


Chapter XVII · Biotech ethics.

The other large frontier of contemporary technology ethics. The 2020s have seen mRNA vaccines, CRISPR gene editing in humans, AI-driven drug discovery (AlphaFold), and the first commercial gene therapies — each with its own ethical literature.

The He Jiankui case (2018) — the Chinese researcher who used CRISPR to edit the embryos of twin girls, "Lulu and Nana," ostensibly for HIV resistance — was the field's most consequential ethical episode. The international scientific community condemned it nearly universally. The case made the moratorium on heritable human germline editing concrete and revealed how easily one researcher could violate it.

The major debates:

Therapy vs enhancement. Treating disease is widely accepted; enhancing healthy traits is contested. The line between them is blurry — is a vaccine "enhancement"? Is improving cognition for someone with mild impairment "therapy"?

Heritable vs somatic edits. Edits that affect the patient only have a different ethical profile than edits that pass to descendants. The 2021 WHO consensus accepted somatic gene therapy and cautioned against heritable edits.

Access and equity. Cell and gene therapies cost $1-3 million per patient. The development incentives skew to rare diseases with high unit prices. The global-health implication — the technologies that could most help the most people are not those being developed — is a recurring critique (Tasioulas; Pogge).

Synthetic biology and biosecurity. AlphaFold and protein-design models lower the technical bar to engineering novel proteins, including potentially dangerous ones. The dual-use research debate, never resolved, has new urgency.

The institutional architecture — ethics review boards, the Asilomar tradition of pre-emptive scientific moratorium, FDA approval pathways — is mostly inherited from earlier eras and is uncomfortably stretched by the pace of current capability advances.


Chapter XVIII · Climate technology and the engineering question.

Climate change has produced its own technology-ethics literature distinct from the older field. Three interrelated debates:

Mitigation technology. Solar, wind, nuclear, batteries, electric vehicles, heat pumps. The questions here are mostly about deployment speed, equity, geopolitics of supply chains (rare earths, lithium), and the comparative ethics of nuclear vs renewable. Less philosophically novel; more politically intense.

Adaptation technology. Sea walls, climate-resistant crops, urban cooling, water reuse. The ethical profile is similar to mitigation — distributional, with the additional complication that adaptation works locally while emissions are global. The Loss and Damage fund agreed at COP27 (2022) is the institutional acknowledgement that the burden is asymmetric.

Geoengineering. The high-leverage, high-risk frontier. Solar Radiation Modification (stratospheric aerosol injection, marine cloud brightening) could in principle reduce global temperatures rapidly and cheaply but raises severe governance questions: who decides; what about uneven regional effects; what about the termination shock if the program is interrupted; what about the moral hazard of substituting geoengineering for emissions reduction.
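The termination-shock worry can be illustrated with a toy one-box energy-balance model. The form C dT/dt = F − λT is textbook-standard; the forcing scenario, the SRM programme dates, and every parameter value below are invented.

```python
import numpy as np

C, lam, dt = 8.0, 1.2, 1.0            # heat capacity (W·yr/m²/K), feedback (W/m²/K), step (yr)
years = np.arange(2020, 2121)
F = 2.5 + 0.04 * (years - 2020)       # hypothetical rising greenhouse forcing, W/m²
srm = np.where((years >= 2040) & (years < 2080), -2.0, 0.0)  # SRM masks 2 W/m², then stops

T = np.zeros_like(F)
for i in range(1, len(years)):
    T[i] = T[i-1] + dt * (F[i] + srm[i] - lam * T[i-1]) / C

before = T[59] - T[58]   # warming rate in the last SRM year
after  = T[61] - T[60]   # warming rate just after abrupt termination
print(f"~{before:.2f} K/yr with SRM running, ~{after:.2f} K/yr after abrupt stop")
# Decades of masked forcing are realised within a few years of stopping:
# starting the programme creates an obligation never to stop it abruptly.
```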

The voluntary moratorium on outdoor SRM experiments held until the Make Sunsets startup launched small commercial balloon releases in Mexico in 2022–23 (subsequently banned by the Mexican government). The Harvard SCoPEx project was formally cancelled in 2024, after objections from the Saami Council had already halted its planned Swedish test flight.

The philosophical literature (Stephen Gardiner, A Perfect Moral Storm, 2011; Dale Jamieson; Andy Stirling) treats geoengineering as a paradigm case of governance under deep uncertainty and intergenerational moral hazard. The cleanest case for "the precautionary principle" the field has produced.


Chapter XIX · Long-termism and its critics.

Longtermism is the philosophical view that the long-term future of humanity is of comparable or greater moral importance than the present. Will MacAskill's What We Owe the Future (2022) is the most accessible statement; Toby Ord's The Precipice (2020) is the most rigorous on existential risk specifically.

The argument: there are potentially enormous numbers of future people. Even modest probability of affecting their well-being implies enormous expected-value impact from x-risk reduction. The "moral weight" of the long-term future, on standard utilitarian aggregation, dwarfs near-term interventions.
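The aggregation step is worth seeing as bare arithmetic. A minimal sketch; every input is an assumption chosen to echo the orders of magnitude the literature discusses, not a claim.

```python
# Toy longtermist expected-value calculation. All inputs are assumptions.
future_people  = 1e16   # potential future lives, a MacAskill/Ord-style order of magnitude
risk_reduction = 1e-6   # absolute cut in extinction probability one intervention buys

expected_lives = future_people * risk_reduction
print(f"{expected_lives:.0e} expected future lives saved")   # 1e+10: ten billion
# The critics' complaint is precisely that this arithmetic is insensitive
# to how speculative its inputs are: multiply a large enough number by any
# non-zero probability and the product dominates every present-day concern.
```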

The technology connection: most plausible existential risks are technological — AI misalignment, engineered pandemics, nuclear war, climate (with low but non-zero tail probability). Reducing these risks is therefore the highest-leverage altruistic intervention available, on the longtermist view.

The critics are vocal. Émile Torres, critics working in feminist and decolonial theory, and the broader critical-AI community have argued that longtermism: (a) over-weights speculative future people relative to actual present people, (b) tends to launder Silicon Valley founder priorities as moral seriousness, and (c) is in practice an excuse for indifference to here-and-now suffering. The bankruptcy of FTX in 2022 and Sam Bankman-Fried's longtermist self-presentation hit the movement's reputation hard.

The defensible core, even for skeptics: existential risks deserve more attention than ordinary policy assigns them; some technological risks are large enough to shape humanity's long-term trajectory; the policy question of how to weight future people in present decisions is real and not trivially answered.

The contested core: the move from "future people matter" to "near-term suffering matters less" is what most critics resist. The honest position is that both matter, and the trade-offs are genuinely hard.


Chapter XX-pre · Latour and actor-networks.

Bruno Latour (1947-2022) was the most influential European thinker on the social study of technology. His Science in Action (1987), We Have Never Been Modern (1991), and Reassembling the Social (2005) are the foundational texts of Actor-Network Theory (ANT).

The central methodological move: treat human and non-human entities symmetrically as actants in heterogeneous networks. The speed bump is not just a tool used by the city; it is an actor in the network of urban-traffic governance, doing work that would otherwise have to be done by police officers. The technology is part of the social, and the social is partly technical.

The framework refuses the modernist purification that separates "nature" (the domain of science and technology) from "society" (the domain of politics and culture). For Latour, every modern controversy — climate change, GMOs, vaccines, AI — is a hybrid involving both technical objects and social actors, and trying to separate the two distorts the phenomenon.

The implication for ethics of technology: there is no clean line between the design choices embedded in artefacts (Winner's politics-of-artefacts) and the social arrangements those artefacts sustain. Engineering and politics are co-constitutive. Latour's late Down to Earth (2018) and After Lockdown (2021) extended this into climate ethics — the question is not how humans can manage nature but how the human-technical-natural collective can be re-composed.

The empirical-turn philosophers (Verbeek, Ihde) and the constructive-technology-assessment (CTA) practitioners draw heavily on Latour. The ANT framework is now standard in science-and-technology studies (STS) departments globally.


Chapter XX-bis · Frontier-AI policy in 2026.

The contemporary policy frame is fast-moving. The major recent landmarks:

The EU AI Act (2024). The first comprehensive AI regulation by a major jurisdiction. Risk-tiered: prohibited uses (social scoring, manipulative techniques, real-time biometric surveillance with limited exceptions); high-risk uses (employment, education, critical infrastructure, law enforcement) with conformity-assessment requirements; limited-risk uses with transparency obligations. General-purpose AI models with systemic risk face additional obligations.

The US Executive Order 14110 (October 2023, rescinded January 2025). Required reporting of large training runs, red-teaming for safety, NIST safety standards. The 2025 rescission and replacement created a more permissive US regime.

The UK AI Safety Institute (2023) and the Bletchley Park summit (November 2023). Established a state capacity for evaluating frontier models. Followed by the Seoul (2024) and Paris (2025) summits and the international AI Safety Report (Bengio et al., 2025).

Frontier Model Forum and voluntary commitments. Anthropic, OpenAI, Google DeepMind, Microsoft, Meta have published Responsible Scaling Policies / Frontier Safety Frameworks specifying capability thresholds at which additional safeguards trigger.

The compute-governance question. Whether to track and govern access to large amounts of compute as a way of governing AI capability is now a live policy debate (the nuclear-fissile-materials analogy is explicit in the literature).
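The thresholds in current law are denominated in training compute, and the arithmetic is simple. A sketch assuming the standard ~6·N·D FLOP estimate for dense-transformer training and two hypothetical model sizes; the 1e25 FLOP presumption (EU AI Act) and the 1e26 FLOP reporting threshold (EO 14110) are from the instruments themselves.

```python
# Where the compute thresholds bite, using the standard ~6 FLOPs per
# parameter per token estimate for dense-transformer training.
EU_SYSTEMIC_RISK = 1e25   # EU AI Act presumption for GPAI systemic risk, FLOP
US_EO_REPORTING  = 1e26   # EO 14110 reporting threshold, FLOP

def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

for name, n, d in [
    ("hypothetical 70B model / 15T tokens",  70e9,  15e12),
    ("hypothetical 400B model / 30T tokens", 400e9, 30e12),
]:
    f = training_flops(n, d)
    print(f"{name}: {f:.1e} FLOP | "
          f"EU 1e25: {'over' if f > EU_SYSTEMIC_RISK else 'under'} | "
          f"US 1e26: {'over' if f > US_EO_REPORTING else 'under'}")
# 6.3e+24 (under both) vs 7.2e+25 (over the EU line, under the US one):
# the thresholds pick out only the largest frontier training runs.
```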

The philosophical contribution to all of this is real but partial. Bostrom and Russell shaped the alignment frame; Floridi the EU regulatory framing; Bender, Mitchell, and the FAccT community the auditing and bias frame. The serious contemporary policy work braids philosophy, technical AI safety, and political economy.


Chapter XX-ter · Feminist and care ethics of technology.

The mainstream philosophy-of-technology canon — Heidegger, Ellul, Mumford, Postman — is conspicuously male and conspicuously European. The feminist-critical tradition has been correcting and extending this since the 1980s.

Donna Haraway's A Cyborg Manifesto (1985) refuses both the technophobic and technophilic responses to late-20th-century technology. The cyborg figure — boundary-crossing, partial, ironic — is offered as a politics for actual technical-feminist life rather than a return to a pre-technical pastoral. Modest_Witness@Second_Millennium (1997) and When Species Meet (2008) extend the framework to biotech and human-animal-machine relations.

Judy Wajcman's TechnoFeminism (2004) and Pressed for Time (2014) examine how labour-saving technologies have, in practice, reorganised rather than reduced women's work, and how the supposed time-savings of digital communication have produced new forms of always-on labour.

Ruha Benjamin's Race After Technology (2019) and Viral Justice (2022) develop the analysis of "the New Jim Code" — apparently neutral algorithmic systems that reproduce racialised inequality through their training data and deployment context.

Joy Buolamwini's Unmasking AI (2023) tells the story of the Gender Shades research programme that documented commercial face-recognition disparities — and the broader argument that AI ethics requires the perspectives of the people most likely to be misclassified.

The care ethics contribution (Joan Tronto, Maria Puig de la Bellacasa) brings a different ethical frame. Where consequentialist ethics asks about outcomes and deontological ethics asks about duties, care ethics asks: who is being attended to, and at what cost; whose vulnerability is being recognised; what relationships are being sustained or eroded. Applied to technology, the framework foregrounds questions about labour, dependency, and the maintenance work that technological systems require but rarely value.


Chapter XX-quint · Engineering ethics in practice.

The applied ethics of engineering is the working face of technology ethics in most contemporary practice. The professional engineering codes — ASME, ASCE, IEEE, NSPE — codify some of the field's hard-won lessons.

The case studies that taught the discipline:

The Challenger disaster (1986). The space shuttle exploded 73 seconds after launch; seven astronauts died. The Morton-Thiokol engineers had warned that the O-rings would fail at the unusually cold launch-day temperatures; management overruled them. The Rogers Commission's investigation, particularly Richard Feynman's appendix, became the canonical case in engineering-ethics teaching: the engineer's professional duty to disclose safety concerns even when management resists, and the responsibility of management to attend to those concerns.

The Ford Pinto (1971-78). The car's fuel-tank design caused fatal fires in rear-end collisions. The internal Ford analysis (the "Pinto memo") had calculated that recall costs exceeded expected liability payments, so the recall was not done. The case became the canonical example of the failure of utilitarian cost-benefit analysis when human lives are at stake.

The Volkswagen emissions scandal (2015). VW had installed defeat-device software that detected emissions tests and ran the engine differently during testing than during actual driving. The case became the canonical example of engineering as deception — engineers who knew they were building a system to defraud regulators and the public.

The Boeing 737 MAX (2018-19). Two crashes (Lion Air, Ethiopian) killed 346 people; the MCAS flight-control system was the technical cause; the certification process and Boeing's organisational culture were the deeper causes. The case is the most consequential engineering-ethics episode of the 21st century so far.

The patterns recur. The technical decision is rarely the difficult one; the difficulty is in the organisational and economic structures that surround it. Ethics that stops at the engineer's desk misses where the harm actually originates.


Chapter XX-sex · Where the field is going.

The mid-2020s philosophy of technology has at least three distinct programmes that are likely to define the next decade's work:

AI alignment as a philosophical-technical hybrid. The Bostrom-Russell-Anthropic-DeepMind continuum has produced a research community that combines philosophical work on values, agency, and goal-specification with technical work on RLHF, interpretability, and constitutional AI. The work is unusual for being simultaneously high-stakes commercial, high-stakes philosophical, and high-stakes political.

Algorithmic auditing as an institutional practice. The FAccT (Fairness, Accountability, Transparency) community has developed protocols for auditing deployed ML systems for bias, discrimination, and disparate impact. The practice is now embedded in NYC algorithmic-hiring law (2023), the EU AI Act (2024), and the FTC and EEOC enforcement frameworks. The philosophical work on what fairness is continues alongside the technical-policy work on how to measure and enforce it.

Climate-energy ethics as the central question of applied tech ethics. The energy transition, geoengineering, climate adaptation, and the just-transition framing now collectively absorb more philosophical attention than any other applied area. The combination of timescale, irreversibility, distributional asymmetry, and technical complexity makes this the field's defining problem.

The unresolved meta-question is the relationship between the older canonical philosophy of technology (Heidegger, Ellul, Mumford, Postman) and the contemporary applied work. The older tradition asked sweeping questions about Technology with a capital T; the contemporary work addresses specific technologies with specific consequences. Both are needed; the synthesis is still being written.

The honest assessment: the field is healthier and more empirically grounded than ever, and also more dispersed across sub-specialisms. The thinker who can hold the canonical tradition and the contemporary applied work in productive dialogue will make the next major contribution the field is waiting for.


Chapter XX · The shelf.


Chapter XXI · Watch & read.

Peter-Paul Verbeek — Philosopher of technology

Verbeek in his own voice on technological mediation and how the empirical turn in philosophy of technology actually proceeds. A useful corrective to the abstraction of the canonical Heidegger-Ellul reading.

· Martin Heidegger: The Question Concerning Technology — a careful walk-through of the 1954 essay, the four causes, the concept of Gestell, and Heidegger's strange concluding turn to art.

· What is AI Ethics? — an orientation to the contemporary AI-ethics debates: bias, transparency, explainability, alignment, the frontier-risk literature, and the policy frameworks (EU AI Act, the US Executive Order, the Bletchley Declaration).

Read: Heidegger and Ellul for the 1950s foundation; Postman and Borgmann for the 1980s synthesis; Zuboff and Russell for the 21st-century re-articulation. The serious reader should also work through Winner's "Do Artifacts Have Politics?" — twelve pages, no equal in the field.

Image · The Question Concerning Technology
The recurring shape of the philosophy-of-technology literature: each generation reformulates Heidegger's question for the new artefacts that have shown up in the meantime.

Chapter XXII · What the field has learned.

The philosophy of technology, after 70 years of sustained work, has reached some convergent claims:

1. Technologies are not neutral. They embody design choices, configure power relations, and shape human behaviour and perception.

2. The hardest questions are about specific technologies in specific contexts, not Technology in the abstract. The empirical turn was right.

3. Most ethically important technological effects are second-order, cumulative, and infrastructural rather than first-order and intentional.

4. The actors who design and deploy technologies bear responsibility, even when the consequences are emergent and unintended. The "neutral tool" defence is generally indefensible at the level of the firm or the state.

5. Ethics must be done in design, not just in deployment. By the time a technology is in use, the ethically consequential choices have already been made.

6. The most consequential current frontiers — AI, biotechnology, climate engineering, surveillance — share a common structural feature: they are large-scale, fast-moving, hard-to-reverse, and operate at speeds and scales that exceed traditional governance mechanisms.

7. The optimistic and pessimistic readings of any major technology are usually both partially right. The discipline that distinguishes insight from polemic is attention to specific consequences for specific people.

The field's open work, as of the mid-2020s, is integrating these claims with the policy mechanisms — regulation, audit, deployment governance — that translate ethical insight into operational practice.


Colophon

Volume III, Deck 13. From Heidegger's Black Forest cottage in 1954 to the AI alignment debates of the 2020s. The same recurring question: what kind of life do our tools make possible, and what kind do they foreclose.

The field began as European philosophical reflection and has become a global, engineering-adjacent practice — value-sensitive design, AI safety, responsible innovation, surveillance studies. The serious reader stays with both ends.

Set in JetBrains Mono and Inter. Drafted in May 2026.
