Vol. IV · Deck 09 · The Deck Catalog

Applied math.

The mathematics that builds bridges, predicts weather, prices options, models cells, and runs your phone. The branch where theorems are validated by airplanes.


Newton · 1687
Black-Scholes · 1973
Pages · 32

Opening · What applied math is.

Mathematics put to work on the world.

Applied mathematics is the branch in which the test of truth is empirical as well as logical. A theorem is correct if it follows from its premises; an applied result is correct if, in addition, the bridge stands, the wing flies, the prediction matches the data.

The split between "pure" and "applied" is more sociological than mathematical. Newton's Principia (1687) was applied mathematics in the modern sense — calculus invented to describe planetary motion. Euler, Gauss, Riemann, and Hilbert all moved freely between domains. The hard division dates from roughly 1930 and is now eroding again.

This deck moves through differential equations, transforms, optimisation, numerical methods, and the modern applied subfields — biology, finance, operations research. The texture varies; the underlying habit does not. Applied mathematics is the discipline that asks: does it predict?


Chapter I · Modelling with differential equations.

The most-used tool in applied mathematics. A differential equation relates a function to its derivatives. The function describes a state; the equation expresses how that state changes.

Newton's second law: F = m·a = m·d²x/dt². A second-order ordinary differential equation. Given the force F as a function of position and velocity, the trajectory is determined by the initial conditions.

The taxonomy: ordinary (one independent variable) versus partial (several); linear versus nonlinear; first-order versus higher; autonomous versus time-dependent. Each axis matters for what techniques apply.

Most natural laws are differential equations. Maxwell's equations of electromagnetism. The Navier-Stokes equations of fluid flow. The Schrödinger equation of quantum mechanics. The Lotka-Volterra equations of population dynamics. The Black-Scholes equation of option pricing. To work in physics, engineering, biology, economics, or finance is, very often, to solve a differential equation.


Chapter II · The harmonic oscillator.

The single most important model in applied mathematics: m·d²x/dt² = −k·x. A mass on a spring. Solution: x(t) = A·cos(ωt + φ), with angular frequency ω = √(k/m).

Add damping (−c·dx/dt) and forcing (F(t)) and you have the equation for almost every linear vibrating system. The pendulum (small angle), the LC circuit, the modes of a vibrating string, the lattice modes of a crystal — all harmonic oscillators in disguise.
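The closed-form solution invites a numerical check. A minimal sketch (the mass, stiffness, and velocity-Verlet integrator are illustrative choices for the demonstration, not anything prescribed above):

```python
import math

def simulate_oscillator(m, k, x0, v0, dt, steps):
    """Integrate m*x'' = -k*x with the velocity-Verlet scheme."""
    x, v = x0, v0
    a = -k * x / m
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt
        a_new = -k * x / m
        v += 0.5 * (a + a_new) * dt
        a = a_new
    return x

# Analytic solution for x(0) = 1, v(0) = 0 is cos(omega*t), omega = sqrt(k/m).
m, k = 1.0, 4.0
omega = math.sqrt(k / m)
t_final = 3.0
steps = 30000
x_num = simulate_oscillator(m, k, 1.0, 0.0, t_final / steps, steps)
x_exact = math.cos(omega * t_final)
```

The numerical trajectory tracks A·cos(ωt) to within the integrator's O(dt²) error.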

The resonance phenomenon — large response when forcing matches natural frequency — explains the collapse of the Tacoma Narrows Bridge (1940), the destruction of crystal glasses by sopranos, the design of MRI machines, and the operating principle of every clock since Huygens.

Quantum mechanics keeps the oscillator at its centre — the quantum harmonic oscillator has equally spaced energy levels and is the building block of the second-quantised description of fields. Most of theoretical physics is harmonic oscillators all the way down.


Chapter III · The wave equation.

∂²u/∂t² = c²·∇²u. The first major partial differential equation. Jean d'Alembert derived the one-dimensional version in 1747 to describe a vibrating string.

The general solution in 1D is u(x,t) = f(x − ct) + g(x + ct) — superposition of right-moving and left-moving waves. Disturbances propagate at speed c without distortion.

Higher-dimensional versions describe sound (3D acoustic wave equation), water waves (with corrections), seismic waves, and electromagnetic radiation (Maxwell's equations decouple into wave equations for E and B). The same equation, c → speed of light, gives every photon you see.

Solution methods: separation of variables, Fourier series, characteristics, Green's functions. Each is a major topic. The wave equation is the cleanest non-trivial PDE and the standard first example in any course.


Chapter IV · The heat equation.

∂u/∂t = α·∇²u. Joseph Fourier's 1822 Théorie analytique de la chaleur founded the subject. Heat flows from hot to cold at a rate proportional to the temperature gradient (Fourier's law); Fick's second law of diffusion yields the same equation for concentration.

The equation smooths initial data instantly: a sharp initial profile becomes smooth after any positive time. The diffusion process is irreversible; running it backwards amplifies noise without bound.

Fourier's solution method — expand the initial condition as a sum of sines and cosines, evolve each mode separately by exponential decay, sum — opened all of harmonic analysis. The same equation, with different physics interpretations, governs Brownian motion (Einstein 1905), neutron diffusion in reactors, the Black-Scholes equation of finance, and the forward pass of certain neural-network architectures (Sohl-Dickstein et al. 2015).
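Fourier's method reduces the PDE to arithmetic on mode coefficients. A small sketch under assumed values (the diffusivity and the two initial modes are illustrative): the high-frequency ripple decays as e^(−αn²t), far faster than the fundamental.

```python
import math

ALPHA = 0.1  # thermal diffusivity (illustrative value)

def heat_solution(coeffs, x, t):
    """u(x, t) for u(x, 0) = sum of b_n * sin(n x) on [0, pi], ends held at 0.
    Fourier's recipe: each mode evolves independently by exponential decay."""
    return sum(b * math.exp(-ALPHA * n * n * t) * math.sin(n * x)
               for n, b in coeffs.items())

# Initial condition: a smooth fundamental plus a sharp high-frequency ripple.
coeffs = {1: 1.0, 9: 0.5}

# After t = 1, mode 9 has decayed by exp(-81*alpha), mode 1 only by exp(-alpha).
amp1 = coeffs[1] * math.exp(-ALPHA * 1 * 1 * 1.0)
amp9 = coeffs[9] * math.exp(-ALPHA * 9 * 9 * 1.0)
```

The ratio amp9/amp1 collapses by orders of magnitude: the n² in the exponent is the smoothing.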


Chapter V · The Navier-Stokes equations.

The governing equations of fluid flow, derived by Claude-Louis Navier (1822) and George Stokes (1845). For an incompressible Newtonian fluid:

ρ(∂v/∂t + v·∇v) = −∇p + μ∇²v + f
∇·v = 0

Three equations in three unknowns (the velocity components) plus the pressure-incompressibility constraint. Nonlinear, because of the v·∇v term — and that nonlinearity carries the entire phenomenology of turbulence.

The Clay Millennium Prize includes the question: do smooth solutions exist for all time given smooth initial data in three dimensions? Or can a smooth flow develop a singularity in finite time? Open. A million dollars, and the most famous unsolved problem in applied mathematics.

The equations are nevertheless solved numerically, daily, in weather forecasts, aircraft design, blood-flow models, and climate simulations. Working numerical solutions long preceded a satisfactory theoretical understanding and continue to outpace it.


Chapter VI · Fourier analysis.

The decomposition of functions into sums or integrals of sines and cosines. Joseph Fourier's 1807 manuscript on heat conduction announced that any function on a bounded interval can be expanded in such a series. The community resisted; the technique won.

The Fourier series of a periodic function f(x) on [−π, π]:

f(x) = a₀/2 + Σₙ₌₁^∞ (aₙ cos nx + bₙ sin nx)

The coefficients are inner products of f against the basis functions. Carleson's theorem (1966) settled the convergence question for L² functions: the series converges to f almost everywhere.
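The coefficient formula can be exercised directly. A sketch that computes the sine coefficients of a square wave by numerical quadrature (the midpoint rule and sample count are implementation choices); the classical answer is (4/π)(sin x + sin 3x/3 + sin 5x/5 + …):

```python
import math

def fourier_b(f, n, samples=20000):
    """b_n = (1/pi) * integral over [-pi, pi] of f(x) sin(n x) dx, midpoint rule."""
    h = 2 * math.pi / samples
    total = 0.0
    for k in range(samples):
        x = -math.pi + (k + 0.5) * h
        total += f(x) * math.sin(n * x)
    return total * h / math.pi

square = lambda x: 1.0 if x > 0 else -1.0  # square wave on [-pi, pi]
b1 = fourier_b(square, 1)   # expect 4/pi
b2 = fourier_b(square, 2)   # expect 0 (even harmonics vanish)
b3 = fourier_b(square, 3)   # expect 4/(3*pi)
```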

Fourier analysis is the foundation of signal processing, image compression, the spectral theory of operators, the heat and wave equations, and most of partial differential equations. Few mathematical inventions have done more work.


Chapter VII · The Fourier transform.

The non-periodic version. For a function f on the real line:

f̂(ξ) = ∫₋∞^∞ f(x) e^(−2πixξ) dx

The Fourier transform takes a function and produces another function — its spectrum — recording how much of each frequency the original contains. Inverse transform recovers f from f̂.

Key properties. Linearity. Convolution-multiplication duality: f * g maps to f̂·ĝ — the most useful identity in signal processing. Plancherel: ‖f‖² = ‖f̂‖² (the transform is an isometry on L²). Uncertainty principle: a function and its transform cannot both be sharply localised.

The fast Fourier transform (Cooley-Tukey, 1965) computes a discrete version in O(N log N) time. It is one of the ten algorithms Computing in Science and Engineering named most influential in the 20th century, and probably the most-used algorithm in scientific computing.
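The divide-and-conquer idea behind Cooley-Tukey fits in a few lines. A sketch of the radix-2 recursion, checked against the naive O(N²) transform (the test signal is arbitrary):

```python
import cmath

def dft(x):
    """Naive O(N^2) discrete Fourier transform, the reference answer."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two.
    Split into even and odd samples, transform each half, recombine with
    twiddle factors: O(N log N) work instead of O(N^2)."""
    N = len(x)
    if N == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    twiddled = [cmath.exp(-2j * cmath.pi * k / N) * odd[k] for k in range(N // 2)]
    return ([even[k] + twiddled[k] for k in range(N // 2)] +
            [even[k] - twiddled[k] for k in range(N // 2)])

signal = [1.0, 2.0, 0.0, -1.0, 1.5, 0.5, -2.0, 3.0]
fast, slow = fft(signal), dft(signal)
max_diff = max(abs(a - b) for a, b in zip(fast, slow))
```

The zeroth output is the plain sum of the signal, a quick sanity check on either transform.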


Chapter VIII · Signal processing.

The applied discipline born from Fourier analysis and Shannon's information theory. The basic objects: signals (functions of time or space), filters (linear time-invariant operators), noise (random processes), sampling (taking discrete values from continuous signals).

The Nyquist-Shannon sampling theorem: a band-limited signal can be perfectly reconstructed from uniform samples taken at a rate greater than twice its highest frequency. This is the theorem that makes digital audio possible. CD audio, sampled at 44.1 kHz, covers human-audible frequencies up to ~22 kHz.

Modern signal processing extends to wavelets (Daubechies 1988), compressed sensing (Candès, Tao, Donoho 2006) — recovering sparse signals from far fewer samples than Nyquist would require — and the array of techniques used in MP3, JPEG, MRI reconstruction, and ground-penetrating radar.

Speech recognition, computer vision, and the modern AI stack all begin with signal processing — the conversion of raw measurement into representations that downstream models can use.


Chapter IX · Numerical methods.

The branch concerned with computing approximate answers to mathematical problems on finite machines. The basic constraints: limited precision, limited memory, limited time.

The standard topics. Root-finding (bisection, Newton, secant). Linear systems (Gaussian elimination, LU, QR, conjugate gradient). Integration (trapezoidal, Simpson, Gauss-Legendre, Monte Carlo). ODEs (Euler, Runge-Kutta, multistep). PDEs (finite difference, finite element, spectral). Eigenvalues (power iteration, QR algorithm, Lanczos, Arnoldi).
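Two of the quadrature rules above, side by side. A sketch comparing composite trapezoid and Simpson on an integral with a known value (∫₀^π sin x dx = 2):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule on n subintervals: O(h^2) error."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + k * h) for k in range(1, n)) + 0.5 * f(b))

def simpson(f, a, b, n):
    """Composite Simpson's rule (n must be even): O(h^4) error."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + k * h) for k in range(1, n, 2))
    s += 2 * sum(f(a + k * h) for k in range(2, n, 2))
    return s * h / 3

err_trap = abs(trapezoid(math.sin, 0.0, math.pi, 10) - 2.0)
err_simp = abs(simpson(math.sin, 0.0, math.pi, 10) - 2.0)
```

With the same ten subintervals, Simpson's higher-order error term buys several extra digits.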

The discipline is older than the digital computer — Newton, Gauss, and Lagrange all developed numerical methods. But the computer transformed the field. James Wilkinson's 1965 The Algebraic Eigenvalue Problem founded modern numerical linear algebra. The QR algorithm (Francis 1961) for eigenvalues was named one of the algorithms of the 20th century.

Numerical methods are the part of mathematics that engineers and physicists actually use. They are taught too lightly in undergraduate mathematics curricula.

The Fourier transform — decomposing signals into frequencies

Chapter X · Floating-point arithmetic.

The way computers approximate real numbers. A floating-point number is a sign, a fraction (the mantissa), and an exponent: x = ±m × 2^e. The IEEE 754 standard (1985, revised 2008 and 2019) specifies the format precisely.

Single precision: 32 bits, ~7 decimal digits of precision, range ~10^±38. Double precision: 64 bits, ~16 decimal digits, range ~10^±308. Half precision (16 bits) and bfloat16 are now standard in machine learning, where reduced precision suffices and saves memory and compute.

Catastrophic cancellation. Subtracting two nearly equal numbers loses most of the precision. The classical pitfall: computing (1 − cos x) for small x by direct subtraction loses digits; using 2 sin²(x/2) does not.
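The pitfall is easy to reproduce. A sketch comparing the two forms at x = 10⁻⁸, where the true value of 1 − cos x is about 5·10⁻¹⁷, below the spacing of doubles near 1:

```python
import math

def one_minus_cos_naive(x):
    """Direct subtraction: for small x, cos(x) rounds to 1.0 (or one ulp
    below it), and nearly all significant digits are lost."""
    return 1.0 - math.cos(x)

def one_minus_cos_stable(x):
    """Algebraically identical via the half-angle identity; no cancellation."""
    return 2.0 * math.sin(x / 2) ** 2

x = 1e-8
naive = one_minus_cos_naive(x)   # garbage: 0.0 or a stray ulp
stable = one_minus_cos_stable(x)  # ~5e-17, correct to full precision
```

The stable form agrees with the Taylor value x²/2; the naive form is wrong by essentially 100%.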

NaN (Not a Number) and infinity are first-class values. The Patriot missile failure (1991, 28 deaths) was caused by accumulated rounding error in the system's binary representation of 0.1 seconds. The Ariane 5 rocket explosion (1996) was caused by a 64-bit-to-16-bit conversion overflow. Numerical analysis is not academic.


Chapter XI · Newton's method.

To solve f(x) = 0, iterate x_{n+1} = x_n − f(x_n)/f'(x_n). Geometrically: follow the tangent line at x_n to its x-intercept.

The method converges quadratically — the number of correct digits roughly doubles each step — when started near a simple root. For roots of high multiplicity, convergence degrades to linear. For poor starting points, the method can diverge or oscillate.
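Quadratic convergence is visible in the error sequence. A sketch for x² − 2 = 0, whose root is √2:

```python
def newton(f, fprime, x0, steps):
    """Newton iteration x <- x - f(x)/f'(x); returns the whole trajectory."""
    x = x0
    history = [x]
    for _ in range(steps):
        x = x - f(x) / fprime(x)
        history.append(x)
    return history

# Solve x^2 - 2 = 0 starting from x0 = 1.
hist = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0, 5)
root = 2.0 ** 0.5
errors = [abs(x - root) for x in hist]
```

Each error is roughly the square of the previous one: the count of correct digits doubles per step, and five steps reach machine precision.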

Newton's method generalises to systems via the Jacobian: x_{n+1} = x_n − J(x_n)^(−1) f(x_n). It is the workhorse of nonlinear root-finding and the basis of most optimisation algorithms via its application to ∇f = 0.

Modern variants: quasi-Newton methods (BFGS, L-BFGS) avoid the cost of computing the Jacobian by approximation. Trust-region methods stabilise convergence. The Levenberg-Marquardt algorithm interpolates between Newton and gradient descent and is the standard for nonlinear least squares.


Chapter XII · Runge-Kutta methods.

The standard tool for integrating ordinary differential equations numerically. Carl Runge (1895) and Wilhelm Kutta (1901) developed a family of methods that improve on Euler's first-order scheme.

The classical fourth-order method (RK4) takes four evaluations of f per step and achieves O(h⁴) global accuracy. It hits the sweet spot of accuracy versus cost for many practical problems and remains the default in introductory ODE solvers.
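RK4 in full. A sketch applying the classical step repeatedly to y' = −y, whose exact solution is e^(−t):

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y):
    four slope evaluations, combined with weights 1-2-2-1."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

decay = lambda t, y: -y   # test problem y' = -y, y(0) = 1
y, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    y = rk4_step(decay, t, y, h)
    t += h
err_rk4 = abs(y - math.exp(-1.0))
```

With h = 0.1 the error at t = 1 is already below 10⁻⁶, the O(h⁴) promise made concrete.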

Modern adaptive methods — Dormand-Prince (used in MATLAB's ode45), Cash-Karp, Verner — automatically adjust step size to keep local error within a tolerance. Stiff problems (with widely separated time scales) require implicit methods like BDF (backward differentiation formulae) or Rosenbrock methods.

The SUNDIALS library and SciPy's solve_ivp are the practical tools. Most ODE problems in industry are solved by code descended directly from these algorithms.


Chapter XIII · Finite element method.

The dominant numerical method for partial differential equations on complex geometries. Origins in 1940s structural engineering (Hrennikoff, Courant); systematised by Argyris and Clough in the 1950s and given a rigorous mathematical foundation by Strang and Fix (1973).

The technique. Discretise the domain into small elements (triangles, tetrahedra, hexahedra). Approximate the solution as a piecewise polynomial on the mesh. Convert the PDE to a system of linear (or nonlinear) equations by Galerkin projection. Solve.
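The pipeline (mesh, piecewise-linear basis, Galerkin system, solve) can be shown end-to-end in one dimension. A sketch for −u'' = 1 on (0,1) with zero boundary values; for this problem, linear elements happen to be exact at the nodes:

```python
def solve_tridiagonal(sub, diag, sup, rhs):
    """Thomas algorithm for a tridiagonal linear system."""
    n = len(diag)
    d, r = diag[:], rhs[:]
    for i in range(1, n):
        w = sub[i - 1] / d[i - 1]
        d[i] -= w * sup[i - 1]
        r[i] -= w * r[i - 1]
    x = [0.0] * n
    x[-1] = r[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (r[i] - sup[i] * x[i + 1]) / d[i]
    return x

def fem_poisson_1d(n):
    """Galerkin FEM with hat functions on a uniform mesh for
    -u'' = 1, u(0) = u(1) = 0: stiffness 2/h on the diagonal,
    -1/h off it, and a load of h per interior node."""
    h = 1.0 / (n + 1)
    diag = [2.0 / h] * n
    off = [-1.0 / h] * (n - 1)
    load = [h] * n
    return solve_tridiagonal(off, diag, off, load), h

u, h = fem_poisson_1d(9)
exact = [x * (1 - x) / 2 for x in (h * (i + 1) for i in range(9))]
max_err = max(abs(a - b) for a, b in zip(u, exact))
```

The nodal values match u(x) = x(1 − x)/2 to roundoff, a known special property of linear elements on this equation.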

The reward: arbitrary geometry, easy treatment of complex boundary conditions, direct refinement near features of interest, theoretical convergence guarantees.

FEM is the engineering analysis method. It runs the simulations behind every modern bridge, every aircraft wing, every car-crash analysis, every building structural calculation. Major commercial codes: Abaqus, Ansys, COMSOL, Nastran. The open-source FEniCS and deal.II are research and education staples.


Chapter XIV · Computational fluid dynamics.

The applied field that solves Navier-Stokes numerically. CFD developed in the 1960s and 1970s under the demands of weapons design and aerospace. It has since displaced wind tunnels for much of routine aerodynamic design.

The standard methods. Finite volume on structured or unstructured meshes is the workhorse for compressible flow. Finite element dominates incompressible flow. Spectral methods handle high-precision turbulence simulations on simple geometries.

Turbulence is the central difficulty. Direct numerical simulation (DNS) resolves all scales but costs scale as Re³ — prohibitive for engineering Reynolds numbers. Large eddy simulation (LES) and Reynolds-averaged Navier-Stokes (RANS) trade resolution for tractability. Closure models (Smagorinsky, k-ε, Spalart-Allmaras) embody the tradeoffs.

Weather and climate models are CFD on rotating spheres at planetary scale. The 2021 Nobel Prize in Physics went, in part, to Syukuro Manabe and Klaus Hasselmann for CFD-based climate modelling.


Chapter XV · Optimisation.

The branch concerned with finding the input that minimises (or maximises) a function, subject to constraints. The umbrella covers calculus of variations, linear programming, convex optimisation, integer programming, and stochastic search.

The classification. Smooth versus non-smooth. Convex versus non-convex. Constrained versus unconstrained. Continuous versus discrete. Deterministic versus stochastic. Each axis selects a different toolkit.

The unifying KKT conditions (Karush 1939; Kuhn-Tucker 1951) generalise Lagrange multipliers to inequality constraints and, under mild constraint qualifications, characterise the optima of smooth problems.

Optimisation is the algorithmic core of machine learning. Training a neural network is solving a high-dimensional non-convex optimisation problem. Stochastic gradient descent, Adam, and their variants are the methods that make modern deep learning work.


Chapter XVI · Linear programming.

Optimise a linear objective subject to linear inequality constraints. George Dantzig's simplex algorithm (1947) walks the vertices of the feasible polyhedron to the optimum. Khachiyan's ellipsoid method (1979) proved linear programming solvable in polynomial time; Karmarkar (1984) introduced the practical polynomial-time interior-point methods.

Applications saturate operations research. Airline scheduling, manufacturing planning, supply chain optimisation, portfolio construction (in the linear case), diet problems, network flows. The annual contribution of LP to the world economy is in the hundreds of billions of dollars.

Duality is the conceptual heart. Every LP has a dual problem with the same optimum value (by the strong duality theorem). The dual variables — Lagrange multipliers — have economic interpretations as marginal prices and underpin much of mathematical economics.
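Strong duality can be verified by hand on a small example. A sketch (the particular LP is a textbook illustration, not from the deck) that finds the primal optimum by enumerating vertices of the feasible polygon, then checks it against a dual certificate:

```python
from itertools import combinations

# Maximise 3x + 5y subject to
#   x <= 4,  2y <= 12,  3x + 2y <= 18,  x >= 0,  y >= 0.
# Each constraint is stored as (a, b, c) meaning a*x + b*y <= c.
constraints = [(1, 0, 4), (0, 2, 12), (3, 2, 18), (-1, 0, 0), (0, -1, 0)]

def vertices(cons):
    """Intersect constraint boundaries pairwise and keep feasible points;
    the simplex method walks exactly this vertex set."""
    pts = []
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue  # parallel boundaries
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + 1e-9 for a, b, c in cons):
            pts.append((x, y))
    return pts

primal_opt = max(3 * x + 5 * y for x, y in vertices(constraints))

# Dual certificate: multipliers u = (0, 3/2, 1) on the first three constraints
# satisfy u.A >= (3, 5) componentwise, so u.c is an upper bound on the primal.
dual_value = 0 * 4 + 1.5 * 12 + 1 * 18
```

The primal optimum (36, at the vertex (2, 6)) equals the dual bound exactly: strong duality on one slide.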

The largest commercial solvers (Gurobi, CPLEX, COPT) routinely handle problems with millions of variables and millions of constraints. The open-source HiGHS solver, released 2019, has caught up with commercial performance for many problem classes.


Chapter XVII · Convex optimisation.

The minimisation of a convex function over a convex set. The remarkable fact: every local minimum is a global minimum. Convex problems are, as a rule, tractable; non-convex problems, as a rule, are not.

The taxonomy of convex problems: linear programming (LP), quadratic programming (QP), second-order cone programming (SOCP), semidefinite programming (SDP), and the most general conic programming. Each generalises the previous and admits polynomial-time interior-point methods.

Boyd and Vandenberghe's Convex Optimization (2004; free PDF) is the textbook of record. The cvxpy and CVX modeling languages let practitioners specify convex problems in near-mathematical notation and have them solved.

Many ostensibly non-convex problems can be reformulated or relaxed as convex. The Goemans-Williamson SDP relaxation for max-cut (1995) achieves a 0.878 approximation ratio — best possible under the unique games conjecture. The lift-and-project hierarchy of Sherali-Adams and Lasserre formalises the practice.


Chapter XVIII · Calculus of variations.

Optimisation when the unknown is a function rather than a finite-dimensional vector. The objective is a functional — a function of a function — and the optimum is sought by setting the functional derivative to zero.

The founding problem: Johann Bernoulli's 1696 brachistochrone — what curve between two points minimises the time of descent under gravity? The answer is a cycloid. Newton, Leibniz, l'Hôpital, Jakob Bernoulli, and Tschirnhaus all submitted solutions.

Euler's 1744 Methodus inveniendi lineas curvas systematised the field. The Euler-Lagrange equation — a differential equation that any optimum must satisfy — is the central result.

The calculus of variations underlies classical mechanics (least-action principle), optical paths (Fermat's principle), quantum mechanics (path integrals), control theory, and modern machine learning (PINNs, neural ODEs). Hilbert's twentieth problem asked when variational problems have solutions; the modern answer involves Sobolev spaces and lower-semicontinuity.


Chapter XIX · Optimal control.

The branch concerned with steering a dynamical system to a desired state at minimum cost. Origins in 1950s aerospace engineering. Two pillars.

Pontryagin's maximum principle (Pontryagin et al., 1956): a necessary condition for optimal control, generalising the Euler-Lagrange equation to systems with controls and state constraints. Bellman's dynamic programming (Bellman 1953) recursively solves the problem by working backwards from the terminal state, producing an optimal feedback policy.

The methods landed Apollo on the Moon (Apollo guidance computer, 1969). They underlie the trajectory optimisation of every modern rocket, missile, robotic arm, and autonomous vehicle. Model predictive control (MPC) — solving a finite-horizon optimisation at every time step and applying the first action — has become the standard for industrial process control.

Reinforcement learning is, conceptually, optimal control with unknown dynamics. The Bellman equation is the central recursion in both fields.
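The Bellman recursion itself is tiny. A sketch of backward induction on an assumed stage-cost triangle (at each stage, step to one of the two adjacent entries in the row below; minimise total cost):

```python
# Assumed stage costs for the illustration: each row is one stage.
triangle = [
    [2],
    [3, 4],
    [6, 5, 7],
    [4, 1, 8, 3],
]

def min_path_cost(tri):
    """Bellman recursion V(s) = cost(s) + min over successors of V(s'),
    evaluated backwards from the terminal stage."""
    values = list(tri[-1])  # terminal values are just the last-row costs
    for row in reversed(tri[:-1]):
        values = [row[i] + min(values[i], values[i + 1])
                  for i in range(len(row))]
    return values[0]

best = min_path_cost(triangle)  # optimal path 2 -> 3 -> 5 -> 1 costs 11
```

One backward sweep replaces enumeration of every path; this is exactly the recursion that reinforcement learning reuses with unknown dynamics.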

Joseph Fourier (1768–1830) — heat conduction and the Fourier series

Chapter XX · Mathematical biology.

The application of mathematics to biological systems. Founded by D'Arcy Thompson's On Growth and Form (1917); modernised by Alan Turing's 1952 paper "The Chemical Basis of Morphogenesis," which derived spatial patterns from reaction-diffusion equations.

Standard topics: population dynamics (Lotka-Volterra predator-prey, 1925), epidemiology (Kermack-McKendrick SIR, 1927), neural dynamics (Hodgkin-Huxley axon model, 1952; Nobel 1963), population genetics (Fisher, Wright, Haldane). Hamilton's 1964 inclusive fitness theory is mathematical biology in disguise.

Modern fields: systems biology (network models of cellular regulation), cancer modelling (clonal evolution dynamics), computational neuroscience (neural coding, dynamics on networks), phylogenetics (algorithms for tree reconstruction).

The COVID-19 pandemic placed compartmental epidemiological models — descendants of Kermack-McKendrick — on every news broadcast. The forecasts were not always accurate; the underlying mathematics is correct and was being worked on long before March 2020.


Chapter XXI · Mathematical economics.

The mathematisation of economic theory accelerated after 1940 and won the discipline its centrality. Three pillars.

General equilibrium theory. Arrow and Debreu (1954) proved the existence of competitive equilibria in a multi-market economy under convexity assumptions. Nobel Prizes followed. The proof uses Kakutani's fixed-point theorem and helped establish the language of contemporary economics.

Game theory. Von Neumann and Morgenstern's Theory of Games and Economic Behavior (1944) founded it. Nash (1950) introduced the equilibrium concept that bears his name. Harsanyi, Selten, Aumann, Shapley, and others extended it. Game theory now structures industrial-organisation analysis, auction design (the FCC spectrum auctions, 1994), and matching markets (the National Resident Matching Program).

Macroeconomic dynamics. The dynamic stochastic general equilibrium (DSGE) models of the modern macro mainstream rest on stochastic optimal control under rational expectations.


Chapter XXII · Mathematical finance.

The applied subdiscipline that prices financial instruments and manages risk. Louis Bachelier's 1900 thesis "Théorie de la spéculation" proposed Brownian motion as a model for stock prices — five years before Einstein, and largely ignored at the time.

The field exploded after the early 1970s. Markowitz (1952) gave portfolio theory — the variance-of-returns framework that defined modern portfolio management. Sharpe and Lintner developed the capital asset pricing model. Black, Scholes, and Merton (1973) gave the option-pricing equation that founded the derivatives industry.

The mathematical content is stochastic calculus — Itô calculus, martingale methods, change of measure. The standard textbooks are Shreve's two volumes (2004) and Hull's Options, Futures, and Other Derivatives.

Long-Term Capital Management (1998 collapse) and the 2008 financial crisis demonstrated that the models can mislead — particularly when the assumed distributions of returns underweight tail events. The discipline survived, with humility added.


Chapter XXIII · The Black-Scholes equation.

The 1973 result that founded the modern derivatives industry. Fischer Black and Myron Scholes derived a partial differential equation for the price V of a European option on an asset following geometric Brownian motion:

∂V/∂t + ½σ²S²·∂²V/∂S² + rS·∂V/∂S − rV = 0

The equation has a closed-form solution for European calls and puts in terms of the cumulative normal distribution. Robert Merton contributed key extensions in the same year. Scholes and Merton received the 1997 Nobel Prize in Economics; Black had died in 1995 and was thus ineligible.
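The closed-form solution in code, with put-call parity as the correctness check. A sketch (the parameters are illustrative):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes(S, K, r, sigma, T):
    """Closed-form Black-Scholes prices for a European call and put."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    call = S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)
    put = K * math.exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)
    return call, put

# Illustrative inputs: S = 100, K = 100, r = 5%, sigma = 20%, T = 1 year.
call, put = black_scholes(100.0, 100.0, 0.05, 0.2, 1.0)

# No-arbitrage check: C - P must equal S - K*exp(-r*T) exactly.
parity_gap = abs(call - put - (100.0 - 100.0 * math.exp(-0.05)))
```

Parity holds to machine precision; it is the same no-arbitrage logic that produced the PDE in the first place.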

The technique — replicate the option payoff by dynamic trading in the underlying asset, observe that the replicating portfolio must earn the risk-free rate to avoid arbitrage, derive the PDE — is one of the most beautiful arguments in applied mathematics.

The Chicago Board Options Exchange opened in April 1973, one month before the Black-Scholes paper appeared. Forty years later the global derivatives market was measured in the hundreds of trillions of dollars notional. The mathematics did not cause this; it enabled it.


Chapter XXIV · Operations research.

The applied discipline that decides how to allocate scarce resources to achieve objectives. Origins in WWII Britain — Patrick Blackett's 1941 group studied submarine evasion, anti-aircraft gunnery, and convoy strategy mathematically. The Allies' radar and convoy analysis, refined by Blackett and others, materially shortened the war.

Postwar topics: queueing theory (Erlang 1909, Kendall 1953), inventory theory (Wagner-Whitin 1958), scheduling (Johnson's flow-shop algorithm, 1954), network flows (Ford-Fulkerson 1956), simulation (Monte Carlo, Metropolis et al. 1953).

Major industrial users: airlines (crew scheduling, fleet assignment), logistics (UPS's ORION system, claimed $300M annual savings), retail (inventory and replenishment), utilities (unit commitment), oil and gas (refinery optimisation).

OR as a department has waxed and waned in business schools. The methods have only become more pervasive — under different labels, including data science and AI engineering.


Chapter XXV · Inverse problems and tomography.

An inverse problem recovers an unknown cause from observed effects. Forward problem: given the cause, predict the data. Inverse problem: given the data, recover the cause.

Inverse problems are usually ill-posed — small data perturbations cause large solution perturbations. Tikhonov regularisation (1963) trades data fit for smoothness and is the standard remedy.

Computed tomography reconstructs a 3D image from X-ray projections taken at many angles. The mathematical core is the Radon transform (Johann Radon, 1917), which records the line integrals of a function. Cormack worked out the reconstruction mathematics and Hounsfield built the first CT scanner; they shared the 1979 Nobel Prize in Medicine.

The same techniques drive MRI image reconstruction, seismic imaging of the Earth's interior, electron microscopy reconstruction, and synthetic-aperture radar. Modern variants combine compressed sensing with deep learning to push reconstruction quality further than classical methods allow.


Chapter XXVI · Numerical linear algebra.

The branch concerned with solving systems Ax = b, computing eigenvalues, and factorising matrices on a computer. James Wilkinson founded modern numerical linear algebra in the 1950s and 1960s.

The standard toolkit. LU decomposition with partial pivoting for general linear systems. QR factorisation for least-squares problems. Cholesky for symmetric positive-definite systems. SVD (singular value decomposition) for everything difficult.

For large sparse problems: Krylov subspace methods — conjugate gradient, GMRES, BiCGStab, Lanczos, Arnoldi. The first practical sparse direct solvers (Duff, Erisman, Reid, 1986) and modern descendants like SuperLU, MUMPS, and PARDISO handle systems with millions of unknowns from PDE discretisations.
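Conjugate gradient is short enough to state in full. A matrix-free sketch on the 1D Laplacian, the archetypal symmetric positive-definite test problem (sizes and tolerances illustrative):

```python
def conjugate_gradient(matvec, b, tol=1e-10, max_iter=200):
    """CG for A x = b, A symmetric positive definite, accessed only
    through matrix-vector products (the Krylov-method discipline)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]            # residual b - A x for the zero initial guess
    p = r[:]
    rs = sum(v * v for v in r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol ** 2:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

N = 20
def laplacian(v):
    """Tridiagonal (2, -1) operator: the discrete 1D Laplacian, SPD."""
    return [2 * v[i] - (v[i - 1] if i > 0 else 0.0)
            - (v[i + 1] if i < N - 1 else 0.0) for i in range(N)]

b = [1.0] * N
x = conjugate_gradient(laplacian, b)
residual = max(abs(bi - ai) for bi, ai in zip(b, laplacian(x)))
```

No matrix is ever stored; only products A·v are needed, which is why Krylov methods scale to millions of unknowns.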

The BLAS-LAPACK stack (1979 onward) standardised numerical linear algebra at the library level. Almost every numerical computation in science and engineering, from ANSYS simulations to PyTorch matrix multiplications, ultimately calls into this stack.


Chapter XXVII · Eigenproblems and PageRank.

An eigenvalue problem finds vectors v and scalars λ with Av = λv. Eigenvalues describe vibrational modes, principal axes of inertia, equilibria of dynamical systems, and stationary distributions of Markov chains.

The most-cited applied eigenvalue problem: PageRank (Brin and Page, 1998). The web is a directed graph; the stationary distribution of a damped random walk on this graph defines a notion of "importance" that ranks pages. The mathematics is the dominant left eigenvector of a stochastic matrix — Perron-Frobenius theory, 1907.
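The damped random walk converges under plain power iteration. A sketch on an assumed four-link graph (damping 0.85 is the value reported in the original paper; the graph itself is illustrative):

```python
# Tiny link graph: A -> B, A -> C, B -> C, C -> A.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}

def pagerank(links, damping=0.85, iters=100):
    """Power iteration on the damped random-surfer chain: with probability
    `damping` follow a random outlink, otherwise teleport uniformly."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            share = damping * rank[p] / len(outs)
            for q in outs:
                new[q] += share
        rank = new
    return rank

pr = pagerank(links)
```

The result is the dominant eigenvector guaranteed by Perron-Frobenius: a probability distribution, with the most-linked-into page (C here) ranked first.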

Google made PageRank the basis of its search ranking and grew into a trillion-dollar company. The algorithm began as a 1996 PhD project at Stanford; its mathematical core had been known for ninety years. The example is canonical: a deep eigenproblem, applied at scale, can be transformative.

The same eigenvector machinery underlies spectral clustering (Shi-Malik 2000), the Laplacian eigenmaps of dimensionality reduction, Markov chain Monte Carlo convergence analysis, and quantum mechanics.


Chapter XXVIII · Reading list.


Chapter XXIX · Watch & read.

Watch · Visual introduction to the Fourier transform · 3Blue1Brown


Watch · Differential equations, a tourist's guide
Watch · Linear programming explained

And read

For a one-volume map of the field, the Princeton Companion to Applied Mathematics is the standard reference. Trefethen & Bau's Numerical Linear Algebra is the cleanest expository textbook in the discipline. Boyd & Vandenberghe's Convex Optimization is essential for anyone whose work touches optimisation. Brunton & Kutz's Data-Driven Science and Engineering is the modern bridge to data-driven applied mathematics.


The end of the deck.

Applied Mathematics — Volume IV, Deck 09 of The Deck Catalog. Set in Inter and Tiempos. Light gray-white paper at #f5f5f5; navy ink with industrial orange accent.

From oscillators to PageRank — thirty-two leaves of mathematics validated by airplanes, weather, and markets.

FINIS
