Vol. IV · Deck 08 · The Deck Catalog

Discrete math.

The mathematics of countable structures — sets, graphs, recurrences, and algorithms. The branch that rose with the computer and the branch the computer cannot do without.


Königsberg 1736 · Four-color 1976 · 32 pages

// 01 — Opening · What discrete math is.

The mathematics of objects you can count.

The discrete-continuous split runs through mathematics. On one side lie the integers, finite graphs, finite automata, and the combinatorial objects you can list one by one. On the other lie the reals, smooth manifolds, and the analytic machinery of calculus.

Discrete mathematics is the side the computer lives on. RAM is finite. Bits are 0 or 1. Programs are finite strings of finite instructions. Almost everything an undergraduate sees in CS theory — automata, complexity, cryptography, algorithms — is discrete mathematics in disguise.

The subject was not always central. Until the mid-20th century, "real" mathematics was continuous mathematics; combinatorics was a craft. The computer changed that. This deck traverses the standard topics — sets, counting, graphs, number theory, algorithms — emphasising the pieces that mattered before the computer and the pieces that matter because of it.


// 02 — Foundation · Sets.

The most general mathematical object: a collection of distinct things, called elements. The notation x ∈ S reads "x is in S." A set is determined by its elements alone — order and repetition do not matter.

The basic operations: union (A ∪ B), intersection (A ∩ B), difference (A \ B), complement, Cartesian product (A × B, the set of ordered pairs). The power set 𝒫(A) is the set of all subsets of A.

De Morgan's laws. (A ∪ B)ᶜ = Aᶜ ∩ Bᶜ; (A ∩ B)ᶜ = Aᶜ ∪ Bᶜ. The first algebraic identities a discrete-math student meets, and the dual structure that runs throughout the field.

For finite sets, cardinality |A| is just the count. For infinite sets, Cantor's theory takes over — but for discrete math, finite sets and countably infinite sets are usually all you need.
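The operations above map directly onto Python's built-in set type — a minimal sketch checking De Morgan's laws on a small universe (the universe U and the sets A, B are chosen for illustration):

```python
from itertools import chain, combinations

# Small universe and two subsets (illustrative choices).
U = set(range(10))
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

complement = lambda S: U - S

assert A | B == {1, 2, 3, 4, 5, 6}      # union
assert A & B == {3, 4}                  # intersection
assert A - B == {1, 2}                  # difference

# De Morgan's laws: (A ∪ B)ᶜ = Aᶜ ∩ Bᶜ and (A ∩ B)ᶜ = Aᶜ ∪ Bᶜ
assert complement(A | B) == complement(A) & complement(B)
assert complement(A & B) == complement(A) | complement(B)

# Power set: |𝒫(A)| = 2^|A|
power_set = list(chain.from_iterable(combinations(A, r) for r in range(len(A) + 1)))
assert len(power_set) == 2 ** len(A)    # 16 subsets of a 4-element set
```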


// 03 — Maps · Functions.

A function f : A → B assigns to each element of A exactly one element of B. The domain is A, the codomain is B, the image is the subset of B actually hit.

Three classes of function dominate discrete math.

Injective (one-to-one): no two domain elements map to the same image. Surjective (onto): every codomain element is hit. Bijective: both. Bijections between finite sets establish equal cardinality and underlie nearly every counting argument.

The pigeonhole principle: if you map n+1 pigeons into n holes, some hole receives two. Trivial in statement, decisive in application — Erdős and Szekeres's 1935 theorem on monotone subsequences, Ramsey theory, and the Dirichlet approximation theorem all follow from variants of pigeonhole.


// 04 — Structure · Relations.

A relation on A is a subset of A × A. Given a, b ∈ A, either (a,b) ∈ R or not. Three properties matter.

Reflexive: aRa for all a. Symmetric: aRb ⟹ bRa. Transitive: aRb & bRc ⟹ aRc.

A relation that is reflexive, symmetric, and transitive is an equivalence relation. It partitions A into equivalence classes — disjoint subsets whose union is A. Modular arithmetic is the canonical example: integers congruent mod n form an equivalence class.

A relation that is reflexive, antisymmetric (aRb & bRa ⟹ a = b), and transitive is a partial order. Subsets ordered by inclusion, divisibility on the positive integers, and the dependency graph of a build system are all partial orders. Partial orders organise dynamic programming, scheduling, and the theory of databases.


// 05 — Combinatorics · The art of counting.

Two rules cover most beginning combinatorics.

Sum rule. If A and B are disjoint, |A ∪ B| = |A| + |B|. Product rule. The number of pairs (a, b) with a ∈ A, b ∈ B is |A| · |B|.

The inclusion-exclusion principle handles overlap. For two sets: |A ∪ B| = |A| + |B| − |A ∩ B|. For n sets the formula alternates over all 2ⁿ intersections.

Most combinatorial problems reduce to a careful application of these three rules to a well-chosen bijection. The art lies in finding the bijection. Counting the number of binary strings of length n with no two consecutive 1s is Fibonacci. Counting parenthesisations of n+1 factors is Catalan. The reduction in each case is non-obvious until you see it.
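The Fibonacci reduction can be verified by brute force for small n — a minimal sketch:

```python
from itertools import product

def no_consecutive_ones(n):
    """Count length-n binary strings with no two consecutive 1s, by the
    recurrence f(n) = f(n-1) + f(n-2) with f(0) = 1, f(1) = 2."""
    a, b = 1, 2
    for _ in range(n):
        a, b = b, a + b
    return a

def brute_force(n):
    """Enumerate all 2^n strings and count directly."""
    return sum(1 for s in product("01", repeat=n) if "11" not in "".join(s))

for n in range(10):
    assert no_consecutive_ones(n) == brute_force(n)

# The counts are Fibonacci numbers shifted by two.
print([no_consecutive_ones(n) for n in range(8)])  # [1, 2, 3, 5, 8, 13, 21, 34]
```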


// 06 — Permutation & combination · Order matters, sometimes.

The number of orderings of n distinct objects is n! = n × (n−1) × … × 1. 5! = 120. 10! = 3,628,800. The factorial grows fast.

Permutations P(n, r) = n!/(n−r)! count ordered selections of r from n. Combinations C(n, r) = n!/(r!(n−r)!) count unordered selections — the same divided by r! to remove order.

The combination C(n, r), written (n choose r) or nCr, is the binomial coefficient. It satisfies Pascal's identity: C(n, r) = C(n−1, r) + C(n−1, r−1). Pascal's triangle is the array of binomial coefficients arranged so each entry is the sum of the two above.

The binomial theorem expands (x + y)ⁿ as Σ C(n, k) xᵏ yⁿ⁻ᵏ. Newton extended it to non-integer exponents in 1665 — the result is one of the founding identities of analysis.
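Python's math.comb makes the identities one-line checks — a quick sketch:

```python
from math import comb

n = 10

# Pascal's identity: C(n, r) = C(n-1, r) + C(n-1, r-1)
for r in range(1, n):
    assert comb(n, r) == comb(n - 1, r) + comb(n - 1, r - 1)

# Binomial theorem at x = 2, y = 3: (2 + 3)^n = Σ C(n, k) 2^k 3^(n-k)
assert sum(comb(n, k) * 2**k * 3**(n - k) for k in range(n + 1)) == 5**n

# Row sums of Pascal's triangle: Σ C(n, k) = 2^n
assert sum(comb(n, k) for k in range(n + 1)) == 2**n
```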


// 07 — Encoding sequences · Generating functions.

A generating function packages a sequence (a₀, a₁, a₂, …) into a single formal power series f(x) = Σ aₙ xⁿ. Operations on the series correspond to operations on the sequence.

Addition of series adds sequences. Multiplication produces convolutions — useful for counting compositions. Differentiation indexes by n. The closed form 1/(1−x) = 1 + x + x² + … encodes the constant sequence; x/(1−x−x²) encodes Fibonacci.
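Coefficient extraction is a few lines of formal power-series division — a sketch (series_coeffs is an illustrative helper, not a library function; exact for leading denominator coefficient ±1):

```python
def series_coeffs(numer, denom, n_terms):
    """Coefficients of numer(x)/denom(x) as a formal power series.
    Solves denom[0]*a_n = numer_n - Σ_{k>=1} denom[k]*a_{n-k}."""
    a = []
    for n in range(n_terms):
        num_n = numer[n] if n < len(numer) else 0
        acc = num_n - sum(denom[k] * a[n - k]
                          for k in range(1, min(n, len(denom) - 1) + 1))
        a.append(acc // denom[0])   # exact integer arithmetic for denom[0] = ±1
    return a

# x / (1 - x - x²) encodes the Fibonacci sequence.
fib = series_coeffs([0, 1], [1, -1, -1], 10)
print(fib)  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]

# 1 / (1 - x) encodes the constant sequence 1, 1, 1, ...
ones = series_coeffs([1], [1, -1], 5)
print(ones)  # [1, 1, 1, 1, 1]
```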

Herbert Wilf's generatingfunctionology (1990) made the technique standard for combinatorialists: it converts hard recurrences into algebraic manipulation. Graham, Knuth, and Patashnik's Concrete Mathematics (1989) trained a generation of computer scientists in the discipline.

Exponential generating functions Σ aₙ xⁿ/n! handle labelled structures. Dirichlet series Σ aₙ/nˢ handle multiplicative structures and tie discrete combinatorics to analytic number theory.

The four-colour theorem — first proof to require a computer (1976)

// 08 — Self-reference · Recurrence relations.

A sequence defined by a rule that expresses each term in terms of earlier ones. The Fibonacci recurrence: F₀ = 0, F₁ = 1, Fₙ = Fₙ₋₁ + Fₙ₋₂.

Linear recurrences with constant coefficients have closed-form solutions. The Fibonacci numbers: Fₙ = (φⁿ − ψⁿ)/√5, where φ = (1+√5)/2 and ψ = (1−√5)/2. Binet's formula, 1843.

The technique generalises. The characteristic polynomial of the recurrence has roots r₁, …, rₖ; the general solution is Σ cᵢ rᵢⁿ for constants cᵢ determined by initial conditions.
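A quick check of the closed form against the recurrence — floating-point doubles keep the rounding exact for moderate n:

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2    # golden ratio, root of x² = x + 1
psi = (1 - sqrt(5)) / 2    # conjugate root

def fib_binet(n):
    """Binet's closed form from the characteristic roots."""
    return round((phi**n - psi**n) / sqrt(5))

def fib_iter(n):
    """Direct iteration of the recurrence."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(40):
    assert fib_binet(n) == fib_iter(n)
print(fib_binet(20))  # 6765
```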

Recurrences are how algorithms describe their cost. Mergesort: T(n) = 2T(n/2) + O(n). The master theorem (Bentley-Haken-Saxe 1980) gives a quick recipe for the asymptotic solutions of divide-and-conquer recurrences.


// 09 — Networks · Graph theory.

The richest object in discrete mathematics. A graph G = (V, E) is a set of vertices with a set of edges connecting pairs. Directed graphs have ordered edges. Weighted graphs attach numbers to edges.

Graphs model: road networks, social ties, web links, electrical circuits, dependency relations, finite-state machines, atomic bonds. Almost every domain of computer science and a great deal of operations research reduce to graph problems.

The handshake lemma: in any graph, the sum of vertex degrees equals 2|E|. Hence the number of odd-degree vertices is even. Trivial; consequential — Euler's bridge proof depends on it.
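The lemma is a one-line check on any edge list — a sketch on a small random graph (seeded for determinism; the graph is illustrative):

```python
import random
from collections import defaultdict

random.seed(0)
vertices = range(8)
# Random simple graph: each pair joined with probability 0.4.
edges = {(u, v) for u in vertices for v in vertices
         if u < v and random.random() < 0.4}

degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# Handshake lemma: Σ deg(v) = 2|E|.
assert sum(degree.values()) == 2 * len(edges)
# Corollary: the number of odd-degree vertices is even.
assert sum(1 for v in vertices if degree[v] % 2 == 1) % 2 == 0
```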

The complete graph Kₙ has every possible edge. The complete bipartite graph K_{m,n} has two parts with all edges across. The classification of graphs by forbidden substructures begins with these.


// 10 — Founding problem · Eulerian paths.

The 1736 result, restated graph-theoretically. An Eulerian path is a walk that crosses every edge exactly once. Euler's theorem: a connected graph has an Eulerian path iff it has zero or two vertices of odd degree. It has an Eulerian circuit iff all vertices have even degree.

The Hamiltonian path — visiting every vertex exactly once — is the harder cousin. Whereas Eulerian paths can be detected in linear time, the Hamiltonian path problem is NP-complete (Karp 1972). The asymmetry is one of the deepest in discrete mathematics.

Hierholzer's algorithm (1873) finds an Eulerian circuit in linear time when one exists. The Chinese postman problem — find a shortest closed walk traversing every edge at least once — generalises and remains tractable. The travelling salesman problem — its vertex-visit cousin — is hard.
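A minimal sketch of Hierholzer's algorithm on an undirected graph stored as adjacency lists (the example graph — two triangles sharing a vertex — is chosen so every degree is even):

```python
from collections import defaultdict

def eulerian_circuit(adj, start):
    """Hierholzer's algorithm: walk until stuck, backtrack, splice in cycles.
    adj maps vertex -> list of neighbours; edges are consumed as they are used."""
    stack, circuit = [start], []
    while stack:
        v = stack[-1]
        if adj[v]:
            u = adj[v].pop()
            adj[u].remove(v)        # undirected: consume the reverse edge too
            stack.append(u)
        else:
            circuit.append(stack.pop())
    return circuit[::-1]

# Two triangles glued at vertex 2: all degrees even, so a circuit exists.
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)]
adj = defaultdict(list)
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

tour = eulerian_circuit(adj, 0)
assert tour[0] == tour[-1] == 0 and len(tour) == len(edges) + 1
# Every edge is traversed exactly once.
used = sorted(tuple(sorted(p)) for p in zip(tour, tour[1:]))
assert used == sorted(tuple(sorted(e)) for e in edges)
```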


// 11 — Procedures on graphs · Graph algorithms.

The standard repertoire, taught in every algorithms course.

Breadth-first search (Moore 1959, Lee 1961): explore neighbours in order of distance from the source; finds shortest paths in unweighted graphs in O(V + E). Depth-first search (systematised by Tarjan 1972): explore as far as possible before backtracking; underlies algorithms for connected components, topological sort, and biconnectivity.

Dijkstra's algorithm (1959): single-source shortest paths in graphs with non-negative weights, O((V+E) log V) with a priority queue. Bellman-Ford (1958): the same with negative weights, O(VE). Floyd-Warshall: all-pairs, O(V³).
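A minimal Dijkstra with Python's heapq — lazy deletion, skipping stale queue entries (the example graph and weights are illustrative):

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths; adj[u] is a list of (v, weight), weight >= 0."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale entry: a shorter path was found
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

graph = {
    "a": [("b", 7), ("c", 9), ("f", 14)],
    "b": [("c", 10), ("d", 15)],
    "c": [("d", 11), ("f", 2)],
    "d": [("e", 6)],
    "f": [("e", 9)],
}
assert dijkstra(graph, "a") == {"a": 0, "b": 7, "c": 9, "f": 11, "d": 20, "e": 20}
```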

Prim and Kruskal find minimum spanning trees. Edmonds-Karp and push-relabel solve maximum flow. Hopcroft-Karp finds maximum bipartite matchings. The vocabulary is shared across operations research, networking, and machine learning.


// 12 — Acyclic graphs · Trees.

A tree is a connected acyclic graph. Equivalently: a connected graph on n vertices with exactly n−1 edges. Equivalently: a graph in which every two vertices are connected by exactly one simple path.

Trees are the workhorse data structure. Filesystems are trees. Decision trees are trees. The Huffman codes that compress your files are trees. Heaps, B-trees, red-black trees, tries — all trees.

Cayley's formula (1889): the number of distinct labelled trees on n vertices is nⁿ⁻². For n=4 that's 16; for n=10 that's 100 million. The proof — by Prüfer's bijective encoding — is one of the most elegant arguments in combinatorics.
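Cayley's formula can be checked directly by decoding every Prüfer sequence — the bijection makes the count immediate. A sketch for n = 5 (prufer_decode is an illustrative helper implementing the standard O(n²) decode):

```python
from itertools import product

def prufer_decode(seq, n):
    """Decode a Prüfer sequence over labels 0..n-1 into a tree's edge list."""
    degree = [1] * n
    for x in seq:
        degree[x] += 1
    edges = []
    for x in seq:
        leaf = min(v for v in range(n) if degree[v] == 1)  # smallest current leaf
        edges.append((leaf, x))
        degree[leaf] -= 1
        degree[x] -= 1
    u, v = [v for v in range(n) if degree[v] == 1]         # two vertices remain
    edges.append((u, v))
    return edges

n = 5
trees = {frozenset(frozenset(e) for e in prufer_decode(seq, n))
         for seq in product(range(n), repeat=n - 2)}
assert len(trees) == n ** (n - 2)   # Cayley: 125 labelled trees on 5 vertices
```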

A spanning tree of a graph is a subgraph that is a tree containing all vertices. The number of spanning trees is given by the matrix tree theorem (Kirchhoff 1847) as a determinant of the graph Laplacian.


// 13 — On the page · Planar graphs.

A graph is planar if it can be drawn in the plane with no edge crossings. Some graphs are not — K₅ (the complete graph on 5 vertices) and K_{3,3} (the bipartite graph 3-by-3) are the canonical non-examples.

Kuratowski's theorem (1930): a graph is planar iff it contains no subdivision of K₅ or K_{3,3}. The result reduces an infinite question (drawability) to a finite check.

Planar graphs satisfy Euler's formula V − E + F = 2 (the topological invariant translated to graphs). They have edge density at most 3V − 6, so they are sparse. They admit linear-time algorithms for many problems that are hard on general graphs.

The most famous theorem about planar graphs is the four-color theorem.


// 14 — A controversy · The four-color theorem.

Posed by Francis Guthrie in 1852. Statement: every map drawn on the plane can be coloured with four colours so that no two adjacent regions share a colour. Equivalently: every planar graph is 4-colourable.

Kempe published a proof in 1879 that stood for eleven years before Heawood found the flaw in 1890, salvaging from Kempe's method the five-colour theorem. The full conjecture resisted for 124 years.

Kenneth Appel and Wolfgang Haken announced a proof in 1976. The proof reduced the problem to checking 1,936 unavoidable configurations and verified each by computer. About 1,200 hours of IBM 360 time. The first major theorem with a proof no human could check by hand.

The proof was philosophically controversial — does a computer-verified case analysis count as a proof? — but technically sound. Robertson, Sanders, Seymour, and Thomas (1996) gave a simpler proof, still computer-assisted. Georges Gonthier formally verified the entire proof in Coq in 2005, removing the last technical reservation and making the four-colour theorem one of the first major mathematical theorems to be fully machine-checked.

A graph — the basic object of discrete mathematics

// 15 — Number theory · Modular arithmetic.

Two integers a and b are congruent modulo n, written a ≡ b (mod n), if n divides a − b. Congruence partitions ℤ into n equivalence classes ℤ/nℤ.

The arithmetic of ℤ/nℤ is the workhorse of computer science. Integer overflow, hash functions, error-correcting codes, and almost all of cryptography happen here.

Three classical results. Fermat's little theorem (1640): if p is prime and gcd(a,p) = 1, then a^(p−1) ≡ 1 (mod p). Euler's generalisation: a^φ(n) ≡ 1 (mod n) when gcd(a,n) = 1, where φ is Euler's totient. Chinese remainder theorem: a system of congruences with pairwise coprime moduli has a unique solution mod the product.
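All three results can be checked directly with Python's built-in modular exponentiation (the moduli are illustrative; the congruence system is Sun Tzu's classical example):

```python
from math import gcd

# Fermat's little theorem: a^(p-1) ≡ 1 (mod p) for prime p, gcd(a, p) = 1.
p = 101
for a in range(2, 20):
    assert pow(a, p - 1, p) == 1

# Euler's generalisation: a^φ(n) ≡ 1 (mod n) when gcd(a, n) = 1.
def euler_phi(n):
    """Euler's totient by direct count (fine for small n)."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

n = 36
for a in (5, 7, 11):
    assert gcd(a, n) == 1 and pow(a, euler_phi(n), n) == 1

# Chinese remainder theorem: x ≡ 2 (mod 3), x ≡ 3 (mod 5), x ≡ 2 (mod 7)
# has a unique solution mod 105.
x = next(x for x in range(3 * 5 * 7)
         if x % 3 == 2 and x % 5 == 3 and x % 7 == 2)
print(x)  # 23
```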

RSA public-key cryptography (1977) is one paragraph of modular arithmetic, given a few millennia of mathematical infrastructure to land on.


// 16 — Chance on finite spaces · Discrete probability.

A discrete probability space is a finite or countable set Ω together with weights p(ω) ≥ 0 summing to 1. An event is a subset of Ω; its probability is the sum of weights of its elements.

The basic theorems carry over from continuous probability. Linearity of expectation: E[X + Y] = E[X] + E[Y], regardless of independence. Markov's inequality: P(X ≥ a) ≤ E[X]/a for non-negative X. Chebyshev, Chernoff, Hoeffding: increasingly sharp tail bounds for sums of independent random variables.

The probabilistic method (Erdős, 1947) proves combinatorial existence by showing the probability of existence is positive. The most famous early use: Erdős's lower bound on Ramsey numbers — the existence of a graph whose colouring has no large monochromatic clique, established by counting bad colourings rather than constructing the good graph. The technique has been generative ever since.


// 17 — Combinatorial games · Two-player perfect-information.

Combinatorial game theory studies games of pure strategy with no hidden information and no chance. Nim, Hex, Go, chess, Connect-Four — all combinatorial games in this sense.

The fundamental result is Sprague-Grundy theory (Sprague 1935; Grundy 1939): every position in an impartial game is equivalent to a Nim heap of some size, the Grundy value. Disjunctive sums of games have Grundy values that XOR.
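A sketch of Grundy values for the subtraction game with moves {1, 2, 3} — a standard toy example chosen for illustration, where the values cycle as n mod 4:

```python
from functools import lru_cache

MOVES = (1, 2, 3)   # subtraction game: remove 1, 2, or 3 tokens from a heap

def mex(s):
    """Minimum excludant: smallest non-negative integer not in s."""
    g = 0
    while g in s:
        g += 1
    return g

@lru_cache(maxsize=None)
def grundy(n):
    """Grundy value = mex of the Grundy values of reachable positions."""
    return mex({grundy(n - m) for m in MOVES if m <= n})

assert [grundy(n) for n in range(8)] == [0, 1, 2, 3, 0, 1, 2, 3]  # n mod 4

# Sprague-Grundy on sums: the player to move loses exactly when the
# XOR of the components' Grundy values is zero.
assert grundy(5) ^ grundy(9) == 0     # 1 ^ 1 — a losing position for the mover
assert grundy(5) ^ grundy(10) != 0
```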

Surreal numbers (Conway, 1976) extend the theory to partisan games, embedding both real numbers and ordinals in a single algebraic structure. On Numbers and Games (1976) and Winning Ways (Berlekamp, Conway, Guy, 1982) are the founding texts.

Connect-Four was solved by Allis in 1988 (the first player wins). Checkers was weakly solved by Schaeffer's team in 2007 (perfect play draws). Chess and Go remain unsolved.


// 18 — Algebra of {0, 1} · Boolean algebra.

The two-element algebra with operations AND (·), OR (+), and NOT (¬). Identities: x · x = x, x + x = x, distributivity, De Morgan, complementation.

Disjunctive normal form: every Boolean function can be written as an OR of ANDs of variables and their negations. Conjunctive normal form dualises. SAT — given a Boolean formula, is it satisfiable? — is the prototypical NP-complete problem (Cook 1971; Levin 1973).
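Satisfiability can be decided by brute force in 2ⁿ time — which is exactly why SAT's NP-completeness matters. A sketch with clauses as tuples of signed integers (a common DIMACS-style convention, used here for illustration):

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Decide satisfiability of a CNF formula by trying all 2^n assignments.
    Literal +i means variable i, -i its negation (variables numbered 1..n)."""
    for bits in product((False, True), repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            return bits        # a satisfying assignment
    return None                # unsatisfiable

# (x1 ∨ ¬x2) ∧ (x2 ∨ x3) ∧ (¬x1 ∨ ¬x3) is satisfiable.
assert brute_force_sat([(1, -2), (2, 3), (-1, -3)], 3) is not None
# (x1) ∧ (¬x1) is not.
assert brute_force_sat([(1,), (-1,)], 1) is None
```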

Boolean algebra is the abstract foundation of digital logic. Every digital circuit implements a Boolean function. The minimisation of Boolean expressions — Karnaugh maps, Quine-McCluskey, BDDs — was a first-class research topic for decades and is now embedded in every chip-design toolchain.


// 19 — Hardware · Logic gates.

Claude Shannon's 1937 master's thesis "A Symbolic Analysis of Relay and Switching Circuits" demonstrated that Boolean algebra describes electrical switching networks. AND, OR, NOT became circuit primitives. NAND alone is functionally complete: every Boolean function can be built from NAND gates.
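NAND-completeness is easy to verify exhaustively — a sketch building NOT, AND, OR, and a half-adder from NAND alone:

```python
def nand(a, b):
    return 1 - (a & b)

# The three classical gates, each from NAND alone.
def not_(a):    return nand(a, a)
def and_(a, b): return nand(nand(a, b), nand(a, b))
def or_(a, b):  return nand(nand(a, a), nand(b, b))

def half_adder(a, b):
    """Sum and carry bits of a + b, built from the derived gates."""
    xor = and_(or_(a, b), nand(a, b))   # a XOR b = (a OR b) AND (a NAND b)
    return xor, and_(a, b)

for a in (0, 1):
    for b in (0, 1):
        assert not_(a) == 1 - a
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        s, c = half_adder(a, b)
        assert a + b == 2 * c + s       # the two bits really do add
```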

From gates: half-adders, full-adders, ripple-carry adders, multiplexers, decoders, latches, flip-flops, registers, ALUs, CPUs. The hierarchy is unbroken from the 0/1 algebra to the chip on your desk.

Modern processors contain on the order of 10¹⁰–10¹¹ transistors. Each is roughly a Boolean primitive. The full Boolean function the processor computes — given inputs, what outputs — is intractable to write down but exact in description. Discrete mathematics is the literal language of the artefact.


// 20 — Procedures · Algorithm design.

The five paradigms of algorithm design.

Divide and conquer. Split, recurse, combine. Mergesort, quicksort, fast Fourier transform, Strassen's matrix multiplication.

Dynamic programming. Solve subproblems, memoise, combine. Bellman 1953. Edit distance, longest common subsequence, optimal binary search trees, Bellman-Ford, Floyd-Warshall.

Greedy. At each step, take the locally best choice. Works when the problem has matroid structure. Kruskal, Prim, Huffman, scheduling.

Backtracking. Search a tree of partial solutions, prune bad branches. SAT solvers, constraint satisfaction, n-queens.

Reduction. Convert one problem to another already solved. The basis of NP-completeness theory and the practical workhorse of much algorithm engineering.


// 21 — Asymptotic analysis · Big-O notation.

Big-O notation describes the growth rate of an algorithm's resource use. f(n) = O(g(n)) means there exist constants c, n₀ such that |f(n)| ≤ c|g(n)| for all n ≥ n₀. The notation absorbs constants and lower-order terms.

The standard hierarchy: O(1) < O(log n) < O(√n) < O(n) < O(n log n) < O(n²) < O(n³) < O(2ⁿ) < O(n!). Algorithms beyond O(n³) are rarely practical at scale.

Donald Knuth popularised the notation in computer science with The Art of Computer Programming (1968 onward). Big-O is asymmetric — n = O(n²) is true but uninformative — so refinements Ω (lower bound), Θ (tight bound), and o (strictly slower) are used when needed.

The P versus NP question — does every problem checkable in polynomial time admit a polynomial-time algorithm? — is the central open question of computer science and one of the seven Millennium Prize Problems.


// 22 — Order n elements · Sorting algorithms.

The most-studied algorithmic problem. Inputs: a list of n comparable elements. Output: the same list in order.

O(n²) algorithms: bubble sort, insertion sort, selection sort. Trivial to implement, slow for large n, fine for small. Insertion sort is the workhorse for arrays of size ≤ 20.

O(n log n) algorithms: mergesort (von Neumann 1945), heapsort (Williams 1964), quicksort (Hoare 1961). Quicksort is fastest in practice on average but O(n²) in the worst case; randomised pivot selection makes the worst case essentially impossible. Mergesort is stable and parallelises well; it underlies most production library sorts.
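Mergesort in a dozen lines — a minimal sketch of the split-recurse-merge pattern:

```python
def mergesort(xs):
    """O(n log n), stable: split in half, sort each half, merge."""
    if len(xs) <= 1:
        return xs[:]
    mid = len(xs) // 2
    left, right = mergesort(xs[:mid]), mergesort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:          # <= keeps the sort stable
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]  # one side is already empty

import random
random.seed(1)
data = [random.randrange(100) for _ in range(50)]
assert mergesort(data) == sorted(data)
```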

Comparison-based sorting cannot beat Ω(n log n) — a classical lower-bound result via decision trees. Radix sort and counting sort beat the bound by exploiting properties of the keys.

Modern library implementations (Timsort in Python and Java; pdqsort in Rust) are hybrids that switch strategies depending on the input.

The Seven Bridges of Königsberg — Euler's 1736 paper founded graph theory

// 23 — Discrete secrets · Cryptography preview.

Modern cryptography is applied number theory and applied algorithm design. Two pillars.

Public-key cryptography. RSA (Rivest, Shamir, Adleman, 1977) rests on the hardness of factoring n = pq when p and q are large primes. Diffie-Hellman (1976) rests on the discrete logarithm problem. Elliptic-curve cryptography (Koblitz, Miller, 1985) replaces multiplicative groups with elliptic-curve point groups, reducing key sizes drastically.

Symmetric cryptography. Block ciphers — DES (1977), AES (Rijmen-Daemen, 2001) — and stream ciphers handle bulk encryption with shared keys.

The threat horizon. Shor's algorithm (1994) factors integers in polynomial time on a quantum computer. The day a sufficiently large quantum computer exists, RSA and ECC die simultaneously. Post-quantum cryptography — lattice-based, hash-based, code-based — is being standardised by NIST; CRYSTALS-Kyber and CRYSTALS-Dilithium were standardised in 2024 as ML-KEM and ML-DSA.


// 24 — Detecting errors · Coding theory.

The mathematics of communicating reliably over noisy channels. Claude Shannon's 1948 paper "A Mathematical Theory of Communication" founded the field.

The basic trade: append parity bits to messages. The redundancy lets the receiver detect — or correct — errors introduced by the channel. Minimum distance d means up to ⌊(d−1)/2⌋ errors can be corrected; up to d−1 can be detected.

Hamming codes (1950): the simplest single-error-correcting linear codes, used in early computer memories. Reed-Solomon codes (1960): block codes over finite fields with strong burst-error correction; standard in CDs, DVDs, QR codes, deep-space communication. BCH, convolutional, turbo, LDPC, and polar codes form the modern repertoire. Polar codes (Arıkan, 2009) are the first explicit codes proven to achieve channel capacity and are the basis of 5G control channels.


// 25 — A specific code · Hamming codes.

Designed by Richard Hamming at Bell Labs around 1947, in frustration with the Bell Model V relay computer halting on weekend runs after a single bit-flip. Hamming's first code added 3 parity bits to 4 data bits — the (7, 4) Hamming code — to detect and correct any single-bit error.

The construction is elegant. The 7 bits are arranged so that 3 distinct parity checks cover overlapping sets, and the syndrome (the pattern of failed checks) directly identifies the bit position in error.
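A sketch of the (7, 4) code with parity bits at positions 1, 2, and 4 — the standard layout, in which the syndrome reads off the error position directly:

```python
def hamming74_encode(d):
    """Encode 4 data bits as 7 bits; parity bits at positions 1, 2, 4 (1-indexed)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute the three checks; the syndrome is the 1-indexed error position."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1             # flip the offending bit
    return [c[2], c[4], c[5], c[6]]      # recover the data bits

data = [1, 0, 1, 1]
code = hamming74_encode(data)
for pos in range(7):                     # any single-bit flip is corrected
    corrupted = code[:]
    corrupted[pos] ^= 1
    assert hamming74_correct(corrupted) == data
assert hamming74_correct(code) == data   # the clean codeword decodes too
```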

Hamming distance — the number of positions in which two codewords differ — became a basic notion of coding theory and machine learning. K-nearest-neighbour classification, locality-sensitive hashing, and Bloom filters all use Hamming distance or close cousins.

Hamming's other contribution to the field: The Art of Doing Science and Engineering (1997), the lectures from his capstone course at the Naval Postgraduate School. Read it.


// 26 — Discrete & combinatorial · Discrete optimisation.

The branch concerned with finding the best object in a discrete set. Travelling salesman, knapsack, set cover, bin packing, vehicle routing, scheduling. Most are NP-hard.

The standard responses. Exact algorithms via branch-and-bound or integer programming for small instances. Approximation algorithms with proven performance ratios — greedy gives a (ln n)-approximation for set cover, and improving it is hard (Feige 1998). Heuristics — simulated annealing, tabu search, genetic algorithms — for large instances in practice.

The Concorde solver has solved travelling-salesman instances with 85,900 cities. The MIP solvers Gurobi and CPLEX power large parts of the operations-research industry. The technical and engineering depth of modern discrete optimisation is undercovered in undergraduate curricula.


// 27 — Convex relaxation · Linear programming.

Linear programming optimises a linear objective subject to linear inequality constraints. The feasible region is a polyhedron; the optimum is at a vertex.

Continuous, but central to discrete optimisation as the relaxation of choice for integer problems. George Dantzig's simplex algorithm (1947) is the founding result. Khachiyan's ellipsoid method (1979) showed LP is in polynomial time; Karmarkar's interior-point method (1984) made polynomial-time LP fast in practice.

LP duality. Every LP has a dual LP whose optimum equals the primal optimum (when both are feasible). The duality theorem is a deep min-max result and the basis of much of optimisation theory.

For discrete problems, LP relaxation drops the integrality constraints, solves the LP, and rounds. For some problems (vertex cover, set cover) this gives constant-factor approximation ratios; for max-cut, the stronger semidefinite relaxation of Goemans and Williamson (1995) achieves a 0.878 ratio that is optimal under the Unique Games Conjecture.


// 28 — Library · Reading list.


// 29 — Watch · Watch & read.

Watch · Graph theory from a computer science perspective


Watch · Permutations and combinations explained
Watch · Big-O notation in six minutes

And read

Kleinberg & Tardos's Algorithm Design is the cleanest undergraduate textbook on algorithms. CLRS is the reference. Concrete Mathematics rewards the patient reader and is one of the genuine pleasures of computer-science writing. Moore & Mertens's The Nature of Computation is the best single book combining algorithms, complexity, and the physics of computation.


// 30 — Constants · The constants.

What discrete mathematics has taught:

1. Bijection is the deepest tool in counting. Every clean combinatorial result is, on inspection, a bijection between two sets we already know how to count.

2. Recurrences capture structure. A problem of size n broken into smaller instances of itself is the unifying signature of dynamic programming, divide-and-conquer, and most analytical combinatorics.

3. Asymptotics dominate constants. An O(n log n) algorithm beats an O(n²) algorithm at large enough n, regardless of the constants. The lesson is now folklore in software engineering — though for small inputs the constants still win, and the folklore is frequently misapplied there.

4. Discrete and continuous interact. Linear programming relaxes discrete problems. Generating functions package combinatorial sequences as analytic objects. Spectral graph theory uses continuous spectrum to learn discrete structure. The boundary is porous and traffic flows both ways.

5. Hardness is real. NP-completeness is not an artefact of imperfect technique. P ≠ NP — the field's working hypothesis — asserts that some problems admit no polynomial-time algorithm at all.


The end of the deck.

Discrete Mathematics — Volume IV, Deck 08 of The Deck Catalog. Set in Inter and JetBrains Mono. White paper at #fafafa; cyan accent. Section markers as code-comments.

From sets to sorting — thirty-two leaves of the mathematics of countable things.

FINIS
