6.4 What the Negative Layer Reveals
The separation lattice of Figure 6.1 carries a meta-theorem about the structure of formal learning theory, which we can now state explicitly.
Of the \(\binom{k}{2}\) potential pairwise implications among the \(k\) major learning paradigms and complexity measures in the lattice, only 13 have been either established or refuted with witnesses. The remaining pairs are either unrelated (operating on different mathematical types) or connected by non-implicational relations (analogy, measurement, assumption).
This sparsity is not an artifact of incomplete knowledge. It reflects a genuine structural fact: the major paradigms of learning theory are largely incomparable. PAC learning and Gold-style identification operate on different data models (i.i.d. samples vs. infinite enumerations). Online learning and exact learning use different interaction protocols (adversarial instances vs. query access). VC dimension and SQ dimension measure different properties (combinatorial shattering vs. correlation structure).
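The last contrast, shattering versus correlation structure, can be checked in a few lines. Under the uniform distribution on \(\{0,1\}^n\), any two distinct parity functions are exactly uncorrelated, which is the correlation-structure fact that SQ lower bounds exploit. The sketch below (plain Python; the helper name `parity` and the chosen index sets are illustrative, not from the text) verifies this for one pair by brute-force enumeration.

```python
from itertools import product

def parity(S, x):
    """Parity (XOR) of the bits of x indexed by S, as a +/-1 value."""
    return 1 - 2 * (sum(x[i] for i in S) % 2)

n = 4
points = list(product([0, 1], repeat=n))  # the whole domain {0,1}^4

# Correlation of two distinct parities under the uniform distribution.
# Their product is itself the parity of the symmetric difference S ^ T,
# a nonempty index set, so the sum cancels exactly.
S, T = (0, 2), (1, 2, 3)
corr = sum(parity(S, x) * parity(T, x) for x in points) / len(points)
print(corr)  # exactly uncorrelated: 0.0
```

Because every pairwise correlation vanishes, a statistical query with any nontrivial tolerance cannot distinguish one parity from another, even though parities are easy to shatter with (and hence have large VC-style complexity).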
The witnesses in this chapter make the incomparability concrete. Thresholds on \(\mathbb {R}\) separate PAC from online learning. The class of all finite subsets of \(\mathbb {N}\) separates Gold identification from PAC learning. Parities separate VC dimension from SQ dimension. Each witness exploits a specific structural mismatch: in the data model, the success criterion, the adversarial model, or the computational model. These mismatches are not removable by clever proof techniques; they are features of the mathematical landscape.
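The first of these witnesses can be sketched numerically. Assume thresholds \(f_t(x) = 1\) iff \(x \ge t\) over a finite grid (a hypothetical discretization; the witness in the text lives on \(\mathbb{R}\), where the mistake bound becomes unbounded). A binary-search adversary forces any deterministic online learner roughly \(\log_2 N\) mistakes, even though the class has VC dimension 1 and is therefore PAC-learnable from a small i.i.d. sample. The function names below are illustrative.

```python
def threshold(t, x):
    """Threshold concept over {0, ..., N-1}: label 1 iff x >= t."""
    return 1 if x >= t else 0

def run_adversary(learner, N):
    """Binary-search adversary against an online learner of thresholds.

    Maintains the interval [lo, hi] of thresholds still consistent with
    all labels revealed so far, and labels each instance opposite to the
    learner's prediction for as long as that remains consistent."""
    lo, hi = 0, N
    mistakes = 0
    while lo < hi:                  # more than one consistent threshold left
        x = (lo + hi) // 2          # present the midpoint instance
        if learner(x) == 1:
            lo = x + 1              # reveal label 0: only t > x survive
        else:
            hi = x                  # reveal label 1: only t <= x survive
        mistakes += 1               # every revealed label contradicts the guess
    return mistakes                 # threshold t = lo was consistent throughout

# Any deterministic learner suffers about log2(N) forced mistakes.
print(run_adversary(lambda x: 1, 1000))
```

The same halving argument, run over all of \(\mathbb{R}\), never exhausts the consistent interval, so no finite mistake bound exists; meanwhile no two points are shattered by thresholds, which is exactly the VC-dimension-1 fact that makes the class PAC-learnable.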
The four strict strength edges, taken together, form two chains.
The first chain orders three paradigms by the stringency of their learnability requirements. The second orders two dimensions by their sensitivity to multiclass structure. In both cases, the strict containment is proved by a single, explicit construction.
These are the theorems that textbooks usually omit. They deserve their own chapter.