Generalization Results (redirected to primed versions) #
Theorems moved from Generalization.lean and Rademacher.lean.
Each call site has been redirected from the orphaned sorry'd version
(e.g. vcdim_finite_imp_uc) to the primed version
(e.g. vcdim_finite_imp_uc') in Symmetrization.lean.
Main results #
- uc_does_not_imply_online: UC ⊬ online learnability (paradigm separation)
- consistent_learner_pac: consistency + finite VCDim → PAC
- sample_complexity_lower_bound: PAC lower bound via VCDim
- rademacher_vanishing_imp_pac: uniform Rademacher vanishing → PAC
PAC uniform convergence does NOT imply online learnability: there exist concept classes with finite VCDim (hence PAC learnable) but infinite Littlestone dimension (hence not online learnable with a finite mistake bound). Witness: the threshold class C_φ on X induced by an embedding φ : ℕ ↪ X. VCDim(C_φ) ≤ 1, since the sets are nested and monotonicity prevents shattering any 2-point set; LittlestoneDim(C_φ) = ∞, since an adversary can binary-search to force d mistakes for any d.
Typeclass version of uc_does_not_imply_online. Uses UniversallyMeasurableSpace to avoid threading ∀-quantified measurability hypotheses through the statement.
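The witness class from the separation above can be sketched in Lean. All names here (thresholdSet, thresholdClass) are hypothetical and not taken from the source; this is a minimal illustration of the nested structure that caps VCDim at 1.

```lean
import Mathlib.Logic.Embedding.Basic
import Mathlib.Data.Set.Basic

/-- Hypothetical sketch (names not from the source): the n-th "threshold"
set along an embedding `φ : ℕ ↪ X`, containing the first n+1 points of
the embedded copy of ℕ. -/
def thresholdSet {X : Type*} (φ : ℕ ↪ X) (n : ℕ) : Set X :=
  {x | ∃ k ≤ n, φ k = x}

/-- The witness class `C_φ`: all threshold sets. The sets are nested
(`thresholdSet φ n ⊆ thresholdSet φ (n+1)`), which is the monotonicity
blocking 2-shattering, while an adversary can binary-search over `n`
to force arbitrarily many mistakes online. -/
def thresholdClass {X : Type*} (φ : ℕ ↪ X) : Set (Set X) :=
  Set.range (thresholdSet φ)
```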
Consistent learners are PAC learners when VCDim < ⊤.
Typeclass version of consistent_learner_pac.
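The consistency hypothesis can be phrased abstractly as "zero empirical error on every training sample". A minimal sketch, with all names (empErr, IsConsistent) hypothetical and the types left as parameters:

```lean
/-- Hypothetical sketch (names not from the source): a learner
`A : S → H` mapping samples to hypotheses is consistent when it achieves
zero empirical error `empErr` on every sample it is given. Together with
`VCDim < ⊤`, this is the hypothesis of consistent_learner_pac. -/
def IsConsistent {S H : Type*} (empErr : S → H → ℝ) (A : S → H) : Prop :=
  ∀ s : S, empErr s (A s) = 0
```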
Sample complexity lower bound: ⌈(d-1)/2⌉ ≤ SampleComplexity, where d = VCDim.
Typeclass version of sample_complexity_lower_bound.
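The shape of the lower bound can be illustrated as follows. This is a statement sketch only: VCDim, SampleComplexity, and the hypotheses are stand-ins for the real definitions in the source, and the proof is elided.

```lean
import Mathlib.Algebra.Order.Floor

/-- Illustrative statement shape only (all names hypothetical): any PAC
learner for a class of VC dimension `d` needs at least `⌈(d-1)/2⌉`
samples; `⌈·⌉₊` is Mathlib's natural-valued ceiling. -/
example {X : Type*} (VCDim : Set (Set X) → ℕ∞)
    (SampleComplexity : Set (Set X) → ℕ)
    (C : Set (Set X)) (d : ℕ) (h : VCDim C = d) :
    ⌈((d : ℝ) - 1) / 2⌉₊ ≤ SampleComplexity C := sorry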
Rademacher vanishing → PAC learnability (the ← direction of fundamental_rademacher). Uses the uniform form of vanishing (∃ m₀, ∀ D), which is the textbook-standard statement.
Typeclass version of rademacher_vanishing_imp_pac.
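The quantifier order in the uniform vanishing hypothesis matters: a single m₀ must work for every distribution D. A hedged sketch of that shape, with all names (Rad, UniformRadVanishing, Dist) hypothetical:

```lean
/-- Hypothetical sketch (names not from the source): `Rad D m C` stands
for the expected Rademacher complexity of `C` on `m` samples drawn from
`D`. Uniform vanishing quantifies `m₀` before `D` (∃ m₀, ∀ D), matching
the hypothesis of rademacher_vanishing_imp_pac. -/
def UniformRadVanishing {X : Type*} {Dist : Type*}
    (Rad : Dist → ℕ → Set (Set X) → ℝ) (C : Set (Set X)) : Prop :=
  ∀ ε > 0, ∃ m₀ : ℕ, ∀ D : Dist, ∀ m ≥ m₀, Rad D m C ≤ ε
```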