FLT_Proofs.Criterion.Online

Online Learning Criteria #

Mistake-bounded learning, online learnability, and regret bounds. Online learnability is characterized by the Littlestone dimension.

noncomputable def OnlineLearner.mistakes {X : Type u} {Y : Type v} [DecidableEq Y] (L : OnlineLearner X Y) (c : Concept X Y) (seq : List X) : ℕ

Helper: run an online learner on a sequence, counting mistakes.

Equations
Instances For
    noncomputable def OnlineLearner.mistakes.go {X : Type u} {Y : Type v} [DecidableEq Y] (L : OnlineLearner X Y) (c : Concept X Y) (state : L.State) (remaining : List X) (count : ℕ) : ℕ
    Equations
    Instances For
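The equations for `mistakes` and its worker `go` are collapsed on this page. A plausible sketch, assuming `OnlineLearner` carries `init`, `predict`, and `update` fields (hypothetical names) and that `X`, `Y` are in scope as variables, is:

```lean
-- Sketch only: `init`, `predict`, `update` are assumed field names of
-- `OnlineLearner`; the actual definition is hidden behind "Equations".
-- Worker: predict each point, compare against the target concept `c`,
-- update the state on the true label, and count disagreements.
noncomputable def OnlineLearner.mistakes.go (L : OnlineLearner X Y)
    (c : Concept X Y) (state : L.State) : List X → ℕ → ℕ
  | [], count => count
  | x :: rest, count =>
      let count' := if L.predict state x = c x then count else count + 1
      OnlineLearner.mistakes.go L c (L.update state x (c x)) rest count'

-- Run the learner from its initial state with a mistake count of zero.
noncomputable def OnlineLearner.mistakes (L : OnlineLearner X Y)
    (c : Concept X Y) (seq : List X) : ℕ :=
  OnlineLearner.mistakes.go L c L.init seq 0
```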
      def MistakeBounded (X : Type u) (Y : Type v) [DecidableEq Y] (C : ConceptClass X Y) (M : ℕ) : Prop

      Mistake-bounded learning: a single learner makes at most M mistakes on any sequence, with no distributional assumption. This notion is characterized by the Littlestone dimension.

      Equations
      Instances For
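A plausible shape for this predicate, assuming `ConceptClass` supports membership (`∈`), is:

```lean
-- Sketch: one learner achieves mistake bound M against every concept in C,
-- on every input sequence. The real definition is collapsed on this page.
def MistakeBounded (X : Type u) (Y : Type v) [DecidableEq Y]
    (C : ConceptClass X Y) (M : ℕ) : Prop :=
  ∃ L : OnlineLearner X Y, ∀ c ∈ C, ∀ seq : List X, L.mistakes c seq ≤ M
```

Note the quantifier order: the learner is chosen once, before the adversary picks the concept and the sequence.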
        def OnlineLearnable (X : Type u) (Y : Type v) [DecidableEq Y] (C : ConceptClass X Y) : Prop

        Online learnable: there exists a finite mistake bound.

        Equations
        Instances For
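Given `MistakeBounded`, this definition is presumably just an existential over the bound:

```lean
-- Sketch: C is online learnable when some finite mistake bound M works.
def OnlineLearnable (X : Type u) (Y : Type v) [DecidableEq Y]
    (C : ConceptClass X Y) : Prop :=
  ∃ M : ℕ, MistakeBounded X Y C M
```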
          noncomputable def OnlineLearner.cumulativeLoss {X : Type u} {Y : Type v} (L : OnlineLearner X Y) (loss : LossFunction Y) (seq : List (X × Y)) : ℝ

          Helper: cumulative loss of an online learner on a sequence.

          Equations
          Instances For
            noncomputable def OnlineLearner.cumulativeLoss.go {X : Type u} {Y : Type v} (L : OnlineLearner X Y) (loss : LossFunction Y) (state : L.State) (remaining : List (X × Y)) (acc : ℝ) : ℝ
            Equations
            Instances For
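A plausible sketch of `cumulativeLoss` and its worker, under the same assumed `init`/`predict`/`update` fields and with `ℝ`-valued losses (both assumptions, since the page collapses the equations):

```lean
-- Sketch: fold over labeled examples, accumulating the loss of each
-- prediction before updating the learner's state on the revealed label.
noncomputable def OnlineLearner.cumulativeLoss.go (L : OnlineLearner X Y)
    (loss : LossFunction Y) (state : L.State) : List (X × Y) → ℝ → ℝ
  | [], acc => acc
  | (x, y) :: rest, acc =>
      OnlineLearner.cumulativeLoss.go L loss (L.update state x y) rest
        (acc + loss (L.predict state x) y)

-- Start from the initial state with zero accumulated loss.
noncomputable def OnlineLearner.cumulativeLoss (L : OnlineLearner X Y)
    (loss : LossFunction Y) (seq : List (X × Y)) : ℝ :=
  OnlineLearner.cumulativeLoss.go L loss L.init seq 0
```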
              noncomputable def fixedHypothesisLoss {X : Type u} {Y : Type v} (h : Concept X Y) (loss : LossFunction Y) (seq : List (X × Y)) : ℝ

              Helper: cumulative loss of a fixed hypothesis on a sequence.

              Equations
              Instances For
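Unlike the learner's loss, a fixed hypothesis needs no state, so this is presumably a plain sum over the sequence:

```lean
-- Sketch: total loss of the single hypothesis h over all labeled examples.
noncomputable def fixedHypothesisLoss (h : Concept X Y)
    (loss : LossFunction Y) (seq : List (X × Y)) : ℝ :=
  (seq.map fun p => loss (h p.1) p.2).sum
```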
                def RegretBounded (X : Type u) (Y : Type v) (H : HypothesisSpace X Y) (loss : LossFunction Y) (bound : ℝ) : Prop

                Regret-bounded learning: the learner's cumulative loss exceeds that of the best fixed hypothesis in hindsight by at most the given bound, on every sequence. No distributional assumptions.

                Equations
                Instances For
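Combining the two loss definitions above, a plausible shape for this predicate, assuming `HypothesisSpace` supports membership, is:

```lean
-- Sketch: some learner's cumulative loss is within `bound` of every
-- hypothesis in H, on every sequence of labeled examples.
def RegretBounded (X : Type u) (Y : Type v) (H : HypothesisSpace X Y)
    (loss : LossFunction Y) (bound : ℝ) : Prop :=
  ∃ L : OnlineLearner X Y, ∀ seq : List (X × Y), ∀ h ∈ H,
    L.cumulativeLoss loss seq ≤ fixedHypothesisLoss h loss seq + bound
```

Bounding against every `h ∈ H` is equivalent to bounding against the best hypothesis in hindsight, without needing an infimum to exist.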