FLT_Proofs.Process

Processes and Applications #

Concrete learning processes, algorithms, and scope boundaries:

Grammar Induction #

structure GrammarInduction (Sym : Type u_1) :
Type u_1

Grammar induction: learning a formal grammar (regular, context-free, etc.) from examples. A Gold-style learning process applied to the formal language hierarchy.

  • grammarClass : Set (FormalLanguage Sym)

    The class of grammars to learn (regular, CFG, etc.)

  • learner : GoldLearner (Word Sym) Bool

    The learning algorithm (Gold-style)

  • identifies : Prop

    The learner identifies the grammar class in the limit (EX-sense)

Instances For
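As a toy illustration of Gold-style identification in the limit (the names below are ours, not this file's definitions, and `Set` with set-builder notation assumes Mathlib): a learner for the class of *finite* languages can simply conjecture the set of words seen so far.

```lean
-- Sketch only (illustrative names; assumes Mathlib's `Set`).
-- A Gold-style learner for the class of finite languages: after each
-- example it conjectures exactly the words seen so far, so on any
-- presentation of a finite language it stabilizes on the target after
-- finitely many examples.
def finiteLangLearner (Sym : Type) (seen : List (List Sym)) : Set (List Sym) :=
  { w | w ∈ seen }
```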
structure LStar (Sym : Type u_1) [DecidableEq Sym] [Fintype Sym] :
Type u_1

L* algorithm (Angluin 1987): learns DFAs using membership and equivalence queries in polynomial time. The canonical exact learning algorithm.

Instances For
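The two query types that L* relies on can be packaged as a teacher interface. The following is a sketch with our own names, not the definitions used in this file:

```lean
-- Sketch of the L* oracle interface (illustrative names).
-- `member` answers membership queries on words over `Sym`;
-- `equiv` checks a conjectured acceptor against the target language,
-- returning `none` on success or `some w` with a counterexample word.
structure Teacher (Sym : Type) where
  member : List Sym → Bool
  equiv  : (List Sym → Bool) → Option (List Sym)
```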

CEGIS (Counterexample-Guided Inductive Synthesis) #

structure CEGIS (X : Type u) (Y : Type v) :
Type (max u v)

CEGIS: a loop between a synthesizer and a verifier. The synthesizer proposes candidates; the verifier checks them and returns counterexamples. The loop terminates when the verifier accepts.

  • synth : Synthesizer X Y

    The synthesizer: produces candidate concepts

  • verify : Concept X Y → Option X

    The verifier: checks candidates, returning a counterexample on failure

  • loop (counterexamples : List (X × Y)) : Concept X Y

    CEGIS loop: iterate synth → verify → refine

Instances For
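The synth → verify → refine loop can be sketched as a fuel-bounded recursion (the names and the `fuel` bound are ours; the structure above leaves termination abstract):

```lean
-- Fuel-bounded CEGIS loop (sketch; illustrative names).
-- `synth` proposes a candidate from the counterexamples gathered so far;
-- `verify` returns `none` on acceptance or `some cex` with a counterexample.
-- The explicit `fuel` bound stands in for a termination argument.
def cegisLoop {X C : Type} (synth : List X → C) (verify : C → Option X) :
    Nat → List X → Option C
  | 0, _ => none
  | fuel + 1, cexs =>
    let c := synth cexs
    match verify c with
    | none     => some c  -- verifier accepts: done
    | some cex => cegisLoop synth verify fuel (cex :: cexs)
```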

Concept Drift and Non-Stationary Learning #

structure ConceptDrift (X : Type u) (Y : Type v) :
Type (max u v)

Concept drift: the target concept changes over time. Extends the standard (stationary) learning framework. The drift model specifies how the target evolves.

  • conceptClass : ConceptClass X Y

    The concept class (stationary)

  • target : ℕ → Concept X Y

    Time-varying target

  • in_class (t : ℕ) : self.target t ∈ self.conceptClass

    All targets are in the class

  • drift : DriftRate

    Drift rate: fraction of X on which the target changes per step

  • drift_bounded : 0 ≤ self.drift

    Drift is bounded

Instances For
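A minimal concrete drift sequence (our example, not from the file): Boolean targets over the natural numbers that change on exactly one point per step.

```lean
-- Sketch: a drifting Boolean target over Nat. At time t it accepts
-- exactly the numbers below t, so consecutive targets disagree on the
-- single point t — a bounded-drift sequence of targets.
def driftingTarget (t : Nat) : Nat → Bool :=
  fun x => decide (x < t)
```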
structure LifelongLearning (X : Type u) (Y : Type v) :
Type (max u v)

Lifelong learning: learning a sequence of tasks, leveraging shared structure across tasks. Meta-learning over time.

  • tasks : ℕ → ConceptClass X Y

    The sequence of tasks (each task is a concept class)

  • metaLearner : MetaLearner X Y

    The meta-learner that improves across tasks

  • performance :

    Performance on task t after seeing tasks 1..t-1

Instances For

Inductive Logic Programming #

def BackgroundKnowledge (B : Type u_1) :
Type u_1

Background knowledge: domain-specific information that guides learning. In ILP, a set of known rules/facts that constrains the hypothesis space. Analogous to advice.

Equations
Instances For

structure ILP (X : Type u) (Y : Type v) (B : Type u_1) :
Type (max (max u u_1) v)

Inductive Logic Programming: learning first-order logic programs from examples and background knowledge.

Instances For
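One way background knowledge constrains the hypothesis space can be sketched as an admissibility predicate (all names below are ours, not this file's definitions): a hypothesis is only considered if it entails every known background fact.

```lean
-- Sketch (illustrative names): background knowledge as a predicate that
-- prunes the hypothesis space before the examples are consulted.
-- A hypothesis `h` is admissible iff it entails every background fact.
def admissible {Hyp Fact : Type} (background : Fact → Prop)
    (entails : Hyp → Fact → Prop) (h : Hyp) : Prop :=
  ∀ f, background f → entails h f
```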

Granger Causality #

def GrangerCauses (X Y : ℕ → ℝ) :
Prop

Granger causality: X Granger-causes Y if past values of X improve prediction of Y beyond past values of Y alone. Analogous to online learning (sequential, predictive).

Equations
Instances For
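The shape of the definition can be sketched abstractly (our names; `ℝ` assumes Mathlib): X Granger-causes Y when some predictor with access to X's history strictly beats every predictor without it.

```lean
-- Sketch of the Granger-causality shape (illustrative names; Mathlib's ℝ).
-- `loss p` is the prediction error of predictor `p` on Y; `usesX p` says
-- that `p` consults X's past values. X Granger-causes Y when some
-- X-using predictor strictly beats every predictor that ignores X.
def grangerSketch {P : Type} (loss : P → ℝ) (usesX : P → Prop) : Prop :=
  ∃ p, usesX p ∧ ∀ q, ¬ usesX q → loss p < loss q
```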

Program Synthesis #

structure ProgramSynthesis (Input : Type u_1) (Output : Type u_2) :
Type (max u_1 u_2)

Program synthesis: learning programs from input-output examples. A scope boundary: connects to formal verification.

  • spec : Input → Output

    Specification: desired input-output behavior

  • program : Input → Output

    Synthesized program

  • correct (x : Input) : self.program x = self.spec x

    Correctness (partial or total)

Instances For
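A minimal total-correctness instance in the spirit of the structure above (our toy example, not from the file): a "synthesized" doubling program proved correct against its specification.

```lean
-- Toy synthesis instance (ours): the synthesized program `x + x` is
-- proved totally correct against the specification `2 * x`.
def specDouble (x : Nat) : Nat := 2 * x
def progDouble (x : Nat) : Nat := x + x

theorem progDouble_correct : ∀ x, progDouble x = specDouble x := by
  intro x
  simp only [progDouble, specDouble]
  omega
```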

Scope Boundaries #

These concepts mark the BOUNDARY of formal learning theory as covered in this formalization. They are NOT formally developed; they exist as markers for potential future extension.

Scope boundary: Multi-armed bandits. Related to online learning, but with partial feedback (only the reward of the chosen action is observed, not counterfactual rewards).

Equations
Instances For

Scope boundary: Reinforcement learning. Sequential decision-making with state transitions. Beyond the concept-learning framework.

Equations
Instances For

Scope boundary: Quantum learning theory. Learning with quantum examples or quantum computation. Requires quantum information theory infrastructure.

Equations
Instances For