Processes and Applications #
Concrete learning processes, algorithms, and scope boundaries:
- Grammar induction and L* (Angluin's query-learning algorithm)
- CEGIS (counterexample-guided inductive synthesis)
- Concept drift, lifelong learning, meta-learning applications
- Inductive logic programming
- Scope boundaries (bandits, RL, quantum - markers only)
- Granger causality (causal inference connection)
Grammar Induction #
Grammar induction: learning a formal grammar (regular, CFG) from examples. A Gold-style learning process applied to the formal language hierarchy.
- grammarClass : Set (FormalLanguage Sym)
The class of grammars to learn (regular, CFG, etc.)
- learner : GoldLearner (Word Sym) Bool
The learning algorithm (Gold-style)
- identifies : Prop
The learner identifies the grammar class in the limit (EX-sense)
Instances For
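One way to spell out the `identifies` condition is as EX-identification in the limit. The sketch below is hypothetical: `Presentation`, `prefix`, and `conjecture` are assumed names standing in for however the formalization presents texts and extracts the learner's current guess.

```lean
/-- Hypothetical sketch of EX-identification: on every presentation of a
    language in the class, the learner's conjectures eventually stabilize
    on that language. `Presentation`, `prefix`, and `conjecture` are
    assumed names, not this formalization's actual fields. -/
def IdentifiesInLimit {Sym : Type*} (learner : GoldLearner (Word Sym) Bool)
    (grammarClass : Set (FormalLanguage Sym)) : Prop :=
  ∀ L ∈ grammarClass, ∀ pres : Presentation L,
    ∃ N, ∀ n ≥ N, learner.conjecture (pres.prefix n) = L
```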
L* algorithm (Angluin 1987): learns DFAs using membership and equivalence queries in polynomial time. The canonical exact learning algorithm.
- teacher : MinimallyAdequateTeacher (Word Sym) Bool
The MAT providing queries
- numStates : ℕ
Number of states in the learned DFA
The learned DFA (after termination)
Instances For
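The query loop behind L* can be sketched as follows. This is a schematic, not the formalization's definition: `ObservationTable`, `close`, `tableToDFA`, and the teacher's query fields are assumed names.

```lean
-- Schematic L* outer loop (assumed names throughout).
partial def lstar {Sym : Type*}
    (teacher : MinimallyAdequateTeacher (Word Sym) Bool) : DFA Sym :=
  go (ObservationTable.init teacher)
where
  go (tbl : ObservationTable Sym) : DFA Sym :=
    let tbl := close teacher tbl        -- close the table via membership queries
    let cand := tableToDFA tbl          -- candidate DFA from the closed table
    match teacher.equivalenceQuery cand with
    | none     => cand                  -- teacher accepts: the DFA is learned
    | some cex => go (tbl.add teacher cex)  -- refine with the counterexample
```

Each failed equivalence query forces at least one new state into the candidate, which is why the number of iterations, and hence the overall running time, is polynomial in `numStates` and the length of the longest counterexample.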
CEGIS (Counterexample-Guided Inductive Synthesis) #
CEGIS: a loop between a synthesizer and a verifier. The synthesizer proposes candidate solutions; the verifier either accepts a candidate or returns a counterexample that constrains the next proposal. The loop terminates when the verifier accepts.
- synth : Synthesizer X Y
The synthesizer: produces candidate concepts
The verifier: checks candidates
CEGIS loop: iterate synth → verify → refine
Instances For
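The loop itself can be rendered as a fuel-bounded recursion. This is a hedged sketch: the names `propose`, `check`, and the `Verdict` result type are assumptions, not the formalization's API; `fuel` makes the definition total even when the loop would not otherwise terminate.

```lean
-- Hypothetical CEGIS loop (assumed names).
inductive Verdict (X : Type*) where
  | accept
  | counterexample (x : X)

def cegis {C X : Type*} (propose : List X → C) (check : C → Verdict X) :
    ℕ → List X → Option C
  | 0, _ => none                          -- out of fuel: synthesis failed
  | fuel + 1, cexs =>
    let cand := propose cexs              -- synthesize from counterexamples so far
    match check cand with
    | .accept           => some cand      -- verifier accepts: done
    | .counterexample x => cegis propose check fuel (x :: cexs)
```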
Concept Drift and Non-Stationary Learning #
Concept drift: the target concept changes over time. Extends the standard (stationary) learning framework. The drift model specifies how the target evolves.
- conceptClass : ConceptClass X Y
The concept class (stationary)
Time-varying target
All targets are in the class
- drift : DriftRate
Drift rate: fraction of X on which target changes per step
Drift is bounded
Instances For
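The bounded-drift field can be phrased measure-theoretically. A hypothetical spelling, assuming a probability measure `μ` on `X` (not a name from this formalization):

```lean
-- Hypothetical: the target flips on at most a `drift`-fraction of the
-- domain between consecutive time steps, measured by `μ`.
def DriftBounded {X Y : Type*} [MeasurableSpace X]
    (target : ℕ → X → Y) (drift : ℝ) (μ : MeasureTheory.Measure X) : Prop :=
  ∀ t, (μ {x | target t x ≠ target (t + 1) x}).toReal ≤ drift
```

Under such a bound, a learner that tracks the target from a sliding window of recent examples incurs per-step error roughly controlled by `drift` times the window length.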
Lifelong learning: learning a sequence of tasks, leveraging shared structure across tasks. Meta-learning over time.
- tasks : ℕ → ConceptClass X Y
The sequence of tasks (each task is a concept class)
- metaLearner : MetaLearner X Y
The meta-learner that improves across tasks
Performance on task t after seeing tasks 1..t-1
Instances For
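The transfer benefit can be expressed as a monotonicity condition on sample complexity. This is one hypothetical phrasing; `sampleComplexity` is an assumed function, not a field of the formalization, so it is taken as an explicit parameter here.

```lean
-- Hypothetical: after seeing more tasks, the meta-learner needs no more
-- samples on a new task than it did on earlier ones.
def ImprovesAcrossTasks {X Y : Type*}
    (tasks : ℕ → ConceptClass X Y) (metaLearner : MetaLearner X Y)
    (sampleComplexity : MetaLearner X Y → ConceptClass X Y → ℕ → ℕ) : Prop :=
  ∀ s t, s ≤ t →
    sampleComplexity metaLearner (tasks t) t ≤
      sampleComplexity metaLearner (tasks s) s
```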
Inductive Logic Programming #
Background knowledge: domain-specific information that guides learning. In ILP: a set of known rules/facts that constrain the hypothesis space. Analogous to advice.
Equations
- BackgroundKnowledge B = B
Instances For
Inductive Logic Programming: learning first-order logic programs from examples and background knowledge.
- background : BackgroundKnowledge B
Background knowledge
- learner : B → GoldLearner X Y
The learner (uses background knowledge)
Instances For
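The role of background knowledge as a constraint on the hypothesis space can be sketched as a consistency condition. `Conjectures` and `ConsistentWith` are assumed relations, taken as parameters; note that since `BackgroundKnowledge B` unfolds to `B` (per the equation above), applying `learner` to `background` is well-typed.

```lean
-- Hypothetical: every hypothesis the learner ever conjectures must be
-- consistent with the background knowledge.
def RespectsBackground {B X Y H : Type*}
    (background : BackgroundKnowledge B) (learner : B → GoldLearner X Y)
    (Conjectures : GoldLearner X Y → H → Prop)
    (ConsistentWith : BackgroundKnowledge B → H → Prop) : Prop :=
  ∀ h, Conjectures (learner background) h → ConsistentWith background h
```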
Granger Causality #
Granger causality: X Granger-causes Y if past values of X improve prediction of Y beyond what past values of Y alone achieve. The sequential, predictive setting makes this analogous to online learning.
Equations
Instances For
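One standard way to make the definition precise is as a comparison of prediction errors. The sketch below is hypothetical: `past` and `predictionError` are assumed names for a series' history before time `t` and for the best achievable risk when predicting from given features.

```lean
-- Hypothetical: `Xs` Granger-causes `Ys` iff predicting `Ys` from both
-- histories strictly beats predicting it from `Ys`'s history alone.
def GrangerCauses (Xs Ys : ℕ → ℝ)
    (past : (ℕ → ℝ) → ℕ → List ℝ)
    (predictionError : ∀ {α : Type}, (ℕ → α) → (ℕ → ℝ) → ℝ) : Prop :=
  predictionError (fun t => (past Ys t, past Xs t)) Ys <
    predictionError (fun t => past Ys t) Ys
```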
Program Synthesis #
Program synthesis: learning programs from input-output examples. Scope boundary - connects to formal verification.
- spec : Input → Output
Specification: desired input-output behavior
- program : Input → Output
Synthesized program
Correctness (partial or total)
Instances For
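The correctness field admits both a total and a partial reading. A hypothetical spelling (names are illustrative, not the formalization's):

```lean
-- Hypothetical: total correctness demands agreement with the spec on
-- every input; partial correctness only on a domain of definedness.
def TotallyCorrect {Input Output : Type*}
    (spec program : Input → Output) : Prop :=
  ∀ i, program i = spec i

def PartiallyCorrect {Input Output : Type*}
    (spec program : Input → Output) (dom : Input → Prop) : Prop :=
  ∀ i, dom i → program i = spec i
```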
Scope Boundaries #
These concepts mark the BOUNDARY of formal learning theory as covered in this formalization. They are NOT formally developed; they exist as markers for potential future extension.
Scope boundary: Multi-armed bandits. Related to online learning, but with partial feedback: only the reward of the chosen action is observed, never the counterfactual rewards of the actions not taken.
Equations
Instances For
Scope boundary: Reinforcement learning. Sequential decision-making with state transitions. Beyond the concept-learning framework.
Equations
Instances For
Scope boundary: Quantum learning theory. Learning with quantum examples or quantum computation. Requires quantum information theory infrastructure.