ℝ-valued KL divergence (bridge to Mathlib's ℝ≥0∞-valued klDiv) #
`klDivReal P Q := if P ≪ Q then ∫ log (dP/dQ) dP else 0`, the real-valued
Kullback–Leibler divergence between two measures. For probability
measures with `P ≪ Q` it equals `(InformationTheory.klDiv P Q).toReal`,
bridging Mathlib's `ℝ≥0∞`-valued KL to a real-valued arithmetic layer
where `linarith`, `nlinarith`, and `field_simp` compose naturally. This is
the same proof-engineering specialization as `FintypePMF` relative to
`Mathlib.PMF`, documented in CHARTER.md under "Duplicate rule
exception."
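A minimal Lean sketch of the convention just described (the signature and body here are assumptions reconstructed from this description, not copied from the file):

```lean
import Mathlib

open MeasureTheory Real
open scoped Classical

/-- Sketch of the definition described above: integrate the
log-likelihood ratio against `P` when `P ≪ Q`, and return `0`
otherwise. The name `klDivReal` is this file's, not Mathlib's. -/
noncomputable def klDivReal {α : Type*} [MeasurableSpace α]
    (P Q : Measure α) : ℝ :=
  if P ≪ Q then ∫ x, log ((P.rnDeriv Q x).toReal) ∂P else 0
```

The `else 0` branch is the real-valued counterpart of Mathlib's `⊤` convention: `ℝ` has no top element, so the junk value `0` is used instead.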
Main results #
- `klDivReal_eq_toReal_klDiv`: bridge to Mathlib's KL for probability measures with absolute continuity.
- `klDivReal_nonneg`: nonnegativity for probability measures.
- `klDivReal_eq_zero_iff`: KL = 0 ↔ P = Q (under absolute continuity and finite KL).
ℝ-valued KL divergence. Returns 0 when `P` is not absolutely continuous
with respect to `Q` (by convention; the `ℝ≥0∞`-valued `klDiv` returns `⊤`
in that case).
The integrand `log ((P.rnDeriv Q x).toReal)` is definitionally Mathlib's
log-likelihood ratio `llr P Q x`.
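Since Mathlib defines `llr μ ν x` as `log (μ.rnDeriv ν x).toReal`, this definitional equality should be checkable by `rfl`:

```lean
import Mathlib

open MeasureTheory Real

-- The integrand used above unfolds to Mathlib's `llr` by definition.
example {α : Type*} [MeasurableSpace α] (P Q : Measure α) (x : α) :
    log ((P.rnDeriv Q x).toReal) = llr P Q x := rfl
```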
For probability measures with `P ≪ Q`, the ℝ-valued KL equals
`(InformationTheory.klDiv P Q).toReal`. Both measures have total mass 1,
so Mathlib's correction term for unequal total masses vanishes.
ℝ-valued KL is nonnegative for probability measures.
For probability measures with `P ≪ Q` and finite KL, `klDivReal P Q = 0`
iff `P = Q`. The finite-KL hypothesis is essential: a non-integrable
log-likelihood ratio makes the Bochner integral collapse to 0 even when
`P ≠ Q`.
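A statement-only sketch of how this last result might be phrased, with integrability of `llr P Q` encoding "finite KL" (both declaration names and the exact hypotheses are assumptions about this file, restated here so the snippet is self-contained):

```lean
import Mathlib

open MeasureTheory Real
open scoped Classical

-- Local copy of the definition so the statement below elaborates;
-- `klDivReal` is this file's name, not a Mathlib declaration.
noncomputable def klDivReal {α : Type*} [MeasurableSpace α]
    (P Q : Measure α) : ℝ :=
  if P ≪ Q then ∫ x, log ((P.rnDeriv Q x).toReal) ∂P else 0

-- Statement-only sketch: `h_int` rules out the degenerate case where a
-- non-integrable `llr P Q` makes the Bochner integral collapse to 0.
theorem klDivReal_eq_zero_iff {α : Type*} [MeasurableSpace α]
    (P Q : Measure α) [IsProbabilityMeasure P] [IsProbabilityMeasure Q]
    (hPQ : P ≪ Q) (h_int : Integrable (llr P Q) P) :
    klDivReal P Q = 0 ↔ P = Q := sorry
```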