ZPM.InformationTheory.KullbackLeibler.Binary.Def

Binary KL divergence and its boundary values #

klBin p q = p·log(p/q) + (1 − p)·log((1 − p)/(1 − q)), the KL divergence between Bernoulli(p) and Bernoulli(q). Well-defined for q ∈ (0, 1); the boundary values at p = 0, p = 1, and p = q reduce to explicit log-terms via the conventional log 0 = 0.

noncomputable def InformationTheory.klBin (p q : ℝ) : ℝ

Binary KL divergence between Bernoulli(p) and Bernoulli(q).

Equations
Instances For
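A minimal sketch of how such a definition could look in Lean, assuming Mathlib's `Real.log` (whose junk value `Real.log 0 = 0` supplies the log 0 = 0 convention mentioned above); the actual definition and import path in this file may differ:

```lean
import Mathlib.Analysis.SpecialFunctions.Log.Basic

-- Sketch only: KL divergence between Bernoulli(p) and Bernoulli(q).
-- With Real.log 0 = 0, the boundary cases p = 0 and p = 1
-- need no special handling in the defining expression.
noncomputable def klBin (p q : ℝ) : ℝ :=
  p * Real.log (p / q) + (1 - p) * Real.log ((1 - p) / (1 - q))
```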
    theorem InformationTheory.klBin_self (p : ℝ) (hp₀ : 0 < p) (hp₁ : p < 1) :
    klBin p p = 0

    klBin p p = 0 for p ∈ (0, 1).
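For p ∈ (0, 1) both ratios inside the logarithms equal 1, so each term vanishes:

```latex
\operatorname{klBin}(p, p)
  = p \log\tfrac{p}{p} + (1 - p)\log\tfrac{1 - p}{1 - p}
  = p \log 1 + (1 - p)\log 1
  = 0
```

The hypotheses 0 < p and p < 1 keep both denominators nonzero, so the log 0 convention plays no role here.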

    Boundary case: klBin 0 q = -log(1 - q).

    Boundary case: klBin 1 q = -log q.
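Both boundary values follow directly from the defining formula: in each case one logarithm is multiplied by a vanishing factor, and the surviving term simplifies.

```latex
\operatorname{klBin}(0, q)
  = 0\cdot\log\tfrac{0}{q} + 1\cdot\log\tfrac{1}{1-q} = -\log(1-q),
\qquad
\operatorname{klBin}(1, q)
  = 1\cdot\log\tfrac{1}{q} + 0\cdot\log\tfrac{0}{1-q} = -\log q
```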

    theorem InformationTheory.klBin_expand (p q : ℝ) (hp₀ : 0 < p) (hp₁ : p < 1) (hq₀ : 0 < q) (hq₁ : q < 1) :
    klBin p q = p * Real.log p - p * Real.log q + (1 - p) * Real.log (1 - p) - (1 - p) * Real.log (1 - q)

    Expanded form: split into constants and q-dependent pieces. Used to compute the derivative via HasDerivAt in the next shard.
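As a sanity check on the expanded form (the actual HasDerivAt statement lives in the next shard, not here), differentiating the q-dependent pieces term by term gives

```latex
\frac{\partial}{\partial q}\operatorname{klBin}(p, q)
  = -\frac{p}{q} + \frac{1-p}{1-q}
  = \frac{q - p}{q(1-q)},
```

which is negative for q < p and positive for q > p on (0, 1), so q = p is the unique minimum, consistent with klBin_self.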