theanets.losses.KullbackLeiblerDivergence
class theanets.losses.KullbackLeiblerDivergence(target, weight=1.0, weighted=False, output_name='out')

The KL divergence loss is computed over probability distributions.
Notes
The KL divergence loss is intended to optimize models that generate probability distributions. If the outputs \(x_i\) of a model represent a normalized probability distribution (over the output variables), and the targets \(t_i\) represent a normalized target distribution (over the output variables), then the KL divergence is given by:
\[\mathcal{L}(x, t) = \frac{1}{d} \sum_{i=1}^d t_i \log \frac{t_i}{x_i}\]

Here the KL divergence is computed as a mean over the \(d\) output variables in the model.
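The mean-over-outputs form of the divergence can be sketched directly in NumPy. This is a minimal illustration of the formula above, not the theanets implementation; the function name `kl_divergence` and the `eps` guard against `log(0)` are assumptions introduced here.

```python
import numpy as np

def kl_divergence(x, t, eps=1e-8):
    """Mean KL divergence of the target distribution t from the model output x.

    Both x and t are assumed to be normalized probability distributions
    over the d output variables; eps is a small constant added to avoid
    taking the log of zero.
    """
    x = np.asarray(x, dtype=float)
    t = np.asarray(t, dtype=float)
    # Elementwise t_i * log(t_i / x_i), averaged over the d outputs.
    return np.mean(t * np.log((t + eps) / (x + eps)))

# Identical distributions give zero divergence:
t = np.array([0.25, 0.25, 0.25, 0.25])
print(kl_divergence(t, t))  # 0.0
```

Because each term is weighted by \(t_i\), outputs where the target puts no mass contribute nothing, while underestimating a high-probability target variable is penalized heavily.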
__init__(target, weight=1.0, weighted=False, output_name='out')
Methods

__init__(target[, weight, weighted, output_name])
log()
    Log some diagnostic info about this loss.

Attributes

variables
    A list of Theano variables used in this loss.