theanets.recurrent.Autoencoder

class theanets.recurrent.Autoencoder(layers, loss='mse', weighted=False, rng=13)

An autoencoder network attempts to reproduce its input.
Notes
Autoencoder models default to a mean-squared-error (MSE) loss. To use a different loss, provide a non-default value for the loss keyword argument when constructing your model.
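For instance, a minimal sketch passing a different loss at construction time (this assumes 'mae', mean absolute error, is among the loss names registered in your version of theanets):

>>> model = theanets.recurrent.Autoencoder([10, (20, 'rnn'), 10], loss='mae')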
Examples

To create a recurrent autoencoder, just create a new model instance. Often you’ll provide the layer configuration at this time:
>>> model = theanets.recurrent.Autoencoder([10, (20, 'rnn'), 10])
See Creating a Model for more information.
Data
Training data for a recurrent autoencoder takes the form of a three-dimensional array. The shape of this array is (num-examples, num-time-steps, num-variables): the first axis enumerates data points in a batch, the second enumerates time steps, and the third enumerates the variables in the model.
For instance, to create a training dataset containing 1000 examples, each with 100 time steps:
>>> inputs = np.random.randn(1000, 100, 10).astype('f')
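As a quick sanity check, the three axes map to examples, time steps, and variables; the trailing dimension (here 10) must match the size of the model's input layer:

>>> inputs.shape
(1000, 100, 10)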
Training
Training the model can be as simple as calling the train() method:

>>> model.train([inputs])
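Training options can be supplied as keyword arguments; a hedged sketch, assuming the default downhill-backed trainers, where algo selects the optimization algorithm and settings such as learning_rate are forwarded to it:

>>> model.train([inputs], algo='rmsprop', learning_rate=0.01, momentum=0.9)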
See Training a Model for more information.
Use

A model can be used to predict() the output of some input data points:

>>> test = np.random.randn(3, 200, 10).astype('f')
>>> print(model.predict(test))
Note that the test data does not need to have the same number of time steps as the training data.
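Since an autoencoder attempts to reproduce its input, the predicted output has the same shape as the input; a quick check using the test array above:

>>> model.predict(test).shape
(3, 200, 10)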
Additionally, autoencoders can encode() a set of input data points:

>>> enc = model.encode(test)
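An encoded dataset can be mapped back to the output space with decode(); a minimal sketch, assuming decode() uses the same (default) layer that encode() read from:

>>> out = model.decode(enc)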
See Using a Model for more information.
__init__(layers, loss='mse', weighted=False, rng=13)

x.__init__(…) initializes x; see help(type(x)) for signature.
Methods
__init__(layers[, loss, weighted, rng])
    x.__init__(…) initializes x; see help(type(x)) for signature.
add_layer([layer])
    Add a layer to our network graph.
add_loss([loss])
    Add a loss function to the model.
build_graph([regularizers])
    Connect the layers in this network to form a computation graph.
decode(z[, layer])
    Decode an encoded dataset by computing the output layer activation.
encode(x[, layer, sample])
    Encode a dataset using the hidden layer activations of our network.
feed_forward(x, **kwargs)
    Compute a forward pass of all layers from the given input.
find(which, param)
    Get a parameter from a layer in the network.
itertrain(train[, valid, algo, subalgo, …])
    Train our network, one batch at a time.
load(filename_or_handle)
    Load a saved network from disk.
loss(**kwargs)
    Return a variable representing the regularized loss for this network.
monitors(**kwargs)
    Return expressions that should be computed to monitor training.
predict(x, **kwargs)
    Compute a forward pass of the inputs, returning the network output.
save(filename_or_handle)
    Save the state of this network to a pickle file on disk.
score(x[, w])
    Compute R^2 coefficient of determination for a given input.
set_loss(*args, **kwargs)
    Clear the current loss functions from the network and add a new one.
train(*args, **kwargs)
    Train the network until the trainer converges.
updates(**kwargs)
    Return expressions to run as updates during network training.

Attributes

DEFAULT_OUTPUT_ACTIVATION
INPUT_NDIM
    Number of dimensions for holding input data arrays.
OUTPUT_NDIM
    Number of dimensions for holding output data arrays.
inputs
    A list of Theano variables for feedforward computations.
params
    A list of the learnable Theano parameters for this network.
variables
    A list of Theano variables for loss computations.
INPUT_NDIM = 3
    Number of dimensions for holding input data arrays.

OUTPUT_NDIM = 3
    Number of dimensions for holding output data arrays.