layer_conv_lstm_2d {keras} | R Documentation |
Convolutional LSTM.
Description
It is similar to an LSTM layer, but the input transformations and recurrent transformations are both convolutional.
Usage
layer_conv_lstm_2d(
object,
filters,
kernel_size,
strides = c(1L, 1L),
padding = "valid",
data_format = NULL,
dilation_rate = c(1L, 1L),
activation = "tanh",
recurrent_activation = "hard_sigmoid",
use_bias = TRUE,
kernel_initializer = "glorot_uniform",
recurrent_initializer = "orthogonal",
bias_initializer = "zeros",
unit_forget_bias = TRUE,
kernel_regularizer = NULL,
recurrent_regularizer = NULL,
bias_regularizer = NULL,
activity_regularizer = NULL,
kernel_constraint = NULL,
recurrent_constraint = NULL,
bias_constraint = NULL,
return_sequences = FALSE,
return_state = FALSE,
go_backwards = FALSE,
stateful = FALSE,
dropout = 0,
recurrent_dropout = 0,
batch_size = NULL,
name = NULL,
trainable = NULL,
weights = NULL,
input_shape = NULL
)
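A minimal usage sketch (not part of this page; the filter count, kernel size, and input dimensions below are illustrative assumptions): compose the layer into a sequential model as its first layer.

library(keras)

# Illustrative only: 10-step sequences of 64x64 single-channel frames ("channels_last")
model <- keras_model_sequential() %>%
  layer_conv_lstm_2d(
    filters = 32,
    kernel_size = c(3, 3),
    padding = "same",
    input_shape = c(10, 64, 64, 1)  # (time, rows, cols, channels)
  )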
Arguments
object |
What to compose the new layer instance with. Typically a Sequential model or a tensor (e.g., as returned by layer_input()). The return value depends on object: if object is missing or NULL, the layer instance is returned; if object is a Sequential model, the model with the layer added is returned; if object is a tensor, the output tensor from applying the layer to it is returned. |
filters |
Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution). |
kernel_size |
An integer or list of n integers, specifying the dimensions of the convolution window. |
strides |
An integer or list of n integers, specifying the strides of the convolution. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1. |
padding |
One of "valid" or "same" (case-insensitive). |
data_format |
A string, one of "channels_last" (default) or "channels_first", giving the ordering of the dimensions in the inputs. "channels_last" corresponds to inputs with shape (batch, time, rows, cols, channels) while "channels_first" corresponds to inputs with shape (batch, time, channels, rows, cols). Defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json; if you never set it, "channels_last" is used. |
dilation_rate |
An integer or list of n integers, specifying the dilation rate to use for dilated convolution. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any strides value != 1. |
activation |
Activation function to use. If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x). |
recurrent_activation |
Activation function to use for the recurrent step. |
use_bias |
Boolean, whether the layer uses a bias vector. |
kernel_initializer |
Initializer for the kernel weights matrix, used for the linear transformation of the inputs. |
recurrent_initializer |
Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state. |
bias_initializer |
Initializer for the bias vector. |
unit_forget_bias |
Boolean. If TRUE, add 1 to the bias of the forget gate at initialization. Use in combination with bias_initializer = "zeros". This is recommended in Jozefowicz et al. (2015). |
kernel_regularizer |
Regularizer function applied to the kernel weights matrix. |
recurrent_regularizer |
Regularizer function applied to the recurrent_kernel weights matrix. |
bias_regularizer |
Regularizer function applied to the bias vector. |
activity_regularizer |
Regularizer function applied to the output of the layer (its "activation"). |
kernel_constraint |
Constraint function applied to the kernel weights matrix. |
recurrent_constraint |
Constraint function applied to the recurrent_kernel weights matrix. |
bias_constraint |
Constraint function applied to the bias vector. |
return_sequences |
Boolean. Whether to return the last output in the output sequence, or the full sequence. |
return_state |
Boolean. Whether to return the last state in addition to the output. |
go_backwards |
Boolean (default FALSE). If TRUE, process the input sequence backwards. |
stateful |
Boolean (default FALSE). If TRUE, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch. |
dropout |
Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs. |
recurrent_dropout |
Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state. |
batch_size |
Fixed batch size for the layer. |
name |
An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided. |
trainable |
Whether the layer weights will be updated during training. |
weights |
Initial weights for layer. |
input_shape |
Dimensionality of the input (integer) not including the samples axis. This argument is required when using this layer as the first layer in a model. |
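For illustration, a hedged sketch of stacking two of these layers (the dimensions are assumptions, not taken from this page): every layer except the last sets return_sequences = TRUE so that the next layer still receives a full 5D sequence.

library(keras)

model <- keras_model_sequential() %>%
  layer_conv_lstm_2d(filters = 32, kernel_size = c(3, 3), padding = "same",
                     return_sequences = TRUE,        # keep the time axis for the next ConvLSTM
                     input_shape = c(10, 64, 64, 1)) %>%
  layer_conv_lstm_2d(filters = 16, kernel_size = c(3, 3), padding = "same")  # last step only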
Input shape
If data_format = "channels_first": 5D tensor with shape (samples, time, channels, rows, cols).
If data_format = "channels_last": 5D tensor with shape (samples, time, rows, cols, channels).
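As a sketch of these shapes (assuming data_format = "channels_last" and made-up dimensions), the functional API can be used to inspect the resulting output shape:

library(keras)

inputs  <- layer_input(shape = c(10, 32, 32, 3))  # (time, rows, cols, channels); batch axis implicit
outputs <- inputs %>%
  layer_conv_lstm_2d(filters = 8, kernel_size = c(3, 3), padding = "same")
model <- keras_model(inputs, outputs)
model$output_shape  # (NULL, 32, 32, 8): last time step only, since return_sequences = FALSE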
References
- Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting (Shi et al., 2015).

The current implementation does not include the feedback loop on the cells output.
See Also
Other convolutional layers:
layer_conv_1d(), layer_conv_1d_transpose(), layer_conv_2d(), layer_conv_2d_transpose(), layer_conv_3d(), layer_conv_3d_transpose(), layer_cropping_1d(), layer_cropping_2d(), layer_cropping_3d(), layer_depthwise_conv_1d(), layer_depthwise_conv_2d(), layer_separable_conv_1d(), layer_separable_conv_2d(), layer_upsampling_1d(), layer_upsampling_2d(), layer_upsampling_3d(), layer_zero_padding_1d(), layer_zero_padding_2d(), layer_zero_padding_3d()