nn_lstm {torch} | R Documentation
Applies a multi-layer long short-term memory (LSTM) RNN to an input sequence.
Description
For each element in the input sequence, each layer computes the following function:
Usage
nn_lstm(
  input_size,
  hidden_size,
  num_layers = 1,
  bias = TRUE,
  batch_first = FALSE,
  dropout = 0,
  bidirectional = FALSE,
  ...
)
Arguments
input_size
The number of expected features in the input x.

hidden_size
The number of features in the hidden state h.

num_layers
Number of recurrent layers. E.g., setting num_layers = 2 would mean stacking two LSTMs together to form a stacked LSTM, with the second LSTM taking in outputs of the first LSTM and computing the final results. Default: 1.

bias
If FALSE, then the layer does not use bias weights b_ih and b_hh. Default: TRUE.

batch_first
If TRUE, then the input and output tensors are provided as (batch, seq, feature). Default: FALSE.

dropout
If non-zero, introduces a dropout layer on the outputs of each LSTM layer except the last layer, with dropout probability equal to dropout. Default: 0.

bidirectional
If TRUE, becomes a bidirectional LSTM. Default: FALSE.

...
currently unused.
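As an informal sketch (sizes chosen arbitrarily), the arguments map onto tensor shapes as follows; the shape of the returned output assumes the list structure described under Outputs below:

library(torch)

# illustrative sizes only
rnn <- nn_lstm(
  input_size = 8,        # features per time step
  hidden_size = 16,      # features in the hidden state
  num_layers = 2,        # layer 2 consumes layer 1's outputs
  batch_first = TRUE,    # tensors are (batch, seq, feature)
  bidirectional = TRUE   # num_directions becomes 2
)

x <- torch_randn(4, 7, 8)  # (batch = 4, seq_len = 7, input_size = 8)
out <- rnn(x)              # h_0 and c_0 default to zeros when omitted
dim(out[[1]])              # (4, 7, 32): num_directions * hidden_size = 32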
Details
\begin{array}{ll}
i_t = \sigma(W_{ii} x_t + b_{ii} + W_{hi} h_{(t-1)} + b_{hi}) \\
f_t = \sigma(W_{if} x_t + b_{if} + W_{hf} h_{(t-1)} + b_{hf}) \\
g_t = \tanh(W_{ig} x_t + b_{ig} + W_{hg} h_{(t-1)} + b_{hg}) \\
o_t = \sigma(W_{io} x_t + b_{io} + W_{ho} h_{(t-1)} + b_{ho}) \\
c_t = f_t \odot c_{(t-1)} + i_t \odot g_t \\
h_t = o_t \odot \tanh(c_t)
\end{array}
where h_t is the hidden state at time t, c_t is the cell state at time t, x_t is the input at time t, h_{(t-1)} is the hidden state of the layer at time t-1 or the initial hidden state at time 0, and i_t, f_t, g_t, o_t are the input, forget, cell, and output gates, respectively. \sigma is the sigmoid function, and \odot is the Hadamard (elementwise) product.
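To make the gate equations concrete, here is a minimal one-step sketch in R; all weights below are freshly sampled stand-ins, not the module's actual parameters:

library(torch)

input_size  <- 4
hidden_size <- 3

x_t    <- torch_randn(input_size)   # x_t
h_prev <- torch_zeros(hidden_size)  # h_{(t-1)}
c_prev <- torch_zeros(hidden_size)  # c_{(t-1)}

# one affine map of (x_t, h_prev) per gate: W_i x_t + b_i + W_h h_prev + b_h
gate <- function(act) {
  W_i <- torch_randn(hidden_size, input_size)
  W_h <- torch_randn(hidden_size, hidden_size)
  b_i <- torch_randn(hidden_size)
  b_h <- torch_randn(hidden_size)
  act(torch_matmul(W_i, x_t) + b_i + torch_matmul(W_h, h_prev) + b_h)
}

i_t <- gate(torch_sigmoid)  # input gate
f_t <- gate(torch_sigmoid)  # forget gate
g_t <- gate(torch_tanh)     # cell candidate
o_t <- gate(torch_sigmoid)  # output gate

c_t <- f_t * c_prev + i_t * g_t  # elementwise products
h_t <- o_t * torch_tanh(c_t)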
Inputs
Inputs: input, (h_0, c_0)

- input of shape (seq_len, batch, input_size): tensor containing the features of the input sequence. The input can also be a packed variable-length sequence. See nn_utils_rnn_pack_padded_sequence() or nn_utils_rnn_pack_sequence() for details.
- h_0 of shape (num_layers * num_directions, batch, hidden_size): tensor containing the initial hidden state for each element in the batch.
- c_0 of shape (num_layers * num_directions, batch, hidden_size): tensor containing the initial cell state for each element in the batch.

If (h_0, c_0) is not provided, both h_0 and c_0 default to zero.
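A short sketch of the two calling styles; omitting the state list is equivalent to passing zero tensors of the shapes above:

library(torch)

rnn   <- nn_lstm(input_size = 10, hidden_size = 20, num_layers = 2)
input <- torch_randn(5, 3, 10)            # (seq_len, batch, input_size)

h0 <- torch_zeros(2, 3, 20)               # (num_layers * num_directions, batch, hidden_size)
c0 <- torch_zeros(2, 3, 20)

out_default  <- rnn(input)                # h_0 and c_0 default to zero
out_explicit <- rnn(input, list(h0, c0))  # identical states, given explicitly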
Outputs
Outputs: output, (h_n, c_n)

- output of shape (seq_len, batch, num_directions * hidden_size): tensor containing the output features (h_t) from the last layer of the LSTM, for each t. If a packed sequence has been given as the input, the output will also be a packed sequence. For the unpacked case, the directions can be separated using output$view(c(seq_len, batch, num_directions, hidden_size)), with forward and backward being direction 0 and 1 respectively. Similarly, the directions can be separated in the packed case.
- h_n of shape (num_layers * num_directions, batch, hidden_size): tensor containing the hidden state for t = seq_len. Like output, the layers can be separated using h_n$view(c(num_layers, num_directions, batch, hidden_size)) and similarly for c_n.
- c_n of shape (num_layers * num_directions, batch, hidden_size): tensor containing the cell state for t = seq_len.
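The direction split described above can be sketched like so (note that R indexing is 1-based, so direction 0 corresponds to index 1); this assumes the module returns list(output, list(h_n, c_n)):

library(torch)

seq_len <- 5; batch <- 3; hidden_size <- 20
rnn    <- nn_lstm(10, hidden_size, bidirectional = TRUE)
result <- rnn(torch_randn(seq_len, batch, 10))

output <- result[[1]]  # (seq_len, batch, num_directions * hidden_size)
dirs   <- output$view(c(seq_len, batch, 2, hidden_size))
forward_out  <- dirs[, , 1, ]  # direction 0 (forward)
backward_out <- dirs[, , 2, ]  # direction 1 (backward)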
Attributes
- weight_ih_l[k]: the learnable input-hidden weights of the k-th layer (W_ii|W_if|W_ig|W_io), of shape (4*hidden_size x input_size)
- weight_hh_l[k]: the learnable hidden-hidden weights of the k-th layer (W_hi|W_hf|W_hg|W_ho), of shape (4*hidden_size x hidden_size)
- bias_ih_l[k]: the learnable input-hidden bias of the k-th layer (b_ii|b_if|b_ig|b_io), of shape (4*hidden_size)
- bias_hh_l[k]: the learnable hidden-hidden bias of the k-th layer (b_hi|b_hf|b_hg|b_ho), of shape (4*hidden_size)
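These parameters can be listed from R via the module's $parameters field (a named list of tensors); the exact layer numbering used in the names is easiest to confirm by printing them:

library(torch)

rnn <- nn_lstm(input_size = 10, hidden_size = 20, num_layers = 2)

names(rnn$parameters)  # e.g. weight_ih_l..., weight_hh_l..., bias_ih_l..., bias_hh_l...

# each weight_ih_l[k] stacks the four gate matrices, so its first
# dimension is 4 * hidden_size = 80
sapply(rnn$parameters, function(p) paste(dim(p), collapse = " x "))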
Note
All the weights and biases are initialized from \mathcal{U}(-\sqrt{k}, \sqrt{k}), where k = \frac{1}{\mbox{hidden\_size}}.
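A quick empirical check of this range (a sketch, not part of the API):

library(torch)

hidden_size <- 20
rnn <- nn_lstm(input_size = 10, hidden_size = hidden_size)

bound   <- sqrt(1 / hidden_size)
max_abs <- max(sapply(rnn$parameters, function(p) p$abs()$max()$item()))
max_abs <= bound  # TRUE: all values drawn from U(-sqrt(k), sqrt(k))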
Examples
if (torch_is_installed()) {
  # two-layer LSTM: input_size = 10, hidden_size = 20, num_layers = 2
  rnn <- nn_lstm(10, 20, 2)
  # input of shape (seq_len = 5, batch = 3, input_size = 10)
  input <- torch_randn(5, 3, 10)
  # initial states of shape (num_layers * num_directions = 2, batch = 3, hidden_size = 20)
  h0 <- torch_randn(2, 3, 20)
  c0 <- torch_randn(2, 3, 20)
  output <- rnn(input, list(h0, c0))
}
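As a follow-up (assuming the module returns list(output, list(h_n, c_n)) as described under Outputs), the pieces of the result can be unpacked and their shapes checked:

if (torch_is_installed()) {
  rnn    <- nn_lstm(10, 20, 2)
  result <- rnn(torch_randn(5, 3, 10))
  dim(result[[1]])       # output: (seq_len, batch, num_directions * hidden_size) = 5 3 20
  dim(result[[2]][[1]])  # h_n: (num_layers * num_directions, batch, hidden_size) = 2 3 20
  dim(result[[2]][[2]])  # c_n: same shape as h_n
}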