layer_bidirectional {keras3} | R Documentation |
Bidirectional wrapper for RNNs.
Description
Bidirectional wrapper for RNNs.
Usage
layer_bidirectional(
object,
layer,
merge_mode = "concat",
weights = NULL,
backward_layer = NULL,
...
)
Arguments
object: Object to compose the layer with. A tensor, array, or sequential model.

layer: RNN instance, such as layer_lstm() or layer_gru(). It could also be a Layer() instance that meets the following criteria: it must be a sequence-processing layer (accepting 3D+ inputs); it must have go_backwards, return_sequences and return_state attributes (with the same semantics as for the RNN class); and it must have an input_spec attribute and implement serialization via get_config() and from_config().

merge_mode: Mode by which outputs of the forward and backward RNNs will be combined. One of "sum", "mul", "concat", "ave", or NULL. If NULL, the outputs will not be combined; they will be returned as a list. Defaults to "concat".

weights: See description.

backward_layer: Optional RNN or Layer() instance to be used to handle backwards input processing. If backward_layer is not provided, the layer instance passed as the layer argument will be used to generate the backward layer automatically. Note that the provided backward_layer should have properties matching those of the layer argument (in particular, the same values for stateful, return_sequences, etc.) and a different go_backwards value; a ValueError is raised if these requirements are not met.

...: For forward/backward compatibility.
Value
The return value depends on the value provided for the first argument. If object is:

- a keras_model_sequential(), then the layer is added to the sequential model (which is modified in place). To enable piping, the sequential model is also returned, invisibly.
- a keras_input(), then the output tensor from calling layer(input) is returned.
- NULL or missing, then a Layer instance is returned.
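A minimal sketch of the three composition modes described above (units and shapes are illustrative only):

```r
library(keras3)

# 1. Piped onto a sequential model: the model is modified in place.
model <- keras_model_sequential(input_shape = c(5, 10)) %>%
  layer_bidirectional(layer_lstm(units = 4))

# 2. Called on a keras_input() tensor: the output tensor is returned.
inputs  <- keras_input(shape = c(5, 10))
outputs <- inputs %>% layer_bidirectional(layer_lstm(units = 4))

# 3. With object missing: a Layer instance is returned for later use.
bi <- layer_bidirectional(layer = layer_lstm(units = 4))
```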
Call Arguments
The call arguments for this layer are the same as those of the wrapped RNN layer. Beware that when passing the initial_state argument during the call of this layer, the first half of the elements in the initial_state list will be passed to the forward RNN call and the last half will be passed to the backward RNN call.
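A minimal sketch of splitting initial_state between the two directions (assuming the keras3 functional API; units and shapes are illustrative):

```r
library(keras3)

inputs <- keras_input(shape = c(5, 10))

# An LSTM carries two state tensors (h, c) per direction, so the
# bidirectional wrapper expects four here: the first two feed the
# forward LSTM, the last two feed the backward LSTM.
state_input <- function() keras_input(shape = c(4))
states <- list(state_input(), state_input(),  # forward h, c
               state_input(), state_input())  # backward h, c

bi_lstm <- layer_bidirectional(layer = layer_lstm(units = 4))
outputs <- bi_lstm(inputs, initial_state = states)
```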
Note
Instantiating a Bidirectional layer from an existing RNN layer instance will not reuse the weights state of the RNN layer instance: the Bidirectional layer will have freshly initialized weights.
Examples
model <- keras_model_sequential(input_shape = c(5, 10)) %>%
  layer_bidirectional(layer_lstm(units = 10, return_sequences = TRUE)) %>%
  layer_bidirectional(layer_lstm(units = 10)) %>%
  layer_dense(5, activation = "softmax")

model %>% compile(loss = "categorical_crossentropy", optimizer = "rmsprop")

# With custom backward layer
forward_layer  <- layer_lstm(units = 10, return_sequences = TRUE)
backward_layer <- layer_lstm(units = 10, activation = "relu",
                             return_sequences = TRUE, go_backwards = TRUE)

model <- keras_model_sequential(input_shape = c(5, 10)) %>%
  layer_bidirectional(forward_layer, backward_layer = backward_layer) %>%
  layer_dense(5, activation = "softmax")

model %>% compile(loss = "categorical_crossentropy", optimizer = "rmsprop")
States
A Bidirectional layer instance has the property states, which you can access with layer$states. You can also reset states using reset_state().
See Also
Other rnn layers:
layer_conv_lstm_1d()
layer_conv_lstm_2d()
layer_conv_lstm_3d()
layer_gru()
layer_lstm()
layer_rnn()
layer_simple_rnn()
layer_time_distributed()
rnn_cell_gru()
rnn_cell_lstm()
rnn_cell_simple()
rnn_cells_stack()
Other layers:
Layer()
layer_activation()
layer_activation_elu()
layer_activation_leaky_relu()
layer_activation_parametric_relu()
layer_activation_relu()
layer_activation_softmax()
layer_activity_regularization()
layer_add()
layer_additive_attention()
layer_alpha_dropout()
layer_attention()
layer_average()
layer_average_pooling_1d()
layer_average_pooling_2d()
layer_average_pooling_3d()
layer_batch_normalization()
layer_category_encoding()
layer_center_crop()
layer_concatenate()
layer_conv_1d()
layer_conv_1d_transpose()
layer_conv_2d()
layer_conv_2d_transpose()
layer_conv_3d()
layer_conv_3d_transpose()
layer_conv_lstm_1d()
layer_conv_lstm_2d()
layer_conv_lstm_3d()
layer_cropping_1d()
layer_cropping_2d()
layer_cropping_3d()
layer_dense()
layer_depthwise_conv_1d()
layer_depthwise_conv_2d()
layer_discretization()
layer_dot()
layer_dropout()
layer_einsum_dense()
layer_embedding()
layer_feature_space()
layer_flatten()
layer_flax_module_wrapper()
layer_gaussian_dropout()
layer_gaussian_noise()
layer_global_average_pooling_1d()
layer_global_average_pooling_2d()
layer_global_average_pooling_3d()
layer_global_max_pooling_1d()
layer_global_max_pooling_2d()
layer_global_max_pooling_3d()
layer_group_normalization()
layer_group_query_attention()
layer_gru()
layer_hashed_crossing()
layer_hashing()
layer_identity()
layer_integer_lookup()
layer_jax_model_wrapper()
layer_lambda()
layer_layer_normalization()
layer_lstm()
layer_masking()
layer_max_pooling_1d()
layer_max_pooling_2d()
layer_max_pooling_3d()
layer_maximum()
layer_mel_spectrogram()
layer_minimum()
layer_multi_head_attention()
layer_multiply()
layer_normalization()
layer_permute()
layer_random_brightness()
layer_random_contrast()
layer_random_crop()
layer_random_flip()
layer_random_rotation()
layer_random_translation()
layer_random_zoom()
layer_repeat_vector()
layer_rescaling()
layer_reshape()
layer_resizing()
layer_rnn()
layer_separable_conv_1d()
layer_separable_conv_2d()
layer_simple_rnn()
layer_spatial_dropout_1d()
layer_spatial_dropout_2d()
layer_spatial_dropout_3d()
layer_spectral_normalization()
layer_string_lookup()
layer_subtract()
layer_text_vectorization()
layer_tfsm()
layer_time_distributed()
layer_torch_module_wrapper()
layer_unit_normalization()
layer_upsampling_1d()
layer_upsampling_2d()
layer_upsampling_3d()
layer_zero_padding_1d()
layer_zero_padding_2d()
layer_zero_padding_3d()
rnn_cell_gru()
rnn_cell_lstm()
rnn_cell_simple()
rnn_cells_stack()