layer_activation {keras}    R Documentation
Apply an activation function to an output.
Description
Apply an activation function to an output.
Usage
layer_activation(
object,
activation,
input_shape = NULL,
batch_input_shape = NULL,
batch_size = NULL,
dtype = NULL,
name = NULL,
trainable = NULL,
weights = NULL
)
Arguments
object
What to compose the new Layer instance with. Typically a Sequential model or a Tensor (e.g., as returned by layer_input()). The return value depends on object: if object is missing or NULL, the Layer instance is returned; if it is a Sequential model, the model with the layer added is returned; if it is a Tensor, the output tensor from applying the layer is returned.
activation
Name of activation function to use. If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x).
input_shape
Input shape (list of integers, does not include the samples axis) which is required when using this layer as the first layer in a model.
batch_input_shape
Shapes, including the batch size. For instance, batch_input_shape = list(10, 32) indicates that the expected input will be batches of 10 32-dimensional vectors; batch_input_shape = list(NULL, 32) indicates batches of an arbitrary number of 32-dimensional vectors.
batch_size
Fixed batch size for the layer.
dtype
The data type expected by the input, as a string (e.g. "float32", "float64", "int32").
name
An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if not provided.
trainable
Whether the layer weights will be updated during training.
weights
Initial weights for the layer.
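To illustrate the object argument described above, the short sketch below (it assumes the keras R package is installed with a working backend, and the variable names are illustrative only) shows the two return behaviours: a standalone Layer instance when object is omitted, and an output tensor when a tensor is piped in.

library(keras)

# With `object` missing, a standalone Layer instance is returned.
act <- layer_activation(activation = "relu")

# With a tensor as `object`, the layer's output tensor is returned,
# which is how layers are composed in the functional API.
inputs  <- layer_input(shape = c(784))
outputs <- inputs %>%
  layer_dense(units = 32) %>%
  layer_activation(activation = "relu")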
See Also
Other core layers:
layer_activity_regularization(), layer_attention(), layer_dense(), layer_dense_features(), layer_dropout(), layer_flatten(), layer_input(), layer_lambda(), layer_masking(), layer_permute(), layer_repeat_vector(), layer_reshape()
Other activation layers:
layer_activation_elu(), layer_activation_leaky_relu(), layer_activation_parametric_relu(), layer_activation_relu(), layer_activation_selu(), layer_activation_softmax(), layer_activation_thresholded_relu()
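Examples
A minimal sketch (assuming the keras R package is installed and a TensorFlow backend is available; layer names are illustrative) showing layer_activation() applying nonlinearities as separate steps after dense layers that emit raw linear outputs:

library(keras)

# Dense layers produce linear outputs; layer_activation() applies the
# nonlinearity as a separate, named step in the model.
model <- keras_model_sequential() %>%
  layer_dense(units = 32, input_shape = c(784)) %>%
  layer_activation(activation = "relu", name = "hidden_relu") %>%
  layer_dense(units = 10) %>%
  layer_activation(activation = "softmax", name = "output_softmax")

summary(model)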