layer_hashing {keras3} | R Documentation |
A preprocessing layer which hashes and bins categorical features.
Description
This layer transforms categorical inputs to hashed output. It converts ints or strings to ints in a fixed range, element-wise. The stable hash function uses tensorflow::ops::Fingerprint to produce the same output consistently across all platforms.
This layer uses FarmHash64 by default, which provides a consistent hashed output across different platforms and is stable across invocations, regardless of device and context, by mixing the input bits thoroughly.
If you want to obfuscate the hashed output, you can also pass a random salt argument in the constructor. In that case, the layer will use the SipHash64 hash function, with the salt value serving as additional input to the hash function.
Note: This layer internally uses TensorFlow. It cannot be used as part of the compiled computation graph of a model with any backend other than TensorFlow. It can however be used with any backend when running eagerly. It can also always be used as part of an input preprocessing pipeline with any backend (outside the model itself), which is how we recommend using this layer.
Note: This layer is safe to use inside a tf.data pipeline (independently of which backend you're using).
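For instance, a minimal sketch of applying the layer inside a tf.data pipeline (assuming the tfdatasets package is available; the data and batch size below are purely illustrative):

library(keras3)
library(tfdatasets)

layer <- layer_hashing(num_bins = 3)

# Hash the string features as part of the input pipeline rather than the model.
ds <- tensor_slices_dataset(c("A", "B", "C", "D", "E")) |>
  dataset_batch(2) |>
  dataset_map(function(x) layer(x))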
Example (FarmHash64)
layer <- layer_hashing(num_bins = 3)
inp <- c('A', 'B', 'C', 'D', 'E') |> array(dim = c(5, 1))
layer(inp)
## tf.Tensor(
## [[1]
##  [0]
##  [1]
##  [1]
##  [2]], shape=(5, 1), dtype=int64)
Example (FarmHash64) with a mask value
layer <- layer_hashing(num_bins = 3, mask_value = '')
inp <- c('A', 'B', '', 'C', 'D') |> array(dim = c(5, 1))
layer(inp)
## tf.Tensor(
## [[1]
##  [1]
##  [0]
##  [2]
##  [2]], shape=(5, 1), dtype=int64)
Example (SipHash64)
layer <- layer_hashing(num_bins = 3, salt = c(133, 137))
inp <- c('A', 'B', 'C', 'D', 'E') |> array(dim = c(5, 1))
layer(inp)
## tf.Tensor(
## [[1]
##  [2]
##  [1]
##  [0]
##  [2]], shape=(5, 1), dtype=int64)
Example (SipHash64 with a single integer, same as salt = c(133, 133))
layer <- layer_hashing(num_bins = 3, salt = 133)
inp <- c('A', 'B', 'C', 'D', 'E') |> array(dim = c(5, 1))
layer(inp)
## tf.Tensor(
## [[0]
##  [0]
##  [2]
##  [1]
##  [0]], shape=(5, 1), dtype=int64)
Usage
layer_hashing(
object,
num_bins,
mask_value = NULL,
salt = NULL,
output_mode = "int",
sparse = FALSE,
...
)
Arguments
object
Object to compose the layer with. A tensor, array, or sequential model.

num_bins
Number of hash bins. Note that this includes the mask_value bin, so the effective number of bins is (num_bins - 1) if mask_value is set.

mask_value
A value that represents masked inputs, which are mapped to index 0. Defaults to NULL.

salt
A single unsigned integer, a pair of unsigned integers, or NULL. If passed, the layer uses the SipHash64 hash function, with the salt value(s) serving as additional input to the hash function; passing a single integer is equivalent to passing the same value twice (e.g. salt = 133 is the same as salt = c(133, 133)). If NULL (the default), the FarmHash64 hash function is used.

output_mode
Specification for the output of the layer. Values can be "int", "one_hot", "multi_hot", or "count". Defaults to "int".

sparse
Boolean. Only applicable to the "one_hot", "multi_hot", and "count" output modes, and only supported with the TensorFlow backend. If TRUE, returns a SparseTensor instead of a dense Tensor. Defaults to FALSE.

...
Keyword arguments to construct a layer.
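A brief sketch of a non-default output mode (the bin indices depend on the hash of each value, so the exact positions of the ones may vary):

layer <- layer_hashing(num_bins = 3, output_mode = "one_hot")
inp <- c('A', 'B', 'C', 'D', 'E') |> array(dim = c(5, 1))
layer(inp)  # expected to be a (5, 3) tensor with a single 1 per row, at the hashed bin index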
Value
The return value depends on the value provided for the first argument. If object is:

- a keras_model_sequential(), then the layer is added to the sequential model (which is modified in place). To enable piping, the sequential model is also returned, invisibly.
- a keras_input(), then the output tensor from calling layer(input) is returned.
- NULL or missing, then a Layer instance is returned.
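For example, a minimal sketch of the keras_input() case (the shape and dtype below are illustrative, not required values):

inputs <- keras_input(shape = c(1), dtype = "string")
outputs <- inputs |> layer_hashing(num_bins = 3)  # symbolic output tensor
model <- keras_model(inputs, outputs)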
Input Shape
A single string, a list of strings, or an int32 or int64 tensor of shape (batch_size, ...).
Output Shape
An int32 tensor of shape (batch_size, ...).
See Also
Other categorical features preprocessing layers:
layer_category_encoding()
layer_hashed_crossing()
layer_integer_lookup()
layer_string_lookup()
Other preprocessing layers:
layer_category_encoding()
layer_center_crop()
layer_discretization()
layer_feature_space()
layer_hashed_crossing()
layer_integer_lookup()
layer_mel_spectrogram()
layer_normalization()
layer_random_brightness()
layer_random_contrast()
layer_random_crop()
layer_random_flip()
layer_random_rotation()
layer_random_translation()
layer_random_zoom()
layer_rescaling()
layer_resizing()
layer_string_lookup()
layer_text_vectorization()
Other layers:
Layer()
layer_activation()
layer_activation_elu()
layer_activation_leaky_relu()
layer_activation_parametric_relu()
layer_activation_relu()
layer_activation_softmax()
layer_activity_regularization()
layer_add()
layer_additive_attention()
layer_alpha_dropout()
layer_attention()
layer_average()
layer_average_pooling_1d()
layer_average_pooling_2d()
layer_average_pooling_3d()
layer_batch_normalization()
layer_bidirectional()
layer_category_encoding()
layer_center_crop()
layer_concatenate()
layer_conv_1d()
layer_conv_1d_transpose()
layer_conv_2d()
layer_conv_2d_transpose()
layer_conv_3d()
layer_conv_3d_transpose()
layer_conv_lstm_1d()
layer_conv_lstm_2d()
layer_conv_lstm_3d()
layer_cropping_1d()
layer_cropping_2d()
layer_cropping_3d()
layer_dense()
layer_depthwise_conv_1d()
layer_depthwise_conv_2d()
layer_discretization()
layer_dot()
layer_dropout()
layer_einsum_dense()
layer_embedding()
layer_feature_space()
layer_flatten()
layer_flax_module_wrapper()
layer_gaussian_dropout()
layer_gaussian_noise()
layer_global_average_pooling_1d()
layer_global_average_pooling_2d()
layer_global_average_pooling_3d()
layer_global_max_pooling_1d()
layer_global_max_pooling_2d()
layer_global_max_pooling_3d()
layer_group_normalization()
layer_group_query_attention()
layer_gru()
layer_hashed_crossing()
layer_identity()
layer_integer_lookup()
layer_jax_model_wrapper()
layer_lambda()
layer_layer_normalization()
layer_lstm()
layer_masking()
layer_max_pooling_1d()
layer_max_pooling_2d()
layer_max_pooling_3d()
layer_maximum()
layer_mel_spectrogram()
layer_minimum()
layer_multi_head_attention()
layer_multiply()
layer_normalization()
layer_permute()
layer_random_brightness()
layer_random_contrast()
layer_random_crop()
layer_random_flip()
layer_random_rotation()
layer_random_translation()
layer_random_zoom()
layer_repeat_vector()
layer_rescaling()
layer_reshape()
layer_resizing()
layer_rnn()
layer_separable_conv_1d()
layer_separable_conv_2d()
layer_simple_rnn()
layer_spatial_dropout_1d()
layer_spatial_dropout_2d()
layer_spatial_dropout_3d()
layer_spectral_normalization()
layer_string_lookup()
layer_subtract()
layer_text_vectorization()
layer_tfsm()
layer_time_distributed()
layer_torch_module_wrapper()
layer_unit_normalization()
layer_upsampling_1d()
layer_upsampling_2d()
layer_upsampling_3d()
layer_zero_padding_1d()
layer_zero_padding_2d()
layer_zero_padding_3d()
rnn_cell_gru()
rnn_cell_lstm()
rnn_cell_simple()
rnn_cells_stack()