layer_feature_space {keras3}		R Documentation
One-stop utility for preprocessing and encoding structured data.
Description
Available feature types:
Note that all features can be referred to by their string name, e.g. "integer_categorical". When using the string name, the default argument values are used.
# Plain float values.
feature_float(name = NULL)

# Float values to be preprocessed via featurewise standardization
# (i.e. via a `layer_normalization()` layer).
feature_float_normalized(name = NULL)

# Float values to be preprocessed via linear rescaling
# (i.e. via a `layer_rescaling()` layer).
feature_float_rescaled(scale = 1., offset = 0., name = NULL)

# Float values to be discretized. By default, the discrete
# representation will then be one-hot encoded.
feature_float_discretized(
  num_bins,
  bin_boundaries = NULL,
  output_mode = "one_hot",
  name = NULL
)

# Integer values to be indexed. By default, the discrete
# representation will then be one-hot encoded.
feature_integer_categorical(
  max_tokens = NULL,
  num_oov_indices = 1,
  output_mode = "one_hot",
  name = NULL
)

# String values to be indexed. By default, the discrete
# representation will then be one-hot encoded.
feature_string_categorical(
  max_tokens = NULL,
  num_oov_indices = 1,
  output_mode = "one_hot",
  name = NULL
)

# Integer values to be hashed into a fixed number of bins.
# By default, the discrete representation will then be one-hot encoded.
feature_integer_hashed(num_bins, output_mode = "one_hot", name = NULL)

# String values to be hashed into a fixed number of bins.
# By default, the discrete representation will then be one-hot encoded.
feature_string_hashed(num_bins, output_mode = "one_hot", name = NULL)
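For instance, the two specifications below request the same preprocessing: the first uses the string shorthand (and therefore the default argument values), the second the explicit constructor. This is a minimal sketch; the feature name my_feature is purely illustrative.

# String shorthand: the defaults of feature_integer_categorical() are used.
feature_space <- layer_feature_space(
  features = list(my_feature = "integer_categorical")
)

# Explicit constructor: equivalent here, but arguments can be customized.
feature_space <- layer_feature_space(
  features = list(my_feature = feature_integer_categorical(num_oov_indices = 1))
)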
Usage
layer_feature_space(
object,
features,
output_mode = "concat",
crosses = NULL,
crossing_dim = 32L,
hashing_dim = 32L,
num_discretization_bins = 32L,
name = NULL,
feature_names = NULL
)
feature_cross(feature_names, crossing_dim, output_mode = "one_hot")
feature_custom(dtype, preprocessor, output_mode)
feature_float(name = NULL)
feature_float_rescaled(scale = 1, offset = 0, name = NULL)
feature_float_normalized(name = NULL)
feature_float_discretized(
num_bins,
bin_boundaries = NULL,
output_mode = "one_hot",
name = NULL
)
feature_integer_categorical(
max_tokens = NULL,
num_oov_indices = 1,
output_mode = "one_hot",
name = NULL
)
feature_string_categorical(
max_tokens = NULL,
num_oov_indices = 1,
output_mode = "one_hot",
name = NULL
)
feature_string_hashed(num_bins, output_mode = "one_hot", name = NULL)
feature_integer_hashed(num_bins, output_mode = "one_hot", name = NULL)
Arguments
object
    see description

features
    see description

output_mode
    A string. For layer_feature_space(), one of "concat" (concatenate all
    encoded features into a single vector) or "dict" (return a named list of
    individually encoded features, with the same names as the input features).
    For the individual feature specifications, one of "int", "one_hot" or
    "float".

crosses
    List of features to be crossed together, e.g.
    crosses = list(c("feature_1", "feature_2")). The features are "crossed" by
    hashing their combined value into a fixed-length vector.

crossing_dim
    Default vector size for hashing crossed features. Defaults to 32.

hashing_dim
    Default vector size for hashing features of type "integer_hashed" and
    "string_hashed". Defaults to 32.

num_discretization_bins
    Default number of bins to be used for discretizing features of type
    "float_discretized". Defaults to 32.

name
    String, name for the object.

feature_names
    Named list mapping the names of your features to their type specification,
    e.g. list(my_feature = "integer_categorical") or
    list(my_feature = feature_integer_categorical()).

dtype
    String, the output dtype of the feature, e.g. "float32".

preprocessor
    A callable.

scale, offset
    Passed on to layer_rescaling().

num_bins, bin_boundaries
    Passed on to layer_discretization().

max_tokens, num_oov_indices
    Passed on to layer_integer_lookup() (for feature_integer_categorical()) or
    layer_string_lookup() (for feature_string_categorical()).
Value
The return value depends on the value provided for the first argument. If object is:

- a keras_model_sequential(), then the layer is added to the sequential model (which is modified in place). To enable piping, the sequential model is also returned, invisibly.
- a keras_input(), then the output tensor from calling layer(input) is returned.
- NULL or missing, then a Layer instance is returned.
Examples
Basic usage with a named list of input data:
raw_data <- list(
  float_values = c(0.0, 0.1, 0.2, 0.3),
  string_values = c("zero", "one", "two", "three"),
  int_values = as.integer(c(0, 1, 2, 3))
)

dataset <- tfdatasets::tensor_slices_dataset(raw_data)

feature_space <- layer_feature_space(
  features = list(
    float_values = "float_normalized",
    string_values = "string_categorical",
    int_values = "integer_categorical"
  ),
  crosses = list(c("string_values", "int_values")),
  output_mode = "concat"
)

# Before you start using the feature_space(),
# you must `adapt()` it on some data.
feature_space |> adapt(dataset)

# You can call the feature_space() on a named list of
# data (batched or unbatched).
output_vector <- feature_space(raw_data)
Basic usage with tf.data:
library(tfdatasets)

# Unlabeled data
preprocessed_ds <- unlabeled_dataset |>
  dataset_map(feature_space)

# Labeled data
preprocessed_ds <- labeled_dataset |>
  dataset_map(function(x, y) tuple(feature_space(x), y))
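A common follow-up is to add prefetching so that preprocessing overlaps with model execution; a sketch using the tfdatasets helper dataset_prefetch() with an explicit buffer size:

# Overlap feature preprocessing with training (optional performance tweak).
preprocessed_ds <- preprocessed_ds |>
  dataset_prefetch(buffer_size = 8)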
Basic usage with the Keras Functional API:
# Retrieve a named list of Keras layer_input() objects
(inputs <- feature_space$get_inputs())
## $float_values
## <KerasTensor shape=(None, 1), dtype=float32, sparse=None, name=float_values>
##
## $string_values
## <KerasTensor shape=(None, 1), dtype=string, sparse=None, name=string_values>
##
## $int_values
## <KerasTensor shape=(None, 1), dtype=int32, sparse=None, name=int_values>
# Retrieve the corresponding encoded Keras tensors
(encoded_features <- feature_space$get_encoded_features())
## <KerasTensor shape=(None, 43), dtype=float32, sparse=False, name=keras_tensor_7>
# Build a Functional model
outputs <- encoded_features |>
  layer_dense(1, activation = "sigmoid")
model <- keras_model(inputs, outputs)
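Because inputs are the raw symbolic inputs returned by get_inputs(), the resulting model includes the preprocessing and can be fit directly on a dataset of raw features. A minimal sketch, assuming labeled_dataset yields batches of (named list of raw features, label) as in the tf.data example above:

model |> compile(
  optimizer = "adam",
  loss = "binary_crossentropy",
  metrics = "accuracy"
)
model |> fit(labeled_dataset, epochs = 5)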
Customizing each feature or feature cross:
feature_space <- layer_feature_space(
  features = list(
    float_values = feature_float_normalized(),
    string_values = feature_string_categorical(max_tokens = 10),
    int_values = feature_integer_categorical(max_tokens = 10)
  ),
  crosses = list(
    feature_cross(c("string_values", "int_values"), crossing_dim = 32)
  ),
  output_mode = "concat"
)
Returning a dict (a named list) of integer-encoded features:
feature_space <- layer_feature_space(
  features = list(
    "string_values" = feature_string_categorical(output_mode = "int"),
    "int_values" = feature_integer_categorical(output_mode = "int")
  ),
  crosses = list(
    feature_cross(
      feature_names = c("string_values", "int_values"),
      crossing_dim = 32,
      output_mode = "int"
    )
  ),
  output_mode = "dict"
)
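In "dict" mode, calling the adapted feature space returns a named list of integer-encoded tensors instead of a single concatenated vector; a short sketch, reusing raw_data and dataset from the first example:

feature_space |> adapt(dataset)

# A named list of integer-encoded tensors.
output_list <- feature_space(raw_data)
names(output_list)  # inspect which entries were produced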
Specifying your own Keras preprocessing layer:
# Let's say that one of the features is a short text paragraph that
# we want to encode as a vector (one vector per paragraph) via TF-IDF.
data <- list(text = c("1st string", "2nd string", "3rd string"))

# There's a Keras layer for this: layer_text_vectorization()
custom_layer <- layer_text_vectorization(output_mode = "tf_idf")

# We can use feature_custom() to create a custom feature
# that will use our preprocessing layer.
feature_space <- layer_feature_space(
  features = list(
    text = feature_custom(preprocessor = custom_layer,
                          dtype = "string",
                          output_mode = "float")
  ),
  output_mode = "concat"
)

feature_space |> adapt(tfdatasets::tensor_slices_dataset(data))

output_vector <- feature_space(data)
Retrieving the underlying Keras preprocessing layers:
# The preprocessing layer of each feature is available in `$preprocessors`.
preprocessing_layer <- feature_space$preprocessors$feature1

# The crossing layer of each feature cross is available in `$crossers`.
# It's an instance of layer_hashed_crossing()
crossing_layer <- feature_space$crossers[["feature1_X_feature2"]]
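For a feature declared as "string_categorical", the stored preprocessor is a string-lookup layer, so its learned vocabulary can be inspected after adapt(). A sketch, assuming the adapted feature_space with a string_values feature from the first example:

# Inspect the vocabulary learned for the "string_values" feature.
lookup_layer <- feature_space$preprocessors$string_values
lookup_layer$get_vocabulary()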
Saving and reloading a FeatureSpace:
feature_space$save("featurespace.keras")
reloaded_feature_space <- keras$models$load_model("featurespace.keras")
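The reloaded feature space keeps the state learned during adapt(), so it can be applied to raw data immediately; a short sketch reusing raw_data from the first example:

# No need to call adapt() again after reloading.
output_vector <- reloaded_feature_space(raw_data)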
See Also
Other preprocessing layers:
layer_category_encoding()
layer_center_crop()
layer_discretization()
layer_hashed_crossing()
layer_hashing()
layer_integer_lookup()
layer_mel_spectrogram()
layer_normalization()
layer_random_brightness()
layer_random_contrast()
layer_random_crop()
layer_random_flip()
layer_random_rotation()
layer_random_translation()
layer_random_zoom()
layer_rescaling()
layer_resizing()
layer_string_lookup()
layer_text_vectorization()
Other layers:
Layer()
layer_activation()
layer_activation_elu()
layer_activation_leaky_relu()
layer_activation_parametric_relu()
layer_activation_relu()
layer_activation_softmax()
layer_activity_regularization()
layer_add()
layer_additive_attention()
layer_alpha_dropout()
layer_attention()
layer_average()
layer_average_pooling_1d()
layer_average_pooling_2d()
layer_average_pooling_3d()
layer_batch_normalization()
layer_bidirectional()
layer_category_encoding()
layer_center_crop()
layer_concatenate()
layer_conv_1d()
layer_conv_1d_transpose()
layer_conv_2d()
layer_conv_2d_transpose()
layer_conv_3d()
layer_conv_3d_transpose()
layer_conv_lstm_1d()
layer_conv_lstm_2d()
layer_conv_lstm_3d()
layer_cropping_1d()
layer_cropping_2d()
layer_cropping_3d()
layer_dense()
layer_depthwise_conv_1d()
layer_depthwise_conv_2d()
layer_discretization()
layer_dot()
layer_dropout()
layer_einsum_dense()
layer_embedding()
layer_flatten()
layer_flax_module_wrapper()
layer_gaussian_dropout()
layer_gaussian_noise()
layer_global_average_pooling_1d()
layer_global_average_pooling_2d()
layer_global_average_pooling_3d()
layer_global_max_pooling_1d()
layer_global_max_pooling_2d()
layer_global_max_pooling_3d()
layer_group_normalization()
layer_group_query_attention()
layer_gru()
layer_hashed_crossing()
layer_hashing()
layer_identity()
layer_integer_lookup()
layer_jax_model_wrapper()
layer_lambda()
layer_layer_normalization()
layer_lstm()
layer_masking()
layer_max_pooling_1d()
layer_max_pooling_2d()
layer_max_pooling_3d()
layer_maximum()
layer_mel_spectrogram()
layer_minimum()
layer_multi_head_attention()
layer_multiply()
layer_normalization()
layer_permute()
layer_random_brightness()
layer_random_contrast()
layer_random_crop()
layer_random_flip()
layer_random_rotation()
layer_random_translation()
layer_random_zoom()
layer_repeat_vector()
layer_rescaling()
layer_reshape()
layer_resizing()
layer_rnn()
layer_separable_conv_1d()
layer_separable_conv_2d()
layer_simple_rnn()
layer_spatial_dropout_1d()
layer_spatial_dropout_2d()
layer_spatial_dropout_3d()
layer_spectral_normalization()
layer_string_lookup()
layer_subtract()
layer_text_vectorization()
layer_tfsm()
layer_time_distributed()
layer_torch_module_wrapper()
layer_unit_normalization()
layer_upsampling_1d()
layer_upsampling_2d()
layer_upsampling_3d()
layer_zero_padding_1d()
layer_zero_padding_2d()
layer_zero_padding_3d()
rnn_cell_gru()
rnn_cell_lstm()
rnn_cell_simple()
rnn_cells_stack()
Other utils:
audio_dataset_from_directory()
clear_session()
config_disable_interactive_logging()
config_disable_traceback_filtering()
config_enable_interactive_logging()
config_enable_traceback_filtering()
config_is_interactive_logging_enabled()
config_is_traceback_filtering_enabled()
get_file()
get_source_inputs()
image_array_save()
image_dataset_from_directory()
image_from_array()
image_load()
image_smart_resize()
image_to_array()
normalize()
pad_sequences()
set_random_seed()
split_dataset()
text_dataset_from_directory()
timeseries_dataset_from_array()
to_categorical()
zip_lists()