loss_sparse_categorical_crossentropy {keras3} | R Documentation |
Computes the crossentropy loss between the labels and predictions.
Description
Use this crossentropy loss function when there are two or more label
classes. Labels are expected to be provided as integers. If you want to
provide labels using a one-hot representation, please use
loss_categorical_crossentropy() instead. There should be # classes
floating point values per feature for y_pred and a single floating
point value per feature for y_true.
In the snippet below, there is a single floating point value per example
for y_true and num_classes floating point values per example for y_pred.
The shape of y_true is [batch_size] and the shape of y_pred is
[batch_size, num_classes].
Usage
loss_sparse_categorical_crossentropy(
y_true,
y_pred,
from_logits = FALSE,
ignore_class = NULL,
axis = -1L,
...,
reduction = "sum_over_batch_size",
name = "sparse_categorical_crossentropy",
dtype = NULL
)
Arguments
y_true: Ground truth values.
y_pred: The predicted values.
from_logits: Whether y_pred is expected to be a logits tensor. By
default, we assume that y_pred encodes a probability distribution.
ignore_class: Optional integer. The ID of a class to be ignored during
loss computation. This is useful, for example, in segmentation
problems featuring a "void" class (commonly -1 or 255) in
segmentation maps. By default (ignore_class = NULL), all classes are
considered.
axis: Defaults to -1. The dimension along which the entropy is
computed.
...: For forward/backward compatibility.
reduction: Type of reduction to apply to the loss. In almost all cases
this should be "sum_over_batch_size". Supported options are "sum",
"sum_over_batch_size" or NULL.
name: Optional name for the loss instance.
dtype: The dtype of the loss's computations. Defaults to NULL, which
means using config_floatx().
Value
Sparse categorical crossentropy loss value.
Examples
y_true <- c(1, 2)
y_pred <- rbind(c(0.05, 0.95, 0), c(0.1, 0.8, 0.1))
loss <- loss_sparse_categorical_crossentropy(y_true, y_pred)
loss
## tf.Tensor([0.05129339 2.30258509], shape=(2), dtype=float64)
y_true <- c(1, 2)
y_pred <- rbind(c(0.05, 0.95, 0), c(0.1, 0.8, 0.1))
# Using 'auto'/'sum_over_batch_size' reduction type.
scce <- loss_sparse_categorical_crossentropy()
scce(op_array(y_true), op_array(y_pred))
## tf.Tensor(1.1769392, shape=(), dtype=float32)
# Calling with 'sample_weight'.
scce(op_array(y_true), op_array(y_pred),
     sample_weight = op_array(c(0.3, 0.7)))
## tf.Tensor(0.8135988, shape=(), dtype=float32)
# Using 'sum' reduction type.
scce <- loss_sparse_categorical_crossentropy(reduction = "sum")
scce(op_array(y_true), op_array(y_pred))
## tf.Tensor(2.3538785, shape=(), dtype=float32)
# Using 'none' reduction type.
scce <- loss_sparse_categorical_crossentropy(reduction = NULL)
scce(op_array(y_true), op_array(y_pred))
## tf.Tensor([0.05129344 2.3025851 ], shape=(2), dtype=float32)
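The examples above pass probabilities. When the model outputs raw, unnormalized scores, set from_logits = TRUE rather than applying softmax yourself. A minimal sketch (the logit values are chosen for illustration, and op_array() / op_softmax() are assumed to be the keras3 ops helpers):

```r
y_true <- c(1, 2)
logits <- rbind(c(2.0, 6.0, 1.0),   # raw, unnormalized scores
                c(0.5, 1.0, 3.0))

# Pass logits directly; softmax is applied internally.
loss_sparse_categorical_crossentropy(y_true, logits, from_logits = TRUE)

# Equivalent: normalize to probabilities first, then pass those.
probs <- op_softmax(op_array(logits))
loss_sparse_categorical_crossentropy(y_true, probs)
```

Passing logits is generally preferred for numerical stability.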
Usage with the compile() API:
model %>% compile(
  optimizer = 'sgd',
  loss = loss_sparse_categorical_crossentropy()
)
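For segmentation-style targets, ignore_class masks out a "void" label so it contributes nothing to the loss. A minimal sketch, with -1 chosen here as the void ID and made-up probabilities for three examples:

```r
y_true <- c(0, 2, -1)  # -1 marks positions to ignore
y_pred <- rbind(c(0.90, 0.05, 0.05),
                c(0.10, 0.20, 0.70),
                c(0.30, 0.30, 0.40))

# The third example is excluded from the loss computation.
loss_sparse_categorical_crossentropy(y_true, y_pred, ignore_class = -1)
```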
See Also
Other losses:
Loss()
loss_binary_crossentropy()
loss_binary_focal_crossentropy()
loss_categorical_crossentropy()
loss_categorical_focal_crossentropy()
loss_categorical_hinge()
loss_cosine_similarity()
loss_ctc()
loss_dice()
loss_hinge()
loss_huber()
loss_kl_divergence()
loss_log_cosh()
loss_mean_absolute_error()
loss_mean_absolute_percentage_error()
loss_mean_squared_error()
loss_mean_squared_logarithmic_error()
loss_poisson()
loss_squared_hinge()
loss_tversky()
metric_binary_crossentropy()
metric_binary_focal_crossentropy()
metric_categorical_crossentropy()
metric_categorical_focal_crossentropy()
metric_categorical_hinge()
metric_hinge()
metric_huber()
metric_kl_divergence()
metric_log_cosh()
metric_mean_absolute_error()
metric_mean_absolute_percentage_error()
metric_mean_squared_error()
metric_mean_squared_logarithmic_error()
metric_poisson()
metric_sparse_categorical_crossentropy()
metric_squared_hinge()