callback_early_stopping {keras3} | R Documentation
Stop training when a monitored metric has stopped improving.
Description
Assuming the goal of training is to minimize the loss, the metric to be
monitored would be 'loss' and the mode would be 'min'. A model$fit()
training loop checks at the end of every epoch whether the loss is no
longer decreasing, taking min_delta and patience into account if
applicable. Once the loss is found to be no longer decreasing,
model$stop_training is set to TRUE and training terminates.
The quantity to be monitored needs to be available in the logs list.
To make it so, pass the loss or metrics at model$compile().
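For instance, a minimal sketch (the data, model, and hyperparameter values below are illustrative assumptions, not part of this page): monitoring the default "val_loss" requires both a compiled loss and validation data so that the quantity appears in the logs.

library(keras3)

# Toy data: 64 samples with 20 features (illustrative only).
x <- array(rnorm(64 * 20), dim = c(64, 20))
y <- array(rnorm(64), dim = c(64, 1))

model <- keras_model_sequential() %>% layer_dense(units = 1)
model %>% compile(optimizer = 'adam', loss = 'mse')  # makes 'loss' / 'val_loss' available in logs

early_stop <- callback_early_stopping(
  monitor = 'val_loss',  # quantity read from the logs at the end of each epoch
  mode = 'min',          # stop once it is no longer decreasing
  min_delta = 1e-4,      # smaller changes count as no improvement
  patience = 5           # allow 5 non-improving epochs before stopping
)

history <- model %>% fit(
  x, y,
  validation_split = 0.25,      # supplies 'val_loss'
  epochs = 50,
  callbacks = list(early_stop),
  verbose = 0
)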
Usage
callback_early_stopping(
  monitor = "val_loss",
  min_delta = 0L,
  patience = 0L,
  verbose = 0L,
  mode = "auto",
  baseline = NULL,
  restore_best_weights = FALSE,
  start_from_epoch = 0L
)
Arguments
monitor: Quantity to be monitored. Defaults to "val_loss".

min_delta: Minimum change in the monitored quantity to qualify as an
improvement, i.e. an absolute change of less than min_delta will count
as no improvement. Defaults to 0.

patience: Number of epochs with no improvement after which training
will be stopped. Defaults to 0.

verbose: Verbosity mode, 0 or 1. Mode 0 is silent, and mode 1 displays
messages when the callback takes an action. Defaults to 0.

mode: One of "auto", "min", or "max". In "min" mode, training stops
when the monitored quantity has stopped decreasing; in "max" mode it
stops when the monitored quantity has stopped increasing; in "auto"
mode, the direction is automatically inferred from the name of the
monitored quantity. Defaults to "auto".

baseline: Baseline value for the monitored quantity. If not NULL,
training stops if the model doesn't show improvement over the
baseline. Defaults to NULL.

restore_best_weights: Whether to restore model weights from the epoch
with the best value of the monitored quantity. If FALSE, the model
weights obtained at the last step of training are used. Defaults to
FALSE.

start_from_epoch: Number of epochs to wait before starting to monitor
improvement. This allows for a warm-up period in which no improvement
is expected and thus training will not be stopped. Defaults to 0.
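As a hedged sketch of how the optional arguments combine (the metric name and threshold values here are assumptions for illustration, not defaults):

early_stop <- callback_early_stopping(
  monitor = 'val_accuracy',     # assumes 'accuracy' was passed to compile()
  mode = 'max',                 # accuracy should increase
  baseline = 0.5,               # stop if the model never improves on this value
  patience = 3,                 # tolerate 3 non-improving epochs
  start_from_epoch = 10,        # ignore a 10-epoch warm-up period
  restore_best_weights = TRUE,  # roll back to the best epoch's weights when stopping
  verbose = 1                   # print a message when the callback stops training
)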
Value
A Callback instance that can be passed to fit.keras.src.models.model.Model().
Examples
callback <- callback_early_stopping(monitor = 'loss', patience = 3)
# This callback will stop the training when there is no improvement in
# the loss for three consecutive epochs.
model <- keras_model_sequential() %>% layer_dense(10)
model %>% compile(optimizer = optimizer_sgd(), loss = 'mse')
history <- model %>% fit(x = op_ones(c(5, 20)), y = op_zeros(5),
                         epochs = 10, batch_size = 1,
                         callbacks = list(callback),
                         verbose = 0)
nrow(as.data.frame(history))  # Only 4 epochs are run.
## [1] 10
See Also
Other callbacks:
Callback()
callback_backup_and_restore()
callback_csv_logger()
callback_lambda()
callback_learning_rate_scheduler()
callback_model_checkpoint()
callback_reduce_lr_on_plateau()
callback_remote_monitor()
callback_swap_ema_weights()
callback_tensorboard()
callback_terminate_on_nan()