callback_reduce_lr_on_plateau {keras3}    R Documentation
Reduce learning rate when a metric has stopped improving.
Description
Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This callback monitors a quantity and, if no improvement is seen for a 'patience' number of epochs, reduces the learning rate.
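As a rough illustration (a simplified sketch, not keras3's internal implementation), the reduction rule amounts to:

# Sketch only: after `patience` epochs without improvement, scale the
# learning rate by `factor`, never going below `min_lr`.
next_lr <- function(current_lr, epochs_since_improvement,
                    patience = 10, factor = 0.1, min_lr = 0) {
  if (epochs_since_improvement >= patience) {
    max(current_lr * factor, min_lr)
  } else {
    current_lr
  }
}

next_lr(0.01, epochs_since_improvement = 10)  # 0.001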
Usage
callback_reduce_lr_on_plateau(
monitor = "val_loss",
factor = 0.1,
patience = 10L,
verbose = 0L,
mode = "auto",
min_delta = 1e-04,
cooldown = 0L,
min_lr = 0,
...
)
Arguments
monitor
String. Quantity to be monitored.
factor
Float. Factor by which the learning rate will be reduced: new_lr = lr * factor.
patience
Integer. Number of epochs with no improvement after which the learning rate will be reduced.
verbose
Integer. 0: quiet, 1: update messages.
mode
String. One of "auto", "min", or "max". In "min" mode the learning rate is reduced when the monitored quantity has stopped decreasing; in "max" mode, when it has stopped increasing; in "auto" mode the direction is inferred automatically from the name of the monitored quantity (see the sketch after this argument list).
min_delta
Float. Threshold for measuring the new optimum, to only focus on significant changes.
cooldown
Integer. Number of epochs to wait before resuming normal operation after the learning rate has been reduced.
min_lr
Float. Lower bound on the learning rate.
...
For forward/backward compatibility.
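To illustrate how mode and min_delta interact, here is a sketch of the improvement test (an assumption about the comparison logic, not keras3's internal code):

# An epoch counts as an improvement only if the monitored value moved in the
# right direction by more than `min_delta`. Illustrative only.
is_improvement <- function(current, best, mode = "min", min_delta = 1e-4) {
  if (mode == "min") current < (best - min_delta) else current > (best + min_delta)
}

is_improvement(0.2500, best = 0.2501)  # FALSE: change smaller than min_delta
is_improvement(0.2400, best = 0.2501)  # TRUE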
Value
A Callback instance that can be passed to fit.keras.src.models.model.Model().
Examples
reduce_lr <- callback_reduce_lr_on_plateau(
  monitor = 'val_loss',
  factor = 0.2,
  patience = 5,
  min_lr = 0.001
)
model %>% fit(x_train, y_train, callbacks = list(reduce_lr))
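The example above assumes model, x_train, and y_train already exist. A more self-contained sketch (the toy data, layer sizes, and compile settings here are illustrative assumptions, not part of this help page) might look like:

library(keras3)

# Toy regression data, purely for illustration.
x_train <- matrix(rnorm(1000 * 20), ncol = 20)
y_train <- matrix(rnorm(1000), ncol = 1)

model <- keras_model_sequential(input_shape = 20) %>%
  layer_dense(units = 32, activation = "relu") %>%
  layer_dense(units = 1)

model %>% compile(
  optimizer = optimizer_adam(learning_rate = 0.01),
  loss = "mse"
)

reduce_lr <- callback_reduce_lr_on_plateau(
  monitor = "val_loss",
  factor = 0.2,
  patience = 5,
  min_lr = 0.001
)

model %>% fit(
  x_train, y_train,
  epochs = 20,
  validation_split = 0.2,
  callbacks = list(reduce_lr)
)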
See Also
Other callbacks:
Callback()
callback_backup_and_restore()
callback_csv_logger()
callback_early_stopping()
callback_lambda()
callback_learning_rate_scheduler()
callback_model_checkpoint()
callback_remote_monitor()
callback_swap_ema_weights()
callback_tensorboard()
callback_terminate_on_nan()