callback_model_checkpoint {keras3} | R Documentation
Callback to save the Keras model or model weights at some frequency.
Description
callback_model_checkpoint()
is used in conjunction with training using
model |> fit()
to save a model or its weights (in a checkpoint file) at some
interval, so the model or weights can be loaded later to resume
training from the saved state.
A few options this callback provides include:
Whether to only keep the model that has achieved the "best performance" so far, or whether to save the model at the end of every epoch regardless of performance.
Definition of "best"; which quantity to monitor and whether it should be maximized or minimized.
The frequency it should save at. Currently, the callback supports saving at the end of every epoch, or after a fixed number of training batches.
Whether only weights are saved, or the whole model is saved.
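For instance, the options above can be combined to keep only the best full-model checkpoint per epoch, named by epoch and validation loss. This is a minimal sketch; the "checkpoints/" directory is a placeholder path:

```r
library(keras3)

# Save a full-model checkpoint at the end of each epoch, keeping only
# models that improve on the lowest validation loss seen so far.
# {epoch:02d} and {val_loss:.2f} are filled in when the file is written.
checkpoint_cb <- callback_model_checkpoint(
  filepath = "checkpoints/model.{epoch:02d}-{val_loss:.2f}.keras",
  monitor = "val_loss",
  mode = "min",
  save_best_only = TRUE,
  save_freq = "epoch"
)
```

Because the filepath contains formatting options, each improving epoch is written to its own file rather than overwriting a single checkpoint.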
Usage
callback_model_checkpoint(
filepath,
monitor = "val_loss",
verbose = 0L,
save_best_only = FALSE,
save_weights_only = FALSE,
mode = "auto",
save_freq = "epoch",
initial_value_threshold = NULL
)
Arguments
filepath

string, path to save the model file. filepath can contain named formatting options, which will be filled with the value of epoch and keys in logs (passed in on_epoch_end()). The filepath name needs to end with ".weights.h5" when save_weights_only = TRUE, or with ".keras" when checkpointing the whole model (the default). For example: filepath = "model.{epoch:02d}-{val_loss:.2f}.keras" or filepath = "path/to/my/model_{epoch}.keras".

monitor

The metric name to monitor. Typically the metrics are set by the model |> compile() method. Note:
Prefix the name with "val_" to monitor validation metrics.
Use "loss" or "val_loss" to monitor the model's total loss.
If you specify metrics as strings, like "accuracy", pass the same string (with or without the "val_" prefix).
If you pass Metric objects (created by one of the metric_*() functions), monitor should be set to metric$name.
If you're not sure about the metric names, you can check the contents of the history$metrics list returned by history <- model |> fit().
Multi-output models set additional prefixes on the metric names.

verbose

Verbosity mode, 0 or 1. Mode 0 is silent, and mode 1 displays messages when the callback takes an action.

save_best_only

if save_best_only = TRUE, it only saves when the model is considered the "best", and the latest best model according to the quantity monitored will not be overwritten. If filepath doesn't contain formatting options like {epoch}, then filepath will be overwritten by each new better model.

save_weights_only

if TRUE, then only the model's weights will be saved (model |> save_model_weights(filepath)); else the full model is saved (model |> save_model(filepath)).

mode

one of {"auto", "min", "max"}. If save_best_only = TRUE, the decision to overwrite the current save file is made based on either the maximization or the minimization of the monitored quantity. For val_accuracy, this should be "max"; for val_loss, this should be "min"; etc. In "auto" mode, the mode is set to "max" if the quantities monitored are "acc" or start with "fmeasure", and is set to "min" for the rest of the quantities.

save_freq

"epoch" or integer. When using "epoch", the callback saves the model after each epoch. When using an integer, the callback saves the model at the end of that many batches. If the model is compiled with steps_per_execution = N, the saving criteria will be checked every Nth batch. Note that if the saving isn't aligned to epochs, the monitored metric may potentially be less reliable (it could reflect as little as 1 batch, since the metrics get reset every epoch). Defaults to "epoch".

initial_value_threshold

Floating point initial "best" value of the metric to be monitored. Only applies if save_best_only = TRUE. Only overwrites the model weights already saved if the performance of the current model is better than this value.
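As a sketch of the less common arguments, the snippet below checkpoints weights on a batch schedule rather than per epoch; the filename and the 0.75 threshold are illustrative values, not defaults:

```r
library(keras3)

# Check the monitored metric every 100 training batches, and only start
# overwriting the checkpoint once val_accuracy exceeds 0.75.
# Because save_weights_only = TRUE, the path must end in ".weights.h5".
callback_model_checkpoint(
  filepath = "ckpt.weights.h5",
  save_weights_only = TRUE,
  monitor = "val_accuracy",
  mode = "max",
  save_best_only = TRUE,
  save_freq = 100,
  initial_value_threshold = 0.75
)
```

Note that with an integer save_freq the monitored metric is reset only at epoch boundaries, so mid-epoch readings may reflect very few batches.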
Value
A Callback instance that can be passed to fit().
Examples
model <- keras_model_sequential(input_shape = c(10)) |>
  layer_dense(1, activation = "sigmoid") |>
  compile(loss = "binary_crossentropy", optimizer = "adam",
          metrics = c("accuracy"))

EPOCHS <- 10
checkpoint_filepath <- tempfile("checkpoint-model-", fileext = ".keras")
model_checkpoint_callback <- callback_model_checkpoint(
  filepath = checkpoint_filepath,
  monitor = "val_accuracy",
  mode = "max",
  save_best_only = TRUE
)

# Model is saved at the end of every epoch, if it's the best seen so far.
model |> fit(
  x = random_uniform(c(2, 10)),
  y = op_ones(c(2, 1)),
  epochs = EPOCHS,
  validation_split = 0.5,
  verbose = 0,
  callbacks = list(model_checkpoint_callback)
)

# The model that is considered the best can be loaded as -
load_model(checkpoint_filepath)
## Model: "sequential"
## +---------------------------------+------------------------+---------------+
## | Layer (type)                    | Output Shape           | Param #       |
## +=================================+========================+===============+
## | dense (Dense)                   | (None, 1)              | 11            |
## +---------------------------------+------------------------+---------------+
##  Total params: 35 (144.00 B)
##  Trainable params: 11 (44.00 B)
##  Non-trainable params: 0 (0.00 B)
##  Optimizer params: 24 (100.00 B)
# Alternatively, one could checkpoint just the model weights as -
checkpoint_filepath <- tempfile("checkpoint-", fileext = ".weights.h5")
model_checkpoint_callback <- callback_model_checkpoint(
  filepath = checkpoint_filepath,
  save_weights_only = TRUE,
  monitor = "val_accuracy",
  mode = "max",
  save_best_only = TRUE
)

# Model weights are saved at the end of every epoch, if it's the best seen
# so far.
# same as above
model |> fit(
  x = random_uniform(c(2, 10)),
  y = op_ones(c(2, 1)),
  epochs = EPOCHS,
  validation_split = 0.5,
  verbose = 0,
  callbacks = list(model_checkpoint_callback)
)

# The model weights that are considered the best can be loaded with
model |> load_model_weights(checkpoint_filepath)
See Also
Other callbacks:
Callback()
callback_backup_and_restore()
callback_csv_logger()
callback_early_stopping()
callback_lambda()
callback_learning_rate_scheduler()
callback_reduce_lr_on_plateau()
callback_remote_monitor()
callback_swap_ema_weights()
callback_tensorboard()
callback_terminate_on_nan()