optim_adam {torch}    R Documentation

Implements the Adam algorithm.

Description

The Adam algorithm was proposed in Adam: A Method for Stochastic Optimization.
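
For reference, the standard update from that paper is shown below in LaTeX notation; the package's internal implementation may differ in minor details (for example, how weight_decay is applied):

\begin{align*}
m_t &= \beta_1 m_{t-1} + (1 - \beta_1)\, g_t \\
v_t &= \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2 \\
\hat{m}_t &= m_t / (1 - \beta_1^t), \qquad \hat{v}_t = v_t / (1 - \beta_2^t) \\
\theta_t &= \theta_{t-1} - \mathrm{lr} \cdot \hat{m}_t / (\sqrt{\hat{v}_t} + \epsilon)
\end{align*}

where g_t is the gradient at step t, betas = (beta_1, beta_2), and eps = epsilon.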

Usage

optim_adam(
  params,
  lr = 0.001,
  betas = c(0.9, 0.999),
  eps = 1e-08,
  weight_decay = 0,
  amsgrad = FALSE
)

Arguments

params

(iterable): iterable of parameters to optimize, or lists defining parameter groups

lr

(float, optional): learning rate (default: 1e-3)

betas

(Tuple[float, float], optional): coefficients used for computing running averages of gradient and its square (default: (0.9, 0.999))

eps

(float, optional): term added to the denominator to improve numerical stability (default: 1e-8)

weight_decay

(float, optional): weight decay (L2 penalty) (default: 0)

amsgrad

(boolean, optional): whether to use the AMSGrad variant of this algorithm from the paper On the Convergence of Adam and Beyond (default: FALSE)
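
The following R sketch is illustrative only: it works on plain numeric vectors rather than the package's internal tensor-based state, and the helper name adam_step is made up. It shows how lr, betas, eps, and weight_decay enter a single update; amsgrad is noted in a comment.

adam_step <- function(param, g, m, v, t,
                      lr = 0.001, betas = c(0.9, 0.999),
                      eps = 1e-8, weight_decay = 0) {
  if (weight_decay != 0) g <- g + weight_decay * param  # L2 penalty
  m <- betas[1] * m + (1 - betas[1]) * g                # running average of the gradient
  v <- betas[2] * v + (1 - betas[2]) * g^2              # running average of its square
  m_hat <- m / (1 - betas[1]^t)                         # bias-corrected first moment
  v_hat <- v / (1 - betas[2]^t)                         # bias-corrected second moment
  # With amsgrad = TRUE, the element-wise maximum of all v_hat seen so far
  # would be used in the denominator instead of the current v_hat.
  list(param = param - lr * m_hat / (sqrt(v_hat) + eps), m = m, v = v)
}

# One illustrative step for a two-element parameter vector:
st <- adam_step(param = c(0.5, -0.2), g = c(0.1, 0.3),
                m = c(0, 0), v = c(0, 0), t = 1)
st$param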

Warning

If you need to move a model to the GPU via $cuda(), please do so before constructing optimizers for it. Parameters of a model after $cuda() will be different objects from those before the call. In general, you should make sure that the objects pointed to by model parameters subject to optimization remain the same over the whole lifecycle of optimizer creation and usage.
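
A minimal sketch of the correct ordering, assuming net is an nn_module instance and a GPU is available (net itself is a hypothetical name):

net$cuda()                                           # move the model to the GPU first
optimizer <- optim_adam(net$parameters, lr = 0.001)  # then construct the optimizer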

Examples

if (torch_is_installed()) {
## Not run: 
# `model`, `loss_fn`, `input`, and `target` are assumed to be defined elsewhere
optimizer <- optim_adam(model$parameters, lr = 0.1)
optimizer$zero_grad()
loss_fn(model(input), target)$backward()
optimizer$step()

## End(Not run)

}
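
A self-contained sketch that can be run as-is; the linear model, random data, and MSE loss below are illustrative choices, not part of this help page:

library(torch)

model <- nn_linear(10, 1)
x <- torch_randn(64, 10)
y <- torch_randn(64, 1)

optimizer <- optim_adam(model$parameters, lr = 0.01)

for (epoch in 1:5) {
  optimizer$zero_grad()                 # reset accumulated gradients
  loss <- nnf_mse_loss(model(x), y)     # forward pass and loss
  loss$backward()                       # backpropagate
  optimizer$step()                      # apply the Adam update
}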

[Package torch version 0.13.0 Index]