autograd_function {torch} | R Documentation |
Records operation history and defines formulas for differentiating ops.
Description
Every operation performed on tensors creates a new function object that
performs the computation and records that it happened. The history is
retained in the form of a DAG of functions, with edges denoting data
dependencies (input <- output). Then, when backward() is called, the graph is
processed in topological order by calling the backward() method of each
function object and passing the returned gradients on to the next functions.
Usage
autograd_function(forward, backward)
Arguments
forward
Performs the operation. It must accept a context ctx as the first
argument, followed by any number of arguments (tensors or other types).
The context can be used to store tensors that are then retrieved during
the backward pass.
backward
Defines a formula for differentiating the operation. It must accept a
context ctx as the first argument, followed by the gradients with respect
to the outputs of forward(). It should return a named list with the
gradients with respect to each input of forward().
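When forward() takes several tensor inputs, backward() returns one gradient per input, named after forward()'s arguments. A minimal sketch of that contract (the mul2 function and its argument names are illustrative, not part of the package):

```r
library(torch)

if (torch_is_installed()) {
  # Hypothetical two-input op: forward saves both inputs; backward returns
  # a named list with one gradient per input of forward (a and b).
  mul2 <- autograd_function(
    forward = function(ctx, a, b) {
      ctx$save_for_backward(a = a, b = b)
      a * b
    },
    backward = function(ctx, grad_output) {
      v <- ctx$saved_variables
      # d(a*b)/da = b and d(a*b)/db = a
      list(a = grad_output * v$b, b = grad_output * v$a)
    }
  )
}
```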
Examples
if (torch_is_installed()) {

exp2 <- autograd_function(
  forward = function(ctx, i) {
    result <- i$exp()
    ctx$save_for_backward(result = result)
    result
  },
  backward = function(ctx, grad_output) {
    # gradient of exp(i) w.r.t. i is exp(i), which was saved as `result`
    list(i = grad_output * ctx$saved_variables$result)
  }
)

}
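The function returned by autograd_function() is called like any other R function, and gradients flow through it during backward(). A quick gradient check of the exp2 example above (assuming torch is installed):

```r
library(torch)

if (torch_is_installed()) {
  exp2 <- autograd_function(
    forward = function(ctx, i) {
      result <- i$exp()
      ctx$save_for_backward(result = result)
      result
    },
    backward = function(ctx, grad_output) {
      list(i = grad_output * ctx$saved_variables$result)
    }
  )

  x <- torch_tensor(1, requires_grad = TRUE)
  y <- exp2(x)   # forward pass: exp(1)
  y$backward()   # backward pass uses the saved result
  # x$grad now holds exp(1), since d/dx exp(x) = exp(x)
}
```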