steep_descent {pracma} | R Documentation |
Steepest Descent Minimization
Description
Function minimization by steepest descent.
Usage
steep_descent(x0, f, g = NULL, info = FALSE,
maxiter = 100, tol = .Machine$double.eps^(1/2))
Arguments
x0 |
start value. |
f |
function to be minimized. |
g |
gradient function of f; computed numerically if not supplied. |
info |
logical; shall information be printed on every iteration? |
maxiter |
max. number of iterations. |
tol |
relative tolerance, to be used as stopping rule. |
Details
Steepest descent is a line search method that, at each iterate, searches along the negative gradient (the direction of steepest descent) for a point with a lower function value.
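The line-search idea can be sketched in a few lines of R. This is a toy illustration, not pracma's implementation: the forward-difference gradient and the step-halving rule are deliberate simplifications.

```r
## Toy steepest descent: move along the negative gradient, halving the
## step size until the function value decreases (crude backtracking).
sd_sketch <- function(f, x0, maxiter = 100, tol = 1e-8) {
  numgrad <- function(f, x, h = 1e-6) {     # forward-difference gradient
    sapply(seq_along(x), function(i) {
      e <- replace(numeric(length(x)), i, h)
      (f(x + e) - f(x)) / h
    })
  }
  x <- x0
  for (k in 1:maxiter) {
    g <- numgrad(f, x)
    if (sqrt(sum(g^2)) < tol) break         # gradient near zero: stop
    a <- 1
    while (f(x - a * g) >= f(x) && a > 1e-12) a <- a / 2
    x <- x - a * g
  }
  list(xmin = x, fmin = f(x), niter = k)
}

sd_sketch(function(x) sum(x^2), c(3, -4))
# converges to xmin close to c(0, 0)
```

For well-conditioned problems like the sphere function this converges quickly; on narrow curved valleys (see the Rosenbrock example below in the original sense of the Examples section) the pure gradient direction zigzags and progress stalls.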
Value
List with following components:
xmin |
minimum solution found. |
fmin |
value of the function at the minimum. |
niter |
number of iterations performed. |
Note
Uses some Matlab code as described in the book “Applied Numerical Analysis Using Matlab” by L. V. Fausett.
References
Nocedal, J., and S. J. Wright (2006). Numerical Optimization. Second Edition, Springer-Verlag, New York, pp. 22 ff.
Examples
## Rosenbrock function: The flat valley of the Rosenbrock function makes
## it infeasible for a steepest descent approach.
# rosenbrock <- function(x) {
# n <- length(x)
# x1 <- x[2:n]
# x2 <- x[1:(n-1)]
# sum(100*(x1-x2^2)^2 + (1-x2)^2)
# }
# steep_descent(c(0, 0), rosenbrock)
# Warning message:
# In steep_descent(c(0, 0), rosenbrock) :
# Maximum number of iterations reached -- not converged.
## Sphere function
sph <- function(x) sum(x^2)
steep_descent(rep(1, 10), sph)
# $xmin
# [1] 0 0 0 0 0 0 0 0 0 0
#
# $fmin
# [1] 0
#
# $niter
# [1] 2
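Supplying the gradient analytically via the `g` argument (documented in Usage above) avoids numerical differentiation; for the sphere function the gradient is simply 2*x:

```r
library(pracma)

sph  <- function(x) sum(x^2)
gsph <- function(x) 2 * x      # analytic gradient of sum(x^2)

steep_descent(rep(1, 10), sph, g = gsph)
# converges to the same solution as with the numerical gradient
```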
[Package pracma version 2.4.4 Index]