opm {optimx}    R Documentation
General-purpose optimization
Description
General-purpose optimization wrapper function that calls multiple other R optimization tools, including the methods of the existing optim() function. Because SANN does not return a meaningful convergence code (conv), opm() does not call the SANN method, but that method can still be invoked through optimr().
There is a pseudo-method "ALL" that runs all available methods; note that this name is upper-case. This function is a replacement for optimx() from the optimx package. opm() calls the optimr() function once for each solver in the method list.
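As a hedged sketch (assuming the optimx package is attached; the quadratic objective below is invented purely for illustration), a typical call supplies a starting vector, an objective function, and a vector of method names:

sq.f <- function(x) sum((x - c(1, 2))^2)  ## illustrative bowl-shaped objective
ans <- opm(c(0, 0), sq.f, method = c("Nelder-Mead", "BFGS"))
## method = "ALL" (upper-case) would instead try every available solver
print(ans)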
Usage
opm(par, fn, gr=NULL, hess=NULL, lower=-Inf, upper=Inf,
method=c("Nelder-Mead","BFGS"), hessian=FALSE,
control=list(),
...)
Arguments
par: A vector of initial values for the parameters for which optimal values are to be found. Names on the elements of this vector are preserved and used in the results data frame.
fn: A function to be minimized (or maximized), with a first argument the vector of parameters over which minimization is to take place. It should return a scalar result.
gr: A function to return (as a vector) the gradient for those methods that can use this information. If 'gr' is a character string, it is taken to be the name of an available gradient-approximation function; examples are "grfwd", "grback", "grcentral" and "grnd", with the last referring to the default method of package numDeriv. (A sketch appears after this list.)
hess: A function to return (as a symmetric matrix) the Hessian of the objective function for those methods that can use this information.
lower, upper: Bounds on the variables for methods such as "L-BFGS-B" that can handle box (or bounds) constraints.
method: A vector of the methods to be used, each as a character string. Note that this is an important change from optim(), which allows just one method to be specified. See 'Details'. If method = "ALL", all available and appropriate methods are tried.
hessian: A logical control that, if TRUE, forces the computation of an approximation to the Hessian at the final set of parameters. If FALSE (default), the Hessian is calculated if needed to provide the KKT optimality tests (see the kkt element of the control list under 'Details').
control: A list of control parameters. See 'Details'. A spreadsheet of default control settings is included with the package sources.
...: Further arguments to be passed to fn and gr.
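As a sketch of the gr and bounds arguments (sq.f is the illustrative objective from the Description; the gradient-approximation names are those listed above):

sq.f <- function(x) sum((x - c(1, 2))^2)
## Name a gradient approximation instead of supplying a gradient function:
ansg <- opm(c(0, 0), sq.f, gr = "grcentral", method = "BFGS")
## Box constraints for a bounds-capable method:
ansb <- opm(c(0, 0), sq.f, lower = c(-1, -1), upper = c(0.5, 0.5),
            method = "L-BFGS-B")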
Details
Note that arguments after ...
must be matched exactly.
For details of how opm() calls the methods, see the documentation and code for optimr(). The documentation and code for individual methods may also be useful. Note that some simplification of the calls may have been necessary, for example, to provide reasonable default values for method controls that are consistent across several methods, though this cannot always be guaranteed. The documentation for optimr and the source code of the quite simple routine ctrldefault.R may be particularly helpful.
Some of the commonly useful elements of the control list are given below; a combined usage sketch follows the list.
trace: Non-negative integer. If positive, tracing information on the progress of the optimization is produced. Higher values may produce more tracing information: for method "L-BFGS-B" there are six levels of tracing. trace = 0 gives no output. (To understand exactly what these do, see the source code: higher levels give more detail.)
maxfeval: For methods that can use this control, a limit on the number of function evaluations. This control is simply passed through. It is not checked by opm.
maxit: For methods that can use this control, a limit on the number of gradient evaluations or major iterations.
fnscale: An overall scaling to be applied to the value of fn and gr during optimization. If negative, turns the problem into a maximization problem. Optimization is performed on fn(par)/fnscale for methods from the set in optim(). Note potential conflicts with the control maximize.
parscale: A vector of scaling values for the parameters. Optimization is performed on par/parscale and these should be comparable in the sense that a unit change in any element produces about a unit change in the scaled value. (For methods from optim().)
save.failures: = TRUE (default) if we wish to keep "answers" from runs where the method does not return convergence == 0. FALSE otherwise.
maximize: = TRUE if we want to maximize rather than minimize a function (default FALSE). Methods nlm, nlminb and ucminf cannot maximize a function, so the user must explicitly minimize and carry out the adjustment externally. However, there is a check to avoid usage of these codes when maximize is TRUE. See fnscale above for the mechanism used in optim(), which we deprecate.
all.methods: = TRUE if we want to use all available (and suitable) methods. This is equivalent to setting method = "ALL".
kkt: = FALSE if we do NOT want to test the Kuhn, Karush, Tucker optimality conditions. The default is generally TRUE. However, because the Hessian computation may be very slow, we set kkt to FALSE if there are more than 50 parameters when the gradient function gr is not provided, and more than 500 parameters when such a function is specified. We return logical values KKT1 and KKT2, TRUE if the first- and second-order conditions are satisfied approximately. Note, however, that the tests are sensitive to scaling, and users may need to perform additional verification. If hessian is TRUE, this overrides control kkt.
kkttol: = value to use to check for small gradient and negative Hessian eigenvalues. Default = .Machine$double.eps^(1/3).
kkt2tol: = tolerance for the eigenvalue ratio in the KKT test of a positive definite Hessian. Default is the same as for kkttol.
dowarn: = FALSE if we want to suppress warnings generated by opm() or optimr(). Default is TRUE.
badval: = the value to set for the function value when try(fn()) fails. The value is then a signal of failure when execution continues with other methods. It may also, in non-standard usage, be helpful in heuristic search methods like "Nelder-Mead" to avoid parameter regions that are unwanted or inadmissible. It is inappropriate for gradient methods. Default is (0.5)*.Machine$double.xmax.
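A combined sketch of several of these controls in one call (settings chosen only for illustration; sq.f as in the examples above):

sq.f <- function(x) sum((x - c(1, 2))^2)
ansc <- opm(c(0, 0), sq.f, method = c("Nelder-Mead", "BFGS"),
            control = list(trace = 0,       ## no progress output
                           maxit = 200,     ## iteration limit where supported
                           kkt = TRUE,      ## run the KKT optimality tests
                           dowarn = FALSE)) ## suppress opm/optimr warnings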
There may be control elements that apply only to some of the methods. Using these may or may not "work" with opm(), and errors may occur with methods for which the controls have no meaning. However, it should be possible to call the underlying optimr() function with these method-specific controls.
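For example (a hedged sketch: REPORT is an optim()-specific control for "L-BFGS-B" tracing and may or may not be honored after optimr()'s control translation):

sq.f <- function(x) sum((x - c(1, 2))^2)
## optimr() runs a single method, so a method-specific control can be passed
## without confusing solvers that do not understand it.
ans1 <- optimr(c(0, 0), sq.f, method = "L-BFGS-B",
               control = list(trace = 1, REPORT = 5))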
Any names given to par will be copied to the vectors passed to fn and gr. Note that no other attributes of par are copied over. (We have not verified this as of 2009-07-29.)
Value
If there are npar parameters, then the result is a data frame having one row for each method for which results are reported, using the method as the row name, with columns
par_1, ..., par_npar, value, fevals, gevals, niter, convergence, kkt1, kkt2, xtimes
where
- par_1 .. par_npar: The best set of parameters found.
- value: The value of fn corresponding to par.
- fevals: The number of calls to fn. NOT reported for method lbfgs.
- gevals: The number of calls to gr. This excludes those calls needed to compute the Hessian, if requested, and any calls to fn to compute a finite-difference approximation to the gradient. NOT reported for method lbfgs.
- convergence: An integer code. 0 indicates successful convergence. Various methods may or may not return sufficient information to allow all the codes to be specified. An incomplete list of codes includes:
  - 1: the iteration limit maxit had been reached.
  - 10: degeneracy of the Nelder-Mead simplex.
  - 20: the initial set of parameters is inadmissible, that is, the function cannot be computed or returns an infinite, NULL, or NA value.
  - 21: an intermediate set of parameters is inadmissible.
  - 51: a warning from the "L-BFGS-B" method; see component message for further details.
  - 52: an error from the "L-BFGS-B" method; see component message for further details.
  - 9998: the method was called with a NULL 'gr' function, and the method requires that such a function be supplied.
  - 9999: the method has failed.
- kkt1: A logical value, TRUE if the reported solution has a "small" gradient.
- kkt2: A logical value, TRUE if the reported solution appears to have a positive-definite Hessian.
- xtimes: The reported execution time of the calculations for the particular method.
The attribute "details" to the returned answer object contains information,
if computed, on the gradient (ngatend
) and Hessian matrix (nhatend
)
at the supposed optimum, along with the eigenvalues of the Hessian (hev
),
as well as the message
, if any, returned by the computation for each method
,
which is included for each row of the details
.
If the returned object from optimx() is ans
, this is accessed
via the construct
attr(ans, "details")
This object is a matrix based on a list so that if ans is the output of optimx then attr(ans, "details")[1, ] gives the first row and attr(ans,"details")["Nelder-Mead", ] gives the Nelder-Mead row. There is one row for each method that has been successful or that has been forcibly saved by save.failures=TRUE.
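A sketch of unpacking these results (ans8 as computed in the Examples section below; the column and element names are those documented above):

det <- attr(ans8, "details")
det["Nelder-Mead", ]            ## full diagnostic row for one method
det[["Nelder-Mead", "hev"]]     ## Hessian eigenvalues for that method
ans8[ans8$convergence == 0, ]   ## keep only runs reporting convergence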
There are also attributes:
- maximize: to indicate we have been maximizing the objective.
- npar: to provide the number of parameters, thereby facilitating easy extraction of the parameters from the results data frame.
- follow.on: to indicate that the results have been computed sequentially, using the order provided by the user, with the best parameters from one method used to start the next. There is an example (ans9) in the script ox.R in the demo directory of the package; a short sketch follows this list.
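A hedged sketch of sequential ("follow-on") runs, assuming follow.on is accepted as a control element as in the ans9 demo example mentioned above:

sq.f <- function(x) sum((x - c(1, 2))^2)
## Run the methods in the given order, each starting from the previous best.
ansf <- opm(c(0, 0), sq.f, method = c("Nelder-Mead", "BFGS"),
            control = list(follow.on = TRUE))
attr(ansf, "follow.on")  ## records that the methods were chained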
Note
Most methods in optimx will work with one-dimensional par vectors, but such use is NOT recommended. Use optimize or other one-dimensional methods instead.
There are a series of demos available. Once the package is loaded (via require(optimx) or library(optimx)), you may see the available demos via
demo(package = "optimx")
The demo 'brown_test' may be run with the command
demo(brown_test, package = "optimx")
The package source contains several functions that are not exported in the NAMESPACE. These are:
- optimx.setup(), which establishes the controls for a given run;
- optimx.check(), which performs bounds and gradient checks on the supplied parameters and functions;
- optimx.run(), which actually performs the optimization and post-solution computations;
- scalechk(), which carries out a check on the relative scaling of the input parameters.
Knowledgeable users may take advantage of these functions if they are carrying out production calculations where the setup and checks could be run once; a brief sketch follows.
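Because these helpers are unexported, they are only reachable with R's ::: operator; the sketch below merely checks for their presence (assuming the names above still match the installed package source):

if (requireNamespace("optimx", quietly = TRUE)) {
  ## TRUE if the unexported scale-checking helper exists in the namespace
  exists("scalechk", envir = asNamespace("optimx"))
}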
Source
See the manual pages for optim() and the packages suggested in the DESCRIPTION file.
References
See the manual pages for optim() and the packages suggested in the DESCRIPTION file.
Nash JC and Varadhan R (2011). Unifying Optimization Algorithms to Aid Software System Users: optimx for R. Journal of Statistical Software, 43(9), 1-14. URL http://www.jstatsoft.org/v43/i09/.
Nash JC (2014). On Best Practice Optimization Methods in R. Journal of Statistical Software, 60(2), 1-14. URL http://www.jstatsoft.org/v60/i02/.
See Also
spg, nlm, nlminb, bobyqa, ucminf, nmkb, hjkb.
optimize for one-dimensional minimization; constrOptim or spg for linearly constrained optimization.
Examples
require(graphics)
cat("Note possible demo(ox) for extended examples\n")
## Show multiple outputs of optimx using all.methods
# genrose function code
genrose.f <- function(x, gs = NULL) { # objective function
  ## One generalization of the Rosenbrock banana valley function (n parameters)
  n <- length(x)
  if (is.null(gs)) { gs <- 100.0 }
  fval <- 1.0 + sum(gs*(x[1:(n-1)]^2 - x[2:n])^2 + (x[2:n] - 1)^2)
  return(fval)
}
genrose.g <- function(x, gs = NULL) {
  # vectorized gradient for genrose.f
  # Ravi Varadhan 2009-04-03
  n <- length(x)
  if (is.null(gs)) { gs <- 100.0 }
  gg <- as.vector(rep(0, n))
  tn <- 2:n
  tn1 <- tn - 1
  z1 <- x[tn] - x[tn1]^2
  z2 <- 1 - x[tn]
  gg[tn] <- 2 * (gs * z1 - z2)
  gg[tn1] <- gg[tn1] - 4 * gs * x[tn1] * z1
  return(gg)
}
genrose.h <- function(x, gs = NULL) { ## compute Hessian
  if (is.null(gs)) { gs <- 100.0 }
  n <- length(x)
  hh <- matrix(rep(0, n*n), n, n)
  for (i in 2:n) {
    z1 <- x[i] - x[i-1]*x[i-1]
    z2 <- 1.0 - x[i]
    hh[i, i] <- hh[i, i] + 2.0*(gs + 1.0)
    hh[i-1, i-1] <- hh[i-1, i-1] - 4.0*gs*z1 - 4.0*gs*x[i-1]*(-2.0*x[i-1])
    hh[i, i-1] <- hh[i, i-1] - 4.0*gs*x[i-1]
    hh[i-1, i] <- hh[i-1, i] - 4.0*gs*x[i-1]
  }
  return(hh)
}
startx <- 4*(1:10)/3
ans8 <- opm(startx, fn = genrose.f, gr = genrose.g, hess = genrose.h,
            method = "ALL", control = list(save.failures = TRUE, trace = 0),
            gs = 10)
# Set trace=1 for output of individual solvers
ans8
ans8[, "gevals"]
ans8["spg", ]
summary(ans8, par.select = 1:3)
summary(ans8, order = value)[1, ] # show best value
head(summary(ans8, order = value)) # best few
## head(summary(ans8, order = "value")) # best few -- alternative syntax
## order by value. Within those values the same to 3 decimals order by fevals.
## summary(ans8, order = list(round(value, 3), fevals), par.select = FALSE)
summary(ans8, order = "list(round(value, 3), fevals)", par.select = FALSE)
## summary(ans8, order = rownames, par.select = FALSE) # order by method name
summary(ans8, order = "rownames", par.select = FALSE) # same
summary(ans8, order = NULL, par.select = FALSE) # use input order
## summary(ans8, par.select = FALSE) # same