stogo {nloptr}    R Documentation

Stochastic Global Optimization

Description

StoGO is a global optimization algorithm for bound-constrained problems. It systematically divides the search space into smaller hyper-rectangles via a branch-and-bound technique and searches each of them with a gradient-based local-search algorithm (a BFGS variant), optionally adding some randomness.

Usage

stogo(
  x0,
  fn,
  gr = NULL,
  lower = NULL,
  upper = NULL,
  maxeval = 10000,
  xtol_rel = 1e-06,
  randomized = FALSE,
  nl.info = FALSE,
  ...
)

Arguments

x0

initial point for searching the optimum.

fn

objective function that is to be minimized.

gr

optional gradient of the objective function.

lower, upper

lower and upper bound constraints.

maxeval

maximum number of function evaluations.

xtol_rel

stopping criterion: relative tolerance on the change in the optimization parameters; the algorithm stops once this tolerance is reached.

randomized

logical; shall a randomizing variant be used?

nl.info

logical; shall the original NLopt info be shown?

...

additional arguments passed to the function.

Value

List with components:

par

the optimal solution found so far.

value

the function value corresponding to par.

iter

number of (outer) iterations, see maxeval.

convergence

integer code indicating successful completion (> 0) or a possible error number (< 0).

message

character string produced by NLopt and giving additional information.
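
The components above can be accessed from the returned list in the usual way. A minimal sketch, assuming the nloptr package is installed (the quadratic objective here is illustrative, not from the original page):

```r
library(nloptr)

## Simple separable quadratic with its minimum of 0 at (1, 1)
quad <- function(x) sum((x - 1) ^ 2)

res <- stogo(x0 = c(-1.2, 1), fn = quad,
             lower = c(-3, -3), upper = c(3, 3))

res$par          ## best solution found so far
res$value        ## objective value at res$par
res$convergence  ## > 0 indicates successful completion
res$message      ## additional information from NLopt
```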

Note

Only bounds-constrained problems are supported by this algorithm.

Author(s)

Hans W. Borchers

References

S. Zertchaninov and K. Madsen, “A C++ Programme for Global Optimization,” IMM-REP-1998-04, Department of Mathematical Modelling, Technical University of Denmark.

Examples


## Rosenbrock Banana objective function

rbf <- function(x) {(1 - x[1]) ^ 2 + 100 * (x[2] - x[1] ^ 2) ^ 2}

x0 <- c(-1.2, 1)
lb <- c(-3, -3)
ub <- c(3,  3)

## The function as written above has a minimum of 0 at (1, 1)

stogo(x0 = x0, fn = rbf, lower = lb, upper = ub)
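
Since StoGO's local search is gradient-based, supplying an analytic gradient via `gr` avoids finite-difference approximations. A sketch continuing the example above; `rbf_gr` is the standard analytic gradient of the Rosenbrock function, not part of the original page:

```r
## Analytic gradient of the Rosenbrock Banana function
rbf_gr <- function(x) {
  c(-2 * (1 - x[1]) - 400 * x[1] * (x[2] - x[1] ^ 2),  ## d/dx1
    200 * (x[2] - x[1] ^ 2))                            ## d/dx2
}

## Deterministic StoGO with the analytic gradient
stogo(x0 = x0, fn = rbf, gr = rbf_gr, lower = lb, upper = ub)

## Randomized variant (STOGO_RAND in NLopt)
stogo(x0 = x0, fn = rbf, lower = lb, upper = ub, randomized = TRUE)
```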


[Package nloptr version 2.1.1 Index]