easyEGO {DiceOptim}                                        R Documentation
User-friendly wrapper of the functions fastEGO.nsteps and TREGO.nsteps. Generates initial DOEs and kriging models (objects of class km), and executes nsteps iterations of either EGO or TREGO.
Description

User-friendly wrapper of the functions fastEGO.nsteps and TREGO.nsteps. Generates initial DOEs and kriging models (objects of class km), and executes nsteps iterations of either EGO or TREGO.
Usage
easyEGO(
  fun,
  budget,
  lower,
  upper,
  X = NULL,
  y = NULL,
  control = list(trace = 1, seed = 42),
  n.cores = 1,
  ...
)
Arguments
fun |
scalar function to be minimized, |
budget |
total number of calls to the objective and constraint functions, |
lower |
vector of lower bounds for the variables to be optimized over, |
upper |
vector of upper bounds for the variables to be optimized over, |
X |
initial design of experiments. If not provided, X is taken as a maximin LHD with budget/3 points |
y |
initial set of objective observations |
control |
an optional list of control parameters. See "Details". |
n.cores |
number of cores for parallel computation |
... |
additional parameters to be given to |
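For illustration, a minimal sketch of both call styles (the default internally generated design vs. a user-supplied X and y); the test function, budget and design size below are arbitrary choices, not package defaults:

library(DiceOptim)
# default: easyEGO builds the initial maximin LHD itself
res <- easyEGO(fun = branin, budget = 20, lower = c(0, 0), upper = c(1, 1))

# alternative: supply an initial design and its observations explicitly
X0 <- matrix(runif(12), ncol = 2)   # 6 arbitrary points in [0, 1]^2
y0 <- apply(X0, 1, branin)
res <- easyEGO(fun = branin, budget = 20, lower = c(0, 0), upper = c(1, 1),
               X = X0, y = y0)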
Details

Does not require knowledge of kriging models (objects of class km).

The control argument is a list that can supply any of the following components (a usage sketch follows the list):

- trace: verbosity level, between -1 and 3
- seed: to fix the seed of the run
- cov.reestim: Boolean; if TRUE (default), the covariance parameters are re-estimated at each iteration
- model.trend: trend for the GP model
- lb, ub: lower and upper bounds for the GP covariance ranges
- nugget: optional nugget effect
- covtype: covariance kernel of the GP model (default "matern5_2")
- optim.method: optimization method for the GP hyperparameters (default "BFGS")
- multistart: number of restarts of BFGS
- gpmean.trick, gpmean.freq: Boolean and integer, respectively, for the gpmean trick
- scaling: Boolean, activates input scaling
- warping: Boolean, activates output warping
- TR: Boolean, activates TREGO instead of EGO
- trcontrol: list of parameters of the trust region, see TREGO.nsteps
- always.sample: Boolean; forces adding the new observation even if it leads to poor conditioning
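As an illustrative sketch (the specific values below are arbitrary choices, not package defaults), a control list for a plain EGO run and one activating TREGO:

# plain EGO: fix the seed and re-estimate covariance parameters at each iteration
ctrl_ego <- list(trace = 1, seed = 42, covtype = "matern5_2", cov.reestim = TRUE)

# TREGO: trust-region variant with a small nugget and input scaling
ctrl_trego <- list(TR = TRUE, nugget = 1e-6, scaling = TRUE, multistart = 2)

res <- easyEGO(fun = branin, budget = 20, lower = c(0, 0), upper = c(1, 1),
               control = ctrl_trego)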
Value

A list with components (see the inspection sketch after the list):

- par: the best feasible point
- values: the value of the objective at the point given in par
- history: a list containing all the points visited by the algorithm (X) and their corresponding objectives (y)
- model: the last GP model, of class km
- control: full list of control values, see "Details"
- res: the output of either fastEGO.nsteps or TREGO.nsteps
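For instance, a returned object can be inspected as follows (a sketch, assuming res holds the output of a completed run such as the one in the Examples):

res$par                      # best point found
res$values                   # objective value at that point
nrow(res$history$X)          # total number of evaluations performed
plot(cummin(res$history$y), type = "s",
     xlab = "evaluation", ylab = "best objective value so far")
class(res$model)             # "km"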
Author(s)
Victor Picheny
References
D.R. Jones, M. Schonlau, and W.J. Welch (1998), Efficient global optimization of expensive black-box functions, Journal of Global Optimization, 13, 455-492.
Examples
library(parallel)
library(DiceOptim)
set.seed(123)
#########################################################
### TREGO ON THE BRANIN FUNCTION: TOTAL BUDGET OF    ####
### 10 EVALUATIONS, 4-POINT INITIAL DESIGN           ####
#########################################################
# problem definition: objective, dimension, bounds, budget and initial design size
ylim=NULL
fun <- branin; d <- 2
budget <- 5*d
lower <- rep(0,d)
upper <- rep(1,d)
n.init <- 2*d
control <- list(n.init=2*d, TR=TRUE, nugget=1e-5, trcontrol=list(algo="TREGO"), multistart=1)
res1 <- easyEGO(fun=fun, budget=budget, lower=lower, upper=upper, control=control, n.cores=1)
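# diagnostic plots: objective history, trust-region size, and x0 coordinates over the iterations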
par(mfrow=c(3,1))
y <- res1$history$y
steps <- res1$res$all.steps
success <- res1$res$all.success
sigma <- res1$res$all.sigma
ymin <- cummin(y)
pch <- rep(1, length(sigma))
col <- rep("red", length(sigma))
pch[which(!steps)] <- 2
col[which(success)] <- "darkgreen"
pch2 <- c(rep(3, n.init), pch)
col2 <- c(rep("black", n.init), col)
plot(y, col=col2, ylim=ylim, pch=pch2, lwd=2, xlim=c(0, budget))
lines(ymin, col="darkgreen")
abline(v=n.init+.5)
plot(n.init + (1:length(sigma)), sigma, xlim=c(0, budget), ylim=c(0, max(sigma)),
     pch=pch, col=col, lwd=2, main="TR size")
lines(n.init + (1:length(sigma)), sigma)
abline(v=n.init+.5)
plot(NA, xlim=c(0, budget), ylim=c(0, 1), main="x0 (coordinates)")
for (i in 1:d) {
lines(n.init + (1:nrow(res1$res$all.x0)), res1$res$all.x0[,i])
points(n.init + (1:nrow(res1$res$all.x0)), res1$res$all.x0[,i], pch=pch, col=col, lwd=2)
}
abline(v=n.init+.5)
par(mfrow=c(1,1))
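# pairwise scatter of all evaluated points (initial design in black)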
pairs(res1$model@X, pch=pch2, col=col2)