DEopt {NMOF}    R Documentation
Optimisation with Differential Evolution
Description
The function implements the standard Differential Evolution algorithm.
Usage
DEopt(OF, algo = list(), ...)
Arguments
OF
    The objective function, to be minimised. See Details.
algo
    A list with the settings for the algorithm. See Details and Examples.
...
    Other pieces of data required to evaluate the objective function. See Details and Examples.
Details
The function implements the standard Differential Evolution (no jittering or other features). Differential Evolution (DE) is a population-based optimisation heuristic proposed by Storn and Price (1997). DE evolves several solutions (collected in the ‘population’) over a number of iterations (‘generations’). In a given generation, new solutions are created and evaluated; better solutions replace inferior ones in the population. Finally, the best solution of the population is returned. See the references for more details on the mechanisms.
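To make the mechanism concrete, here is a conceptual sketch in R of one generation of the classic 'DE/rand/1/bin' scheme. It is not the package's internal code, and the name de_generation is made up for illustration; P is a population matrix with one solution per column.

de_generation <- function(P, OF, F, CR) {
    nP <- ncol(P)                                        ## population size
    d  <- nrow(P)                                        ## number of decision variables
    for (i in seq_len(nP)) {
        r <- sample(seq_len(nP)[-i], 3L)                 ## three other population members
        v <- P[, r[1L]] + F * (P[, r[2L]] - P[, r[3L]])  ## differential mutation
        keep <- runif(d) > CR                            ## binomial crossover mask
        keep[sample.int(d, 1L)] <- FALSE                 ## keep at least one mutant component
        v[keep] <- P[keep, i]
        if (OF(v) <= OF(P[, i]))                         ## replace solution if not worse
            P[, i] <- v
    }
    P
}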
To allow for constraints, the evaluation works as follows: after a new solution is created, it is (i) repaired, (ii) evaluated through the objective function, (iii) penalised. Step (ii) is done by a call to OF; steps (i) and (iii) by calls to algo$repair and algo$pen. Steps (i) and (iii) are optional, so the respective functions default to NULL. A penalty is a positive number added to the ‘clean’ objective function value, so it can also be written directly into the OF. Writing a separate penalty function is often clearer; it can be more efficient if either only the objective function or only the penalty function can be vectorised. (Constraints can also be added without these mechanisms. Solutions that violate constraints can, for instance, be mapped to feasible solutions, but without actually changing them. See Maringer and Oyewumi, 2007, for an example.)
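As an illustration of the penalty mechanism, here is a minimal sketch (not taken from the package's examples): minimise the sum of squares subject to the components of the solution summing to at least one, with the constraint handled through algo$pen. The functions and the penalty weight w are made up for illustration; w is passed through '...', so both the objective function and the penalty function must accept it.

library("NMOF")

OF  <- function(x, w)                 ## 'w' arrives via '...' but is not used here
    sum(x * x)
pen <- function(x, w)                 ## positive number added to the OF value
    w * max(0, 1 - sum(x))

algo <- list(nP = 50L, nG = 200L,
             F = 0.5, CR = 0.9,
             min = rep(-2, 5), max = rep(2, 5),
             pen = pen,
             printBar = FALSE)

sol <- DEopt(OF = OF, algo = algo, w = 100)
sum(sol$xbest)                        ## should be close to 1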
Conceptually, DE consists of two loops: one loop across the generations and, in any given generation, one loop across the solutions. DEopt indeed uses, as the default, two loops. But it does not matter in what order the solutions are evaluated (or repaired or penalised), so the second loop can be vectorised. This is controlled by the variables algo$loopOF, algo$loopRepair and algo$loopPen, which all default to TRUE. Examples are given in the vignettes and in the book. The respective algo$loopFun must then be set to FALSE.
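The following sketch (not from the original examples) shows a repair function written to work on the whole population at once, so that algo$loopRepair can be switched off. It clamps every solution into a box; repair_all, lower and upper are made-up names, and lower/upper are passed through '...', so they must also appear in the objective function's argument list.

library("NMOF")

## works for a single solution and for the whole population
## (a matrix with one solution per column), because pmin/pmax
## recycle 'lower' and 'upper' column-wise
repair_all <- function(x, lower, upper)
    pmin(pmax(x, lower), upper)

OF <- function(x, lower, upper)       ## extra arguments arrive via '...'
    sum(x * x)

algo <- list(nP = 50L, nG = 100L,
             F = 0.5, CR = 0.9,
             min = rep(-5, 4), max = rep(5, 4),
             repair = repair_all,
             loopRepair = FALSE,       ## repair the population in one call
             printBar = FALSE)

sol <- DEopt(OF = OF, algo = algo,
             lower = rep(-5, 4), upper = rep(5, 4))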
All objects that are passed through ... will be passed to the objective function, to the repair function and to the penalty function.

The list algo collects the settings for the algorithm. Strictly necessary are only min and max (to initialise the population). Here are all possible arguments:
CR
    probability for crossover. Defaults to 0.9. Using default settings may not be a good idea.
F
    the step size. Typically a numeric vector of length one; default is 0.5. Using default settings may not be a good idea. (F can also be a vector with different values for each decision variable.)
nP
    population size. Defaults to 50. Using default settings may not be a good idea.
nG
    number of generations. Defaults to 300. Using default settings may not be a good idea.
min, max
    vectors of minimum and maximum parameter values. The vectors min and max are used to determine the dimension of the problem and to randomly initialise the population. By default, they are not constraints: a solution may well be outside these limits. Only if algo$minmaxConstr is TRUE will the algorithm repair solutions outside the min and max range.
minmaxConstr
    if TRUE, algo$min and algo$max are considered constraints. Default is FALSE.
pen
    a penalty function. Default is NULL (no penalty).
initP
    optional: the initial population. A matrix of size length(algo$min) times algo$nP, or a function that creates such a matrix. If a function, it should take no arguments.
repair
    a repair function. Default is NULL (no repairing).
loopOF
    logical. Should the OF be evaluated through a loop? Defaults to TRUE.
loopPen
    logical. Should the penalty function (if specified) be evaluated through a loop? Defaults to TRUE.
loopRepair
    logical. Should the repair function (if specified) be evaluated through a loop? Defaults to TRUE.
printDetail
    If TRUE (the default), information is printed. If an integer i greater than 1, information is printed at every i-th generation.
printBar
    If TRUE (the default), a txtProgressBar is printed.
storeF
    if TRUE (the default), the objective function values for every solution in every generation are stored and returned as matrix Fmat.
storeSolutions
    default is FALSE. If TRUE, the solutions (ie, decision variables) in every generation are stored and returned as a list P in list xlist (see Value section below). To check, for instance, the solutions at the end of the i-th generation, retrieve xlist[[c(1L, i)]]. This will be a matrix of size length(algo$min) times algo$nP. (To be consistent with other functions, xlist is itself a list. In the case of DEopt, it contains just one element.) A short sketch follows after this list.
classify
    Logical; default is FALSE. If TRUE, the result will have a class attribute DEopt attached. This feature is experimental: the supported methods may change without warning.
drop
    If FALSE (the default), the dimension is not dropped from a single solution when it is passed to a function. (That is, the function will receive a single-column matrix.)
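A brief sketch of storeF and storeSolutions, using the package's tfRosenbrock test function (see ?testFunctions); the sizes and the generation number 10 are arbitrary choices for illustration.

library("NMOF")

algo <- list(nP = 20L, nG = 50L,
             F = 0.5, CR = 0.9,
             min = rep(-5, 3), max = rep(5, 3),
             storeF = TRUE, storeSolutions = TRUE,
             printBar = FALSE)
sol <- DEopt(OF = tfRosenbrock, algo = algo)

dim(sol$Fmat)                     ## objective function values over the generations
P10 <- sol$xlist[[c(1L, 10L)]]    ## population after the 10th generation
dim(P10)                          ## length(algo$min) times algo$nP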
Value
A list:
xbest
    the solution (the best member of the population), which is a numeric vector
OFvalue
    objective function value of the best solution
popF
    a vector: the objective function values in the final population
Fmat
    if algo$storeF is TRUE, a matrix with the objective function values of all solutions over the generations; else NA
xlist
    if algo$storeSolutions is TRUE, a list containing the list P of populations described under storeSolutions above; else NA
initial.state
    the value of .Random.seed when the function was called
Author(s)
Enrico Schumann
References
Gilli, M., Maringer, D. and Schumann, E. (2019) Numerical Methods and Optimization in Finance. 2nd edition. Elsevier. doi:10.1016/C2017-0-01621-X
Maringer, D. and Oyewumi, O. (2007). Index Tracking with Constrained Portfolios. Intelligent Systems in Accounting, Finance and Management, 15(1), pp. 57–71.
Schumann, E. (2012) Remarks on 'A comparison of some heuristic optimization methods'. http://enricoschumann.net/R/remarks.htm
Schumann, E. (2023) Financial Optimisation with R (NMOF Manual). http://enricoschumann.net/NMOF.htm#NMOFmanual
Storn, R., and Price, K. (1997) Differential Evolution – a Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. Journal of Global Optimization, 11(4), pp. 341–359.
See Also
Examples
## Example 1: Trefethen's 100-digit challenge (problem 4)
## http://people.maths.ox.ac.uk/trefethen/hundred.html
OF <- tfTrefethen ### see ?testFunctions
algo <- list(nP = 50L,            ### population size
             nG = 300L,           ### number of generations
             F = 0.6,             ### step size
             CR = 0.9,            ### prob of crossover
             min = c(-10, -10),   ### range for initial population
             max = c( 10,  10))
sol <- DEopt(OF = OF, algo = algo)
## correct answer: -3.30686864747523
format(sol$OFvalue, digits = 12)
## check convergence of population
sd(sol$popF)
ts.plot(sol$Fmat, xlab = "generations", ylab = "OF")
## Example 2: vectorising the evaluation of the population
OF <- tfRosenbrock ### see ?testFunctions
size <- 3L ### define dimension
x <- rep.int(1, size) ### the known solution ...
OF(x) ### ... should give zero
algo <- list(printBar = FALSE,
             nP = 30L,
             nG = 300L,
             F = 0.6,
             CR = 0.9,
             min = rep(-100, size),
             max = rep( 100, size))
## run DEopt
(t1 <- system.time(sol <- DEopt(OF = OF, algo = algo)))
sol$xbest
sol$OFvalue ### should be zero (with luck)
## a vectorised Rosenbrock function: works only with a *matrix* x
OF2 <- function(x) {
    n <- dim(x)[1L]
    xi <- x[seq_len(n - 1L), ]
    colSums(100 * (x[2L:n, ] - xi * xi)^2 + (1 - xi)^2)
}
## random solutions (every column of 'x' is one solution)
x <- matrix(rnorm(size * algo$nP), size, algo$nP)
all.equal(OF2(x)[1:3],
          c(OF(x[, 1L]), OF(x[, 2L]), OF(x[, 3L])))
## run DEopt and compare computing time
algo$loopOF <- FALSE
(t2 <- system.time(sol2 <- DEopt(OF = OF2, algo = algo)))
sol2$xbest
sol2$OFvalue ### should be zero (with luck)
t1[[3L]]/t2[[3L]] ### speedup