SAopt {NMOF}                                                R Documentation
Optimisation with Simulated Annealing
Description
The function implements a Simulated-Annealing algorithm.
Usage
SAopt(OF, algo = list(), ...)
Arguments
OF
    The objective function, to be minimised. Its first argument
    needs to be a solution x; it is called as OF(x, ...).

algo
    A list of settings for the algorithm. See Details.

...
    other variables passed to OF and to algo$neighbour. See
    Details and Note.
Details
Simulated Annealing (SA) changes an initial solution
iteratively; the algorithm stops after a fixed number of
iterations. Conceptually, SA consists of a loop that runs
for a number of iterations. In each iteration, a current solution
xc is changed through a function algo$neighbour. If this
new (or neighbour) solution xn is not worse than xc, ie,
if OF(xn,...) <= OF(xc,...), then xn replaces
xc. If xn is worse, it still replaces xc, but
only with a certain probability. This probability is a function of the
degree of the deterioration (the greater, the less likely the new
solution is accepted) and the current iteration (the longer the
algorithm has already run, the less likely the new
solution is accepted).
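As an illustration, the classical rule of Kirkpatrick et al. (1983) accepts a worse solution with probability exp(-delta/T), where delta is the deterioration and T the current temperature. A minimal sketch of that rule (not SAopt's actual source code):

## a minimal sketch of the classical acceptance rule (Kirkpatrick
## et al., 1983); not SAopt's actual source code
OF.toy <- function(x) sum(x^2)       ## a toy objective function
xc <- rnorm(5)                       ## current solution
xn <- xc + rnorm(5, sd = 0.1)        ## neighbour solution
Temp <- 1                            ## current temperature
delta <- OF.toy(xn) - OF.toy(xc)     ## deterioration if positive
if (delta <= 0 || runif(1) < exp(-delta/Temp))
    xc <- xn  ## accept: always when not worse; otherwise with a
              ## probability that shrinks as delta grows and as
              ## Temp falls over the iterations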
The list algo contains the following items.
nS
    The number of steps per temperature. The default is 1000; but
    this setting depends very much on the problem.

nT
    The number of temperatures. Default is 10.

nI
    Total number of iterations, with default NULL. If specified, it
    will override nS with ceiling(nI/nT). Using this option makes it
    easier to compare and switch between the functions LSopt, TAopt
    and SAopt.

nD
    The number of random steps to calibrate the temperature.
    Defaults to 2000.

initT
    Initial temperature. Defaults to NULL, in which case it is
    automatically chosen so that initProb is achieved.

finalT
    Final temperature. Defaults to 0.

alpha
    The cooling constant. The current temperature is multiplied by
    this value. Default is 0.9.

mStep
    Step multiplier. The default is 1, which implies a constant
    number of steps per temperature. If greater than 1, the step
    number nS is increased to mStep * nS (and rounded).

x0
    The initial solution. If this is a function, it will be called
    once without arguments to compute an initial solution, ie,
    x0 <- algo$x0(). This can be useful when the routine is called
    in a loop of restarts, and each restart is to have its own
    starting value.

neighbour
    The neighbourhood function, called as neighbour(x, ...). Its
    first argument must be a solution x; it must return a changed
    solution.

printDetail
    If TRUE (the default), information is printed. If an integer i
    greater than one, information is printed at every ith iteration.

printBar
    If TRUE (default is FALSE), a txtProgressBar (from package
    utils) is printed. The progress bar is not shown if printDetail
    is an integer greater than 1.

storeF
    If TRUE (the default), the objective function values for every
    solution in every generation are stored and returned as matrix
    Fmat.

storeSolutions
    Default is FALSE. If TRUE, the solutions (ie, decision
    variables) in every generation are stored and returned in list
    xlist (see Value section below). To check, for instance, the
    current solution at the end of the ith generation, retrieve
    xlist[[c(2L, i)]].

classify
    Logical; default is FALSE. If TRUE, the result will have a
    class attribute SAopt attached.

OF.target
    Numeric; when specified, the algorithm will stop when an
    objective-function value as low as OF.target (or lower) is
    achieved. This is useful when an optimal objective-function
    value is known: the algorithm will then stop and not waste time
    searching for a better solution.
At the minimum, algo needs to contain an initial solution
x0 and a neighbour function.
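A minimal call could thus look like the following sketch; the toy objective, neighbourhood function and starting value are made up for illustration:

library("NMOF")

## a minimal sketch: algo needs only x0 and neighbour; all other
## settings take their defaults. The toy problem is made up.
OF.toy <- function(x) sum(x^2)                   ## minimise sum of squares
nb.toy <- function(x) x + rnorm(length(x), sd = 0.1)
algo <- list(x0 = rnorm(5),                      ## initial solution
             neighbour = nb.toy,                 ## neighbourhood function
             printDetail = FALSE, printBar = FALSE)
sol <- SAopt(OF.toy, algo = algo)
sol$OFvalue  ## should be close to 0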
The total number of iterations equals algo$nT times
algo$nS (plus possibly algo$nD).
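With the default settings, for instance, that is 10 * 1000 = 10000 iterations, plus up to 2000 random calibration steps from nD.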
Value
SAopt returns a list with five components:
xbest
    the solution

OFvalue
    objective function value of the solution, ie, OF(xbest, ...)

Fmat
    if algo$storeF is TRUE, a matrix with the objective-function
    values of all visited solutions; else NULL

xlist
    if algo$storeSolutions is TRUE, a list with the solutions of
    every generation; else NULL

initial.state
    the value of .Random.seed when the function was called
If algo$classify was set to TRUE, the resulting list
will have a class attribute SAopt.
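Since SA can move away from locally-optimal solutions, the stored objective-function values need not decrease monotonically. Assuming, as in the comparison plot in the Examples below, that the second column of Fmat holds the current solution's value, the best value found so far can be traced as follows (a sketch):

## sketch: trace the best objective-function value over the run;
## assumes 'sol' was returned by SAopt with storeF = TRUE and that
## Fmat's second column holds the current solution's value
plot(cummin(sol$Fmat[, 2L]), type = "l",
     xlab = "iteration", ylab = "best OF value so far")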
Note
If the ... argument is used, then all the objects passed
with ... need to go into the objective function and the
neighbourhood function. It is recommended to collect all information
in a list myList and then write OF and neighbour
so that they are called as OF(x, myList) and neighbour(x,
myList). Note that x need not be a vector but can be any data
structure (eg, a matrix or a list).
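A sketch of this convention follows; the contents of myList and the toy functions are made up for illustration:

## sketch of the recommended calling convention; the contents of
## 'myList' and the toy functions are made up
myList <- list(y = rnorm(50L))
OF <- function(x, myList)            ## called as OF(x, myList)
    sum((myList$y - x)^2)
neighbour <- function(x, myList)     ## called as neighbour(x, myList)
    x + 0.1 * rnorm(1L)
algo <- list(x0 = 0, neighbour = neighbour,
             printDetail = FALSE, printBar = FALSE)
sol <- SAopt(OF, algo, myList)       ## myList is passed on via '...'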
Using an initial and final temperature of zero means that
SA will be equivalent to a Local Search. The function
LSopt may be preferred then because of smaller
overhead.
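In code, that corresponds to settings such as the following (a sketch, continuing the algo list from the sketch above):

## sketch: with zero temperatures, no uphill move is ever accepted,
## and SAopt behaves like a Local Search
algo$initT  <- 0
algo$finalT <- 0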
Author(s)
Enrico Schumann
References
Gilli, M., Maringer, D. and Schumann, E. (2019) Numerical Methods and Optimization in Finance. 2nd edition. Elsevier. doi:10.1016/C2017-0-01621-X
Kirkpatrick, S., Gelatt, C.D. and Vecchi, M.P. (1983). Optimization by Simulated Annealing. Science. 220 (4598), 671–680.
Schumann, E. (2023) Financial Optimisation with R (NMOF Manual). http://enricoschumann.net/NMOF.htm#NMOFmanual
See Also

LSopt, TAopt
Examples

library("NMOF")
## Aim: given a matrix x with n rows and 2 columns,
## divide the rows of x into two subsets such that
## in one subset the columns are highly correlated,
## and in the other lowly (negatively) correlated.
## constraint: a single subset should have at least 40 rows
## create data with specified correlation
n <- 100L
rho <- 0.7
C <- matrix(rho, 2L, 2L); diag(C) <- 1
x <- matrix(rnorm(n * 2L), n, 2L) %*% chol(C)
## collect data
data <- list(x = x, n = n, nmin = 40L)
## a random initial solution
x0 <- runif(n) > 0.5
## a neighbourhood function
neighbour <- function(xc, data) {
    xn <- xc
    p <- sample.int(data$n, size = 1L)
    xn[p] <- abs(xn[p] - 1L)  ## flip element p (0/1 coding)
    ## reject infeasible solution
    c1 <- sum(xn) >= data$nmin
    c2 <- sum(xn) <= (data$n - data$nmin)
    if (c1 && c2) res <- xn else res <- xc
    as.logical(res)  ## convert back to logical
}
## check (should be 1 FALSE and n-1 TRUE)
x0 == neighbour(x0, data)
## objective function
OF <- function(xc, data)
    -abs(cor(data$x[xc, ])[1L, 2L] - cor(data$x[!xc, ])[1L, 2L])
## check
OF(x0, data)
## check: value at a neighbour of x0
OF(neighbour(x0, data), data)
## plot data
par(mfrow = c(1, 3), bty = "n")
plot(data$x,
     xlim = c(-3, 3), ylim = c(-3, 3),
     main = "all data", col = "darkgreen")
## *Local Search*
algo <- list(nS = 3000L,
             neighbour = neighbour,
             x0 = x0,
             printBar = FALSE)
sol1 <- LSopt(OF, algo = algo, data = data)
sol1$OFvalue
## *Simulated Annealing*
algo$nT <- 10L
algo$nS <- ceiling(algo$nS/algo$nT)
sol <- SAopt(OF, algo = algo, data = data)
sol$OFvalue
c1 <- cor(data$x[ sol$xbest, ])[1L, 2L]
c2 <- cor(data$x[!sol$xbest, ])[1L, 2L]
lines(data$x[ sol$xbest, ], type = "p", col = "blue")
plot(data$x[ sol$xbest, ], col = "blue",
     xlim = c(-3, 3), ylim = c(-3, 3),
     main = paste("subset 1, corr.", format(c1, digits = 3)))
plot(data$x[!sol$xbest, ], col = "darkgreen",
     xlim = c(-3, 3), ylim = c(-3, 3),
     main = paste("subset 2, corr.", format(c2, digits = 3)))
## compare LS/SA
par(mfrow = c(1, 1), bty = "n")
plot(sol1$Fmat[, 2L], type = "l", ylim = c(-1.5, 0.5),
     ylab = "OF", xlab = "Iterations")
lines(sol$Fmat[, 2L], type = "l", col = "blue")
legend(x = "topright", legend = c("LS", "SA"),
       lty = 1, lwd = 2, col = c("black", "blue"))