| PSopt {NMOF} | R Documentation |
Particle Swarm Optimisation
Description
The function implements Particle Swarm Optimisation.
Usage
PSopt(OF, algo = list(), ...)
Arguments
OF | the objective function to be minimised. See Details.
algo | a list with the settings for the algorithm. See Details and Examples.
... | pieces of data required to evaluate the objective function. See Details.
Details
The function implements Particle Swarm Optimisation (PS); see the references for details on the implementation. PS is a population-based optimisation heuristic: it develops several solutions (a ‘population’) over a number of iterations. PS is directly applicable to continuous problems, since the population is stored in real-valued vectors. In each iteration, a solution is updated by adding another vector, called the velocity. Think of a solution as a position in the search space, and of the velocity as the direction in which this solution moves. The velocity changes over the course of the optimisation: it is biased towards the best solution found by the particular solution and towards the best overall solution. The algorithm stops after a fixed number of iterations.
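As a rough, self-contained sketch (not the package's internal code; all names below are made up), the update of a single solution combines the old velocity, scaled by the inertia weight, with random pulls towards the solution's own best position and the overall best position:

## schematic velocity/position update for one solution (illustration only)
set.seed(42)
d <- 3                        ## dimension of the problem
x <- runif(d, -5, 5)          ## current solution (position)
v <- rnorm(d)                 ## its current velocity
xbest.i   <- runif(d, -5, 5)  ## best position visited by this solution
xbest.all <- runif(d, -5, 5)  ## best position found by the whole population
iner <- 0.9; c1 <- 1; c2 <- 1
v <- iner * v +
     c1 * runif(d) * (xbest.i   - x) +
     c2 * runif(d) * (xbest.all - x)
x <- x + v                    ## move the solution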
To allow for constraints, the evaluation works as follows: after a new
solution is created, it is (i) repaired, (ii) evaluated through the
objective function, (iii) penalised. Step (ii) is done by a call to
OF; steps (i) and (iii) by calls to algo$repair and
algo$pen. Steps (i) and (iii) are optional, so the respective
functions default to NULL. A penalty can also be directly
written in the OF, since it amounts to a positive number added
to the ‘clean’ objective function value. It can be
advantageous to write a separate penalty function if either only the
objective function or only the penalty function can be vectorised.
(Constraints can also be added without these mechanisms. Solutions
that violate constraints can, for instance, be mapped to feasible
solutions, but without actually changing them. See Maringer and
Oyewumi, 2007, for an example with Differential Evolution.)
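As an illustration of such a penalty (a hypothetical sketch, not part of the package; the function name and weight are made up), a function passed as algo$pen could discourage negative components like this:

## hypothetical penalty: discourage negative components
## 'solution' is a vector (looped evaluation, the default) or a
## matrix with one solution per column (vectorised evaluation)
pen <- function(solution, ...) {
    w <- 100                            ## penalty weight (arbitrary)
    if (is.matrix(solution))
        w * colSums(pmax(-solution, 0)) ## one penalty per solution
    else
        w * sum(pmax(-solution, 0))
}
pen(c(0.2, -0.1, 0.5))  ## 10: one negative component of size 0.1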
Conceptually, PS consists of two loops: one loop across the
iterations and, in any given generation, one loop across the
solutions. This is the default, controlled by the variables
algo$loopOF, algo$loopRepair, algo$loopPen and
algo$loopChangeV, which all default to TRUE. But it does not
matter in what order the solutions are evaluated (or repaired or
penalised), so the second loop can be vectorised. Examples are given
in the vignettes and in the book. The respective algo$loopFun
must then be set to FALSE.
The objective function, the repair function and the penalty
function will be called as fun(solution, ...).
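To illustrate the difference between looped and vectorised evaluation (a sketch with made-up names; 'data' is a list with fields X and y passed through '...'), the same least-squares objective could be written in both styles; the second variant requires algo$loopOF to be FALSE:

## looped evaluation (algo$loopOF = TRUE): 'param' is a single vector
OF_looped <- function(param, data)
    sum((data$y - data$X %*% param)^2)

## vectorised evaluation (algo$loopOF = FALSE): 'param' is a matrix
## with one solution per column; one value per column is returned
OF_vec <- function(param, data)
    colSums((as.vector(data$y) - data$X %*% param)^2)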
The list algo contains the following items:
nP: population size. Defaults to 100. Using default settings may not be a good idea.
nG: number of iterations. Defaults to 500. Using default settings may not be a good idea.
c1: the weight towards the individual's best solution. Typically between 0 and 2; defaults to 1. Using default settings may not be a good idea. In some cases, even negative values work well: the solution is then driven off its past best position. For ‘simple’ problems, setting c1 to zero may work well: the population then moves towards the best overall solution.
c2: the weight towards the population's best solution. Typically between 0 and 2; defaults to 1. Using default settings may not be a good idea. In some cases, even negative values work well: the solution is then driven off the population's past best position.
iner: the inertia weight (a scalar), which reduces velocity. Typically between 0 and 1. Default is 0.9.
initV: the standard deviation of the initial velocities. Defaults to 1.
maxV: the maximum (absolute) velocity. Setting limits to velocity is sometimes called velocity clamping. Velocity is the change in a given solution in a given iteration. A maximum velocity can be set to prevent unreasonable velocities (‘overshooting’): for instance, if a decision variable may lie between 0 and 1, then an absolute velocity much greater than 1 rarely makes sense.
min, max: vectors of minimum and maximum parameter values. The vectors min and max are used to determine the dimension of the problem and to randomly initialise the population. By default, they are not constraints: a solution may well be outside these limits. Only if algo$minmaxConstr is TRUE will the algorithm repair solutions outside the min and max range.
minmaxConstr: if TRUE, algo$min and algo$max are considered constraints. Default is FALSE.
pen: a penalty function. Default is NULL (no penalty).
repair: a repair function. Default is NULL (no repairing).
changeV: a function to change velocity. Default is NULL (no change). This function is called before the velocity is added to the current solutions; it can be used to impose restrictions like changing only a number of decision variables.
initP: optional: the initial population. A matrix of size length(algo$min) times algo$nP, or a function that creates such a matrix. If a function, it should take no arguments.
loopOF: logical. Should the OF be evaluated through a loop? Defaults to TRUE.
loopPen: logical. Should the penalty function (if specified) be evaluated through a loop? Defaults to TRUE.
loopRepair: logical. Should the repair function (if specified) be evaluated through a loop? Defaults to TRUE.
loopChangeV: logical. Should the changeV function (if specified) be evaluated through a loop? Defaults to TRUE.
printDetail: if TRUE (the default), information is printed. If an integer i greater than one, information is printed at every ith iteration.
printBar: if TRUE (the default), a txtProgressBar (from package utils) is printed.
storeF: if TRUE (the default), the objective function values for every solution in every generation are stored and returned as matrix Fmat.
storeSolutions: default is FALSE. If TRUE, the solutions (ie, decision variables) in every generation are stored as lists P and Pbest, both stored in the list xlist which the function returns. To check, for instance, the solutions at the end of the ith iteration, retrieve xlist[[c(1L, i)]]; the best solutions at the end of this iteration are in xlist[[c(2L, i)]]. P[[i]] and Pbest[[i]] will be matrices of size length(algo$min) times algo$nP.
classify: logical; default is FALSE. If TRUE, the result will have a class attribute PSopt attached. This feature is experimental: the supported methods may change without warning.
drop: default is TRUE. If FALSE, the dimension is not dropped from a single solution when it is passed to a function. (That is, the function will receive a single-column matrix.)
A minimal call that sets several of these fields is sketched below.
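The following minimal call is a sketch (the test problem and the settings are made up, not tuned recommendations):

library("NMOF")
## minimise the sum of squares of a 3-dimensional vector
OF <- function(x, ...)
    sum(x * x)
algo <- list(min = rep(-5, 3), max = rep(5, 3),  ## defines dimension and initial range
             nP = 50L, nG = 200L,
             printDetail = FALSE, printBar = FALSE)
sol <- PSopt(OF, algo)
round(sol$xbest, 3)  ## should be close to c(0, 0, 0)
sol$OFvalue          ## should be close to 0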
Value
Returns a list:
xbest | the best solution found (a vector)
OFvalue | objective function value of the best solution
popF | a vector: the objective function values in the final population
Fmat | if algo$storeF is TRUE, a matrix of size algo$nG times algo$nP containing the objective function values of all solutions over the generations; else NA
xlist | if algo$storeSolutions is TRUE, a list containing the lists P and Pbest (see algo$storeSolutions above); else NA
initial.state | the value of .Random.seed when the function was called
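A run can typically be reproduced by restoring the stored seed before calling the function again (a sketch using the OF and algo from the minimal call above; it assumes the random-number-generator settings are otherwise unchanged):

set.seed(298359)  ## any seed; ensures .Random.seed exists
sol1 <- PSopt(OF, algo)
## restore the seed that was in effect when sol1 was computed
assign(".Random.seed", sol1$initial.state, envir = globalenv())
sol2 <- PSopt(OF, algo)
all.equal(sol1$xbest, sol2$xbest)  ## should be TRUE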
Author(s)
Enrico Schumann
References
Eberhart, R.C. and Kennedy, J. (1995) A New Optimizer using Particle Swarm theory. Proceedings of the Sixth International Symposium on Micromachine and Human Science, pp. 39–43.
Gilli, M., Maringer, D. and Schumann, E. (2019) Numerical Methods and Optimization in Finance. 2nd edition. Elsevier. doi:10.1016/C2017-0-01621-X
Schumann, E. (2023) Financial Optimisation with R (NMOF Manual). http://enricoschumann.net/NMOF.htm#NMOFmanual
See Also
Examples
## Least Median of Squares (LMS) estimation
genData <- function(nP, nO, ol, dy) {
    ## create dataset as in Salibian-Barrera & Yohai 2006
    ## nP = regressors, nO = number of obs
    ## ol = number of outliers, dy = outlier size
    mRN <- function(m, n) array(rnorm(m * n), dim = c(m, n))
    y <- mRN(nO, 1)
    X <- cbind(as.matrix(numeric(nO) + 1), mRN(nO, nP - 1L))
    zz <- sample(nO)
    z <- cbind(1, 100, array(0, dim = c(1L, nP - 2L)))
    for (i in seq_len(ol)) {
        X[zz[i], ] <- z
        y[zz[i]] <- dy
    }
    list(X = X, y = y)
}
OF <- function(param, data) {
    ## LMS criterion: the h-th smallest squared residual
    X <- data$X
    y <- data$y
    aux <- as.vector(y) - X %*% param
    ## as.vector(y) for recycling (param is a matrix)
    aux <- aux * aux
    aux <- apply(aux, 2, sort, partial = data$h)
    aux[data$h, ]
}
nP <- 2L; nO <- 100L; ol <- 10L; dy <- 150
aux <- genData(nP,nO,ol,dy); X <- aux$X; y <- aux$y
h <- (nO + nP + 1L) %/% 2
data <- list(y = y, X = X, h = h)
algo <- list(min = rep(-10, nP), max = rep( 10, nP),
             c1 = 1.0, c2 = 2.0,
             iner = 0.7, initV = 1, maxV = 3,
             nP = 100L, nG = 300L, loopOF = FALSE)
system.time(sol <- PSopt(OF = OF, algo = algo, data = data))
if (require("MASS", quietly = TRUE)) {
## for nsamp = "best", in this case, complete enumeration
## will be tried. See ?lqs
system.time(test1 <- lqs(data$y ~ data$X[, -1L],
adjust = TRUE,
nsamp = "best",
method = "lqs",
quantile = data$h))
}
## check
x1 <- sort((y - X %*% as.matrix(sol$xbest))^2)[h]
cat("Particle Swarm\n",x1,"\n\n")
if (require("MASS", quietly = TRUE)) {
x2 <- sort((y - X %*% as.matrix(coef(test1)))^2)[h]
cat("lqs\n", x2, "\n\n")
}