PL {plgp}    R Documentation
Particle Learning Skeleton Method
Description
Implements the Particle Learning sequential Monte Carlo algorithm on the data sequence provided, using re-sample and propagate steps.
Usage
PL(dstream, start, end, init, lpredprob, propagate, prior = NULL,
addpall = NULL, params = NULL, save = NULL, P = 100,
progress = 10, cont = FALSE, verb = 1)
Arguments
dstream
function generating the data stream; for examples see data.GP
start
a scalar integer denoting the starting “time” of the data stream
end
a scalar integer denoting the ending “time” of the data stream
init
function used to initialize the particles at the start of PL; for examples see draw.GP
lpredprob
function used to calculate the predictive probability of an observation (usually the next one in “time”) given a particle. This is the primary function used in the PL re-sample step; for examples see lpredprob.GP
propagate
function used to propagate particles given an observation (usually the next one in “time”); for examples see propagate.GP
prior
function used to generate prior parameters that may be passed into the init, lpredprob, and propagate functions, as needed
addpall
an optional function that adds the new observation (usually the next one in “time”) to the PL.env$pall variable, which holds the data common to all particles
params
an optional function, called every progress rounds, that collects parameters from the particles for summary and visualization; for examples see params.GP
save
an optional function that is called every round to save some information about the particles
P
number of particles to use
progress
number of PL rounds after which to collect params and report progress
cont
if TRUE, PL continues where a previous run left off, processing only the new segment of the data stream indicated by start and end
verb
if nonzero, then screen prints will indicate the proportion of PL updates finished so far; verb = 1 causes PL to pause on progress drawings for inspection
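To make the roles of these arguments concrete, here is a minimal sketch of a call in the style of the package demos. The data-streaming wrapper, the prior.GP and addpall.GP usages, and the choice of start are illustrative assumptions, not canonical settings; only the argument names come from the Usage above.
## a hypothetical PL call using the package's GP helpers; the
## formals of data.GP and the prior.GP settings are assumptions
library(plgp)
X <- matrix(seq(0, 2*pi, length = 50), ncol = 1)   ## toy 1-d inputs
Y <- sin(X) + rnorm(nrow(X), sd = 0.1)             ## noisy responses
## stream observations from (X, Y) one "time" step at a time
dstream <- function(begin, end = NULL) data.GP(begin, end, X = X, Y = Y)
prior <- prior.GP(1)          ## assumed: GP prior for 1-d inputs
PL(dstream = dstream, start = 10, end = nrow(X),   ## assumed: first 10
   init = draw.GP, lpredprob = lpredprob.GP,       ## points initialize
   propagate = propagate.GP, prior = prior,
   addpall = addpall.GP, params = params.GP,
   P = 100, progress = 10, verb = 0)
## results are left in PL.env (see Details and Value below)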
Details
Uses the PL SMC algorithm via the functions provided. This function is just a skeleton framework. The hard work is in specifying the arguments/functions which execute the calculations needed in the re-sample and propagate steps.
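The re-sample-then-propagate recipe itself is compact. The following self-contained sketch runs it on a toy local-level state-space model, entirely unrelated to the GP machinery in this package, to show the two steps that the lpredprob and propagate arguments implement for their particular model.
## toy model: y[t] = x[t] + N(0, sv2), x[t] = x[t-1] + N(0, sw2)
set.seed(1)
TT <- 100; sv2 <- 1; sw2 <- 0.1
x <- cumsum(rnorm(TT, sd = sqrt(sw2)))     ## latent states
y <- x + rnorm(TT, sd = sqrt(sv2))         ## observations
P <- 100
xp <- rnorm(P)                             ## init: particles for x[0]
for (t in 1:TT) {
  ## re-sample: weights are predictive probabilities of y[t]
  lw <- dnorm(y[t], mean = xp, sd = sqrt(sv2 + sw2), log = TRUE)
  w <- exp(lw - max(lw))
  xp <- xp[sample(P, P, replace = TRUE, prob = w)]
  ## propagate: draw x[t] given the re-sampled particle and y[t]
  vr <- 1/(1/sw2 + 1/sv2)
  mu <- vr*(xp/sw2 + y[t]/sv2)
  xp <- rnorm(P, mean = mu, sd = sqrt(vr))
}
mean(xp)                                   ## filtered estimate of x[TT]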
PL uses the variables stored in the PL.env environment: pall, containing sufficient information common to all particles; peach, containing sufficient information particular to each of the P particles; and psave, containing any saved information. These variables may be accessed as PL.env$psave, for example.
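For example, after a run one might inspect the particle cloud as follows; that peach is a list with one element per particle is an assumption based on the GP helpers, so treat this as a sketch.
ls(PL.env)              ## typically "pall", "peach", "psave"
length(PL.env$peach)    ## one element per particle (assumed list layout)
str(PL.env$peach[[1]])  ## sufficient information for the first particle
str(PL.env$pall)        ## information common to all particles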
Note that PL is designed to be fast for sequential updating (of GPs) when new data arrive. This facilitates efficient sequential design of experiments by active learning techniques, e.g., optimization by expected improvement and sequential exploration of classification label boundaries by the predictive entropy. PL is not optimized for static inference when all of the data arrive at once, in batch.
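Continuing the hypothetical call sketched after the Arguments section, a sequential update after new observations arrive might look like the following; whether any arguments may be dropped when cont = TRUE is not assumed here.
## hypothetical continuation: 10 new points arrive after time 50,
## assuming X and Y (and hence dstream) have been extended with them
PL(dstream = dstream, start = 51, end = 60,
   init = draw.GP, lpredprob = lpredprob.GP,
   propagate = propagate.GP, prior = prior,
   addpall = addpall.GP, cont = TRUE, verb = 0)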
Value
PL modifies the PL.env$peach variable, containing sufficient information particular to each (of the P) particles.
Author(s)
Robert B. Gramacy, rbg@vt.edu
References
Carvalho, C., Johannes, M., Lopes, H., and Polson, N. (2008). “Particle Learning and Smoothing.” Discussion Paper 2008-32, Duke University Dept. of Statistical Science.
Gramacy, R. and Polson, N. (2011). “Particle learning of Gaussian process models for sequential design and optimization.” Journal of Computational and Graphical Statistics, 20(1), pp. 102-118; arXiv:0909.5262
Gramacy, R. and Lee, H. (2010). “Optimization under unknown constraints”. Bayesian Statistics 9, J. M. Bernardo, M. J. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman, A. F. M. Smith and M. West (Eds.); Oxford University Press
Gramacy, R. (2020). “Surrogates: Gaussian Process Modeling, Design and Optimization for the Applied Sciences”. Chapman & Hall/CRC; https://bobby.gramacy.com/surrogates/
https://bobby.gramacy.com/r_packages/plgp/
See Also
papply, draw.GP, data.GP, lpredprob.GP, propagate.GP, params.GP, pred.GP
Examples
## See the demos via demo(package="plgp"); it is important to
## run them with the ask=FALSE argument so that the
## automatically generated plots may refresh automatically
## (without requiring the user to press RETURN)
## Not run:
## Illustrates regression GPs on a simple 1-d sinusoidal
## data generating mechanism
demo("plgp_sin1d", ask=FALSE)
## Illustrates classification GPs on a simple 2-d exponential
## data generating mechanism
demo("plcgp_exp", ask=FALSE)
## Illustrates classification GPs on Ripley's Cushings data
demo("plcgp_cush", ask=FALSE)
## Illustrates active learning via the expected improvement
## statistic on a simple 1-d data generating mechanism
demo("plgp_exp_ei", ask=FALSE)
## Illustrates active learning via entropy with classification
## GPs on a simple 2-d exponential data generating mechanism
demo("plcgp_exp_entropy", ask=FALSE)
## Illustrates active learning via the integrated expected
## conditional improvement statistic for optimization
## under known constraints on a simple 1-d data generating
## mechanism
demo("plgp_1d_ieci", ask=FALSE)
## Illustrates active learning via the integrated expected
## conditional improvement statistic for optimization under
## unknown constraints on a simple 1-d data generating
## mechanism
demo("plconstgp_1d_ieci", ask=FALSE)
## Illustrates active learning via the integrated expected
## conditional improvement statistic for optimization under
## unknown constraints on a simple 2-d data generating
## mechanism
demo("plconstgp_2d_ieci", ask=FALSE)
## End(Not run)