ENNreg {evreg}    R Documentation
Training the ENNreg model
Description
ENNreg
trains the ENNreg model using batch or minibatch learning procedures.
Usage
ENNreg(
X,
y,
init = NULL,
K = NULL,
batch = TRUE,
nstart = 100,
c = 1,
lambda = 0.9,
xi = 0,
rho = 0,
eps = NULL,
nu = 1e-16,
optimProto = TRUE,
verbose = TRUE,
options = list(maxiter = 1000, rel.error = 1e-04, print = 10),
opt.rmsprop = list(batch_size = 100, epsi = 0.001, rho = 0.9, delta = 1e-08,
  Dtmax = 100)
)
Arguments
X
Input matrix of size n x p, where n is the number of objects and p the number of attributes.
y
Vector of length n containing observations of the response variable.
init
Initial model generated by ENNreg_init (default: NULL; must be supplied if K is not supplied).
K
Number of prototypes (default: NULL; must be supplied if the initial model is not supplied).
batch
If TRUE (default), batch learning is used; otherwise, online learning is used.
nstart
Number of random starts of the k-means algorithm (default: 100; used only if the initial model is not supplied).
c
Multiplicative coefficient applied to the scale parameter gamma (default: 1; used only if the initial model is not supplied).
lambda
Parameter of the loss function (default: 0.9).
xi
Regularization coefficient penalizing precision (default: 0).
rho
Regularization coefficient shrinking the solution towards a linear model (default: 0).
eps
Parameter of the loss function (if NULL, set to 0.01 times the standard deviation of y).
nu
Parameter of the loss function to avoid a division by zero (default: 1e-16).
optimProto
If TRUE (default), the initial prototypes are optimized.
verbose
If TRUE (default), intermediate results are displayed.
options
Parameters of the optimization procedure (see Details).
opt.rmsprop
Parameters of the RMSprop algorithm (see Details).
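As an illustration of how the regularization arguments interact, the following is a minimal sketch on simulated data; the toy objects Xtoy and ytoy and the values chosen for K, xi and rho are placeholders for illustration, not recommendations.

# Minimal sketch (illustrative values only): fit on simulated data with a
# small precision penalty (xi) and shrinkage towards a linear model (rho).
set.seed(1)
ntoy <- 200
Xtoy <- matrix(rnorm(2 * ntoy), ncol = 2)                # two standardized attributes
ytoy <- Xtoy[, 1] - 0.5 * Xtoy[, 2] + 0.1 * rnorm(ntoy)  # toy response
fit.toy <- ENNreg(Xtoy, ytoy, K = 10, xi = 0.01, rho = 0.01, verbose = FALSE)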
Details
If batch=TRUE, function harris from package evclust is used for optimization. Otherwise, the RMSprop minibatch learning algorithm is used. The three parameters in list options are:
- maxiter
Maximum number of iterations (default: 1000).
- rel.error
Relative error for stopping criterion (default: 1e-4).
- print
Number of iterations between two displays (default: 10).
Additional parameters for the RMSprop algorithm, used only if batch=FALSE, are contained in list opt.rmsprop. They are:
- batch_size
Minibatch size.
- epsi
Global learning rate.
- rho
Decay rate.
- delta
Small constant to stabilize division by small numbers.
- Dtmax
The algorithm stops when the loss has not decreased in the last Dtmax iterations.
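For concreteness, the two tuning lists described above could be passed as in the following sketch; X and y stand for a training matrix and response as in the Arguments section, and all numeric values are illustrative placeholders.

# Batch learning (harris optimizer) with a tighter stopping criterion:
fit.batch <- ENNreg(X, y, K = 30,
                    options = list(maxiter = 2000, rel.error = 1e-5, print = 50))
# Minibatch learning with RMSprop and a smaller minibatch size:
fit.mini <- ENNreg(X, y, K = 30, batch = FALSE,
                   opt.rmsprop = list(batch_size = 50, epsi = 0.001, rho = 0.9,
                                      delta = 1e-08, Dtmax = 100))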
Value
An object of class "ENNreg" with the following components:
- loss
Value of the loss function.
- param
Parameter values.
- K
Number of prototypes.
- pred
Predictions on the training set (a list containing the prototype unit activations, the output means, variances and precisions, as well as the lower and upper expectations).
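These components can be inspected as in the sketch below, where fit is assumed to be an object returned by ENNreg as in the Examples section; among the elements of pred, only mux (the output means) appears in the example code, so the other element names are not shown here.

# fit is assumed to be the object returned by ENNreg in the Examples
fit$loss            # final value of the loss function
fit$K               # number of prototypes
str(fit$param)      # fitted parameter values
head(fit$pred$mux)  # predicted output means on the training set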
References
Thierry Denoeux. An evidential neural network model for regression based on random fuzzy numbers. In "Belief functions: Theory and applications (proc. of BELIEF 2022)", pages 57-66, Springer, 2022.
Thierry Denoeux. Quantifying prediction uncertainty in regression using random fuzzy sets: the ENNreg model. IEEE Transactions on Fuzzy Systems, Vol. 31, Issue 10, pages 3690-3699, 2023.
See Also
predict.ENNreg, ENNreg_init, ENNreg_cv, ENNreg_holdout
Examples
# Boston housing dataset
library(MASS)
X <- as.matrix(scale(Boston[, 1:13]))   # standardized predictors
y <- Boston[, 14]                       # response: median house value (medv)
set.seed(220322)
n <- nrow(Boston)
ntrain <- round(0.7 * n)                # 70/30 train/test split
train <- sample(n, ntrain)
fit <- ENNreg(X[train, ], y[train], K = 30)
plot(y[train], fit$pred$mux, xlab = "observed response", ylab = "predicted response")
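# A natural continuation is to evaluate the model on the held-out observations.
# Hedged sketch: assumes predict.ENNreg accepts a 'newdata' argument and returns
# a list with predicted means in component 'mux', analogous to fit$pred
# (check predict.ENNreg for the exact interface).
pred.tst <- predict(fit, newdata = X[-train, ])
mean((y[-train] - pred.tst$mux)^2)      # test-set mean squared error of the means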