ica.elm_train {ICompELM}		R Documentation
Training an ICA-based ELM model for time series forecasting
Description
An Extreme Learning Machine (ELM) is trained using Independent Component Analysis (ICA).
Usage
ica.elm_train(train_data, lags, comps = lags, bias = TRUE, actfun = "sig")
Arguments
train_data
    A univariate time series.
lags
    Number of lags to be considered.
comps
    Number of independent components to be considered. Corresponds to the number of hidden nodes. Defaults to its maximum value, i.e., lags.
bias
    Whether to include a bias term while computing the output weights. Defaults to TRUE.
actfun
    Activation function for the hidden layer. Defaults to "sig" (sigmoid). See 'Activation functions'.
Details
An Extreme Learning Machine (ELM) is trained wherein the weights connecting the input layer and hidden layer are obtained using Independent Component Analysis (ICA), instead of being chosen randomly. The number of hidden nodes is determined by the number of independent components.
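The training step described above can be sketched in base R. This is a minimal illustration, not the package's implementation: the input weights are passed in as a placeholder argument (in ica.elm_train they are obtained from the ICA unmixing matrix), and the function name elm_sketch is chosen here purely for illustration.

```r
# Sketch of ELM training, assuming the input weights are already given
# (in ica.elm_train they come from the ICA unmixing matrix; here they
# are an arbitrary matrix supplied by the caller).
elm_sketch <- function(train_data, lags, inp_weights, bias = TRUE,
                       actfun = function(x) 1 / (1 + exp(-x))) {
  # Lagged design matrix: row t holds y[t], y[t-1], ..., y[t-lags]
  emb <- embed(as.numeric(train_data), lags + 1)
  y <- emb[, 1]                    # targets
  X <- emb[, -1, drop = FALSE]     # lagged inputs (n x lags)
  # Hidden-layer outputs: activation applied to the linear projection
  H <- actfun(X %*% inp_weights)   # (n x comps)
  if (bias) H <- cbind(1, H)       # optional bias node
  # Output weights via least squares (Moore-Penrose solution)
  out_weights <- qr.solve(H, y)
  fitted <- as.numeric(H %*% out_weights)
  list(out_weights = out_weights, fitted.values = fitted,
       residuals = y - fitted)
}
```

Because only the output weights are solved for, training reduces to a single linear least-squares problem, which is what makes ELMs fast to fit.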
Value
A list containing the trained ICA-ELM model with the following components.
inp_weights
    Weights connecting the input layer to the hidden layer, obtained from the ICA unmixing matrix.
out_weights
    Weights connecting the hidden layer to the output layer.
fitted.values
    Fitted values of the model.
residuals
    Residuals of the model.
h.out
    A data frame of hidden-layer outputs (activation function applied), with columns representing hidden nodes and rows representing observations.
data
    The univariate time series used for training.
lags
    Number of lags used during training.
comps
    Number of independent components considered for training; determines the number of hidden nodes.
bias
    Whether a bias node was included during training.
actfun
    Activation function for the hidden layer. See 'Activation functions'.
Activation functions
The activation function for the hidden layer must be one of the following.
sig
Sigmoid function:
(1 + e^{-x})^{-1}
radbas
Radial basis function:
e^{-x^2}
hardlim
Hard-limit function:
\begin{cases} 1, & if\:x \geq 0 \\ 0, & if\:x<0 \end{cases}
hardlims
Symmetric hard-limit function:
\begin{cases}1, & if\:x \geq 0 \\ -1, & if\:x<0 \end{cases}
satlins
Symmetric saturating linear function:
\begin{cases}1, & if\:x \geq 1 \\ x, & if\:-1<x<1 \\ -1, & if\:x \leq -1 \end{cases}
tansig
Tan-sigmoid function:
2(1 + e^{-2x})^{-1}-1
tribas
Triangular basis function:
\begin{cases} 1-|x|, & if \: -1 \leq x \leq 1 \\ 0, & otherwise \end{cases}
poslin
Positive linear function:
\begin{cases} x, & if\: x \geq 0 \\ 0, & otherwise \end{cases}
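The formulas above translate directly into vectorized base R. A minimal sketch (the function names mirror the actfun keywords; these are illustrative definitions, not the package's internals):

```r
# Base-R versions of the activation functions listed above.
sig      <- function(x) 1 / (1 + exp(-x))          # sigmoid
radbas   <- function(x) exp(-x^2)                  # radial basis
hardlim  <- function(x) ifelse(x >= 0, 1, 0)       # hard limit
hardlims <- function(x) ifelse(x >= 0, 1, -1)      # symmetric hard limit
satlins  <- function(x) pmax(-1, pmin(1, x))       # symmetric saturating linear
tansig   <- function(x) 2 / (1 + exp(-2 * x)) - 1  # tan-sigmoid (equals tanh(x))
tribas   <- function(x) pmax(0, 1 - abs(x))        # triangular basis
poslin   <- function(x) pmax(0, x)                 # positive linear (ReLU)
```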
References
Huang, G. B., Zhu, Q. Y., & Siew, C. K. (2006). Extreme learning machine: theory and applications. Neurocomputing, 70(1-3), 489-501. doi:10.1016/j.neucom.2005.12.126.
Hyvarinen, A. (1999). Fast and robust fixed-point algorithms for independent component analysis. IEEE Transactions on Neural Networks, 10(3), 626-634. doi:10.1109/72.761722.
See Also
ica.elm_forecast() for forecasting from a trained ICA-based ELM model.
Examples
# use the first 144 (= 12*12) observations of the series for training
train_set <- head(price, 12*12)
ica.model <- ica.elm_train(train_data = train_set, lags = 12)