BP-class {nnlib2Rcpp} | R Documentation |
Class "BP"
Description
Supervised Back-Propagation (BP) NN module, for encoding input-output mappings.
Extends
Class "RcppClass", directly.
All reference classes extend and inherit methods from class "envRefClass".
Fields
.CppObject: Object of class C++Object.
.CppClassDef: Object of class activeBindingFunction.
.CppGenerator: Object of class activeBindingFunction.
Methods
encode(data_in, data_out, learning_rate, training_epochs, hidden_layers, hidden_layer_size)
:Set up a new BP NN and encode input-output data pairs. Parameters are:
data_in: numeric matrix containing input vectors as rows. It is recommended that these values are in the 0 to 1 range.
data_out: numeric matrix containing the corresponding (desired) output vectors. It is recommended that these values are in the 0 to 1 range.
learning_rate: a number (preferably greater than 0 and less than 1) used in training.
training_epochs: number of training epochs, i.e. single presentations of all training data pairs to the NN during training.
hidden_layers: number of hidden layers to be created between the input and output layers.
hidden_layer_size: number of nodes (processing elements or PEs) in each hidden layer (all hidden layers have the same size in this implementation of BP).
Note: to encode additional input-output vector pairs in an existing BP, use the train_single or train_multiple methods (see below).
recall(data_in)
:Get output for a dataset (numeric matrix data_in) from the (trained) BP NN.
setup(input_dim, output_dim, learning_rate, hidden_layers, hidden_layer_size)
:Set up the BP NN so it can be trained and used. Note: this is not needed when using encode. Parameters are:
input_dim: integer, length of input vectors.
output_dim: integer, length of output vectors.
learning_rate: a number (preferably greater than 0 and less than 1) used in training.
hidden_layers: number of hidden layers to be created between the input and output layers.
hidden_layer_size: number of nodes (processing elements or PEs) in each hidden layer (all hidden layers have the same size in this implementation of BP).
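As an illustrative sketch of the setup-then-train workflow (assuming nnlib2Rcpp is installed; all dimensions and values below are arbitrary):

```r
library(nnlib2Rcpp)
bp <- new("BP")
# 4 input nodes, 3 output nodes, learning rate 0.6, 1 hidden layer of 5 nodes
bp$setup(4, 3, 0.6, 1, 5)
# two input-output training pairs, with values in the 0 to 1 range
x <- matrix(runif(8), nrow = 2, ncol = 4)
y <- matrix(c(1, 0, 0,
              0, 1, 0), nrow = 2, byrow = TRUE)
err <- bp$train_multiple(x, y, 100)  # returns an error level indicator
out <- bp$recall(x)                  # a 2x3 matrix of NN outputs
```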
train_single(data_in, data_out)
:Encode an input-output vector pair in the BP NN. Performs only a single training iteration (multiple iterations may be required for proper encoding). Vector sizes should be compatible with the current NN (as created by the encode or setup methods). Returns an error level indicator value.
train_multiple(data_in, data_out, training_epochs)
:Encode multiple input-output vector pairs stored in corresponding datasets. Performs multiple iterations in epochs (see encode). Vector sizes should be compatible with the current NN (as created by the encode or setup methods). Returns an error level indicator value.
set_error_level(error_type, acceptable_error_level)
:Set options that stop training when an acceptable error level has been reached (when a subsequent encode or train_multiple is performed). Parameters are:
error_type: string, the error type to display and to use as the stop criterion (must be 'MSE' or 'MAE').
acceptable_error_level: training stops when the error falls below this level.
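For instance, a sketch of early stopping on error level (assuming nnlib2Rcpp is installed; all values below are arbitrary):

```r
library(nnlib2Rcpp)
bp <- new("BP")
bp$setup(4, 2, 0.6, 1, 5)        # 4 inputs, 2 outputs, 1 hidden layer of 5
bp$set_error_level("MSE", 0.02)  # stop criterion for subsequent training
x <- matrix(runif(8), nrow = 2)  # inputs in the 0 to 1 range
y <- matrix(c(1, 0, 0, 1), nrow = 2)
bp$train_multiple(x, y, 1000)    # stops early if MSE drops below 0.02
```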
mute(on)
:Disable output of the current error level during training (if parameter on is TRUE).
print()
:Print the NN structure.
show()
:Print the NN structure.
load(filename)
:Retrieve the NN from the specified file.
save(filename)
:Save the NN to the specified file.
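A minimal save/load round trip might look like this (a sketch, assuming nnlib2Rcpp is installed; the filename and NN dimensions are arbitrary):

```r
library(nnlib2Rcpp)
bp <- new("BP")
bp$setup(4, 2, 0.6, 1, 5)
f <- tempfile(fileext = ".txt")
bp$save(f)        # write the NN to file
bp2 <- new("BP")
bp2$load(f)       # restore it into another BP object
```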
The following methods are inherited (from the corresponding class): objectPointer ("RcppClass"), initialize ("RcppClass"), show ("RcppClass")
Note
This R module maintains an internal Back-Propagation (BP) multilayer perceptron NN (described in Simpson (1991) as the vanilla back-propagation algorithm), which can be used to store input-output vector pairs. Since the nodes (PEs) in the computing layers of this BP implementation apply the logistic sigmoid threshold function, their output is in the [0, 1] range (and so should be the desired output vector values).
(This object uses Rcpp to employ 'bp_nn' class in nnlib2.)
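Because the sigmoid limits node outputs to [0, 1], data fed to the NN should be scaled accordingly. A simple column-wise min-max rescaling in plain R (a sketch; the helper name to_unit_range is illustrative, not part of the package):

```r
# Rescale each column of a numeric matrix to the 0 to 1 range,
# as recommended for data used with this BP implementation.
to_unit_range <- function(m) {
  apply(m, 2, function(col) (col - min(col)) / (max(col) - min(col)))
}
x  <- matrix(c(2, 4, 6, 10, 20, 30), ncol = 2)
xs <- to_unit_range(x)
# each column of xs now spans exactly 0 to 1
```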
Author(s)
Vasilis N. Nikolaidis <vnnikolaidis@gmail.com>
References
Simpson, P. K. (1991). Artificial neural systems: Foundations, paradigms, applications, and implementations. New York: Pergamon Press.
See Also
Examples
# create some data...
iris_s <- as.matrix(scale(iris[1:4]))
# use a randomly picked subset of (scaled) iris data for training.
training_cases <- sample(1:nrow(iris_s), nrow(iris_s)/2, replace = FALSE)
train_set <- iris_s[training_cases,]
train_class_ids <- as.integer(iris$Species[training_cases])
train_num_cases <- nrow(train_set)
train_num_variables <- ncol(train_set)
train_num_classes <- max(train_class_ids)
# create output dataset to be used for training.
# Here we encode class as 0s and 1s (one-hot encoding).
train_set_data_out <- matrix(
data = 0,
nrow = train_num_cases,
ncol = train_num_classes)
# now, for each case, set to 1 the column corresponding to its class (0 otherwise)
# (note: there are more idiomatic R ways to do this)
for(r in 1:train_num_cases) train_set_data_out[r, train_class_ids[r]] <- 1
# done with data, let's use BP...
bp <- new("BP")
bp$encode(train_set, train_set_data_out, 0.8, 10000, 2, 4)
# let's test by recalling the original training set...
bp_output <- bp$recall(train_set)
cat("- Using this demo's encoding, recalled class is:\n")
print(apply(bp_output,1,which.max))
cat("- BP success in recalling correct class is: ",
sum(apply(bp_output,1,which.max)==train_class_ids)," out of ",
train_num_cases, "\n")
# Let's see how well it recalls the entire Iris set:
bp_output <- bp$recall(iris_s)
# show output
cat("\n- Recalling entire Iris set returns:\n")
print(bp_output)
cat("- Using this demo's encoding, original class is:\n")
print(as.integer(iris$Species))
cat("- Using this demo's encoding, recalled class is:\n")
bp_classification <- apply(bp_output,1,which.max)
print(bp_classification)
cat("- BP success in recalling correct class is: ",
sum(apply(bp_output,1,which.max)==as.integer(iris$Species)),
"out of ", nrow(iris_s), "\n")
plot(iris_s, pch = bp_classification, main = "Iris classified by a partially trained BP (module)")