deepnet {deepdive}    R Documentation
Build and train an Artificial Neural Network of any size
Description
Build and train an artificial neural network of any depth in a single line of code. Choose the hyperparameters to improve the accuracy or generalisation of the model.
Usage
deepnet(
x,
y,
hiddenLayerUnits = c(2, 2),
activation = c("sigmoid", "relu"),
reluLeak = 0,
modelType = c("regress"),
iterations = 500,
eta = 10^-2,
seed = 2,
gradientClip = 0.8,
regularisePar = 0,
optimiser = "adam",
parMomentum = 0.9,
inputSizeImpact = 1,
parRmsPropZeroAdjust = 10^-8,
parRmsProp = 0.9999,
printItrSize = 100,
showProgress = TRUE,
stopError = 0.01,
miniBatchSize = NA,
useBatchProgress = FALSE,
ignoreNAerror = FALSE,
normalise = TRUE
)
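Every argument except x and y has a default, so a minimal regression fit needs only a one-line call. The sketch below simply relies on the defaults shown above (see Examples for a full run):

# minimal call: all hyperparameters fall back to the defaults in Usage
model <- deepnet(x, y)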
Arguments
x
a data frame with input variables
y
a data frame with the output variable
hiddenLayerUnits
a numeric vector; the length of the vector gives the number of hidden layers, and each element gives the number of hidden units in the corresponding layer. E.g. c(6,4) creates two layers, one with 6 hidden units and the other with 4 hidden units. Note: the output layer is created automatically.
activation
one of "sigmoid", "relu", "sin", "cos", "none". The default is "sigmoid". Choose one activation per hidden layer.
reluLeak
numeric. Applicable when activation is "relu". Specify a small positive value below 1 and close to zero, e.g. 0.01 or 0.001.
modelType
one of "regress", "binary", "multiClass". "regress", for regression, creates a linear single-unit output layer. "binary" creates a single-unit sigmoid-activated output layer. "multiClass" creates an output layer with one unit per output class and softmax activation. See the sketch of a binary fit following this argument list.
iterations
integer. The number of iterations, or epochs, of backpropagation. The default is 500.
eta
numeric. Hyperparameter; sets the learning rate for backpropagation. Eta determines both whether and how quickly training converges.
seed
numeric. Sets the random seed. With the "sin" activation, changing the seed can sometimes yield better results. Default is 2.
gradientClip
numeric. Hyperparameter that limits the gradient size for the weight-update operation in backpropagation. It can take any positive value. Default is 0.8.
regularisePar
numeric. L2 regularisation parameter. Default is 0.
optimiser
one of "gradientDescent", "momentum", "rmsProp", "adam". Default is "adam".
parMomentum
numeric. Applicable for optimisers "momentum" and "adam".
inputSizeImpact
numeric. Adjusts the gradient size by a factor of the percentage of rows in the input. For very small datasets, setting this to 0 can yield faster results. Default is 1.
parRmsPropZeroAdjust
numeric. Applicable for optimisers "rmsProp" and "adam".
parRmsProp
numeric. Applicable for optimisers "rmsProp" and "adam".
printItrSize
numeric. Number of iterations after which a progress message is shown. The default is 100; for fewer than 100 iterations, at least 5 messages are shown.
showProgress
logical. TRUE shows progress; FALSE suppresses it.
stopError
numeric. RMSE at which iterations can be stopped early. The default is 0.01; set to NA if all iterations need to run.
miniBatchSize
integer. Sets the mini-batch size for mini-batch gradient descent.
useBatchProgress
logical. Applicable with miniBatchSize: TRUE reports the RMSE on the current mini batch, while FALSE reports the error on the full dataset. For large datasets, set TRUE.
ignoreNAerror
logical. Set TRUE if iterations should stop when predictions become NA.
normalise
logical. Set FALSE if normalisation is not required. Default is TRUE.
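A minimal sketch of a binary fit follows. It assumes y is supplied as a single-column data frame of 0/1 labels; that encoding is an assumption of this sketch, not something this page specifies.

# hedged sketch of modelType = "binary"; the 0/1 encoding of y is assumed
xb <- data.frame(x1 = runif(50), x2 = runif(50))
yb <- data.frame(y = as.numeric(xb$x1 + xb$x2 > 1))  # assumed 0/1 labels
modelBin <- deepnet(xb, yb,
                    hiddenLayerUnits = c(4, 3),
                    activation = c("relu", "sigmoid"),
                    reluLeak = 0.01,
                    modelType = "binary",
                    iterations = 200,
                    eta = 0.01,
                    miniBatchSize = 16,
                    useBatchProgress = TRUE,
                    stopError = NA)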
Value
Returns a model object that can be passed to predict.deepnet.
Examples
require(deepdive)

x <- data.frame(x1 = runif(10), x2 = runif(10))
y <- data.frame(y = 20*x$x1 + 30*x$x2 + 10)

# train
modelnet <- deepnet(x, y,
                    hiddenLayerUnits = c(2, 2),
                    activation = c("relu", "sigmoid"),
                    reluLeak = 0.01,
                    modelType = "regress",
                    iterations = 5,
                    eta = 0.8,
                    optimiser = "adam")

# predict
predDeepNet <- predict.deepnet(modelnet, newData = x)

# evaluate
sqrt(mean((predDeepNet$ypred - y$y)^2))
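A similar sketch for modelType = "multiClass". It assumes y is a single-column data frame holding a factor of class labels; that encoding is an assumption here, not a documented contract.

# hedged multi-class sketch; the factor encoding of y is assumed
xm <- data.frame(x1 = runif(30), x2 = runif(30))
ym <- data.frame(y = cut(xm$x1 + xm$x2, breaks = 3,
                         labels = c("low", "mid", "high")))
modelMulti <- deepnet(xm, ym,
                      hiddenLayerUnits = c(5, 5),
                      activation = c("relu", "relu"),
                      reluLeak = 0.01,
                      modelType = "multiClass",
                      iterations = 100,
                      eta = 0.01)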