hpar {automl}    R Documentation

Deep Neural Net parameters and hyperparameters

Description

List of neural network parameters and hyperparameters used to train with gradient descent or particle swarm optimization.
The list is not mandatory (it is preset and all arguments are initialized with default values), but it is advisable to adjust some important arguments for performance reasons (including processing time).

Arguments

modexec

‘trainwgrad’ (the default value) to train with gradient descent (suitable for all volumes of data)
‘trainwpso’ to train with Particle Swarm Optimization, where each particle represents a set of neural network weights (CAUTION: suitable for low volumes of data; time consuming for medium to large volumes)
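A minimal sketch of switching the execution mode in a call to automl_train_manual (the iris reshaping follows the package examples; the column choices are illustrative):

    library(automl)
    xmat <- as.matrix(cbind(iris[,1:3], as.numeric(iris$Species)))
    ymat <- iris[,4]
    ## gradient descent (the default execution mode)
    m1 <- automl_train_manual(Xref = xmat, Yref = ymat,
                              hpar = list(modexec = 'trainwgrad'))
    ## particle swarm optimization (low data volumes only)
    m2 <- automl_train_manual(Xref = xmat, Yref = ymat,
                              hpar = list(modexec = 'trainwpso'))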


Below are the arguments specific to the ‘trainwgrad’ execution mode

learningrate

learning rate alpha (default value 0.001)
#tuning priority 1

beta1

see ‘beta2’ below

beta2

‘Momentum’ if beta1 is different from 0 and beta2 equals 0
‘RMSprop’ if beta1 equals 0 and beta2 is different from 0
‘Adam optimization’ if beta1 and beta2 are both different from 0 (the default)
(default values: beta1 equal 0.9 and beta2 equal 0.999)
#tuning priority 2
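As a sketch, the optimizer is selected by zeroing one of the betas (the learning rate value is illustrative):

    hpar = list(learningrate = 0.001, beta1 = 0.9, beta2 = 0)      ## Momentum
    hpar = list(learningrate = 0.001, beta1 = 0,   beta2 = 0.999)  ## RMSprop
    hpar = list(learningrate = 0.001, beta1 = 0.9, beta2 = 0.999)  ## Adam (the default)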

lrdecayrate

learning rate decay value (default value 0 for no learning rate decay; 1e-6 should be a good value to start with)
#tuning priority 4

chkgradevery

epoch interval at which to run the gradient check function (default value 0, i.e. disabled; for debugging only)

chkgradepsilon

epsilon value for derivative calculations and the threshold test in the gradient check function (default 1e-7)
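A debugging sketch combining both arguments (the interval value is illustrative):

    ## run the gradient check every 10 epochs with the default epsilon
    hpar = list(chkgradevery = 10, chkgradepsilon = 1e-7)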


Below are the arguments specific to the ‘trainwpso’ execution mode

psoxxx

see ‘pso’ for details of the PSO-specific arguments
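For example, assuming psopartpopsize (the particle population size, one of the arguments documented under ‘pso’), a PSO-specific argument is passed alongside the execution mode:

    ## sketch: PSO execution mode with an explicit particle population size
    hpar = list(modexec = 'trainwpso', psopartpopsize = 50)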

costcustformul

custom cost formula (default ‘’, no custom cost function)
standard input variables: yhat (prediction), y (actual target value)
custom input variables: any variable declared in hpar may be used via the alias mydl (i.e. hpar = list(foo = 1.5) will be available in the custom cost formula as mydl$foo)
result: J
see the ‘automl_train_manual’ example using a Mean Absolute Percentage Error cost function
nb: the X and Y matrices used as input to the automl_train or automl_train_manual functions are transposed (features in rows and cases in columns)
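A sketch of a custom Mean Absolute Percentage Error cost formula, modeled on the ‘automl_train_manual’ example (the exact formula string is an assumption; J must receive the result):

    ## yhat and y are provided by the package inside the formula
    hpar = list(modexec = 'trainwpso',
                costcustformul = 'J = mean(abs((y - yhat) / y))')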


Below are the arguments common to both execution modes

numiterations

number of training epochs (default value 50)
#tuning priority 1

seed

seed for reproducibility (default 4)

minibatchsize

mini batch size, 2^0 for stochastic gradient descent (default 2^5, i.e. 32 cases)
#tuning priority 3
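A sketch of the two extremes (values follow the powers of 2 noted above):

    hpar = list(minibatchsize = 2^0)  ## stochastic gradient descent, 1 case per update
    hpar = list(minibatchsize = 2^5)  ## the default, 32 cases per mini batch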

layersshape

number of nodes per layer; each node count initializes a hidden layer
the output layer node count may be left at 0: it will be set automatically from the Y matrix shape
default value c(10, 0), one hidden layer with 10 nodes
#tuning priority 4

layersacttype

activation function for each layer: ‘linear’ for no activation, or ‘sigmoid’, ‘relu’, ‘reluleaky’, ‘tanh’ or ‘softmax’ (softmax for the output layer is only supported in the trainwpso execution mode)
the output layer activation function may be left as ‘’; the default is ‘linear’ for regression and ‘sigmoid’ for classification
nb: the layersacttype parameter vector must have the same length as the layersshape parameter vector
default value c(‘relu’, ‘’)
#tuning priority 4

layersdropoprob

dropout probability for each layer, a continuous value from 0 to less than 1 (the proportion of weight matrix values to drop out randomly)
nb: the layersdropoprob parameter vector must have the same length as the layersshape parameter vector
default value c(0, 0), no dropout
#tuning priority for regularization
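Since the three layer vectors must stay aligned, a sketch of a two-hidden-layer net (sizes, activations and dropout rates are illustrative):

    ## hidden layers of 32 and 16 nodes; output layer sized from the Y matrix
    hpar = list(layersshape     = c(32, 16, 0),
                layersacttype   = c('relu', 'relu', ''),
                layersdropoprob = c(0, 0.2, 0))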

printcostevery

epoch interval at which to compute and print the costs (train and cross validation costs; default value 10, i.e. one test every 10 epochs)

testcvsize

size of the cross validation sample in percent, 0 for no cross validation sample (default 10, i.e. 10 percent)

testgainunder

threshold to stop training when the gain between the last train or cross validation costs is smaller than this value, 0 for no stop test (default 0.000001)
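A sketch combining the cross validation sample with the stop test (values are illustrative):

    ## hold out 20 percent for cross validation, stop when the gain drops below 1e-5
    hpar = list(testcvsize = 20, testgainunder = 1e-5)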

costtype

cost function name: ‘mse’, ‘crossentropy’ or ‘custom’
‘mse’ for Mean Squared Error, set automatically for a continuous target type (‘mape’, Mean Absolute Percentage Error, may also be specified)
‘crossentropy’, set automatically for a binary target type
‘custom’, set automatically if ‘costcustformul’ is different from ‘’

lambda

regularization term added to the cost function (default value 0, no regularization)

batchnor_mom

batch normalization momentum for j and B (default 0, no batch normalization; may be set to 0.9 for a deep neural net)
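A sketch for a deeper net, combining regularization with batch normalization (both values are illustrative):

    hpar = list(lambda = 0.5, batchnor_mom = 0.9)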

epsil

epsilon, the small value used to avoid dividing by 0 or log(0) in the cost function, etc. (default value 1e-12)

verbose

whether to display the costs and the shapes during training (default TRUE)
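Putting it together, a hedged end-to-end sketch (the iris reshaping follows the package examples; all hyperparameter values are illustrative starting points):

    library(automl)
    xmat <- as.matrix(cbind(iris[,1:3], as.numeric(iris$Species)))
    ymat <- iris[,4]
    amlmodel <- automl_train_manual(Xref = xmat, Yref = ymat,
        hpar = list(modexec = 'trainwgrad',
                    learningrate = 0.001,
                    numiterations = 100,
                    minibatchsize = 2^5,
                    layersshape = c(10, 0),
                    layersacttype = c('relu', ''),
                    layersdropoprob = c(0, 0),
                    printcostevery = 10,
                    verbose = TRUE))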


back to automl_train, automl_train_manual

See Also

Deep Learning specialization by Andrew Ng on Coursera


[Package automl version 1.3.2 Index]