dnnFit {dnn}                                R Documentation
Fitting a Deep Learning model with a given loss function
Description
dnnFit is used to train a deep learning neural network model based on a specified loss function.
Usage
dnnFit(x, y, model, control)
Arguments
x: covariates for the neural network model.
y: output (target) values for the neural network model.
model: the neural network model; see below for details.
control: a list of control values, in the format produced by 'dnnControl'. The default value is dnnControl(loss = 'mse').
Details
The 'dnnFit' function takes the input data, the target values, the network architecture, and the loss function as arguments, and returns a trained model that minimizes the loss function. The function also supports various options for regularization and optimization of the model.
See dNNmodel for details on how to specify a deep learning model.
Parameters in dnnControl will be used to control the model fitting process. The loss function can be specified as dnnControl(loss = "lossFunction"). Currently, the following loss functions are supported:
'mse': Mean square error loss = 0.5*sum(dy^2)
'cox': Cox partial likelihood loss = -sum(delta*(yhat - log(S0)))
'bin': Cross-entropy = -sum(y*log(p) + (1-y)*log(1-p))
'log': Log linear cost = -sum(y*log(lambda)-lambda)
'mae': Mean absolute error loss = sum(abs(dy))
Additional loss functions will be added to the library in the future.
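For example, fitting a binary outcome with the cross-entropy loss differs from the default fit only in the control argument. The sketch below is illustrative only: the binary target yb is simulated here, and the final 'sigmoid' layer is used so that the output stays in (0, 1) as required by the 'bin' loss.

set.seed(102)
xb = matrix(runif(30), nrow = 10, ncol = 3)
yb = rbinom(10, size = 1, prob = 0.5)      # illustrative binary target
mb = dNNmodel(units = c(4, 3, 1), activation = c("elu", "sigmoid", "sigmoid"),
              input_shape = 3)
fitb = dnnFit(xb, yb, mb, dnnControl(loss = "bin"))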
dnnFit2 is a C++ version of dnnFit that runs about 20% faster; however, only loss = 'mse' and 'cox' are currently supported.
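Assuming dnnFit2 takes the same arguments as dnnFit (which the description above suggests but does not spell out), switching to the C++ version would look like:

fit2 = dnnFit2(x, y, model, dnnControl(loss = "mse"))   # same call, C++ back end (assumed)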
When the variances of the covariates in x are too large, standardize x first, for example with xbar = scale(x).
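A minimal sketch of that standardization step, using base R's scale():

xbar = scale(x)                      # center and scale each covariate column
fits = dnnFit(xbar, y, model, control)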
Value
An object of class "dnnFit" is returned. The dnnFit object contains the following list components:
cost: the cost (loss) at the final epoch.
dW: the gradient at the final epoch, dW = dL/dW.
fitted.values: the predictor values mu = f(x).
history: the cost history at each epoch.
lp: the predictor values mu = f(x).
logLik: -2*log likelihood = cost.
model: a dNNmodel object.
residuals: the raw residuals, dy = d log(L)/dmu.
dvi: the deviance, dvi = dy*dy.
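These components are returned as list elements and can be extracted with $. For instance, if dvi and residuals follow the definitions above exactly, the check in the last line of this sketch should hold (this is illustrative, not part of the package examples):

fit$cost                              # cost at the final epoch
fit$logLik                            # -2*log likelihood (equal to the cost)
head(fit$fitted.values)               # predictor values mu = f(x)
all.equal(as.numeric(fit$dvi), as.numeric(fit$residuals)^2)   # dvi = dy*dy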
Author(s)
Chen, B. E. and Norman, P.
References
Buckley, J. and James, I. (1979). Linear regression with censored data. Biometrika, 66, 429-436.
Norman, P. and Chen, B. E. (2019). DeepAFAT: A nonparametric accelerated failure time model with artificial neural network. Manuscript to be submitted.
Chollet, F. and Allaire, J. J. (2017). Deep learning with R. Manning.
See Also
deepAFT, deepGlm, deepSurv, dnnControl
Examples
## Example for dnnFit with MSE loss function to do a non-linear regression
set.seed(101)
### define model layers
model = dNNmodel(units = c(4, 3, 1), activation = c("elu", "sigmoid", "sigmoid"),
input_shape = 3)
### simulate covariates and a non-linear target
x = matrix(runif(15), nrow = 5, ncol = 3)
y = exp(x[, 1])
### fit the model with the mean squared error loss
control = dnnControl(loss = 'mse')
fit = dnnFit(x, y, model, control)
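The fit can then be examined, for example by plotting the cost history and comparing the fitted values with the target. This is a sketch using base graphics; it assumes history is a numeric vector of per-epoch costs.

### examine the fit
plot(fit$history, type = "l", xlab = "epoch", ylab = "cost")   # cost history
cbind(y, fitted = fit$fitted.values)                           # fitted values mu = f(x) vs. target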