mlr_learners.mlp {mlr3torch}    R Documentation
My Little Pony
Description
Fully connected feed forward network with dropout after each activation function.
The features can either be a single lazy_tensor
or one or more numeric columns (but not both).
Dictionary
This Learner can be instantiated using the sugar function lrn():

lrn("classif.mlp", ...)
lrn("regr.mlp", ...)
Properties
Supported task types: "classif", "regr"
Predict Types:
  classif: "response", "prob"
  regr: "response"
Feature Types: "integer", "numeric", "lazy_tensor"
Parameters
Parameters from LearnerTorch, as well as the following (example values are shown in the sketch after this list):

- activation :: nn_module
  The activation function. Is initialized to nn_relu.
- activation_args :: named list()
  A named list with initialization arguments for the activation function. This is initialized to an empty list.
- neurons :: integer()
  The number of neurons per hidden layer. By default there is no hidden layer. Setting this to c(10, 20) would give the first hidden layer 10 neurons and the second 20.
- p :: numeric(1)
  The dropout probability. Is initialized to 0.5.
- shape :: integer() or NULL
  The input shape of length 2, e.g. c(NA, 5). Only needs to be present when there is a lazy tensor input with unknown shape (NULL). Otherwise the input shape is inferred from the number of numeric features.
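A usage sketch for these parameters (the parameter names are those listed above; nn_tanh is assumed to be available from the torch package as an alternative to the default nn_relu):

learner = lrn("classif.mlp",
  neurons = c(10, 20),      # two hidden layers: 10 and 20 neurons
  activation = nn_tanh,     # replaces the default nn_relu
  activation_args = list(), # no extra constructor arguments for nn_tanh
  p = 0.3                   # dropout probability after each activation
)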
Super classes
mlr3::Learner -> mlr3torch::LearnerTorch -> LearnerTorchMLP
Methods
Public methods
Inherited methods
mlr3::Learner$base_learner()
mlr3::Learner$help()
mlr3::Learner$predict()
mlr3::Learner$predict_newdata()
mlr3::Learner$reset()
mlr3::Learner$train()
mlr3torch::LearnerTorch$dataset()
mlr3torch::LearnerTorch$format()
mlr3torch::LearnerTorch$marshal()
mlr3torch::LearnerTorch$print()
mlr3torch::LearnerTorch$unmarshal()
Method new()
Creates a new instance of this R6 class.
Usage
LearnerTorchMLP$new(
  task_type,
  optimizer = NULL,
  loss = NULL,
  callbacks = list()
)
Arguments
task_type
  (character(1))
  The task type, either "classif" or "regr".

optimizer
  (TorchOptimizer)
  The optimizer to use for training. Per default, adam is used.

loss
  (TorchLoss)
  The loss used to train the network. Per default, mse is used for regression and cross_entropy for classification.

callbacks
  (list() of TorchCallbacks)
  The callbacks. Must have unique ids.
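For illustration, a direct construction with a non-default optimizer and loss might look as follows (a sketch; t_opt() and t_loss() are the mlr3torch sugar functions for TorchOptimizer and TorchLoss, and the learning rate value is arbitrary):

learner = LearnerTorchMLP$new(
  task_type = "classif",
  optimizer = t_opt("sgd", lr = 0.01),  # instead of the adam default
  loss      = t_loss("cross_entropy"),
  callbacks = list()
)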
Method clone()
The objects of this class are cloneable with this method.
Usage
LearnerTorchMLP$clone(deep = FALSE)
Arguments
deep
Whether to make a deep clone.
See Also
Other Learner:
mlr_learners.tab_resnet,
mlr_learners.torch_featureless,
mlr_learners_torch,
mlr_learners_torch_image,
mlr_learners_torch_model
Examples
library(mlr3)
library(mlr3torch)

# Define the Learner and set parameter values
learner = lrn("classif.mlp")
learner$param_set$set_values(
epochs = 1, batch_size = 16, device = "cpu",
neurons = 10
)
# Define a Task
task = tsk("iris")
# Create train and test set
ids = partition(task)
# Train the learner on the training ids
learner$train(task, row_ids = ids$train)
# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)
# Score the predictions
predictions$score()
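To score against an explicit measure instead of the default, a Measure object can be passed; a sketch using the msr() sugar from mlr3 with classification accuracy:

# Score the predictions with classification accuracy
predictions$score(msr("classif.acc"))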