mlr_pipeops_torch {mlr3torch}    R Documentation

Base Class for Torch Module Constructor Wrappers

Description

PipeOpTorch is the base class for all PipeOps that represent neural network layers in a Graph. During training, it generates a PipeOpModule that wraps an nn_module and attaches it to the architecture, which is also represented as a Graph consisting mostly of PipeOpModules and PipeOpNOPs.

While the former Graph operates on ModelDescriptors, the latter operates on tensors.

The relationship between a PipeOpTorch and a PipeOpModule is similar to the relationship between an nn_module_generator (like nn_linear) and an nn_module (like the output of nn_linear(...)). A crucial difference is that the PipeOpTorch infers auxiliary parameters (like in_features for nn_linear) automatically from the intermediate tensor shapes that are being communicated through the ModelDescriptor.

During prediction, PipeOpTorch takes in a Task in each input channel and outputs the same new Task, i.e. the feature union of all input Tasks, in each output channel. If there is only one input and output channel, the task is simply piped through.
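
A small sketch of this behavior, reusing the "torch_ingress_num" and "nn_linear" PipeOps that also appear in the Examples section below (a sketch only; printed output is omitted):

library(mlr3torch)
task = tsk("iris")
po_linear = po("nn_linear", out_features = 10)

# during training, a ModelDescriptor flows through the PipeOpTorch ...
md = po("torch_ingress_num")$train(list(task))[[1L]]
po_linear$train(list(md))

# ... while during prediction, with a single input and output channel,
# the Task is simply piped through
po_linear$predict(list(task))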

Inheriting

When inheriting from this class, one should overload either the private$.shapes_out() and the private$.shape_dependent_params() methods, or overload private$.make_module().
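
The Examples section below contains a full worked example for the first option. As a sketch of the second option, the following hypothetical class wraps torch::nn_flatten() by overloading private$.make_module() directly; because the tensor shape changes, private$.shapes_out() is overloaded as well (class name and id are made up for illustration):

library(mlr3torch)

PipeOpTorchFlattenSketch = R6::R6Class("PipeOpTorchFlattenSketch",
  inherit = PipeOpTorch,
  public = list(
    initialize = function(id = "nn_flatten_sketch", param_vals = list()) {
      # no module_generator is given, so private$.make_module() is overloaded instead
      super$initialize(id = id, param_vals = param_vals, module_generator = NULL)
    }
  ),
  private = list(
    .make_module = function(shapes_in, param_vals) {
      torch::nn_flatten()
    },
    .shapes_out = function(shapes_in, param_vals, task) {
      shape = shapes_in[[1L]]
      # keep the (unknown) batch dimension and flatten the remaining dimensions
      list(output = c(shape[1L], prod(shape[-1L])))
    }
  )
)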

Input and Output Channels

During training, all inputs and outputs are of class ModelDescriptor. During prediction, all input and output channels are of class Task.

State

The state is the value calculated by the public method shapes_out().

Parameters

The ParamSet is specified by the child class inheriting from PipeOpTorch. Usually the parameters are the arguments of the wrapped nn_module minus the auxiliary parameters that can be automatically inferred from the shapes of the input tensors.
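
For example, the "nn_linear" PipeOp used in the Examples section below exposes the arguments of torch::nn_linear() except for in_features, which is inferred from the incoming tensor shape (the exact parameter set may differ between mlr3torch versions):

library(mlr3torch)
# in_features is absent because it is inferred automatically
po("nn_linear")$param_set$ids()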

Internals

During training, the PipeOpTorch creates a PipeOpModule for the given parameter specification and the input shapes from the incoming ModelDescriptors using the private method .make_module(). The input shapes are provided by the slot pointer_shape of the incoming ModelDescriptors. The channel names of this PipeOpModule are identical to the channel names of the generating PipeOpTorch.

A model descriptor union of all incoming ModelDescriptors is then created. Note that this modifies the graph of the first ModelDescriptor in place for efficiency. The PipeOpModule is added to the graph slot of this union, and the edges that connect the sending PipeOpModules to the input channels of this PipeOpModule are added to the graph. This is possible because every incoming ModelDescriptor contains the information about the id and the channel name of the sending PipeOp in the slot pointer.

The new graph in the model_descriptor_union represents the current state of the neural network architecture. It is structurally similar to the subgraph that consists of all pipeops of class PipeOpTorch and PipeOpTorchIngress that are ancestors of this PipeOpTorch.

For the output, a shallow copy of the ModelDescriptor is created and the pointer and pointer_shape are updated accordingly. The shallow copy means that all ModelDescriptors point to the same Graph which allows the graph to be modified by-reference in different parts of the code.
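
These slots can be inspected directly, as sketched below for a single linear layer (the exact printed output depends on the mlr3torch version):

library(mlr3torch)
md = po("torch_ingress_num")$train(list(tsk("iris")))[[1L]]
md = po("nn_linear", out_features = 10)$train(list(md))[[1L]]

md$graph          # the Graph of PipeOpModules built so far
md$pointer        # id and output channel name of the PipeOpModule added last
md$pointer_shape  # shape of the tensor at that output, likely c(NA, 10) here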

Super class

mlr3pipelines::PipeOp -> PipeOpTorch

Public fields

module_generator

(nn_module_generator or NULL)
The module generator wrapped by this PipeOpTorch. If NULL, the private method private$.make_module(shapes_in, param_vals) must be overwritten; see the section 'Inheriting'. Do not change this after construction.

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage
PipeOpTorch$new(
  id,
  module_generator,
  param_set = ps(),
  param_vals = list(),
  inname = "input",
  outname = "output",
  packages = "torch",
  tags = NULL
)
Arguments
id

(character(1))
Identifier of the resulting object.

module_generator

(nn_module_generator)
The torch module generator.

param_set

(ParamSet)
The parameter set.

param_vals

(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.

inname

(character())
The names of the PipeOp's input channels. These will be the input channels of the generated PipeOpModule. Unless the wrapped module_generator's forward method (if present) has the argument ..., inname must be identical to the argument names of that forward method in order to avoid any ambiguity.
If the forward method has the argument ..., the order of the input channels determines how the tensors will be passed to the wrapped nn_module.
If left as NULL (default), the argument module_generator must be given and the argument names of the module_generator's forward function are set as inname.

outname

(character())
The names of the output channels. These will be the output channels of the generated PipeOpModule and therefore also the names of the list returned by its $train(). In case there is more than one output channel, the nn_module that is constructed by this PipeOp during training must return a named list(), where the names of the list are the names of the output channels. The default is "output".

packages

(character())
The R packages this object depends on.

tags

(character())
The tags of the PipeOp. The tag "torch" is always added.


Method shapes_out()

Calculates the output shapes for the given input shapes, parameters and task.

Usage
PipeOpTorch$shapes_out(shapes_in, task = NULL)
Arguments
shapes_in

(list() of integer())
The input shapes, which must be in the same order as the input channel names of the PipeOp.

task

(Task or NULL)
The task, which is very rarely used (default is NULL). An exception is PipeOpTorchHead.

Returns

A named list() containing the output shapes. The names are the names of the output channels of the PipeOp.
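
For instance, for the "nn_linear" PipeOp from the Examples section below, the following call should return a named list with the inferred output shape, where NA stands for the unknown batch dimension (a sketch only):

library(mlr3torch)
po("nn_linear", out_features = 10)$shapes_out(list(c(NA, 4)))
# expected: $output equal to c(NA, 10)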

See Also

Other Graph Network: ModelDescriptor(), TorchIngressToken(), mlr_learners_torch_model, mlr_pipeops_module, mlr_pipeops_torch_ingress, mlr_pipeops_torch_ingress_categ, mlr_pipeops_torch_ingress_ltnsr, mlr_pipeops_torch_ingress_num, model_descriptor_to_learner(), model_descriptor_to_module(), model_descriptor_union(), nn_graph()

Examples


library(mlr3torch)

## Creating a neural network
# In torch

task = tsk("iris")

network_generator = torch::nn_module(
  initialize = function(task, d_hidden) {
    d_in = length(task$feature_names)
    self$linear = torch::nn_linear(d_in, d_hidden)
    self$output = if (task$task_type == "regr") {
      torch::nn_linear(d_hidden, 1)
    } else if (task$task_type == "classif") {
      torch::nn_linear(d_hidden, length(task$class_names))
    }
  },
  forward = function(x) {
    x = self$linear(x)
    x = torch::nnf_relu(x)
    self$output(x)
  }
)

network = network_generator(task, d_hidden = 50)
x = torch::torch_tensor(as.matrix(task$data(1, task$feature_names)))
y = torch::with_no_grad(network(x))


# In mlr3torch
network_generator = po("torch_ingress_num") %>>%
  po("nn_linear", out_features = 50) %>>%
  po("nn_head")
md = network_generator$train(task)[[1L]]
network = model_descriptor_to_module(md)
y = torch::with_no_grad(network(torch_ingress_num.input = x))



## Implementing a custom PipeOpTorch

# defining a custom module
nn_custom = torch::nn_module("nn_custom",
  initialize = function(d_in1, d_in2, d_out1, d_out2, bias = TRUE) {
    self$linear1 = torch::nn_linear(d_in1, d_out1, bias)
    self$linear2 = torch::nn_linear(d_in2, d_out2, bias)
  },
  forward = function(input1, input2) {
    output1 = self$linear1(input1)
    output2 = self$linear2(input2)

    list(output1 = output1, output2 = output2)
  }
)

# wrapping the module into a custom PipeOpTorch

library(paradox)

PipeOpTorchCustom = R6::R6Class("PipeOpTorchCustom",
  inherit = PipeOpTorch,
  public = list(
    initialize = function(id = "nn_custom", param_vals = list()) {
      param_set = ps(
        d_out1 = p_int(lower = 1, tags = c("required", "train")),
        d_out2 = p_int(lower = 1, tags = c("required", "train")),
        bias = p_lgl(default = TRUE, tags = "train")
      )
      super$initialize(
        id = id,
        param_vals = param_vals,
        param_set = param_set,
        inname = c("input1", "input2"),
        outname = c("output1", "output2"),
        module_generator = nn_custom
      )
    }
  ),
  private = list(
    .shape_dependent_params = function(shapes_in, param_vals, task) {
      c(param_vals,
        list(d_in1 = tail(shapes_in[["input1"]], 1), d_in2 = tail(shapes_in[["input2"]], 1))
      )
    },
    .shapes_out = function(shapes_in, param_vals, task) {
      list(
        output1 = c(head(shapes_in[["input1"]], -1), param_vals$d_out1),
        output2 = c(head(shapes_in[["input2"]], -1), param_vals$d_out2)
      )
    }
  )
)

## Training

# generate input
task = tsk("iris")
task1 = task$clone()$select(paste0("Sepal.", c("Length", "Width")))
task2 = task$clone()$select(paste0("Petal.", c("Length", "Width")))
graph = gunion(list(po("torch_ingress_num_1"), po("torch_ingress_num_2")))
mds_in = graph$train(list(task1, task2), single_input = FALSE)

mds_in[[1L]][c("graph", "task", "ingress", "pointer", "pointer_shape")]
mds_in[[2L]][c("graph", "task", "ingress", "pointer", "pointer_shape")]

# creating the PipeOpTorch and training it
po_torch = PipeOpTorchCustom$new()
po_torch$param_set$values = list(d_out1 = 10, d_out2 = 20)
train_input = list(input1 = mds_in[[1L]], input2 = mds_in[[2L]])
mds_out = do.call(po_torch$train, args = list(input = train_input))
po_torch$state

# the new model descriptors

# the resulting graphs are identical
identical(mds_out[[1L]]$graph, mds_out[[2L]]$graph)
# note that, as a side effect, one of the input graphs is also modified in-place for efficiency
mds_in[[1L]]$graph$edges

# The new task has both Sepal and Petal features
identical(mds_out[[1L]]$task, mds_out[[2L]]$task)
mds_out[[2L]]$task

# The new ingress slot contains all ingressors
identical(mds_out[[1L]]$ingress, mds_out[[2L]]$ingress)
mds_out[[1L]]$ingress

# The pointer and pointer_shape slots are different
mds_out[[1L]]$pointer
mds_out[[2L]]$pointer

mds_out[[1L]]$pointer_shape
mds_out[[2L]]$pointer_shape

## Prediction
predict_input = list(input1 = task1, input2 = task2)
tasks_out = do.call(po_torch$predict, args = list(input = predict_input))
identical(tasks_out[[1L]], tasks_out[[2L]])


[Package mlr3torch version 0.1.0]