materialize_internal {mlr3torch}        R Documentation

Materialize a Lazy Tensor

Description

Convert a lazy_tensor to a torch_tensor.

Usage

materialize_internal(x, device = "cpu", cache = NULL, rbind)

Arguments

x

(lazy_tensor())
The lazy tensor to materialize.

device

(character(1L))
The device to put the materialized tensor on (after running the preprocessing graph).

cache

(NULL or environment())
An optional environment in which to cache the (intermediate) results of the materialization. Caching can make data loading faster when multiple lazy_tensors reference the same dataset or graph.

rbind

(logical(1))
Whether to rbind the resulting tensors (TRUE) or return them as a list of tensors (FALSE).

Details

Materializing a lazy tensor consists of:

  1. Loading the data from the internal dataset of the DataDescriptor.

  2. Processing this data in the preprocessing Graph.

  3. Returning the result of the PipeOp pointed to by the DataDescriptor (pointer).
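As a rough illustration of these steps from the user's perspective, the sketch below assumes the exported as_lazy_tensor() and materialize() helpers (that they delegate to materialize_internal() is an assumption here):

library(mlr3torch)
library(torch)

# Wrap a plain torch_tensor in a lazy_tensor; its first dimension is treated as the
# batch dimension, so lt has length 8 and points to an internal dataset together with
# a (trivial) preprocessing graph.
lt = as_lazy_tensor(torch_randn(8, 4))

# Materializing runs steps 1 to 3 above and yields a torch_tensor of shape (8, 4).
materialize(lt, rbind = TRUE)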

When materializing multiple lazy_tensor columns, caching can be useful because: (a) output(s) from the dataset might be the input to multiple graphs (in task_dataset this should rarely be the case, because we try to merge the graphs), and (b) different lazy tensors might be outputs from the same graph.

For this reason, it is possible to provide a cache environment. The hash key for case (a) is the hash of the indices and the dataset; the hash key for case (b) is the hash of the indices, the dataset, and the preprocessing graph.
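The sketch below shows how a shared cache environment can be passed when materializing two columns by hand; lt1 and lt2 are hypothetical lazy_tensor columns backed by the same underlying dataset:

# Hypothetical columns lt1 and lt2 share their dataset, so passing the same cache
# environment to both calls avoids loading the common batches twice (case (a) above).
cache = new.env()
t1 = materialize_internal(lt1, device = "cpu", cache = cache, rbind = TRUE)
t2 = materialize_internal(lt2, device = "cpu", cache = cache, rbind = TRUE)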

Value

(torch_tensor() or list() of torch_tensors)
A single torch_tensor if rbind is TRUE, otherwise a list of torch_tensors (one per element of x).
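The sketch below contrasts the two return modes, reusing the hypothetical lazy tensor lt of length 8 from the Details section:

# With rbind = TRUE the per-element tensors are stacked into one torch_tensor with
# 8 rows; with rbind = FALSE a list of 8 individual torch_tensors is returned.
materialize_internal(lt, device = "cpu", cache = NULL, rbind = TRUE)
materialize_internal(lt, device = "cpu", cache = NULL, rbind = FALSE)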


[Package mlr3torch version 0.1.0 Index]