tar_option_set {targets}   R Documentation
Set target options.
Description
Set target options, including default arguments to tar_target() such as packages, storage format, iteration type, and cue. Only the non-null arguments are actually set as options. See currently set options with tar_option_get().

To use tar_option_set() effectively, put it in your workflow's target script file (default: _targets.R) before calls to tar_target() or tar_target_raw().
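For context, here is a minimal sketch of a target script file that sets options before defining targets. The package name "dplyr", the "qs" format, and the target definitions are illustrative placeholders, not requirements of this function.

# _targets.R
library(targets)

# Defaults inherited by every target defined below.
tar_option_set(
  packages = "dplyr",    # loaded right before each target runs (illustrative)
  format = "qs",         # default storage format (requires the qs package)
  error = "continue"     # keep running other targets if one errors
)

# Targets defined after tar_option_set() inherit the options above.
list(
  tar_target(raw_data, data.frame(x = rnorm(100))),
  tar_target(x_mean, dplyr::summarise(raw_data, mean_x = mean(x)))
)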
Usage
tar_option_set(
  tidy_eval = NULL,
  packages = NULL,
  imports = NULL,
  library = NULL,
  envir = NULL,
  format = NULL,
  repository = NULL,
  repository_meta = NULL,
  iteration = NULL,
  error = NULL,
  memory = NULL,
  garbage_collection = NULL,
  deployment = NULL,
  priority = NULL,
  backoff = NULL,
  resources = NULL,
  storage = NULL,
  retrieval = NULL,
  cue = NULL,
  description = NULL,
  debug = NULL,
  workspaces = NULL,
  workspace_on_error = NULL,
  seed = NULL,
  controller = NULL,
  trust_object_timestamps = NULL
)
Arguments
tidy_eval
Logical, whether to enable tidy evaluation when interpreting the command and pattern arguments of targets. If TRUE, you can use the "bang-bang" operator !! to programmatically insert the values of global objects.

packages
Character vector of packages to load right before the target runs or the output data is reloaded for downstream targets. Use tar_option_set() to set packages globally for all subsequent targets you define.

imports
Character vector of package names. For every package listed, targets tracks every object in the package namespace as if it were part of the pipeline, so changes to those objects can invalidate targets. There are several important limitations; in particular, namespaced calls such as pkg::fun() are not detected by the static code analysis and are therefore ignored.

library
Character vector of library paths to try when loading the packages.

envir
Environment containing functions and global objects common to all targets in the pipeline. The envir argument of tar_make() and related functions always overrides the current value of tar_option_get("envir") just before the target script file runs, so if you need an alternative envir, set it with tar_option_set() from within the target script file itself. Package environments should not be assigned to envir; to include package objects as upstream dependencies in the pipeline, use the packages and imports arguments instead.
format
Optional storage format for the target's return value. With the exception of format = "file", each target gets a file in _targets/objects, and each format is a different way to save and load this file. See the "Storage formats" section below for the available formats.

repository
Character of length 1, remote repository for target storage. Choices include "local" (default, local file system), "aws" (Amazon Web Services S3 storage), and "gcp" (Google Cloud Platform storage). Note: if repository is not "local" and format is "file", then the target must return a single file path (see the "Storage formats" section).

repository_meta
Character of length 1 with the same values as repository. Controls the remote repository for the metadata text files in _targets/meta/, including target metadata and progress data.

iteration
Character of length 1, name of the iteration mode of the target. Choices: "vector" (default; dynamic branching and aggregation use vctrs-style vectorization), "list" (branching and aggregation treat the value as a list), and "group" (branch over subsets of a data frame defined with tar_group()).

error
Character of length 1, what to do if the target stops and throws an error. Options include "stop" (default: the whole pipeline stops and throws an error), "continue" (the whole pipeline keeps going), "null" (the errored target returns NULL so downstream targets can still run), and "abridge" (targets already running are allowed to finish, but no new targets launch after that).
memory
Character of length 1, memory strategy. If "persistent", the target's return value stays in memory until the end of the pipeline (unless storage is "worker", in which case targets unloads the value from memory right after storing it). If "transient", the target gets unloaded after every new target completes, which reduces memory usage. Either way, the value is automatically reloaded into memory whenever another target needs it.

garbage_collection
Logical, whether to run base::gc() just before the target runs.

deployment
Character of length 1. If "worker", the target builds on a parallel worker when the pipeline runs with distributed computing. If "main", the target builds on the host machine / the process that manages the pipeline.

priority
Numeric of length 1 between 0 and 1. Controls which targets get deployed first when multiple competing targets are ready simultaneously. Targets with priorities closer to 1 get dispatched earlier (and polled earlier in tar_make_future()).

backoff
An object from tar_backoff() configuring the exponential backoff algorithm of the pipeline. See tar_backoff() for details.

resources
Object returned by tar_resources() with optional settings for high-performance computing functionality, alternative data storage formats, and other optional capabilities of targets. See tar_resources() for details.
storage
Character of length 1, only relevant to pipelines that run targets on parallel workers. If "main", the target's return value is sent back to the main process and saved/uploaded from there. If "worker", the worker that ran the target saves/uploads the value itself.

retrieval
Character of length 1, only relevant to pipelines that run targets on parallel workers. If "main", the main process loads the target's upstream dependencies and sends them to the worker before the target runs. If "worker", the worker loads the dependencies itself.

cue
An optional object from tar_cue() to customize the rules that decide whether the target is up to date.

description
Character of length 1, a custom free-form human-readable text description of the target. Descriptions appear as target labels in functions like tar_manifest() and tar_visnetwork(), and they let you select subsets of targets in functions like tar_make().

debug
Character vector of names of targets to run in debug mode. To use effectively, you must set callr_function = NULL in tar_make() (or a related function) so the pipeline runs in your current R session. See the debugging chapter of the user manual for details.
workspaces
Character vector of target names. Could be non-branching targets, whole dynamic branching targets, or individual branch names. targets saves a workspace file for each of these targets, which helps with debugging. See tar_workspace() for details.

workspace_on_error
Logical of length 1, whether to save a workspace file for each target that throws an error. Workspace files help with debugging. See tar_workspace() for details.

seed
Integer of length 1, seed for generating target-specific pseudo-random number generator seeds. These target-specific seeds are deterministic and depend on the target name and this global seed, which keeps results reproducible across runs. Either the user or third-party packages built on top of targets may still set their own seeds inside a target's command. The seed option can be set to NA to disable automatic seed-setting.

controller
A controller or controller group object produced by the crew R package (e.g. crew::crew_controller_local()) for running targets on parallel workers.
trust_object_timestamps
Logical of length 1, whether to use file system modification timestamps to check whether the target output data files in _targets/objects/ are up to date. If TRUE (default), then targets trusts a file whose timestamp agrees with the timestamp recorded in the metadata and skips recomputing its hash, which saves time. However, timestamp precision varies from a few nanoseconds at best to 2 entire seconds at worst, and timestamps with poor precision should not be fully trusted if there is any possibility that you will manually change the file within 2 seconds after the pipeline finishes. If the data store is on a file system with low-precision timestamps, then consider setting trust_object_timestamps to FALSE so targets always recomputes the hashes of the files in _targets/objects/.

See the sketch after this list for an example that combines several of these options.
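As a sketch (one reasonable configuration among many), the call below combines several of the arguments described above inside _targets.R. The specific values are illustrative assumptions, not recommended defaults.

tar_option_set(
  packages = "data.table",   # loaded before each target runs (illustrative choice)
  error = "null",            # errored targets return NULL so downstream targets still run
  memory = "transient",      # unload each target's value after new targets complete
  workspace_on_error = TRUE, # save a workspace file for any target that errors
  seed = 1L                  # base seed for deterministic target-specific seeds
)

Subsequent calls such as tar_option_get("error") or tar_option_get("memory") return the values set here.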
Value
NULL (invisibly).
Storage formats
- "rds": Default, uses saveRDS() and readRDS(). Should work for most objects, but slow.
- "qs": Uses qs::qsave() and qs::qread(). Should work for most objects, much faster than "rds". Optionally set the preset for qsave() through tar_resources() and tar_resources_qs().
- "feather": Uses arrow::write_feather() and arrow::read_feather() (version 2.0). Much faster than "rds", but the value must be a data frame. Optionally set compression and compression_level in arrow::write_feather() through tar_resources() and tar_resources_feather(). Requires the arrow package (not installed by default).
- "parquet": Uses arrow::write_parquet() and arrow::read_parquet() (version 2.0). Much faster than "rds", but the value must be a data frame. Optionally set compression and compression_level in arrow::write_parquet() through tar_resources() and tar_resources_parquet(). Requires the arrow package (not installed by default).
- "fst": Uses fst::write_fst() and fst::read_fst(). Much faster than "rds", but the value must be a data frame. Optionally set the compression level for fst::write_fst() through tar_resources() and tar_resources_fst(). Requires the fst package (not installed by default).
- "fst_dt": Same as "fst", but the value is a data.table. Deep copies are made as appropriate in order to protect against the global effects of in-place modification. Optionally set the compression level the same way as for "fst".
- "fst_tbl": Same as "fst", but the value is a tibble. Optionally set the compression level the same way as for "fst".
- "keras": Superseded by tar_format() and incompatible with error = "null" (in tar_target() or tar_option_set()). Uses keras::save_model_hdf5() and keras::load_model_hdf5(). The value must be a Keras model. Requires the keras package (not installed by default).
- "torch": Superseded by tar_format() and incompatible with error = "null" (in tar_target() or tar_option_set()). Uses torch::torch_save() and torch::torch_load(). The value must be an object from the torch package such as a tensor or neural network module. Requires the torch package (not installed by default).
- "file": A dynamic file. To use this format, the target needs to manually identify or save some data and return a character vector of paths to the data (must be a single file path if repository is not "local"). These paths must point to existing files or nonempty directories, and they must not contain the characters | or *. Then, targets automatically checks those files and cues the appropriate run/skip decisions if those files are out of date. All the files and directories you return must actually exist, or else targets will throw an error. (And if storage is "worker", targets will first stall out trying to wait for the file to arrive over a network file system.) If the target does not create any files, the return value should be character(0).
  If repository is not "local" and format is "file", then the character vector returned by the target must be of length 1 and point to a single file. (Directories and vectors of multiple file paths are not supported for dynamic files on the cloud.) That output file is uploaded to the cloud and tracked for changes where it exists in the cloud. The local file is deleted after the target runs.
  To check if the file is up to date, targets avoids timestamps and always recomputes the hash. If you find this to be too slow, and if you trust the time stamps on your file system (see the trust_object_timestamps argument of tar_option_set()), then consider format = "file_fast" instead.
- "file_fast": Same as format = "file", except that targets uses time stamps to check if a file is up to date. If the time stamp of the file agrees with the time stamp in the metadata, the file is considered up to date. Otherwise, targets recomputes the hash of the file to make a final determination. Low-precision timestamps are not reliable for this, and some file systems have timestamp precision as poor as 2 seconds. See the trust_object_timestamps argument of tar_option_set() for advice on this.
- "url": A dynamic input URL. For this storage format, repository is implicitly "local", and the behavior is like format = "file" except that the return value of the target is a URL that already exists and serves as input data for downstream targets. Optionally supply a custom curl handle through tar_resources() and tar_resources_url(). In new_handle(), nobody = TRUE is important because it ensures targets just downloads the metadata instead of the entire data file when it checks time stamps and hashes. The data file at the URL needs to have an ETag or a Last-Modified time stamp, or else the target will throw an error because it cannot track the data. Also, use extreme caution when trying to use format = "url" to track uploads. You must be absolutely certain the ETag and Last-Modified time stamp are fully updated and available by the time the target's command finishes running. targets makes no attempt to wait for the web server.

A custom format can be supplied with tar_format(). For this choice, it is the user's responsibility to provide methods for (un)serialization and (un)marshaling the return value of the target.

The formats starting with "aws_" are deprecated as of 2022-03-13 (targets version > 0.10.0). For cloud storage integration, use the repository argument instead.
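The sketch below shows one plausible way these formats combine with tar_option_set(): a pipeline-wide default plus per-target overrides. The choice of "qs" and "parquet" is illustrative and assumes the qs and arrow packages are installed.

tar_option_set(format = "qs")  # default format for most targets

list(
  tar_target(model_fit, lm(mpg ~ wt, data = mtcars)),  # stored with the default "qs"
  tar_target(
    coef_table,
    data.frame(term = names(coef(model_fit)), estimate = coef(model_fit)),
    format = "parquet"  # per-target override; the value must be a data frame
  ),
  tar_target(
    coef_csv,
    {
      path <- "coefficients.csv"
      write.csv(data.frame(estimate = coef(model_fit)), path, row.names = FALSE)
      path  # return the file path so targets can track the file
    },
    format = "file"
  )
)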
See Also
Other configuration: tar_config_get(), tar_config_projects(), tar_config_set(), tar_config_unset(), tar_config_yaml(), tar_envvars(), tar_option_get(), tar_option_reset()
Examples
tar_option_get("format") # default format before we set anything
tar_target(x, 1)$settings$format
tar_option_set(format = "fst_tbl") # new default format
tar_option_get("format")
tar_target(x, 1)$settings$format
tar_option_reset() # reset the format
tar_target(x, 1)$settings$format
if (identical(Sys.getenv("TAR_EXAMPLES"), "true")) { # for CRAN
tar_dir({ # tar_dir() runs code from a temp dir for CRAN.
tar_script({
tar_option_set(cue = tar_cue(mode = "always")) # All targets always run.
list(tar_target(x, 1), tar_target(y, 2))
})
tar_make()
tar_make()
})
}
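The sketch below extends the examples with the debug option described above. The target name "y" is a placeholder, and callr_function = NULL is needed so the debugger can attach to the current interactive session.

if (identical(Sys.getenv("TAR_EXAMPLES"), "true")) { # for CRAN
tar_dir({ # tar_dir() runs code from a temp dir for CRAN.
  tar_script({
    tar_option_set(debug = "y") # pause in an interactive debugger when y runs
    list(tar_target(x, 1), tar_target(y, x + 1))
  })
  tar_make(callr_function = NULL) # run in the current session for debugging
})
}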