all_nominal | Find all nominal variables. |
all_numeric | Specify all numeric variables. |
as.array.tensorflow.python.data.ops.dataset_ops.DatasetV2 | Get the single element of the dataset. |
as_array_iterator | Convert tf_dataset to an iterator that yields R arrays. |
as_tensor.tensorflow.python.data.ops.dataset_ops.DatasetV2 | Get the single element of the dataset. |
choose_from_datasets | Creates a dataset that deterministically chooses elements from datasets. |
csv_record_spec | Specification for reading a record from a text file with delimited values |
cur_info_env | Selectors |
dataset_batch | Combines consecutive elements of this dataset into batches. |
dataset_bucket_by_sequence_length | A transformation that buckets elements in a 'Dataset' by length |
dataset_cache | Caches the elements in this dataset. |
dataset_collect | Collects a dataset |
dataset_concatenate | Creates a dataset by concatenating given dataset with this dataset. |
dataset_decode_delim | Transform a dataset with delimited text lines into a dataset with named columns |
dataset_enumerate | Enumerates the elements of this dataset |
dataset_filter | Filter a dataset by a predicate |
dataset_flat_map | Maps map_func across this dataset and flattens the result. |
dataset_group_by_window | Group windows of elements by key and reduce them |
dataset_interleave | Maps map_func across this dataset and interleaves the results |
dataset_map | Map a function across a dataset. |
dataset_map_and_batch | Fused implementation of dataset_map() and dataset_batch() |
dataset_options | Get or Set Dataset Options |
dataset_padded_batch | Combines consecutive elements of this dataset into padded batches. |
dataset_prefetch | Creates a Dataset that prefetches elements from this dataset. |
dataset_prefetch_to_device | A transformation that prefetches dataset values to the given 'device' |
dataset_prepare | Prepare a dataset for analysis |
dataset_reduce | Reduces the input dataset to a single element. |
dataset_rejection_resample | A transformation that resamples a dataset to a target distribution. |
dataset_repeat | Repeats a dataset 'count' times. |
dataset_scan | A transformation that scans a function across an input dataset |
dataset_shard | Creates a dataset that includes only 1/'num_shards' of this dataset. |
dataset_shuffle | Randomly shuffles the elements of this dataset. |
dataset_shuffle_and_repeat | Shuffles and repeats a dataset, returning a new permutation for each epoch. |
dataset_skip | Creates a dataset that skips 'count' elements from this dataset |
dataset_snapshot | Persist the output of a dataset |
dataset_take | Creates a dataset with at most 'count' elements from this dataset |
dataset_take_while | A transformation that stops dataset iteration based on a predicate. |
dataset_unbatch | Unbatch a dataset |
dataset_unique | A transformation that discards duplicate elements of a Dataset. |
dataset_use_spec | Transform the dataset using the provided spec. |
dataset_window | Combines input elements into a dataset of windows. |
delim_record_spec | Specification for reading a record from a text file with delimited values |
dense_features | Dense Features |
feature_spec | Creates a feature specification. |
file_list_dataset | A dataset of all files matching a pattern |
fit.FeatureSpec | Fits a feature specification. |
fixed_length_record_dataset | A dataset of fixed-length records from one or more binary files. |
get_single_element | Get the single element of the dataset. |
has_type | Select variables of a given type. |
hearts | Heart Disease Data Set |
input_fn | Construct a tfestimators input function from a dataset |
input_fn.tf_dataset | Construct a tfestimators input function from a dataset |
iterator_get_next | Get next element from iterator |
iterator_initializer | An operation that should be run to initialize this iterator. |
iterator_make_initializer | Create an operation that can be run to initialize this iterator |
iterator_string_handle | String-valued tensor that represents this iterator |
layer_input_from_dataset | Creates a list of inputs from a dataset |
length.tensorflow.python.data.ops.dataset_ops.DatasetV2 | Get Dataset length |
length.tf_dataset | Get Dataset length |
make-iterator | Creates an iterator for enumerating the elements of this dataset. |
make_csv_dataset | Reads CSV files into a batched dataset |
make_iterator_from_string_handle | Creates an iterator for enumerating the elements of this dataset. |
make_iterator_from_structure | Creates an iterator for enumerating the elements of this dataset. |
make_iterator_initializable | Creates an iterator for enumerating the elements of this dataset. |
make_iterator_one_shot | Creates an iterator for enumerating the elements of this dataset. |
next_batch | Tensor(s) for retrieving the next batch from a dataset |
output_shapes | Output types and shapes |
output_types | Output types and shapes |
out_of_range_handler | Execute code that traverses a dataset until an out-of-range condition occurs |
random_integer_dataset | Creates a 'Dataset' of pseudorandom values |
range_dataset | Creates a dataset of a step-separated range of values. |
read_files | Read a dataset from a set of files |
sample_from_datasets | Samples elements at random from the datasets in 'datasets'. |
scaler | List of pre-made scalers |
scaler_min_max | Creates an instance of a min-max scaler |
scaler_standard | Creates an instance of a standard scaler |
selectors | Selectors |
sparse_tensor_slices_dataset | Splits each rank-N 'tf$SparseTensor' in this dataset row-wise. |
sqlite_dataset | A dataset consisting of the results from a SQL query |
sql_dataset | A dataset consisting of the results from a SQL query |
sql_record_spec | A dataset consisting of the results from a SQL query |
steps | Steps for feature columns specification. |
step_bucketized_column | Creates bucketized columns |
step_categorical_column_with_hash_bucket | Creates a categorical column with hash buckets specification |
step_categorical_column_with_identity | Create a categorical column with identity |
step_categorical_column_with_vocabulary_file | Creates a categorical column with vocabulary file |
step_categorical_column_with_vocabulary_list | Creates a categorical column specification |
step_crossed_column | Creates crosses of categorical columns |
step_embedding_column | Creates embedding columns |
step_indicator_column | Creates indicator columns |
step_numeric_column | Creates a numeric column specification |
step_remove_column | Creates a step that can remove columns |
step_shared_embeddings_column | Creates shared embeddings for categorical columns |
tensors_dataset | Creates a dataset with a single element, comprising the given tensors. |
tensor_slices_dataset | Creates a dataset whose elements are slices of the given tensors. |
text_line_dataset | A dataset comprising lines from one or more text files. |
tfrecord_dataset | A dataset comprising records from one or more TFRecord files. |
tsv_record_spec | Specification for reading a record from a text file with delimited values |
until_out_of_range | Execute code that traverses a dataset until an out-of-range condition occurs |
with_dataset | Execute code that traverses a dataset |
zip_datasets | Creates a dataset by zipping together the given datasets. |
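
The dataset transformation functions listed above compose with the pipe. A minimal sketch of a typical input pipeline, assuming eager execution and using the built-in mtcars data frame: slice an in-memory data frame row-wise, then shuffle, batch, and prefetch.

```r
library(tfdatasets)

# Slice mtcars into per-row elements, then shuffle, batch, and prefetch.
dataset <- tensor_slices_dataset(mtcars) %>%
  dataset_shuffle(buffer_size = nrow(mtcars)) %>%
  dataset_batch(8) %>%
  dataset_prefetch(1)
```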
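Datasets can also be read from files. A sketch with make_csv_dataset(), where "flights.csv" is a hypothetical path, not a file shipped with the package; column names and types are inferred from the file, and elements arrive already batched.

```r
# "flights.csv" is a hypothetical path; point this at a real CSV file.
flights <- make_csv_dataset(
  "flights.csv",
  batch_size = 32,
  shuffle = TRUE
)
```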
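The feature specification interface (feature_spec() together with the step_*, scaler_*, and selector helpers above) declares how raw columns become feature columns. A sketch using the bundled hearts data set; fitting the spec computes normalizer statistics and vocabularies from the data.

```r
library(tfdatasets)

# Declare feature columns for predicting `target`, then fit the spec.
spec <- feature_spec(hearts, target ~ .) %>%
  step_numeric_column(all_numeric(), normalizer_fn = scaler_standard()) %>%
  step_categorical_column_with_vocabulary_list(thal) %>%
  step_indicator_column(thal) %>%
  fit()

# The fitted spec can feed a Keras model, e.g. via
# layer_input_from_dataset(); dense_features() returns its dense columns.
str(dense_features(spec))
```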
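Once built, a dataset can be consumed from R. A sketch of two consumption styles, assuming eager execution: collecting everything at once with dataset_collect(), or iterating explicitly until an out-of-range condition signals exhaustion.

```r
library(tfdatasets)

dataset <- range_dataset(from = 1, to = 5) %>%
  dataset_batch(2)

# Pull every element into an R list.
elements <- dataset_collect(dataset)

# Alternatively, iterate element by element; iterator_get_next()
# raises an out-of-range error when the dataset is exhausted,
# which until_out_of_range() catches to end the loop cleanly.
iter <- make_iterator_one_shot(dataset)
until_out_of_range({
  batch <- iterator_get_next(iter)
  print(batch)
})
```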