Interface to the Azure Machine Learning SDK

Documentation for package ‘azuremlsdk’ version 1.10.0

Help Pages

-- A --

aci_webservice_deployment_config Create a deployment config for deploying an ACI web service
aks_webservice_deployment_config Create a deployment config for deploying an AKS web service
attach_aks_compute Attach an existing AKS cluster to a workspace
azureml Access functions/modules in the azureml Python SDK that are not exposed through the exported R functions

-- B --

bandit_policy Define a Bandit policy for early termination of HyperDrive runs
bayesian_parameter_sampling Define Bayesian sampling over a hyperparameter search space

-- C --

cancel_run Cancel a run
choice Specify a discrete set of options to sample from
complete_run Mark a run as completed
container_registry Specify Azure Container Registry details
convert_to_dataset_with_csv_files Convert the current dataset into a FileDataset containing CSV files
convert_to_dataset_with_parquet_files Convert the current dataset into a FileDataset containing Parquet files
cran_package Specify a CRAN package to install in an environment
create_aks_compute Create an AksCompute cluster
create_aml_compute Create an AmlCompute cluster
create_child_run Create a child run
create_child_runs Create one or many child runs
create_file_dataset_from_files Create a FileDataset to represent file streams
create_tabular_dataset_from_delimited_files Create an unregistered, in-memory TabularDataset from delimited files
create_tabular_dataset_from_json_lines_files Create a TabularDataset to represent tabular data in JSON Lines files (http://jsonlines.org/)
create_tabular_dataset_from_parquet_files Create an unregistered, in-memory TabularDataset from Parquet files
create_tabular_dataset_from_sql_query Create a TabularDataset to represent tabular data in SQL databases
create_workspace Create a new Azure Machine Learning workspace
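The workspace functions above combine into a short setup flow. A minimal sketch, assuming an existing Azure subscription; the workspace name, subscription ID, and resource group below are placeholders, and the code needs live Azure credentials, so treat it as illustrative:

```r
library(azuremlsdk)

# Create a workspace; all identifiers here are placeholders.
ws <- create_workspace(
  name = "myworkspace",
  subscription_id = "<subscription-id>",
  resource_group = "myresources",
  location = "eastus",
  create_resource_group = TRUE
)

# Cache the workspace details locally so later sessions can reload them
# instead of repeating the identifiers.
write_workspace_config(ws)
ws <- load_workspace_from_config(".")
```

`get_workspace()` is the alternative when the workspace already exists and no config file has been written.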

-- D --

dataset_consumption_config Represent how to deliver the dataset to a compute target
data_path Represent a path to data in a datastore
data_type_bool Configure conversion to bool
data_type_datetime Configure conversion to datetime
data_type_double Configure conversion to 53-bit double
data_type_long Configure conversion to 64-bit integer
data_type_string Configure conversion to string
define_timestamp_columns_for_dataset Define timestamp columns for the dataset
delete_compute Delete a cluster
delete_local_webservice Delete a local web service from the local machine
delete_model Delete a model from its associated workspace
delete_secrets Delete secrets from a keyvault
delete_webservice Delete a web service from a given workspace
delete_workspace Delete a workspace
deploy_model Deploy a web service from registered model(s)
detach_aks_compute Detach an AksCompute cluster from its associated workspace
download_files_from_run Download files from a run
download_file_from_run Download a file from a run
download_from_datastore Download data from a datastore to the local file system
download_from_file_dataset Download file streams defined by the dataset as local files
download_model Download a model to the local file system
drop_columns_from_dataset Drop the specified columns from the dataset
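The deployment entries in this section (`deploy_model`, together with `inference_config` and the `*_deployment_config` helpers from section A) form one pipeline. A minimal ACI sketch, assuming `ws` is an existing workspace and that `model.rds` and `score.R` exist; all names are placeholders and the code needs a live workspace:

```r
library(azuremlsdk)

# Register a trained model file with the workspace (placeholder names).
model <- register_model(ws, model_path = "model.rds", model_name = "my-model")

# Describe how to score requests, and what compute to give the container.
config <- inference_config(entry_script = "score.R")
deployment <- aci_webservice_deployment_config(cpu_cores = 1, memory_gb = 1)

# Deploy and block until the service is up.
service <- deploy_model(ws, "my-service", list(model), config, deployment)
wait_for_deployment(service, show_output = TRUE)

# Score new data against the endpoint (JSON payload is illustrative).
invoke_webservice(service, input_data = '{"x": [1, 2, 3]}')
```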

-- E --

estimator Create an estimator
experiment Create an Azure Machine Learning experiment
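The two entries above are the core of remote training: an `experiment` groups runs, and an `estimator` describes what to run and where. A minimal sketch, assuming `ws` is an existing workspace and that the `scripts/train.R` script and `my-cluster` compute target already exist (all placeholders):

```r
library(azuremlsdk)

# An experiment is a named container for runs in the workspace.
exp <- experiment(ws, "my-experiment")

# An estimator bundles the training script with its run configuration.
est <- estimator(
  source_directory = "scripts",
  entry_script = "train.R",
  compute_target = "my-cluster"
)

# Submit and block until the remote run finishes.
run <- submit_experiment(exp, est)
wait_for_run_completion(run, show_output = TRUE)
```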

-- F --

filter_dataset_after_time Filter a TabularDataset with timestamp columns to keep data after a specified start time
filter_dataset_before_time Filter a TabularDataset with timestamp columns to keep data before a specified end time
filter_dataset_between_time Filter a TabularDataset to keep data between a specified start and end time
filter_dataset_from_recent_time Filter a TabularDataset to contain only the specified duration (amount) of recent data

-- G --

generate_entry_script Generate the control script for the experiment
generate_new_webservice_key Regenerate one of a web service's keys
get_aks_compute_credentials Get the credentials for an AksCompute cluster
get_best_run_by_primary_metric Return the best performing run amongst all completed runs
get_child_runs Get all children for the current run selected by specified filters
get_child_runs_sorted_by_primary_metric Get the child runs sorted in descending order by best primary metric
get_child_run_hyperparameters Get the hyperparameters for all child runs
get_child_run_metrics Get the metrics from all child runs
get_compute Get an existing compute cluster
get_current_run Get the context object for a run
get_dataset_by_id Get a Dataset from the workspace by its ID
get_dataset_by_name Get a registered Dataset from the workspace by its registration name
get_datastore Get an existing datastore
get_default_datastore Get the default datastore for a workspace
get_default_keyvault Get the default keyvault for a workspace
get_environment Get an existing environment
get_file_dataset_paths Get a list of file paths for each file stream defined by the dataset
get_input_dataset_from_run Return the named list of input datasets for a run
get_model Get a registered model
get_model_package_container_registry Get the Azure container registry that a packaged model uses
get_model_package_creation_logs Get the model package creation logs
get_run Get an experiment run
get_runs_in_experiment Return a generator of the runs for an experiment
get_run_details Get the details of a run
get_run_details_with_logs Get the details of a run along with the log files' contents
get_run_file_names List the files that are stored in association with a run
get_run_metrics Get the metrics logged to a run
get_secrets Get secrets from a keyvault
get_secrets_from_run Get secrets from the keyvault associated with a run's workspace
get_webservice Get a deployed web service
get_webservice_keys Retrieve auth keys for a web service
get_webservice_logs Retrieve the logs for a web service
get_webservice_token Retrieve the auth token for a web service
get_workspace Get an existing workspace
get_workspace_details Get the details of a workspace
github_package Specify a GitHub package to install in an environment
grid_parameter_sampling Define grid sampling over a hyperparameter search space

-- H --

hyperdrive_config Create a configuration for a HyperDrive run
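`hyperdrive_config` ties together the sampling, metric-goal, and early-termination entries listed elsewhere in this index (`random_parameter_sampling`, `primary_metric_goal`, `bandit_policy`, ...). A minimal sketch, assuming `exp` is an experiment and `est` an estimator whose training script logs a metric named "accuracy"; the parameter names and ranges are placeholders:

```r
library(azuremlsdk)

# Randomly sample a search space (names/ranges are illustrative).
sampling <- random_parameter_sampling(list(
  learning_rate = uniform(0.001, 0.1),
  batch_size = choice(c(16, 32, 64))
))

config <- hyperdrive_config(
  hyperparameter_sampling = sampling,
  primary_metric_name = "accuracy",
  primary_metric_goal = primary_metric_goal("MAXIMIZE"),
  max_total_runs = 8,
  policy = bandit_policy(slack_factor = 0.1),  # early termination
  estimator = est
)

run <- submit_experiment(exp, config)
best <- get_best_run_by_primary_metric(run)
```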

-- I --

inference_config Create an inference configuration for model deployments
install_azureml Install the azureml SDK package
interactive_login_authentication Manage authentication and acquire an authorization token in interactive login workflows
invoke_webservice Call a web service with the provided input

-- K --

keep_columns_from_dataset Keep the specified columns and drop all others from the dataset

-- L --

list_nodes_in_aml_compute Get the details (e.g., IP address, port) of all the compute nodes in the compute target
list_secrets List the secrets in a keyvault
list_supported_vm_sizes List the supported VM sizes in a region
list_workspaces List all workspaces that the user has access to in a subscription ID
load_dataset_into_data_frame Load all records from the dataset into a data frame
load_workspace_from_config Load workspace configuration details from a config file
local_webservice_deployment_config Create a deployment config for deploying a local web service
lognormal Specify a normal distribution of the form 'exp(normal(mu, sigma))'
loguniform Specify a log uniform distribution
log_accuracy_table_to_run Log an accuracy table metric to a run
log_confusion_matrix_to_run Log a confusion matrix metric to a run
log_image_to_run Log an image metric to a run
log_list_to_run Log a vector metric value to a run
log_metric_to_run Log a metric to a run
log_predictions_to_run Log a predictions metric to a run
log_residuals_to_run Log a residuals metric to a run
log_row_to_run Log a row metric to a run
log_table_to_run Log a table metric to a run
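The `log_*_to_run` helpers above record metrics from inside a training script so they appear in the run's history. A minimal sketch (metric names are illustrative; it only does anything meaningful inside a submitted run):

```r
library(azuremlsdk)

# Inside a training script submitted via submit_experiment():
run <- get_current_run()

# Log a scalar and a vector metric against the active run.
log_metric_to_run("accuracy", 0.92, run = run)
log_list_to_run("losses", c(0.9, 0.5, 0.3), run = run)
```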

-- M --

median_stopping_policy Define a median stopping policy for early termination of HyperDrive runs
merge_results Combine the results from the parallel training
mount_file_dataset Create a context manager for mounting file streams defined by the dataset as local files

-- N --

normal Specify a real value that is normally-distributed with mean 'mu' and standard deviation 'sigma'

-- P --

package_model Create a model package that packages all the assets needed to host a model as a web service
plot_run_details Generate table of run details
primary_metric_goal Define supported metric goals for hyperparameter tuning
promote_headers_behavior Define options for how column headers are processed when reading data from files to create a dataset
pull_model_package_image Pull the Docker image from a 'ModelPackage' to your local Docker environment

-- Q --

qlognormal Specify a normal distribution of the form 'round(exp(normal(mu, sigma)) / q) * q'
qloguniform Specify a uniform distribution of the form 'round(exp(uniform(min_value, max_value)) / q) * q'
qnormal Specify a normal distribution of the form 'round(normal(mu, sigma) / q) * q'
quniform Specify a uniform distribution of the form 'round(uniform(min_value, max_value) / q) * q'

-- R --

randint Specify a set of random integers in the range [0, upper)
random_parameter_sampling Define random sampling over a hyperparameter search space
random_split_dataset Split file streams in the dataset into two parts randomly and approximately by the percentage specified
register_azure_blob_container_datastore Register an Azure blob container as a datastore
register_azure_data_lake_gen2_datastore Register an Azure Data Lake Gen2 datastore
register_azure_file_share_datastore Register an Azure file share as a datastore
register_azure_postgre_sql_datastore Register an Azure PostgreSQL database as a datastore
register_azure_sql_database_datastore Register an Azure SQL database as a datastore
register_dataset Register a Dataset in the workspace
register_do_azureml_parallel Register AmlCompute as a parallel backend with the foreach package
register_environment Register an environment in the workspace
register_model Register a model to a given workspace
register_model_from_run Register a model for operationalization
reload_local_webservice_assets Reload a local web service's entry script and dependencies
resource_configuration Initialize a ResourceConfiguration
r_environment Create an environment

-- S --

save_model_package_files Save a Dockerfile and dependencies from a 'ModelPackage' to your local file system
service_principal_authentication Manage authentication using a service principal instead of a user identity
set_default_datastore Set the default datastore for a workspace
set_secrets Add secrets to a keyvault
skip_from_dataset Skip file streams from the top of the dataset by the specified count
split_tasks Split the job into parallel tasks
start_logging_run Create an interactive logging run
submit_child_run Submit an experiment and return the active child run
submit_experiment Submit an experiment and return the active created run

-- T --

take_from_dataset Take a sample of file streams from the top of the dataset by the specified count
take_sample_from_dataset Take a random sample of file streams in the dataset approximately by the probability specified
truncation_selection_policy Define a truncation selection policy for early termination of HyperDrive runs

-- U --

uniform Specify a uniform distribution of options to sample from
unregister_all_dataset_versions Unregister all versions under the registration name of this dataset from the workspace
unregister_datastore Unregister a datastore from its associated workspace
update_aci_webservice Update a deployed ACI web service
update_aks_webservice Update a deployed AKS web service
update_aml_compute Update scale settings for an AmlCompute cluster
update_local_webservice Update a local web service
upload_files_to_datastore Upload files to the Azure storage a datastore points to
upload_files_to_run Upload files to a run
upload_folder_to_run Upload a folder to a run
upload_to_datastore Upload a local directory to the Azure storage a datastore points to

-- V --

view_run_details Initialize run details widget

-- W --

wait_for_deployment Wait for a web service to finish deploying
wait_for_model_package_creation Wait for a model package to finish creating
wait_for_provisioning_completion Wait for a cluster to finish provisioning
wait_for_run_completion Wait for the completion of a run
write_workspace_config Write out the workspace configuration details to a config file