deploy_databricks {pysparklyr} | R Documentation
Deploys Databricks-backed content to a publishing server
Description
This is a convenience function meant to make it easier for you to publish your Databricks-backed content to a publishing server. It is intended primarily for use with Posit Connect.
Usage
deploy_databricks(
  appDir = NULL,
  python = NULL,
  account = NULL,
  server = NULL,
  lint = FALSE,
  forceGeneratePythonEnvironment = TRUE,
  version = NULL,
  cluster_id = NULL,
  host = NULL,
  token = NULL,
  confirm = interactive(),
  ...
)
Arguments
appDir
A directory containing an application (e.g. a Shiny app or a plumber API). Defaults to NULL. If left NULL, and if called within RStudio, it will attempt to use the folder of the currently opened document within the IDE. If there are no open documents, or you are not working in the RStudio IDE, it will use the current working directory.
python
Full path to a Python binary for use by reticulate.
account
The name of the account to use to publish.
server
The name of the target server to publish to.
lint
Lint the project before initiating deployment? Defaults to FALSE, because linting has been causing issues for this type of content.
forceGeneratePythonEnvironment
If an existing requirements.txt file is found, it will be overwritten when this argument is TRUE.
version
The Databricks Runtime (DBR) version. Use if cluster_id is not available.
cluster_id
The Databricks cluster ID. Use if version is not available.
host
The Databricks host URL. Defaults to NULL. If left NULL, it will use the DATABRICKS_HOST environment variable.
token
The Databricks authentication token. Defaults to NULL. If left NULL, it will use the DATABRICKS_TOKEN environment variable.
confirm
Should the user be prompted to confirm that the correct information is being used for deployment? Defaults to interactive().
...
Additional named arguments passed down to rsconnect::deployApp().
Value
No value is returned to R; the function only prints output to the console.
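Examples

A minimal usage sketch. The app directory and cluster ID below are hypothetical placeholders; substitute your own values. When host and token are left NULL, credentials are expected to come from the Databricks environment variables described above.

```r
library(pysparklyr)

# Hypothetical values -- replace with your own app folder and cluster ID.
# host and token are omitted here, so the function falls back to the
# DATABRICKS_HOST and DATABRICKS_TOKEN environment variables.
deploy_databricks(
  appDir = "my_shiny_app",              # folder containing the content to publish
  cluster_id = "0123-456789-abcdefgh",  # cluster whose DBR version the content targets
  confirm = FALSE                       # skip the interactive confirmation prompt
)
```

Alternatively, pass version instead of cluster_id when you know the Databricks Runtime version but not a specific cluster.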