bulk_import {AzureCosmosR}    R Documentation
Import a set of documents to an Azure Cosmos DB container
Description
Import a set of documents to an Azure Cosmos DB container
Usage
bulk_import(container, ...)
## S3 method for class 'cosmos_container'
bulk_import(
container,
data,
init_chunksize = 1000,
verbose = TRUE,
procname = "_AzureCosmosR_bulkImport",
...
)
Arguments
container
A Cosmos DB container object, as obtained by get_cosmos_container or create_cosmos_container.
...
Optional arguments passed to lower-level functions.
data
The data to import. Can be a data frame, or a string containing JSON text.
init_chunksize
The number of rows to import per chunk.
verbose
Whether to print updates to the console as the import progresses.
procname
The stored procedure name to use for the server-side import code. Change this if, for some reason, the default name is taken.
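As a usage sketch, the chunking and progress arguments can be set explicitly. This assumes cont is a Cosmos DB container object and big_df is a hypothetical data frame containing the container's partition key column; the values shown are illustrative, not recommendations.
# illustrative sketch: big_df is a hypothetical data frame with the partition key column
bulk_import(cont, big_df, init_chunksize=5000, verbose=FALSE)
# procname only needs changing if the default stored procedure name is already taken
bulk_import(cont, big_df, procname="_myorg_bulkImport")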
Details
This is a convenience function to import a dataset into a container. It works by creating a stored procedure and then calling it in a loop, passing the to-be-imported data in chunks. The dataset must include a column for the container's partition key or an error will result.
Note that this function is not meant for production use. In particular, if the import fails midway through, it will not clean up after itself: you should call bulk_delete to remove the remnants of a failed import.
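As a concrete illustration of the partition key requirement, a minimal pre-flight check could look like the following. This is not part of the package API; it assumes the container was created with partition_key="sex", as in the example below.
pkey <- "sex"   # the container's partition key, set when the container was created
if (!pkey %in% names(dplyr::starwars))
    stop("data is missing the partition key column: ", pkey)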
Value
A list containing the number of rows imported, for each value of the partition key.
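For instance, assuming the return value is a named list of per-partition row counts as described above, the overall number of imported rows could be computed as follows (cont is assumed to be an existing container object):
res <- bulk_import(cont, dplyr::starwars)
# sum the per-partition counts to get the total number of rows imported
total_imported <- sum(unlist(res))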
See Also
bulk_delete
Examples
## Not run:
endp <- cosmos_endpoint("https://myaccount.documents.azure.com:443/", key="mykey")
db <- get_cosmos_database(endp, "mydatabase")
cont <- create_cosmos_container(db, "mycontainer", partition_key="sex")
# importing the Star Wars data from dplyr
# notice that rows with sex=NA are not imported
bulk_import(cont, dplyr::starwars)
# importing from a JSON file
writeLines(jsonlite::toJSON(dplyr::starwars), "starwars.json")
bulk_import(cont, "starwars.json")
## End(Not run)