list_azure_files {AzureStor}    R Documentation

Operations on a file share


Description

Upload, download, or delete a file; list files in a directory; create or delete directories; check file existence.


Usage

list_azure_files(share, dir = "/", info = c("all", "name"),
  prefix = NULL, recursive = FALSE)

upload_azure_file(share, src, dest = basename(src), create_dir = FALSE,
  blocksize = 2^22, put_md5 = FALSE, use_azcopy = FALSE)

multiupload_azure_file(share, src, dest, recursive = FALSE,
  create_dir = recursive, blocksize = 2^22, put_md5 = FALSE,
  use_azcopy = FALSE, max_concurrent_transfers = 10)

download_azure_file(share, src, dest = basename(src), blocksize = 2^22,
  overwrite = FALSE, check_md5 = FALSE, use_azcopy = FALSE)

multidownload_azure_file(share, src, dest, recursive = FALSE,
  blocksize = 2^22, overwrite = FALSE, check_md5 = FALSE,
  use_azcopy = FALSE, max_concurrent_transfers = 10)

delete_azure_file(share, file, confirm = TRUE)

create_azure_dir(share, dir, recursive = FALSE)

delete_azure_dir(share, dir, recursive = FALSE, confirm = TRUE)

azure_file_exists(share, file)

azure_dir_exists(share, dir)



Arguments

share

A file share object.

dir, file

A string naming a directory or file, respectively.


info

Whether to return names only, or all information in a directory listing.


prefix

For list_azure_files, filters the result to return only files and directories whose name begins with this prefix.


recursive

For the multiupload/download functions, whether to recursively transfer files in subdirectories. For list_azure_files, whether to include the contents of any subdirectories in the listing. For create_azure_dir, whether to recursively create each component of a nested directory path. For delete_azure_dir, whether to delete a subdirectory's contents first. Note that in all cases this can be slow, so use a non-recursive solution if possible.

src, dest

The source and destination files for uploading and downloading. See 'Details' below.


create_dir

For the uploading functions, whether to create the destination directory if it doesn't exist. For the file storage API this can also be slow, hence it is optional.


blocksize

The number of bytes to upload/download per HTTP(S) request.


put_md5

For uploading, whether to compute the MD5 hash of the file(s). This will be stored as part of the file's properties.


use_azcopy

Whether to use the AzCopy utility from Microsoft to do the transfer, rather than doing it in R.


max_concurrent_transfers

For multiupload_azure_file and multidownload_azure_file, the maximum number of concurrent file transfers. Each concurrent file transfer requires a separate R process, so limit this if you are low on memory.


overwrite

When downloading, whether to overwrite an existing destination file.


check_md5

For downloading, whether to verify the MD5 hash of the downloaded file(s). This requires that the file's Content-MD5 property is set. If this is TRUE and the Content-MD5 property is missing, a warning is generated.


confirm

Whether to ask for confirmation on deleting a file or directory.


Details

upload_azure_file and download_azure_file are the workhorse file transfer functions for file storage. They each take as inputs a single filename as the source for uploading/downloading, and a single filename as the destination. Alternatively, for uploading, src can be a textConnection or rawConnection object; and for downloading, dest can be NULL or a rawConnection object. If dest is NULL, the downloaded data is returned as a raw vector; if it is a raw connection, the data is written to the connection. See the examples below.

multiupload_azure_file and multidownload_azure_file are functions for uploading and downloading multiple files at once. They parallelise file transfers by using the background process pool provided by AzureRMR, which can lead to significant efficiency gains when transferring many small files. There are two ways to specify the source and destination for these functions:

- Both src and dest can be vectors naming the individual source and destination pathnames.

- The src argument can be a wildcard pattern expanding to one or more files, with dest naming a destination directory. In this case, the recursive argument determines whether to search subdirectories as well.

upload_azure_file and download_azure_file can display a progress bar to track the file transfer. You can control whether to display this with options(azure_storage_progress_bar=TRUE|FALSE); the default is TRUE.
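For instance, to turn the progress bar off for the current session (a minimal sketch using the option named above):

options(azure_storage_progress_bar=FALSE)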

azure_file_exists and azure_dir_exists test for the existence of a file and directory, respectively.
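As a sketch of how the existence tests can guard a transfer (the endpoint URL, key and filenames below are hypothetical):

sh <- file_share("https://mystorage.file.core.windows.net/myshare", key="access_key")
# only attempt the download if the file is actually there
if(azure_file_exists(sh, "config.json"))
    download_azure_file(sh, "config.json", "~/config.json")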


Value

For list_azure_files, if info="name", a vector of file/directory names. If info="all", a data frame giving the file size and whether each object is a file or directory.

For download_azure_file, if dest=NULL, the contents of the downloaded file as a raw vector.

For azure_file_exists and azure_dir_exists, either TRUE or FALSE.


AzCopy

upload_azure_file and download_azure_file can use the AzCopy command-line utility to transfer files, instead of native R code. This can be useful if you want to take advantage of AzCopy's logging and recovery features; it may also be faster when transferring a very large number of small files. To enable this, set the use_azcopy argument to TRUE.

Note that AzCopy only supports SAS and AAD (OAuth) tokens as authentication methods. AzCopy also expects a single filename or wildcard spec as its source/destination argument, not a vector of filenames or a connection.
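A minimal sketch of an AzCopy-based transfer, authenticating with a SAS (the endpoint URL, SAS string and filename are hypothetical):

sh <- file_share("https://mystorage.file.core.windows.net/myshare", sas="sv=...")
# hand the transfer off to the azcopy executable instead of native R code
upload_azure_file(sh, "bigfile.zip", "/bigfile.zip", use_azcopy=TRUE)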

See Also

file_share, az_storage, storage_download, call_azcopy

AzCopy version 10 on GitHub


Examples

## Not run: 

share <- file_share("", key="access_key")

list_azure_files(share, "/")
list_azure_files(share, "/", recursive=TRUE)

create_azure_dir(share, "/newdir")

upload_azure_file(share, "~/", dest="/newdir/")
download_azure_file(share, "/newdir/", dest="~/")

delete_azure_file(share, "/newdir/")
delete_azure_dir(share, "/newdir")

# uploading/downloading multiple files at once
multiupload_azure_file(share, "/data/logfiles/*.zip")
multidownload_azure_file(share, "/monthly/jan*.*", "/data/january")

# you can also pass a vector of file/pathnames as the source and destination
src <- c("file1.csv", "file2.csv", "file3.csv")
dest <- paste0("uploaded_", src)
multiupload_azure_file(share, src, dest)

# uploading serialized R objects via connections
json <- jsonlite::toJSON(iris, pretty=TRUE, auto_unbox=TRUE)
con <- textConnection(json)
upload_azure_file(share, con, "iris.json")

rds <- serialize(iris, NULL)
con <- rawConnection(rds)
upload_azure_file(share, con, "iris.rds")

# downloading files into memory: as a raw vector, and via a connection
rawvec <- download_azure_file(share, "iris.json", NULL)

con <- rawConnection(raw(0), "r+")
download_azure_file(share, "iris.rds", con)
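# reading the downloaded data back into R: a sketch assuming base R's
# rawConnectionValue can extract the bytes written to con above
iris_json <- jsonlite::fromJSON(rawToChar(rawvec))
iris_df <- unserialize(rawConnectionValue(con))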

## End(Not run)

[Package AzureStor version 3.6.1 Index]