list_blobs {AzureStor} R Documentation

## Operations on a blob container or blob

### Description

Upload, download, or delete a blob; list blobs in a container; create or delete directories; check blob availability.

### Usage

list_blobs(container, dir = "/", info = c("partial", "name", "all"),
prefix = NULL, recursive = TRUE)

upload_blob(container, src, dest = basename(src), type = c("BlockBlob",
"AppendBlob"), blocksize = if (type == "BlockBlob") 2^24 else 2^22,
lease = NULL, put_md5 = FALSE, append = FALSE, use_azcopy = FALSE)

multiupload_blob(container, src, dest, recursive = FALSE,
type = c("BlockBlob", "AppendBlob"), blocksize = if (type == "BlockBlob")
2^24 else 2^22, lease = NULL, put_md5 = FALSE, append = FALSE,
use_azcopy = FALSE, max_concurrent_transfers = 10)

download_blob(container, src, dest = basename(src), blocksize = 2^24,
overwrite = FALSE, lease = NULL, check_md5 = FALSE,
use_azcopy = FALSE)

multidownload_blob(container, src, dest, recursive = FALSE,
blocksize = 2^24, overwrite = FALSE, lease = NULL, check_md5 = FALSE,
use_azcopy = FALSE, max_concurrent_transfers = 10)

delete_blob(container, blob, confirm = TRUE)

create_blob_dir(container, dir)

delete_blob_dir(container, dir, recursive = FALSE, confirm = TRUE)

blob_exists(container, blob)

blob_dir_exists(container, dir)

copy_url_to_blob(container, src, dest, lease = NULL, async = FALSE,
auth_header = NULL)

multicopy_url_to_blob(container, src, dest, lease = NULL, async = FALSE,
max_concurrent_transfers = 10, auth_header = NULL)


### Arguments

container: A blob container object.

dir: For list_blobs, a string naming the directory. Note that blob storage does not support real directories; this argument simply filters the result to return only blobs whose names start with the given value.

info: For list_blobs, the level of detail about each blob to return: a vector of names only; the name, size, blob type, and whether the blob represents a directory; or all information.

prefix: For list_blobs, an alternative way to specify the directory.

recursive: For the multiupload/download functions, whether to recursively transfer files in subdirectories. For list_blobs, whether to include the contents of any subdirectories in the listing. For delete_blob_dir, whether to recursively delete subdirectory contents as well.

src, dest: The source and destination files for uploading and downloading. See 'Details' below.

type: When uploading, the type of blob to create. Currently only block and append blobs are supported.

blocksize: The number of bytes to upload/download per HTTP(S) request.

lease: The lease for a blob, if present.

put_md5: For uploading, whether to compute the MD5 hash of the blob(s). This will be stored as part of the blob's properties. Only used for block blobs.

append: When uploading, whether to append the uploaded data to the destination blob. Only has an effect if type="AppendBlob". If this is FALSE (the default) and the destination append blob exists, it is overwritten. If this is TRUE and the destination does not exist or is not an append blob, an error is thrown.

use_azcopy: Whether to use the AzCopy utility from Microsoft to do the transfer, rather than doing it in R.

max_concurrent_transfers: For multiupload_blob and multidownload_blob, the maximum number of concurrent file transfers. Each concurrent file transfer requires a separate R process, so limit this if you are low on memory.

overwrite: When downloading, whether to overwrite an existing destination file.

check_md5: For downloading, whether to verify the MD5 hash of the downloaded blob(s). This requires that the blob's Content-MD5 property is set. If this is TRUE and the Content-MD5 property is missing, a warning is generated.

blob: A string naming a blob.

confirm: Whether to ask for confirmation on deleting a blob.

async: For copy_url_to_blob and multicopy_url_to_blob, whether the copy operation should be asynchronous (proceed in the background).

auth_header: For copy_url_to_blob and multicopy_url_to_blob, an optional Authorization HTTP header to send to the source. This allows copying files that are not publicly available or otherwise have access restrictions.

### Details

upload_blob and download_blob are the workhorse file transfer functions for blobs. They each take as inputs a single filename as the source for uploading/downloading, and a single filename as the destination. Alternatively, for uploading, src can be a textConnection or rawConnection object; and for downloading, dest can be NULL or a rawConnection object. If dest is NULL, the downloaded data is returned as a raw vector, and if a raw connection, it will be placed into the connection. See the examples below.

multiupload_blob and multidownload_blob are functions for uploading and downloading multiple files at once. They parallelise file transfers by using the background process pool provided by AzureRMR, which can lead to significant efficiency gains when transferring many small files. There are two ways to specify the source and destination for these functions:

• Both src and dest can be vectors naming the individual source and destination pathnames.

• The src argument can be a wildcard pattern expanding to one or more files, with dest naming a destination directory. In this case, if recursive is TRUE, the file transfer will replicate the source directory structure at the destination.
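As an illustration of the two styles, using a hypothetical container object cont and made-up filenames:

```r
# Style 1: parallel vectors of source and destination pathnames
multiupload_blob(cont, src=c("a.csv", "b.csv"), dest=c("data/a.csv", "data/b.csv"))

# Style 2: a wildcard source with a destination directory;
# recursive=TRUE replicates the source directory structure
multiupload_blob(cont, src="myproj/*", dest="uploaded", recursive=TRUE)
```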

upload_blob and download_blob can display a progress bar to track the file transfer. You can control whether to display this with options(azure_storage_progress_bar=TRUE|FALSE); the default is TRUE.
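For example, to turn the progress bar off for the rest of the session:

```r
# suppress the progress bar, e.g. when transferring many small files
options(azure_storage_progress_bar=FALSE)
```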

multiupload_blob can upload files either as all block blobs or all append blobs, but not a mix of both.

blob_exists and blob_dir_exists test for the existence of a blob and directory, respectively.

copy_url_to_blob transfers the contents of the file at the specified HTTP[S] URL directly to blob storage, without requiring a temporary local copy to be made. multicopy_url_to_blob does the same, for multiple URLs at once. These functions have a current file size limit of 256MB.

### Value

For list_blobs, details on the blobs in the container. For download_blob, if dest=NULL, the contents of the downloaded blob as a raw vector. For blob_exists and blob_dir_exists, a logical value indicating whether the blob or directory exists.

### AzCopy

upload_blob and download_blob have the ability to use the AzCopy commandline utility to transfer files, instead of native R code. This can be useful if you want to take advantage of AzCopy's logging and recovery features; it may also be faster in the case of transferring a very large number of small files. To enable this, set the use_azcopy argument to TRUE.
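A minimal sketch, assuming azcopy is installed and on the system path, and using a hypothetical container object cont:

```r
# same call as a native-R upload, but delegated to the azcopy executable
upload_blob(cont, "~/bigfile.zip", dest="bigfile.zip", use_azcopy=TRUE)
```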

The following points should be noted about AzCopy:

• It only supports SAS and AAD (OAuth) tokens as authentication methods. AzCopy also expects a single filename or wildcard spec as its source/destination argument, not a vector of filenames or a connection.

• Currently, it does not support appending data to existing blobs.

### Directories

Blob storage does not have true directories, instead using filenames containing a separator character (typically '/') to mimic a directory structure. This has some consequences:

• The isdir column in the data frame output of list_blobs is a best guess as to whether an object represents a file or directory, and may not always be correct. Currently, list_blobs assumes that any object with a file size of zero is a directory.

• Zero-length files can cause problems for the blob storage service as a whole (not just AzureStor). Try to avoid uploading such files.

• create_blob_dir and delete_blob_dir are guaranteed to function as expected only for accounts with hierarchical namespaces enabled. When this feature is disabled, directories do not exist as objects in their own right: to create a directory, simply upload a blob to that directory. To delete a directory, delete all the blobs within it; as far as the blob storage service is concerned, the directory then no longer exists.

• Similarly, the output of list_blobs(recursive=TRUE) can vary based on whether the storage account has hierarchical namespaces enabled.

• blob_exists will return FALSE for a directory when the storage account does not have hierarchical namespaces enabled.
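On an account without hierarchical namespaces, this behaviour can be sketched as follows (the container object and names here are hypothetical):

```r
# "creating" a directory just means uploading a blob whose name contains it
upload_blob(cont, "local.csv", dest="newdir/local.csv")
blob_dir_exists(cont, "newdir")   # the virtual directory now exists

# deleting the last blob under that prefix makes the directory vanish
delete_blob(cont, "newdir/local.csv", confirm=FALSE)
blob_dir_exists(cont, "newdir")   # it no longer exists
```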

### Examples

## Not run:

cont <- blob_container("https://mystorage.blob.core.windows.net/mycontainer", key="access_key")

list_blobs(cont)

upload_blob(cont, "~/bigfile.zip", dest="bigfile.zip")
download_blob(cont, "bigfile.zip", dest="~/bigfile_downloaded.zip")

delete_blob(cont, "bigfile.zip")

# append blob: concatenating multiple files into one
upload_blob(cont, "logfile1", "logfile", type="AppendBlob", append=FALSE)
upload_blob(cont, "logfile2", "logfile", type="AppendBlob", append=TRUE)
upload_blob(cont, "logfile3", "logfile", type="AppendBlob", append=TRUE)

# you can also pass a vector of file/pathnames as the source and destination
src <- c("file1.csv", "file2.csv", "file3.csv")
dest <- paste0("uploaded_", src)
multiupload_blob(cont, src, dest)

# uploading serialized R objects via connections
json <- jsonlite::toJSON(iris, pretty=TRUE, auto_unbox=TRUE)
con <- textConnection(json)
upload_blob(cont, con, "iris.json")

rds <- serialize(iris, NULL)
con <- rawConnection(rds)
upload_blob(cont, con, "iris.rds")

# downloading files into memory: as a raw vector with dest=NULL, and via a connection
rawvec <- download_blob(cont, "iris.json", dest=NULL)
rawToChar(rawvec)

con <- rawConnection(raw(0), "r+")
download_blob(cont, "iris.rds", dest=con)
unserialize(con)

# copy from a public URL: Iris data from UCI machine learning repository
copy_url_to_blob(cont,
"https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data",
"iris.csv")

## End(Not run)


[Package AzureStor version 3.6.1 Index]