create_table_operation {AzureTableStor}    R Documentation

Batch transactions for table storage


Description

Batch transactions for table storage

Usage

create_table_operation(
  endpoint,
  path,
  options = list(),
  headers = list(),
  body = NULL,
  metadata = c("none", "minimal", "full"),
  http_verb = c("GET", "PUT", "POST", "PATCH", "DELETE", "HEAD")
)

create_batch_transaction(endpoint, operations)

do_batch_transaction(transaction, ...)

## S3 method for class 'batch_transaction'
do_batch_transaction(
  transaction,
  batch_status_handler = c("warn", "stop", "message", "pass"),
  num_retries = 10,
  ...
)



Arguments

endpoint

A table storage endpoint, of class table_endpoint.


path

The path component of the operation.


options

A named list giving the query parameters for the operation.


headers

A named list giving any additional HTTP headers to send to the host. AzureTableStor will handle authentication details, so you don't have to specify these here.


body

The request body for a PUT/POST/PATCH operation.


metadata

The level of OData metadata to include in the response.


http_verb

The HTTP verb (method) for the operation.


operations

A list of individual table operation objects, each of class table_operation.


transaction

For do_batch_transaction, an object of class batch_transaction.


...

Arguments passed to lower-level functions.


batch_status_handler

For do_batch_transaction, what to do if one or more of the batch operations fails. The default is to signal a warning and return a list of response objects, from which the details of the failure(s) can be determined. Set this to "pass" to ignore the failure.


num_retries

The number of times to retry the call, if the response is an HTTP error 429 (too many requests). The Cosmos DB endpoint tends to be aggressive at rate-limiting requests, to maintain the desired level of latency. This will generally not affect calls to an endpoint provided by a storage account.


Details

Table storage supports batch transactions on entities that are in the same table and belong to the same partition group. Batch transactions are also known as entity group transactions.

You can use create_table_operation to produce an object corresponding to a single table storage operation, such as inserting, deleting or updating an entity. Multiple such objects can then be passed to create_batch_transaction, which bundles them into a single atomic transaction. Call do_batch_transaction to send the transaction to the endpoint.
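As an illustration of this workflow, here is a minimal sketch that deletes two entities in a single atomic transaction. The endpoint URL, table name and key values are hypothetical placeholders; the entity-addressing paths follow the table storage REST convention, and the If-Match header is required by the REST API for deletes.

```r
# hypothetical sketch: endpoint URL, account key and table name are placeholders
endp <- table_endpoint("https://mystorage.table.core.windows.net", key="mykey")

# each operation addresses one entity by its PartitionKey and RowKey;
# DELETE requires an If-Match header (here "*" to match any version)
op1 <- create_table_operation(endp,
    "mytable(PartitionKey='part1',RowKey='row1')",
    headers=list(`If-Match`="*"), http_verb="DELETE")
op2 <- create_table_operation(endp,
    "mytable(PartitionKey='part1',RowKey='row2')",
    headers=list(`If-Match`="*"), http_verb="DELETE")

# bundle the operations into one atomic transaction and send it
trans <- create_batch_transaction(endp, list(op1, op2))
do_batch_transaction(trans)
```

Because the transaction is atomic, either both deletes succeed or neither does.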

Note that batch transactions are subject to some limitations imposed by the REST API:

- All entities subject to operations in the transaction must have the same PartitionKey value.
- An entity can appear only once in the transaction, and only one operation may be performed against it.
- The transaction can include at most 100 entities, and its total payload can be no more than 4MB in size.


Value

create_table_operation returns an object of class table_operation.

Assuming the batch transaction did not fail due to rate-limiting, do_batch_transaction returns a list of objects of class table_operation_response, representing the results of each individual operation. Each object contains elements named status, headers and body containing the respective parts of the response. Note that the number of returned objects may be smaller than the number of operations in the batch, if the transaction failed.
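For instance, a minimal sketch of inspecting the individual results, assuming bat is an existing batch_transaction object (the status, headers and body element names are as documented above):

```r
# hypothetical sketch: bat is a previously created batch_transaction
res <- do_batch_transaction(bat, batch_status_handler="warn")

# extract the HTTP status code from each operation response
statuses <- sapply(res, `[[`, "status")

# codes of 300 or above indicate operations that did not succeed
failed <- which(statuses >= 300)
```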

See Also

import_table_entities, which uses (multiple) batch transactions under the hood

Performing entity group transactions


Examples

## Not run: 

endp <- table_endpoint("", key="mykey")
tab <- create_storage_table(endp, "mytable")

## a simple batch insert
ir <- subset(iris, Species == "setosa")

# property names must be valid C# variable names
names(ir) <- sub("\\.", "_", names(ir))

# create the PartitionKey and RowKey properties
ir$PartitionKey <- ir$Species
ir$RowKey <- sprintf("%03d", seq_len(nrow(ir)))

# generate the array of insert operations: 1 per row
ops <- lapply(seq_len(nrow(ir)), function(i)
    create_table_operation(endp, "mytable", body=ir[i, ], http_verb="POST"))

# create a batch transaction and send it to the endpoint
bat <- create_batch_transaction(endp, ops)
do_batch_transaction(bat)

## End(Not run)

[Package AzureTableStor version 1.0.0 Index]