allreduce-method {pbdMPI}	R Documentation
All Ranks Receive a Reduction of Objects from Every Rank
Description
This method lets all ranks receive a reduction of objects from every rank in the same communicator based on a given operation. The default return is an object like the input and the default operation is the sum.
Usage
allreduce(x, x.buffer = NULL, op = .pbd_env$SPMD.CT$op,
comm = .pbd_env$SPMD.CT$comm)
Arguments
x	an object to be reduced from all ranks.

x.buffer	for atomic vectors, a buffer to hold the return object, which has the same size and the same type as x.

op	the reduction operation to apply to x; the default is the sum.

comm	a communicator number.
Details
All ranks are presumed to have x of the same size and type. Normally, x.buffer is NULL or unspecified, and is computed for you. If specified for atomic vectors, its type should be one of integer, double, or raw, and must match the type of x.
The allreduce is efficient due to the underlying MPI parallel communication and a recursive doubling reduction algorithm, which results in a sublinear (log2(comm.size(comm))) number of reduction and communication steps.
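For intuition, the recursive doubling pattern can be sketched in plain R without MPI: simulate p ranks each holding one value, and at strides 1, 2, 4, ... have each simulated rank combine its partial sum with that of the rank whose index differs by the stride bit. After log2(p) steps every rank holds the full reduction. This is an illustrative sketch only (the function name sim_allreduce_sum is made up here), not pbdMPI's actual implementation.

```r
## Illustrative sketch: recursive doubling allreduce (sum) simulated in
## plain R, without MPI. Each element of `vals` plays the role of one rank.
## Assumes the number of simulated ranks is a power of two.
sim_allreduce_sum <- function(vals) {
  p <- length(vals)
  stride <- 1L
  steps <- 0L
  while (stride < p) {
    ## At each step, rank r exchanges its partial result with rank
    ## r XOR stride (0-indexed), and both keep the combined sum.
    partner <- bitwXor(seq_len(p) - 1L, stride) + 1L
    vals <- vals + vals[partner]
    stride <- stride * 2L
    steps <- steps + 1L
  }
  list(result = vals, steps = steps)
}

out <- sim_allreduce_sum(c(1, 2, 3, 4, 5, 6, 7, 8))
out$result   # every simulated rank holds the total, 36
out$steps    # log2(8) = 3 communication steps
```

With 8 simulated ranks the loop runs 3 times, matching the log2(comm.size(comm)) step count described above.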
See methods("allreduce") for S4 dispatch cases and the source code for further details.
Value
By default, the reduced object, of the same type as x, is returned to all ranks.
Author(s)
Wei-Chen Chen wccsnow@gmail.com, George Ostrouchov, Drew Schmidt, Pragneshkumar Patel, and Hao Yu.
References
Programming with Big Data in R Website: https://pbdr.org/
See Also
allgather(), gather(), reduce().
Examples
### Save code in a file "demo.r" and run with 2 processors by
### SHELL> mpiexec -np 2 Rscript demo.r
spmd.code <- "
### Initialize
suppressMessages(library(pbdMPI, quietly = TRUE))
.comm.size <- comm.size()
.comm.rank <- comm.rank()
### Examples.
N <- 5
x <- (1:N) + N * .comm.rank
y <- allreduce(matrix(x, nrow = 1), op = \"sum\")
comm.print(y)
y <- allreduce(x, double(N), op = \"prod\")
comm.print(y)
comm.set.seed(1234, diff = TRUE)
x <- as.logical(round(runif(N)))
y <- allreduce(x, logical(N), op = \"land\")
comm.print(y)
### Finish.
finalize()
"
pbdMPI::execmpi(spmd.code = spmd.code, nranks = 2L)