startMPIcluster {doMPI}                                        R Documentation
Create and start an MPI cluster
Description
The startMPIcluster function creates and starts an MPI cluster, which can then be registered as a foreach parallel backend with registerDoMPI.
Usage
startMPIcluster(count, verbose=FALSE, workdir=getwd(), logdir=workdir,
                maxcores=1, includemaster=TRUE, bcast=TRUE,
                comm=if (mpi.comm.size(0) > 1) 0 else 3,
                intercomm=comm + 1, mtag=10, wtag=11,
                defaultopts=list())
Arguments
count: Number of workers to spawn. If you start your script using mpirun, you don't really need to use the count argument; in non-spawn mode the workers are the extra processes started by mpirun (see the comm argument).
verbose: Indicates if verbose messages should be enabled. Defaults to FALSE.

workdir: Working directory of the cluster workers. Defaults to the master's working directory.

logdir: Directory in which to put the worker log files. Defaults to workdir.

maxcores: Maximum number of cores for workers to use. Defaults to 1.

includemaster: Indicates if the master process should be counted as a load on the CPU. This affects how many cores will be used on the local machine by mclapply if a worker process is started on the local machine. Defaults to TRUE.
bcast: Indicates if a true MPI broadcast should be used to send shared "job" data to the workers. If FALSE, the job data is sent to each worker in a separate message. Defaults to TRUE.
comm: Communicator number to use. A value of 0 means non-spawn mode, in which the cluster workers are started via mpirun/orterun with more than one process. A value of 1 or more forces spawn mode. Multiple clusters can be started by using different values for comm (a non-spawn example is sketched after this list). Defaults to 0 if the script was launched by mpirun with more than one process, otherwise 3.
intercomm: Inter-communicator number. Defaults to comm + 1.

mtag: Tag to use for messages sent to the master. Do not change this option unless you know what you're doing, or your program will very likely hang. Defaults to 10.

wtag: Tag to use for messages sent to the workers. Do not change this option unless you know what you're doing, or your program will very likely hang. Defaults to 11.
defaultopts: A list containing default values to use for some of the backend-specific options, such as forcePiggyback (see the Note). Defaults to an empty list.
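
As a rough sketch of non-spawn mode (comm 0), the script below assumes it is launched with a command like "mpirun -n 3 Rscript batch.R"; the script name, the launch command, and the sqrt computation are illustrative only and not part of the doMPI API, and mpi.quit comes from the Rmpi package that doMPI loads.

# batch.R -- non-spawn mode; launch with: mpirun -n 3 Rscript batch.R
library(doMPI)

# With more than one MPI process, comm defaults to 0 (non-spawn mode),
# so no count argument is needed.  The extra mpirun processes become
# the cluster workers and stay inside startMPIcluster until the
# cluster is closed.
cl <- startMPIcluster()
registerDoMPI(cl)

# illustrative computation distributed over the workers
result <- foreach(i=1:10, .combine='c') %dopar% sqrt(i)
print(result)

closeCluster(cl)
mpi.quit()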
Note
The startMPIcluster function returns an MPI cluster object whose class depends on the bcast option. This is because broadcasting is implemented as a method on the MPI cluster object, and that method is implemented differently in the different classes.

Also note that the bcast option has no effect if the backend-specific forcePiggyback option is used with foreach, since piggy-backing is an alternative way of getting the job data to the workers. So there are currently three ways that the job data can be sent to the workers: piggy-backed with the first task to each worker, broadcast, or sent in separate messages. Which method is best will presumably depend on your hardware and your MPI implementation.
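
For illustration, the sketch below forces piggy-backing for a single foreach loop; it assumes the backend-specific options are passed through foreach's .options.mpi argument, and the matrix x and the loop body are invented for this example.

library(doMPI)
cl <- startMPIcluster(count=2)
registerDoMPI(cl)

# shared "job" data used by every task
x <- matrix(rnorm(1e6), nrow=1000)

# send the job data piggy-backed with the first task to each worker,
# overriding the cluster-level bcast setting for this loop only
r <- foreach(i=1:4, .combine='c',
             .options.mpi=list(forcePiggyback=TRUE)) %dopar% {
  sum(x[, i])
}

closeCluster(cl)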
Examples
## Not run:
# start and register an MPI cluster with two workers in verbose mode:
cl <- startMPIcluster(count=2, verbose=TRUE)
registerDoMPI(cl)
# and shut it down
closeCluster(cl)
# set the working directory to /tmp:
cl <- startMPIcluster(count=2, workdir='/tmp')
registerDoMPI(cl)
# and shut it down
closeCluster(cl)
## End(Not run)
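
The examples above only start, register, and shut down clusters; as a minimal sketch of actually using one (the loop body is illustrative), a foreach loop can be run between registerDoMPI and closeCluster:

## Not run:
cl <- startMPIcluster(count=2)
registerDoMPI(cl)
# run a toy computation on the two workers
squares <- foreach(i=1:4, .combine='c') %dopar% {
  i * i
}
closeCluster(cl)
## End(Not run)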