cpr_iter_sim {canaper} | R Documentation
Description

For randomization algorithms that involve swapping (iterations), there is no way to know a priori how many iterations are needed to sufficiently "mix" the community data matrix. cpr_iter_sim() records the percentage similarity between the original matrix and a matrix that has been randomized with successive swapping iterations, at each iteration.
Usage

cpr_iter_sim(
  comm,
  null_model = "curveball",
  n_iterations = 100,
  thin = 1,
  seed = NULL
)
Arguments

comm: Dataframe or matrix; input community data with sites (communities) as rows and species as columns. The value of each cell is either the presence/absence (0 or 1) or the number of individuals (abundance) of each species in each site.

null_model: Character vector of length 1 or object of class commsim; the null model to use for generating random communities.

n_iterations: Numeric vector of length 1; maximum number of iterations to conduct.

thin: Numeric vector of length 1; frequency at which to record percentage similarity between the original matrix and the randomized matrix. Results will be recorded every thin iterations.

seed: Integer vector of length 1 or NULL; random seed that will be used in a call to set.seed().
Details

The user should inspect the results to determine at what number of iterations the original matrix and randomized matrix reach maximum dissimilarity (see Examples). This number will depend strongly on the size and structure of the original matrix: large matrices with many zeros will likely require more iterations, and even then may retain relatively high similarity between the original matrix and the randomized matrix.
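Beyond eyeballing a plot, the mixing point can be located programmatically as the first iteration at which similarity comes close to its minimum. A minimal sketch, assuming a result tibble res as returned by cpr_iter_sim() (the 0.01 tolerance is an illustrative choice, not a package default):

```r
# First iteration where similarity is within a small tolerance of its
# minimum, i.e., the randomized matrix is about as mixed as it gets
min_sim <- min(res$similarity)
mixed_at <- res$iteration[which(res$similarity <= min_sim + 0.01)[1]]
mixed_at
```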
Available memory may be exhausted quickly if many iterations (e.g., tens or hundreds of thousands, or more) are run with no thinning on large matrices; use thin to record only a portion of the results and save memory.
Of course, cpr_iter_sim() only makes sense for randomization algorithms that use iterations.

Only presence/absence information is used to calculate percentage similarity between community matrices.
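Percentage similarity can be thought of as the share of cells whose presence/absence state matches between the two matrices. A minimal sketch of that idea (an illustration of the concept, not necessarily the exact formula used internally):

```r
# Illustrative only: percentage of cells with matching presence/absence
# between an original matrix and a randomized matrix
pct_similarity <- function(a, b) {
  a_pa <- a > 0 # reduce abundances to presence/absence
  b_pa <- b > 0
  100 * mean(a_pa == b_pa)
}
```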
Value

Tibble (dataframe) with the following columns:

iteration: Number of iterations used to generate the random community

similarity: Percentage similarity between the original community and the random community
Examples

# Simulate generation of a random community with a maximum of 10,000
# iterations, recording similarity every 100 iterations
(res <- cpr_iter_sim(
  comm = biod_example$comm,
  null_model = "swap",
  n_iterations = 10000,
  thin = 100,
  seed = 123
))

# The plot reveals that ca. 1000 iterations are sufficient to
# completely mix the random community
plot(res$iteration, res$similarity, type = "l")