mtscr_prepare {mtscr}    R Documentation
Prepare database for MTS
Description
Prepare database for MTS analysis.
Usage
mtscr_prepare(
df,
id_column,
item_column = NULL,
score_column,
top = 1,
minimal = FALSE,
ties_method = c("random", "average"),
normalise = TRUE,
self_ranking = NULL
)
Arguments
df
Data frame in long format.
id_column
Name of the column containing participants' IDs.
item_column
Optional, name of the column containing distinct trials (e.g. names of items in AUT).
score_column
Name of the column containing divergent thinking scores (e.g. semantic distance).
top
Integer or vector of integers (see examples); the number of top answers to prepare indicators for. Default is 1, i.e. only the top answer.
minimal
Logical, append the new columns to df (FALSE, the default) or return only the new columns along with the id and item columns (TRUE).
ties_method
Character string specifying how ties are treated when ordering. Can be "random" (default) or "average".
normalise
Logical, should the creativity score be normalised? Default is TRUE.
self_ranking
Name of the column containing answers' self-ranking. Provide if the model should be based on top answers self-chosen by the participant. Every item should have its own ranks: the top answers should have a value of 1, and the other answers should have a value of 0. In that case, the ordering is taken from this column rather than computed from the scores.
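For illustration, a self-ranking column could look like the sketch below (the column name chosen_top and the values are hypothetical, not part of the package):

# Hypothetical long-format data with a participant-chosen top answer per item
data.frame(
  id = c(1, 1, 1, 1),
  item = c("brick", "brick", "can", "can"),
  SemDis_MEAN = c(0.61, 0.83, 0.55, 0.72),
  chosen_top = c(0, 1, 0, 1) # 1 marks the self-chosen top answer, 0 the rest
)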
Value
The input data frame with additional columns:
.z_score
Numerical, z-score of the creativity score
.ordering
Numerical, ranking of the answer relative to participant and item
.ordering_topX
Numerical, 0 for X top answers, otherwise value of .ordering
The number of .ordering_topX columns depends on the top argument. If minimal = TRUE, only the new columns and the item and id columns are returned. The values are relative to the participant AND item, so the values for different participants scored for different tasks (e.g. uses for "brick" and "can") are distinct.
Examples
data("mtscr_creativity", package = "mtscr")
# Indicators for top 1 and top 2 answers
mtscr_prepare(mtscr_creativity, id, item, SemDis_MEAN, top = 1:2, minimal = TRUE)
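# A further sketch (not from the package documentation): the first call keeps
# all original columns, since minimal = FALSE is the default; the commented call
# shows how a hypothetical 0/1 self-ranking column (here called chosen_top)
# could be passed via self_ranking.
prepared <- mtscr_prepare(mtscr_creativity, id, item, SemDis_MEAN, top = 1)
head(prepared)

# mtscr_prepare(my_data, id, item, SemDis_MEAN, self_ranking = chosen_top)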