word_classification_data {AcousticNDLCodeR} | R Documentation |
Data from the PLoS ONE paper by Arnold et al. (2017)
Description
Subject responses and model estimates for an auditory word identification task.
Usage
data(word_classification_data)
Format
Data from the four experiments and model estimates, with the following variables:
ExperimentNumber
Experiment identifier
PresentationMethod
Method of presentation in the experiment: loudspeaker or headphones
Trial
Trial number in the experimental list
TrialScaled
Scaled version of Trial
Subject
Anonymized subject identifier
Item
Word identifier; German umlauts and 'ß' are coded as 'ae', 'oe', 'ue', and 'ss'
Activation
NDL activation
LogActivation
log(Activation + epsilon)
L1norm
L1-norm (lexicality)
LogL1norm
Log of the L1-norm
RecognitionDecision
recognition decision (yes/no)
RecognitionRT
latency for recognition decision
LogRecognitionRT
log recognition RT
DictationAccuracy
Dictation accuracy (TRUE: correct word reported; FALSE otherwise)
DictationRT
Response latency to typing onset
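A minimal sketch of loading and inspecting the dataset, assuming the AcousticNDLCodeR package is installed. The column names follow the Format section above; the summary by PresentationMethod is an illustrative example, not an analysis from the paper:

```r
# Load the package and the dataset (assumes AcousticNDLCodeR is installed)
library(AcousticNDLCodeR)
data(word_classification_data)

# Overview of the variables described in the Format section
str(word_classification_data)

# Illustrative example: mean recognition latency per presentation method
with(word_classification_data,
     tapply(RecognitionRT, PresentationMethod, mean, na.rm = TRUE))
```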
References
Denis Arnold, Fabian Tomaschek, Konstantin Sering, Florence Lopez, and R. Harald Baayen (2017). Words from spontaneous conversational speech can be recognized with human-like accuracy by an error-driven learning algorithm that discriminates between meanings straight from smart acoustic features, bypassing the phoneme as recognition unit. PLoS ONE 12(4): e0174623. https://doi.org/10.1371/journal.pone.0174623