revealPrefModel {ramchoice}    R Documentation
Model Falsification with Random Limited Attention
Description
Given a collection of choice problems and corresponding
choice probabilities, revealPrefModel
determines whether they are compatible with
the Random Attention Model (RAM) of
Cattaneo, Ma, Masatlioglu, and Suleymanov (2020)
and/or the Attention Overload Model (AOM) of
Cattaneo, Cheung, Ma, and Masatlioglu (2024).
See revealPref
for revealed preference analysis with empirical choice data.
Usage
revealPrefModel(
  menu,
  prob,
  pref_list = NULL,
  RAM = TRUE,
  AOM = TRUE,
  limDataCorr = TRUE,
  attBinary = 1
)
Arguments
menu
Numeric matrix of 0s and 1s, the collection of choice problems. Each row is one choice problem, and a 1 indicates that the alternative in that column is available (see the sketch after this list).

prob
Numeric matrix, the collection of choice probabilities. Each row gives the choice probabilities for the corresponding row of menu.

pref_list
Numeric matrix, each row corresponds to one preference. For example, the row c(1, 2, 3, 4, 5, 6) used in the Examples below represents the preference ordering in which alternative 1 is the most preferred and alternative 6 the least (default is NULL).

RAM
Boolean, whether the restrictions implied by the RAM of Cattaneo et al. (2020) should be incorporated, that is, their monotonic attention assumption (default is TRUE).

AOM
Boolean, whether the restrictions implied by the AOM of Cattaneo et al. (2024) should be incorporated, that is, their attention overload assumption (default is TRUE).

limDataCorr
Boolean, whether limited data should be assumed (default is TRUE).

attBinary
Numeric, between 1/2 and 1 (default is 1).
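As a sketch of the expected input format (the numeric probabilities below are invented purely for illustration and carry no empirical content), menu and prob for three alternatives could be built by hand as follows:

# Three choice problems over alternatives 1, 2, 3:
# the full menu {1,2,3} and the two binary menus {1,2} and {1,3}.
menu <- matrix(c(1, 1, 1,
                 1, 1, 0,
                 1, 0, 1), ncol = 3, byrow = TRUE)

# Row i of prob holds the choice probabilities for the alternatives
# available in row i of menu; unavailable alternatives get probability 0
# and each row sums to one. These values are illustrative only.
prob <- matrix(c(0.5, 0.3, 0.2,
                 0.6, 0.4, 0.0,
                 0.7, 0.0, 0.3), ncol = 3, byrow = TRUE)

The resulting matrices can then be passed to revealPrefModel as in the Examples below.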
Value
constraints
Matrices of constraints implied by the RAM/AOM restrictions.

inequalities
The moment inequalities. Positive numbers indicate that the RAM/AOM restrictions are rejected by the given choice probabilities.
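As a minimal sketch of how the output might be used, and assuming the returned object behaves as a list whose inequalities component can be flattened to a numeric vector (the exact internal structure may differ), a rejection could be flagged as follows:

result <- revealPrefModel(menu = menu, prob = prob)  # menu, prob as in the Examples
# Hypothetical check: positive moment inequalities signal that the
# imposed RAM/AOM restrictions are rejected.
rejected <- any(unlist(result$inequalities) > 0)
if (rejected) {
  message("RAM/AOM restrictions are rejected by the given choice probabilities.")
} else {
  message("Choice probabilities are compatible with the imposed restrictions.")
}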
Author(s)
Matias D. Cattaneo, Princeton University. cattaneo@princeton.edu
Paul Cheung, University of Maryland. hycheung@umd.edu
Xinwei Ma (maintainer), University of California San Diego. x1ma@ucsd.edu
Yusufcan Masatlioglu, University of Maryland. yusufcan@umd.edu
Elchin Suleymanov, Purdue University. esuleyma@purdue.edu
References
M. D. Cattaneo, X. Ma, Y. Masatlioglu, and E. Suleymanov (2020). A Random Attention Model. Journal of Political Economy 128(7): 2796-2836. doi:10.1086/706861
M. D. Cattaneo, P. Cheung, X. Ma, and Y. Masatlioglu (2024). Attention Overload. Working paper.
Examples
# Logit attention with parameter 2
# True preference: 1 2 3 4 5 6
menu <- prob <- matrix(c(1, 1, 1, 1, 1, 1,
                         0, 1, 1, 1, 1, 1,
                         1, 0, 1, 1, 1, 1,
                         1, 1, 0, 1, 1, 1,
                         1, 1, 1, 0, 1, 1,
                         1, 1, 1, 1, 0, 1,
                         1, 1, 1, 1, 1, 0), ncol=6, byrow=TRUE)
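# Fill each row of prob with logit-attention choice probabilities for the
# alternatives available in that menu; positions of unavailable
# alternatives keep the initial value 0.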
for (i in 1:nrow(prob)) prob[i, menu[i, ]==1] <- logitAtte(sum(menu[i, ]), 2)$choiceProb
# List of preferences to be tested
pref_list <- matrix(c(1, 2, 3, 4, 5, 6,
                      2, 3, 4, 5, 6, 1), ncol=6, byrow=TRUE)
# RAM only
result1 <- revealPrefModel(menu = menu, prob = prob, pref_list = pref_list, RAM = TRUE, AOM = FALSE)
summary(result1)
# AOM only
result2 <- revealPrefModel(menu = menu, prob = prob, pref_list = pref_list, RAM = FALSE, AOM = TRUE)
summary(result2)
# Both RAM and AOM
result3 <- revealPrefModel(menu = menu, prob = prob, pref_list = pref_list, RAM = TRUE, AOM = TRUE)
summary(result3)