gen_data_mapl {policytree}    R Documentation

Example data generating process from Offline Multi-Action Policy Learning: Generalization and Optimization

Description

The DGP from section 6.4.1 in Zhou, Athey, and Wager (2023): there are d = 3 actions (a_0, a_1, a_2) whose mean outcomes depend on which of three regions the covariates X ~ U[0,1]^p fall in. Observed outcomes: Y ~ N(μ_{a_i}(X_i), 4).

Usage

gen_data_mapl(n, p = 10, sigma2 = 4)

Arguments

n

Number of observations X.

p

Number of features (minimum 7). Default is 10.

sigma2

Noise variance. Default is 4.

Value

A list with realized action a_i, region r_i, conditional mean μ, outcome Y, and covariates X.

References

Zhou, Zhengyuan, Susan Athey, and Stefan Wager. "Offline multi-action policy learning: Generalization and optimization." Operations Research 71.1 (2023).
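Examples

A minimal sketch of drawing a dataset and inspecting its components. The element names (action, region, mu, Y, X) are assumed from the Value section above; verify them against str() on your installed version.

```r
library(policytree)

# Draw n = 1000 observations with the default p = 10 features
# and noise variance sigma2 = 4.
data <- gen_data_mapl(n = 1000)

# Inspect the returned list (element names assumed: action, region, mu, Y, X).
str(data)

# Covariates are uniform on [0, 1]^p, so X should be an n x p matrix.
dim(data$X)

# How many observations fall into each of the three regions.
table(data$region)
```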


[Package policytree version 1.2.3 Index]