get_policy_functions {polle} | R Documentation |
Get Policy Functions
Description
get_policy_functions() returns the function defining the policy at the given stage. This is useful when implementing the learned policy.
Usage
get_policy_functions(object, stage, ...)

## S3 method for class 'blip'
get_policy_functions(object, stage, include_g_values = FALSE, ...)

## S3 method for class 'drql'
get_policy_functions(object, stage, include_g_values = FALSE, ...)

## S3 method for class 'ptl'
get_policy_functions(object, stage, ...)

## S3 method for class 'ql'
get_policy_functions(object, stage, include_g_values = FALSE, ...)
Arguments
object
Object of class "policy_object" or "policy_eval"; see policy_learn and policy_eval.

stage
Integer. Stage number.

include_g_values
If TRUE, the g-values are included as an attribute.

...
Additional arguments.
Value
A function with the argument:

H
data.table containing the variables needed to evaluate the policy (and g-function).
Examples
library("polle")
### Two stages:
d <- sim_two_stage(5e2, seed=1)
pd <- policy_data(d,
                  action = c("A_1", "A_2"),
                  baseline = "BB",
                  covariates = list(L = c("L_1", "L_2"),
                                    C = c("C_1", "C_2")),
                  utility = c("U_1", "U_2", "U_3"))
pd
### Realistic V-restricted Policy Tree Learning
# specifying the learner:
pl <- policy_learn(type = "ptl",
                   control = control_ptl(policy_vars = list(c("C_1", "BB"),
                                                            c("L_1", "BB"))),
                   full_history = TRUE,
                   alpha = 0.05)
# evaluating the learner:
pe <- policy_eval(policy_data = pd,
                  policy_learn = pl,
                  q_models = q_glm(),
                  g_models = g_glm())
# getting the policy function at stage 2:
pf2 <- get_policy_functions(pe, stage = 2)
args(pf2)
# applying the policy function to new data:
set.seed(1)
L_1 <- rnorm(n = 10)
new_H <- data.frame(C = rnorm(n = 10),
                    L = L_1,
                    L_1 = L_1,
                    BB = "group1")
d2 <- pf2(H = new_H)
head(d2)
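The same accessor applies at every stage. A hedged continuation of the example above: because full_history = TRUE, the history variables required at stage 1 differ from those at stage 2, so inspect the returned function before constructing new data (the exact required columns depend on the fitted g- and policy models and are not guaranteed here):

```r
# getting the policy function at stage 1 (continues the example above):
pf1 <- get_policy_functions(pe, stage = 1)
# the returned object is a function of the history data H:
args(pf1)
```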
[Package polle version 1.4 Index]