BasicBernoulliBandit {contextual}    R Documentation

Bandit: BasicBernoulliBandit


Description

Context-free Bernoulli (binary) multi-armed bandit.


Simulates k Bernoulli arms where each arm issues a reward of one with a fixed, arm-specific probability p, and otherwise a reward of zero.

In a bandit scenario, this can be used to simulate a hit or miss event, such as if a user clicks on a headline, ad, or recommended product.
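At its core, each pull of an arm is a single Bernoulli trial. A rough base R sketch of that mechanism (illustrative only, not the package's internal code; the probability value is an assumption):

# One Bernoulli trial per pull: reward is 1 with probability p, else 0.
p      <- 0.6                               # assumed reward probability of one arm
reward <- rbinom(n = 1, size = 1, prob = p)
reward                                      # 0 or 1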


Usage

  bandit <- BasicBernoulliBandit$new(weights)



Arguments

weights
    numeric vector; reward probabilities for each of the bandit's k arms



Methods

new(weights)
    Generates and instantiates a new BasicBernoulliBandit instance.



get_context(t)

  • t: integer, time step t.

Returns a named list containing the current d x k dimensional context matrix context$X, the number of arms context$k, and the number of context features context$d.
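For illustration, a brief sketch of querying a context (assumes the contextual package is installed; the weights are arbitrary):

library(contextual)

bandit  <- BasicBernoulliBandit$new(weights = c(0.6, 0.1, 0.1))
context <- bandit$get_context(t = 1)
context$k    # number of arms, here 3
context$d    # number of context features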

get_reward(t, context, action)


  • t: integer, time step t.

  • context: list, containing the current context$X (d x k context matrix), context$k (number of arms), and context$d (number of context features), as set by the bandit.

  • action: list, containing action$choice, as set by the policy.

Returns a named list containing reward$reward and, where computable, reward$optimal (used by "oracle" policies and to calculate regret).
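A matching sketch of a single pull, with a hand-built action list standing in for a policy's choice (again assuming the contextual package is installed):

library(contextual)

bandit  <- BasicBernoulliBandit$new(weights = c(0.6, 0.1, 0.1))
context <- bandit$get_context(t = 1)
action  <- list(choice = 1)               # as a policy would set action$choice
reward  <- bandit$get_reward(t = 1, context, action)
reward$reward                             # 1 with probability 0.6, otherwise 0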

See Also

Core contextual classes: Bandit, Policy, Simulator, Agent, History, Plot

Bandit subclass examples: BasicBernoulliBandit, ContextualLogitBandit, OfflineReplayEvaluatorBandit

Policy subclass examples: EpsilonGreedyPolicy, ContextualLinTSPolicy


Examples

## Not run: 

horizon            <- 100
sims               <- 100

policy             <- EpsilonGreedyPolicy$new(epsilon = 0.1)

bandit             <- BasicBernoulliBandit$new(weights = c(0.6, 0.1, 0.1))
agent              <- Agent$new(policy, bandit)

history            <- Simulator$new(agent, horizon, sims)$run()

plot(history, type = "cumulative", regret = TRUE)

## End(Not run)
