update_belief {pomdp} | R Documentation |
Belief Update
Description
Update the belief given a taken action and observation.
Usage
update_belief(
model,
belief = NULL,
action = NULL,
observation = NULL,
episode = 1,
digits = 7,
drop = TRUE
)
Arguments
model
    a POMDP object.
belief
    the current belief state. Defaults to the start belief state specified in the model or "uniform".
action
    the taken action. Can also be a vector of multiple actions or, if missing, all actions are evaluated.
observation
    the received observation. Can also be a vector of multiple observations or, if missing, all observations are evaluated.
episode
    use the transition and observation matrices for the given episode for time-dependent POMDPs (see POMDP).
digits
    number of decimal places used to round the resulting belief probabilities.
drop
    logical; drop the result to a vector if only a single belief state is returned.
Details
Update the belief state b (belief) with an action a and observation o using the update b' <- tau(b, a, o) defined so that

    b'(s') = eta * O(o | s', a) * sum_s T(s' | s, a) * b(s)

where eta = 1 / Pr(o | b, a) normalizes the new belief state so the probabilities add up to one.
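The update above can be sketched in base R without the pomdp package. The transition and observation matrices below are the standard Tiger-problem parameters for the "listen" action and are hard-coded here as assumptions for illustration:

```r
# Manual belief update tau(b, a, o) for the Tiger problem (base R sketch).
states <- c("tiger-left", "tiger-right")

# T[s, s'] for action "listen": listening does not move the tiger.
T_listen <- diag(2)
dimnames(T_listen) <- list(states, states)

# O[s', o]: probability of each observation given the end state (85% accurate).
O_listen <- matrix(c(0.85, 0.15,
                     0.15, 0.85), nrow = 2, byrow = TRUE,
                   dimnames = list(states, states))

tau <- function(b, Tm, Om, o) {
  b_new <- Om[, o] * as.vector(t(Tm) %*% b)  # O(o | s', a) * sum_s T(s' | s, a) b(s)
  b_new / sum(b_new)                         # eta normalizes to sum to one
}

tau(c(0.5, 0.5), T_listen, O_listen, "tiger-left")
# -> tiger-left 0.85, tiger-right 0.15
```

This should agree with `update_belief(c(.5, .5), action = "listen", observation = "tiger-left", model = Tiger)`.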
Value
Returns the updated belief state as a named vector.
If action or observation is a vector with multiple elements or missing, then a matrix with all
resulting belief states is returned.
Author(s)
Michael Hahsler
See Also
Other POMDP:
MDP2POMDP
,
POMDP()
,
accessors
,
actions()
,
add_policy()
,
plot_belief_space()
,
projection()
,
reachable_and_absorbing
,
regret()
,
sample_belief_space()
,
simulate_POMDP()
,
solve_POMDP()
,
solve_SARSOP()
,
transition_graph()
,
value_function()
,
write_POMDP()
Examples
data(Tiger)

## no action or observation given: evaluate all combinations
update_belief(c(.5,.5), model = Tiger)

## update the uniform belief after listening and hearing the tiger on the left
update_belief(c(.5,.5), action = "listen", observation = "tiger-left", model = Tiger)

## update a non-uniform belief
update_belief(c(.15,.85), action = "listen", observation = "tiger-right", model = Tiger)
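As the Value section notes, leaving observation unspecified yields one updated belief per observation. A base-R sketch of that matrix result, with the Tiger "listen" observation matrix hard-coded as an assumption (listening leaves the state unchanged, so the transition matrix is the identity):

```r
# Sketch: updating one belief under every observation at once (base R only).
states <- c("tiger-left", "tiger-right")
O_listen <- matrix(c(0.85, 0.15,
                     0.15, 0.85), nrow = 2, byrow = TRUE,
                   dimnames = list(states, states))

b <- c(0.5, 0.5)
updates <- sapply(colnames(O_listen), function(o) {
  b_new <- O_listen[, o] * b  # T is the identity, so sum_s T(s'|s,a) b(s) = b(s')
  b_new / sum(b_new)
})
t(updates)  # one row per observation; each row is a normalized belief state
```

Each row sums to one, matching the matrix layout `update_belief()` returns when observation is missing.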