beta1hat.fun {calibrator} R Documentation

beta1 estimator

Description

Least squares estimator for beta1
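
A sketch of the estimator's form, assuming it is the standard generalized least squares estimator of Kennedy and O'Hagan (2001), with correlation matrix A built from the hyperparameters phi; this formula is an assumption inferred from the references, not stated on this page:

```latex
\hat{\beta}_1 = \left(H_1^{T} A^{-1} H_1\right)^{-1} H_1^{T} A^{-1} y
```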

Usage

beta1hat.fun(D1, H1, y, phi)


Arguments

 D1   Code run points
 H1   Regressor basis functions
 y    Code outputs
 phi  Hyperparameters

Author(s)

Robin K. S. Hankin

References

• M. C. Kennedy and A. O'Hagan 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3) pp425-464

• M. C. Kennedy and A. O'Hagan 2001. Supplementary details on Bayesian calibration of computer models, Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps

• R. K. S. Hankin 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output, Journal of Statistical Software, 14(16)

See Also

beta2hat.fun

Examples

data(toys)
y.toy <- create.new.toy.datasets(D1=D1.toy, D2=D2.toy)$y.toy
beta1hat.fun(D1=D1.toy, H1=H1.toy, y=y.toy, phi=phi.toy)

# Now cheat: force the hyperparameters to have the correct psi1:
phi.fix <- phi.change(old.phi=phi.toy, psi1=c(1, 0.5, 1.0, 1.0, 0.5, 0.4),
    phi.fun=phi.fun.toy)
# The value for psi1 is obtained by cheating and examining the source
# code for computer.model(); see ?phi.change

# Create a new toy dataset with 40 observations:
D1.big <- latin.hypercube(40, 5)
jj <- create.new.toy.datasets(D1=D1.big, D2=D2.toy)

# We know that the real coefficients are 4:9 because we
# can cheat and look at the source code for computer.model()

# Now estimate the coefficients without cheating:
beta1hat.fun(D1=D1.big, H1=H1.toy, jj$y, phi=phi.toy)

# We can do slightly better by cheating and using the
# correct value for the hyperparameters:

beta1hat.fun(D1=D1.big, H1=H1.toy, jj$y, phi=phi.true.toy(phi=phi.toy))

# marginally worse.



[Package calibrator version 1.2-8 Index]