E.theta.toy {calibrator} R Documentation

## Expectation and variance with respect to theta

### Description

Function E.theta.toy returns the expectation of H_1(D) with respect to \theta; Edash.theta.toy returns the expectation with respect to the "dashed" distribution E'. Function E.theta.toy also returns information about the nonlinear behaviour of h_1(x,\theta).

### Usage

E.theta.toy(D2=NULL,  H1=NULL, x1=NULL, x2=NULL, phi, give.mean=TRUE)
Edash.theta.toy(x, t.vec, k,  H1, fast.but.opaque=FALSE, a=NULL, b=NULL,
phi=NULL)


### Arguments

D2: Observation points

H1: Regression function for D1

phi: Hyperparameters. The default value of NULL is only to be used in Edash.theta.toy() when fast.but.opaque is TRUE

x: lat/long point (for Edash.theta.toy)

t.vec: Matrix whose rows are parameter values (for Edash.theta.toy)

k: Integer specifying the column (for Edash.theta.toy)

give.mean: In E.theta.toy(), Boolean, with default TRUE meaning to return the mean (expectation) and FALSE meaning to return the "variance" (see the Note section)

fast.but.opaque: In Edash.theta.toy(), Boolean, with default FALSE meaning to use a slow but clear method. If TRUE, faster code is used, but the parameters a and b must then be specified

a: Constant term, needed if fast.but.opaque is TRUE: \left(V_\theta^{-1}+2\Omega_t\right)^{-1}V_\theta^{-1}m_\theta. Specifying a in advance saves execution time

b: Linear term, needed if fast.but.opaque is TRUE: 2\left(V_\theta^{-1}+2\Omega_t\right)^{-1}\Omega_t (multiplied by t.vec[k,] in Edash.theta.toy())

x1: In E.theta.toy(give.mean=FALSE, ...), the value of x in h_1(x,\theta). The default value is NULL because in simple cases such as the one implemented here, the output is independent of x1 and x2

x2: In E.theta.toy(give.mean=FALSE, ...), the second value of x in h_1(x,\theta)
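Because a and b depend only on the hyperparameters, they can be computed once and reused across many calls when fast.but.opaque is TRUE. A minimal standalone sketch of the precomputation follows; V.theta, Omega.t, and m.theta are hypothetical stand-ins for the relevant components of phi (the names and values are assumed, not taken from the package):

```r
## Hypothetical hyperparameter components (assumed for illustration only):
V.theta <- diag(c(1, 2, 3))        # prior variance matrix of theta
Omega.t <- diag(c(0.5, 0.5, 0.5))  # scales matrix
m.theta <- c(0, 1, 2)              # prior mean of theta

## Precompute the shared inverse, then the constant and linear terms:
A <- solve(solve(V.theta) + 2 * Omega.t)
a <- A %*% solve(V.theta) %*% m.theta  # constant term, passed as argument a
b <- 2 * A %*% Omega.t                 # linear term, passed as argument b
                                       # (multiplied by t.vec[k,] internally)
```

Passing these precomputed values avoids repeating the matrix inversion on every call.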

### Note

A terse discussion follows; see the calex.pdf vignette and the 1D case study in directory inst/doc/one/dim/ for more details and examples.

Function E.theta.toy(give.mean=FALSE,...) does not return the variance! The matrix returned is a different size from the variance matrix!

It returns the quantity that must be added to the outer product tcrossprod(E_theta(h1(x,theta))), that is E_theta(h1) %*% t(E_theta(h1)), to give E_theta(h1(x,theta) %*% t(h1(x,theta))).

In other words, it returns E_theta(h1(x,theta) %*% t(h1(x,theta))) - tcrossprod(E_theta(h1(x,theta))).
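To make the identity concrete, here is a standalone Monte Carlo sketch; it does not use the package, and both the regressor function and the distribution of theta are hypothetical choices for illustration:

```r
set.seed(1)
x <- 0.7
theta <- rnorm(1e5, mean = 2, sd = 3)  # theta ~ N(2, 9), assumed for illustration
H <- cbind(1, x, theta)                # rows are h1(x, theta) = c(1, x, theta)
M <- crossprod(H) / nrow(H)            # estimates E_theta(h1 %*% t(h1))
Eh <- colMeans(H)                      # estimates E_theta(h1)
correction <- M - tcrossprod(Eh)       # the quantity described above
round(correction, 1)                   # ~zero except var(theta) = 9 at [3,3]
```

Note that the correction matrix is 3-by-3 (the size of h1), not the size of the variance matrix of theta, which is the point made above.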

If the terms of h1() are of the form c(o, theta) (where o is a vector that is a function of x alone, independent of theta), then the returned matrix will contain the variance matrix of theta in its lower right corner, with zeroes elsewhere.

Function E.theta.toy(give.mean=FALSE, ...) must be updated if h1.toy() changes: unlike E.theta.toy(give.mean=TRUE, ...) and Edash.theta.toy(), it does not "know" where the elements that vary with theta are, nor their (possibly x-dependent) coefficients.

This form of the function requires x1 and x2 arguments, for good form's sake, even though the returned value is independent of x in the toy example. To see why it is necessary to include x, consider a simple case with h_1(x,\theta)=(1,x\theta)^T. Now E_\theta\left(h_1(x,\theta)\right) is just (1,x\overline{\theta})^T, but

E_\theta\left(h_1(x,\theta)h_1(x,\theta)^T\right) = E_\theta\left( \begin{array}{cc} 1 & x\theta\\ x\theta & x^2\theta^2 \end{array}\right)

is a 2-by-2 matrix (M, say) with M=h_1(x,\overline{\theta})h_1(x,\overline{\theta})^T + \mbox{variance terms}; here the variance term is \mbox{diag}\left(0, x^2\mathrm{var}(\theta)\right), which depends on x.
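A standalone numeric check of this example (nothing here comes from the package; the distribution of theta is assumed for illustration):

```r
set.seed(1)
x <- 2
theta <- rnorm(1e5, mean = 1, sd = 0.5)  # theta ~ N(1, 0.25), assumed
H <- cbind(1, x * theta)                 # rows are h_1(x, theta)^T = (1, x*theta)
M <- crossprod(H) / nrow(H)              # estimates the 2-by-2 matrix E_theta(h1 h1^T)
Ebar <- c(1, x * mean(theta))            # h_1(x, thetabar)
M - tcrossprod(Ebar)                     # approx diag(c(0, x^2 * 0.25))
```

The nonzero lower-right entry is approximately x^2 var(theta) = 4 * 0.25 = 1, confirming that the variance term depends on x.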

All three functions here are intimately connected to the form of h1.toy(), and changing it (or indeed H1.toy()) will usually require rewriting all three functions documented here. Look at the definition of E.theta.toy(give.mean=FALSE), and you will see that even changing the meat of h1.toy() from c(1,x) to c(x,1) would require a redefinition of E.theta.toy(give.mean=FALSE).

The only place that E.theta.toy(give.mean=FALSE) is used is internally in hh.fun().

### Author(s)

Robin K. S. Hankin

### References

• M. C. Kennedy and A. O'Hagan, 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society B, 63(3), pp. 425-464

• M. C. Kennedy and A. O'Hagan, 2001. Supplementary details on Bayesian calibration of computer models. Internal report, University of Sheffield. Available at http://www.tonyohagan.co.uk/academic/ps/calsup.ps

• R. K. S. Hankin, 2005. Introducing BACCO, an R bundle for Bayesian analysis of computer code output. Journal of Statistical Software, 14(16)

### See Also

toys

### Examples

data(toys)