betahat.fun {emulator}		R Documentation
Calculates MLE coefficients of linear fit
Description
Determines the maximum likelihood regression coefficients for the
specified regression basis and correlation matrix A.

The ".A" form needs only A (and not Ainv), thus removing the need to
calculate a matrix inverse. Note that this form is slower than the
other if Ainv is known in advance, as solve(.,.) is slow.

If Ainv is not known in advance, the two forms seem to perform
similarly in the cases considered here and in the goldstein package.
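(Algebraically, betahat is the generalized least squares estimator
(H^T A^{-1} H)^{-1} H^T A^{-1} d, where H is the matrix of regression
basis functions evaluated at the rows of xold.)

As a minimal sketch of the tradeoff (assuming the toy dataset and the
default regressor basis, as in the Examples below; the timings
themselves are illustrative only):

data(toy)
A <- corr.matrix(toy, scales=rep(2,6))
Ainv <- solve(A)    # the expensive inversion, paid once up front
d <- apply(regressor.multi(toy), 1, function(x){sum((0:6)*x)})
# with Ainv precomputed, repeated calls to betahat.fun() are cheap:
system.time(for(i in 1:100) betahat.fun(toy, Ainv, d))
# betahat.fun.A() must solve linear systems in A on every call:
system.time(for(i in 1:100) betahat.fun.A(toy, A, d))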
Usage
betahat.fun(xold, Ainv, d, give.variance=FALSE, func=regressor.basis)
betahat.fun.A(xold, A, d, give.variance=FALSE, func=regressor.basis)
Arguments
xold           Data frame, each line being the parameters of one run

A              Correlation matrix, typically provided by corr.matrix()

Ainv           Inverse of the correlation matrix A

d              Vector of results at the points specified in xold

give.variance  Boolean, with TRUE meaning to return the variance of
               the estimate as well

func           Function used to generate the regression basis;
               defaults to regressor.basis()
Note
Here, the strategy of using two separate functions, eg foo() and
foo.A() (one of which inverts A, and one of which uses notionally more
efficient means), is adopted. Compare the other strategy, in which a
Boolean flag, use.Ainv, has the same effect; an example would be
scales.likelihood(). A sketch of the two conventions is given below.
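Below, foo() and foo.A() are the hypothetical functions of this Note
(not package exports), and their bodies are illustrative only:

foo <- function(xold, Ainv, d) {
  # caller supplies the inverse of the correlation matrix
  H <- regressor.multi(xold)
  solve(crossprod(H, Ainv %*% H), crossprod(H, Ainv %*% d))
}
foo.A <- function(xold, A, d) {
  # solve(A, .) is used internally; no explicit inverse is formed
  H <- regressor.multi(xold)
  solve(crossprod(H, solve(A, H)), crossprod(H, solve(A, d)))
}
# the alternative convention: one function with a Boolean flag, as in
# scales.likelihood():
foo.flag <- function(xold, M, d, use.Ainv=TRUE) {
  if (use.Ainv) foo(xold, M, d) else foo.A(xold, M, d)
}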
Author(s)
Robin K. S. Hankin
References
- J. Oakley and A. O'Hagan, 2002. "Bayesian Inference for the
  Uncertainty Distribution of Computer Model Outputs", Biometrika
  89(4), pp. 769-784
- R. K. S. Hankin, 2005. "Introducing BACCO, an R bundle for Bayesian
  analysis of computer code output", Journal of Statistical Software,
  14(16)
Examples
data(toy)
val <- toy
H <- regressor.multi(val)
d <- apply(H,1,function(x){sum((0:6)*x)})
fish <- rep(2,6)
A <- corr.matrix(val,scales=fish)
Ainv <- solve(A)
betahat.fun(val, Ainv, d)   # should be c(0,1:6) exactly: d is noise-free
# Now look at the variances:
betahat.fun(val, Ainv, d, give.variance=TRUE)
# now find the value of the prior expectation (ie the regression
# plane) at an unknown point:
x.unknown <- rep(0.5 , 6)
regressor.basis(x.unknown) %*% betahat.fun(val, Ainv, d)
# compare the prior with the posterior
interpolant(x.unknown, d, val, Ainv,scales=fish)
# Heh, it's the same! (of course it is, there is no error here!)
# OK, put some error on the old observations:
d.noisy <- as.vector(rmvnorm(n=1, mean=d, sigma=0.1*A))
# now compute the regression point:
regressor.basis(x.unknown) %*% betahat.fun(val, Ainv, d.noisy)
# and compare with the output of interpolant():
interpolant(x.unknown, d.noisy, val, Ainv, scales=fish)
# there is a difference!
# now try a basis function that has superfluous degrees of freedom.
# we need a bigger dataset. Try 100:
val <- latin.hypercube(100,6)
colnames(val) <- letters[1:6]
d <- apply(val,1,function(x){sum((1:6)*x)})
A <- corr.matrix(val,scales=rep(1,6))
Ainv <- solve(A)
betahat.fun(val, Ainv, d, func=function(x){c(1,x,x^2)})
# should be c(0:6, rep(0,6)). The zeroes should be zero exactly
# because the original function didn't include any squares.
## And finally a sanity check:
f <- function(x){c(1,x,x^2)}
jj1 <- betahat.fun(val, Ainv, d, func=f)
jj2 <- betahat.fun.A(val, A, d, func=f)
abs(jj1-jj2) # should be small