FactorKernel {GauPro}    R Documentation
Factor Kernel R6 class
Description
Kernel for a factor (categorical) input dimension. Use new() to initialize the kernel object.
Usage
k_FactorKernel(
s2 = 1,
D,
nlevels,
xindex,
p_lower = 0,
p_upper = 0.9,
p_est = TRUE,
s2_lower = 1e-08,
s2_upper = 1e+08,
s2_est = TRUE,
p,
useC = TRUE,
offdiagequal = 1 - 1e-06
)
Arguments
s2: Initial variance
D: Number of input dimensions of data
nlevels: Number of levels for the factor
xindex: Index of the factor (which column of X)
p_lower: Lower bound for p
p_upper: Upper bound for p
p_est: Should p be estimated?
s2_lower: Lower bound for s2
s2_upper: Upper bound for s2
s2_est: Should s2 be estimated?
p: Vector of correlations
useC: Should C code be used? Not implemented for FactorKernel yet.
offdiagequal: What should off-diagonal values be set to when the indices are the same? Use to avoid decomposition errors, similar to adding a nugget.
Format
R6Class object.
Details
Used for a factor (categorical) input that has been converted to its integer indices. Each factor dimension needs its own separate kernel.
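For illustration, a minimal sketch of converting a factor column to integer indices before using it as a column of X (the variable names here are hypothetical, not part of the package):
material <- factor(c("steel", "wood", "steel", "plastic"))
xmat <- cbind(runif(4), as.integer(material))  # factor levels become indices 1, 2, 3
# A FactorKernel with xindex=2 would then act on that second column.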
Value
Object of R6Class with methods for fitting a GP model.
Super class
GauPro::GauPro_kernel -> GauPro_kernel_FactorKernel
Public fields
p: Parameter for correlation
p_est: Should p be estimated?
p_lower: Lower bound of p
p_upper: Upper bound of p
p_length: Length of p
s2: Variance
s2_est: Is s2 estimated?
logs2: Log of s2
logs2_lower: Lower bound of logs2
logs2_upper: Upper bound of logs2
xindex: Index of the factor (which column of X)
nlevels: Number of levels for the factor
offdiagequal: What should off-diagonal values be set to when the indices are the same? Use to avoid decomposition errors, similar to adding a nugget.
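A brief sketch of reading these fields on a constructed kernel. The specific values are illustrative; p appears to hold one correlation per pair of levels (choose(nlevels, 2)), following the pattern in the Examples below.
library(GauPro)
kk <- FactorKernel$new(D=1, nlevels=3, xindex=1)
kk$nlevels   # 3
kk$xindex    # 1
kk$p         # correlation parameters (length choose(3, 2) = 3)
kk$s2        # current variance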
Methods
Public methods
Method new()
Initialize kernel object
Usage
FactorKernel$new( s2 = 1, D, nlevels, xindex, p_lower = 0, p_upper = 0.9, p_est = TRUE, s2_lower = 1e-08, s2_upper = 1e+08, s2_est = TRUE, p, useC = TRUE, offdiagequal = 1 - 1e-06 )
Arguments
s2: Initial variance
D: Number of input dimensions of data
nlevels: Number of levels for the factor
xindex: Index of the factor (which column of X)
p_lower: Lower bound for p
p_upper: Upper bound for p
p_est: Should p be estimated?
s2_lower: Lower bound for s2
s2_upper: Upper bound for s2
s2_est: Should s2 be estimated?
p: Vector of correlations
useC: Should C code be used? Not implemented for FactorKernel yet.
offdiagequal: What should off-diagonal values be set to when the indices are the same? Use to avoid decomposition errors, similar to adding a nugget.
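A hedged construction sketch using the arguments above, supplying p directly and fixing it rather than estimating it. The values are purely illustrative, and the assumption that p takes one correlation per pair of levels (length choose(nlevels, 2)) follows the Examples below.
library(GauPro)
kk2 <- FactorKernel$new(D=1, nlevels=3, xindex=1,
                        p=c(0.2, 0.3, 0.4), p_est=FALSE)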
Method k()
Calculate covariance between two points
Usage
FactorKernel$k(x, y = NULL, p = self$p, s2 = self$s2, params = NULL)
Arguments
x: vector.
y: vector, optional. If excluded, find correlation of x with itself.
p: Correlation parameters.
s2: Variance parameter.
params: Parameters to use instead of p and s2.
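A small usage sketch, assuming (as in the Examples below) that for a D=1 kernel x and y are the factor levels themselves:
library(GauPro)
kk <- FactorKernel$new(D=1, nlevels=3, xindex=1)
kk$k(1, 2)   # covariance between levels 1 and 2
kk$k(1)      # y omitted: covariance of level 1 with itself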
Method kone()
Find covariance of two points
Usage
FactorKernel$kone(x, y, p, s2, isdiag = TRUE, offdiagequal = self$offdiagequal)
Arguments
x: vector
y: vector
p: Correlation parameters on regular scale
s2: Variance parameter
isdiag: Is this on the diagonal of the covariance?
offdiagequal: What should off-diagonal values be set to when the indices are the same? Use to avoid decomposition errors, similar to adding a nugget.
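A sketch of the presumed roles of isdiag and offdiagequal: an off-diagonal entry whose two points share the same level is set near, but not exactly at, full correlation, which keeps the covariance matrix decomposable. The return values shown in the comments are assumptions, not documented guarantees.
library(GauPro)
kk <- FactorKernel$new(D=1, nlevels=3, xindex=1)
kk$kone(1, 1, p=kk$p, s2=kk$s2, isdiag=TRUE)   # same point: full variance s2
kk$kone(1, 1, p=kk$p, s2=kk$s2, isdiag=FALSE)  # distinct rows, same level: about s2 * offdiagequal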
Method dC_dparams()
Derivative of covariance with respect to parameters
Usage
FactorKernel$dC_dparams(params = NULL, X, C_nonug, C, nug)
Arguments
params: Kernel parameters
X: Matrix of points in rows
C_nonug: Covariance without nugget added to diagonal
C: Covariance with nugget
nug: Value of nugget
Method C_dC_dparams()
Calculate covariance matrix and its derivative with respect to parameters
Usage
FactorKernel$C_dC_dparams(params = NULL, X, nug)
Arguments
params: Kernel parameters
X: Matrix of points in rows
nug: Value of nugget
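A sketch of calling this at the current parameter values (params=NULL) on a tiny design with a small nugget; the exact shape of the return value (commonly a list holding C and the derivative array) is an assumption here, so it is only inspected with str().
library(GauPro)
kk <- FactorKernel$new(D=1, nlevels=3, xindex=1)
Xf <- matrix(c(1, 2, 2, 3), ncol=1)     # factor levels as indices
str(kk$C_dC_dparams(X=Xf, nug=1e-8))    # covariance matrix and its parameter derivatives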
Method dC_dx()
Derivative of covariance with respect to X
Usage
FactorKernel$dC_dx(XX, X, ...)
Arguments
XX: Matrix of points
X: Matrix of points to take derivative with respect to
...: Additional args, not used
Method param_optim_start()
Starting point for parameters for optimization
Usage
FactorKernel$param_optim_start( jitter = F, y, p_est = self$p_est, s2_est = self$s2_est )
Arguments
jitter: Should there be a jitter?
y: Output
p_est: Is p being estimated?
s2_est: Is s2 being estimated?
Method param_optim_start0()
Starting point for parameters for optimization
Usage
FactorKernel$param_optim_start0( jitter = F, y, p_est = self$p_est, s2_est = self$s2_est )
Arguments
jitter: Should there be a jitter?
y: Output
p_est: Is p being estimated?
s2_est: Is s2 being estimated?
Method param_optim_lower()
Lower bounds of parameters for optimization
Usage
FactorKernel$param_optim_lower(p_est = self$p_est, s2_est = self$s2_est)
Arguments
p_est: Is p being estimated?
s2_est: Is s2 being estimated?
Method param_optim_upper()
Upper bounds of parameters for optimization
Usage
FactorKernel$param_optim_upper(p_est = self$p_est, s2_est = self$s2_est)
Arguments
p_est: Is p being estimated?
s2_est: Is s2 being estimated?
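A quick sketch of inspecting the optimization bounds. These appear to be on the internal optimization scale (e.g. log(s2) rather than s2, as the logs2 fields above suggest).
library(GauPro)
kk <- FactorKernel$new(D=1, nlevels=3, xindex=1)
kk$param_optim_lower()   # lower bounds passed to the optimizer
kk$param_optim_upper()   # upper bounds passed to the optimizer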
Method set_params_from_optim()
Set parameters from optimization output
Usage
FactorKernel$set_params_from_optim( optim_out, p_est = self$p_est, s2_est = self$s2_est )
Arguments
optim_out: Output from optimization
p_est: Is p being estimated?
s2_est: Is s2 being estimated?
Method s2_from_params()
Get s2 from params vector
Usage
FactorKernel$s2_from_params(params, s2_est = self$s2_est)
Arguments
params: Parameter vector
s2_est: Is s2 being estimated?
Method print()
Print this object
Usage
FactorKernel$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
FactorKernel$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
Examples
kk <- FactorKernel$new(D=1, nlevels=5, xindex=1)
kk$p <- (1:10)/100
kmat <- outer(1:5, 1:5, Vectorize(kk$k))
kmat
kk$plot()
# 2D, Gaussian on 1D, index on 2nd dim
library(dplyr)
n <- 20
X <- cbind(matrix(runif(n,2,6), ncol=1),
matrix(sample(1:2, size=n, replace=TRUE), ncol=1))
X <- rbind(X, c(3.3,3))
n <- nrow(X)
Z <- X[,1] - (X[,2]-1.8)^2 + rnorm(n,0,.1)
tibble(X=X, Z) %>% arrange(X,Z)
k2a <- IgnoreIndsKernel$new(k=Gaussian$new(D=1), ignoreinds = 2)
k2b <- FactorKernel$new(D=2, nlevels=3, xindex=2)
k2 <- k2a * k2b
k2b$p_upper <- .65*k2b$p_upper
gp <- GauPro_kernel_model$new(X=X, Z=Z, kernel = k2, verbose = 5,
nug.min=1e-2, restarts=0)
gp$kernel$k1$kernel$beta
gp$kernel$k2$p
gp$kernel$k(x = gp$X)
tibble(X=X, Z=Z, pred=gp$predict(X)) %>% arrange(X, Z)
tibble(X=X[,2], Z) %>% group_by(X) %>% summarize(n=n(), mean(Z))
curve(gp$pred(cbind(matrix(x,ncol=1),1)),2,6, ylim=c(min(Z), max(Z)))
points(X[X[,2]==1,1], Z[X[,2]==1])
curve(gp$pred(cbind(matrix(x,ncol=1),2)), add=TRUE, col=2)
points(X[X[,2]==2,1], Z[X[,2]==2], col=2)
curve(gp$pred(cbind(matrix(x,ncol=1),3)), add=TRUE, col=3)
points(X[X[,2]==3,1], Z[X[,2]==3], col=3)
legend(legend=1:3, fill=1:3, x="topleft")
# See which points affect (5.5, 3) the most
data.frame(X, cov=gp$kernel$k(X, c(5.5,3))) %>% arrange(-cov)
plot(k2b)