inference {GGMncv}    R Documentation
Statistical Inference for Regularized Gaussian Graphical Models
Description
Compute p-values for each relation based on the de-sparsified glasso estimator (Jankova and Van De Geer 2015).
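The estimator itself can be sketched in a few lines. The following is a minimal illustration of the de-sparsified ("de-biased") glasso estimator of Jankova and Van De Geer (2015), not the package's internal implementation; the function desparsify_sketch and its arguments are hypothetical names for this sketch, with Theta a glasso precision matrix estimate and R the sample correlation matrix.

desparsify_sketch <- function(Theta, R, n) {
  # de-sparsified estimate: T = 2 * Theta - Theta %*% R %*% Theta
  T_hat <- 2 * Theta - Theta %*% R %*% Theta
  # asymptotic standard errors: sqrt((Theta_ii * Theta_jj + Theta_ij^2) / n)
  se <- sqrt((tcrossprod(diag(Theta)) + Theta^2) / n)
  # two-sided Wald-type p-values for each relation
  z <- T_hat / se
  pval <- 2 * stats::pnorm(abs(z), lower.tail = FALSE)
  list(Theta = T_hat, pval_uncorrected = pval)
}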
Usage
inference(object, method = "fdr", alpha = 0.05, ...)
significance_test(object, method = "fdr", alpha = 0.05, ...)
Arguments
object
    An object of class ggmncv.

method
    Character string. A correction method for multiple comparison (defaults to "fdr"; see the sketch following this list).

alpha
    Numeric. Significance level (defaults to 0.05).

...
    Currently ignored.
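The correction is applied to the uncorrected p-values. As a hedged illustration, assuming the method strings match those accepted by stats::p.adjust (an assumption, not confirmed by this page):

# hypothetical p-values for illustration
p <- c(0.001, 0.01, 0.04, 0.20)
p.adjust(p, method = "fdr")         # Benjamini-Hochberg false discovery rate
p.adjust(p, method = "bonferroni")  # family-wise error control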
Value
Theta
    De-sparsified precision matrix.

adj
    Adjacency matrix based on the p-values.

pval_uncorrected
    Uncorrected p-values.

pval_corrected
    Corrected p-values.

method
    The approach used for multiple comparisons.

alpha
    Significance level.
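A brief usage sketch of the returned components (assuming a fitted object fit, as in the Examples below):

res <- inference(fit)
# number of edges retained after correction
sum(res$adj[upper.tri(res$adj)])
# smallest corrected p-value among the relations
min(res$pval_corrected[upper.tri(res$pval_corrected)])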
Note
This assumes (reasonably) Gaussian data and should not be expected to work for, say, polychoric correlations. Further, all work to date has only looked at the graphical lasso estimator, not at de-sparsifying nonconvex regularization. Accordingly, it is probably best to set penalty = "lasso" in ggmncv.
Further, whether the de-sparsified estimator provides nominal error rates remains to be seen, at least across a range of conditions. For example, the simulation results in Williams (2021) demonstrated that the confidence intervals can have (severely) compromised coverage properties (whereas non-regularized methods had coverage at the nominal level).
References
Jankova J, Van De Geer S (2015). "Confidence intervals for high-dimensional inverse covariance estimation." Electronic Journal of Statistics, 9(1), 1205–1229.

Williams DR (2021). "The Confidence Interval that Wasn't: Bootstrapped 'Confidence Intervals' in L1-Regularized Partial Correlation Networks." PsyArXiv. doi: 10.31234/osf.io/kjh2f.
Examples
# load the package
library(GGMncv)

# data: first five PTSD symptoms
Y <- GGMncv::ptsd[, 1:5]

# fit model (lasso penalty, per the Note above)
fit <- ggmncv(cor(Y), n = nrow(Y),
              progress = FALSE,
              penalty = "lasso")

# statistical inference
inference(fit)

# significance_test() is an alias for inference()
all.equal(inference(fit), significance_test(fit))
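# a stricter correction (hedged: assumes "bonferroni" is an accepted
# method string, as it would be for stats::p.adjust)
inference(fit, method = "bonferroni", alpha = 0.01)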