do_cumulative_htrx {HTRX}    R Documentation

Cumulative HTRX on long haplotypes

Description

Two-step cross-validation used to select the best HTRX model for longer haplotypes, i.e. haplotypes that include at least 7 single nucleotide polymorphisms (SNPs).

Usage

do_cumulative_htrx(
  data_nosnp,
  hap1,
  hap2 = hap1,
  train_proportion = 0.5,
  sim_times = 5,
  featurecap = 40,
  usebinary = 1,
  randomorder = TRUE,
  fixorder = NULL,
  method = "simple",
  criteria = "BIC",
  gain = TRUE,
  nmodel = 3,
  runparallel = FALSE,
  mc.cores = 6,
  rareremove = FALSE,
  rare_threshold = 0.001,
  L = 6,
  dataseed = 1:sim_times,
  fold = 10,
  kfoldseed = 123,
  htronly = FALSE,
  max_int = NULL,
  returnwork = FALSE,
  verbose = FALSE
)

do_cumulative_htrx_step1(
  data_nosnp,
  hap1,
  hap2 = hap1,
  train_proportion = 0.5,
  featurecap = 40,
  usebinary = 1,
  randomorder = TRUE,
  fixorder = NULL,
  method = "simple",
  criteria = "BIC",
  nmodel = 3,
  splitseed = 123,
  gain = TRUE,
  runparallel = FALSE,
  mc.cores = 6,
  rareremove = FALSE,
  rare_threshold = 0.001,
  L = 6,
  htronly = FALSE,
  max_int = NULL,
  verbose = FALSE
)

extend_haps(
  data_nosnp,
  featuredata,
  train,
  featurecap = dim(featuredata)[2],
  usebinary = 1,
  gain = TRUE,
  runparallel = FALSE,
  mc.cores = 6,
  verbose = FALSE
)

make_cumulative_htrx(
  hap1,
  hap2 = hap1,
  featurename,
  rareremove = FALSE,
  rare_threshold = 0.001,
  htronly = FALSE,
  max_int = NULL
)

Arguments

data_nosnp

a data frame with the outcome (the outcome must be the first column with colnames(data_nosnp)[1]="outcome"), the fixed covariates (for example, sex, age and the first 18 PCs) if there are any, and no SNPs or haplotypes.
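
For illustration, a minimal sketch of the expected layout (hypothetical toy values):

## the outcome must be the first column and be named "outcome"
toy_nosnp <- data.frame(outcome = rbinom(10, 1, 0.5),
                        sex = rbinom(10, 1, 0.5),
                        age = sample(40:70, 10, replace = TRUE))
colnames(toy_nosnp)[1]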

hap1

a data frame of the SNPs' genotype of the first genome. The genotype of a SNP for each individual is either 0 (reference allele) or 1 (alternative allele).

hap2

a data frame of the SNPs' genotype of the second genome. The genotype of a SNP for each individual is either 0 (reference allele) or 1 (alternative allele). By default, hap2=hap1, representing haploid data.
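
For illustration, a minimal sketch of the expected 0/1 coding (hypothetical toy data):

## one column per SNP, one row per individual, coded 0/1
toy_hap1 <- data.frame(SNP1 = rbinom(10, 1, 0.3),
                       SNP2 = rbinom(10, 1, 0.5),
                       SNP3 = rbinom(10, 1, 0.4))
## for haploid data, simply pass hap2 = toy_hap1 (or omit hap2)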

train_proportion

a positive number between 0 and 1 giving the proportion of the training dataset when splitting data into 2 folds. By default, train_proportion=0.5.

sim_times

an integer giving the number of simulations in Step 1 (see details). By default, sim_times=5.

featurecap

a positive integer which manually sets the maximum number of independent features. By default, featurecap=40.

usebinary

a non-negative number representing different models. Use linear model if usebinary=0, use logistic regression model via fastglm if usebinary=1 (by default), and use logistic regression model via glm if usebinary>1.

randomorder

logical. If randomorder=TRUE (default), the SNPs are added to cumulative HTRX in a random order.

fixorder

a vector giving a fixed order in which SNPs are added in cumulative HTRX. This only works when randomorder=FALSE; otherwise, fixorder=NULL (default). The length of fixorder can be smaller than the total number of SNPs, i.e. users can specify the order of some, instead of all, of the SNPs.
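
For illustration, a hedged sketch reusing the data from the Examples section (the order c(3, 1) is hypothetical; other settings mirror that example):

## add SNP 3 first, then SNP 1; the remaining SNPs follow afterwards
do_cumulative_htrx(HTRX::example_data_nosnp[1:500, 1:3],
                   HTRX::example_hap1[1:500, ],
                   HTRX::example_hap2[1:500, ],
                   sim_times = 1, featurecap = 10,
                   randomorder = FALSE, fixorder = c(3, 1))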

method

the method used for data splitting, either "simple" (default) or "stratified".

criteria

the criteria for model selection, either "BIC" (default), "AIC" or "lasso".

gain

logical. If gain=TRUE (default), report the variance explained in addition to fixed covariates; otherwise, report the total variance explained by all the variables.

nmodel

a positive integer specifying the number of candidate models that the criterion selects. By default, nmodel=3.

runparallel

logical. Use parallel programming based on mclapply function from R package "parallel" or not. Note that for Windows users, mclapply doesn't work, so please set runparallel=FALSE (default).

mc.cores

an integer giving the number of cores used for parallel programming. By default, mc.cores=6. This only works when runparallel=TRUE.
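
For Linux/Mac users, a small sketch for choosing a core count ("parallel" ships with R):

## use all but one of the available cores
n_cores <- max(1, parallel::detectCores() - 1)
## then pass runparallel = TRUE, mc.cores = n_cores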

rareremove

logical. Remove rare SNPs and haplotypes or not. By default, rareremove=FALSE.

rare_threshold

a numeric value below which the haplotype or SNP is removed. This only works when rareremove=TRUE. By default, rare_threshold=0.001.

L

a positive integer. The cumulative HTRX starts with haplotype templates containing L SNPs. By default, L=6. L must be smaller than nsnp-1, where nsnp is the total number of SNPs.

dataseed

a vector of the seed that each simulation in Step 1 (see details) uses. The length of dataseed must be the same as sim_times. By default, dataseed=1:sim_times.

fold

a positive integer specifying how many folds the data should be split into for cross-validation. By default, fold=10.

kfoldseed

a positive integer specifying the seed used to split data for k-fold cross validation. By default, kfoldseed=123.

htronly

logical. If htronly=TRUE, only haplotypes with interaction between all the SNPs will be selected. Please set max_int=NULL when htronly=TRUE. By default, htronly=FALSE.

max_int

a positive integer which specifies the maximum number of SNPs that can interact. If no value is given, interactions between all the SNPs will be considered.
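
To get a feel for how max_int and htronly change the size of the feature space, a rough count assuming each HTRX feature is a template over "0", "1" and "X" whose non-"X" positions are the interacting SNPs:

## number of templates with at most max_int interacting SNPs out of k SNPs
n_feat <- function(k, max_int) sum(choose(k, 1:max_int) * 2^(1:max_int))
n_feat(8, 2)   # interactions between at most 2 SNPs
n_feat(8, 8)   # no restriction, equals 3^8 - 1
2^8            # htronly = TRUE: haplotypes where all 8 SNPs interact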

returnwork

logical. If returnwork=TRUE, return a vector of the maximum number of features that are assessed in each simulation, excluding the fixed covariates. This is used to assess how much computational 'work' is done in Step 1(2) of HTRX (see details). By default, returnwork=FALSE.

verbose

logical. If verbose=TRUE, print out the inference steps. By default, verbose=FALSE.

splitseed

a positive integer giving the seed that a single simulation in Step 1 (see details) uses.

featuredata

a data frame of the feature data, e.g. haplotype data created by HTRX or SNPs. These features exclude all the data in data_nosnp, and will be selected using 2-step cross-validation.

train

a vector of the indexes of the training data.

featurename

a character vector giving the names of features (haplotypes).

Details

Longer haplotypes are important for discovering interactions. However, HTRX generates 3^k-1 haplotypes if the region contains k SNPs, making HTRX (do_cv) unrealistic to apply to regions with large numbers of SNPs. To address this issue, we proposed "cumulative HTRX" (do_cumulative_htrx), which enables HTRX to run on longer haplotypes, i.e. haplotypes which include at least 7 SNPs (as we recommend). There are 2 steps to implement cumulative HTRX.
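
For instance, the feature count quickly becomes unmanageable:

## number of HTRX features (3^k - 1) for regions of k SNPs
k <- c(5, 7, 10, 15, 20)
data.frame(nsnp = k, n_features = 3^k - 1)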

Step 1: extend haplotypes and select candidate models.

(1) Randomly sample a subset (50%) of the data, using stratified sampling when the outcome is binary. This subset is used for all the analysis in (2) and (3);

(2) Start with L randomly chosen SNPs from the entire k SNPs, and keep the top M haplotypes that are chosen by forward regression. Then add another SNP to the M haplotypes to create 3M+2 haplotypes: 3M haplotypes are obtained by appending "0", "1" or "X" to the previous M haplotypes, plus the 2 bases of the added SNP on its own, i.e. "XX...X0" and "XX...X1" (as "X" was implicitly used in the previous step). The top M haplotypes are then selected from these by forward regression (a plain-R sketch of this expansion is given after this list). Repeat this process until obtaining M haplotypes which include k-1 SNPs;

(3) Add the last SNP to create 3M+2 haplotypes. Afterwards, if criteria="AIC" or criteria="BIC", start from a model with only the fixed covariates (e.g. 18 PCs, sex and age), and perform forward regression on the subset, i.e. iteratively add the feature (in addition to the fixed covariates) whose inclusion enables the model to explain the largest variance, and select the s models with the lowest Akaike information criterion (AIC) or Bayesian information criterion (BIC) to enter the candidate model pool; if criteria="lasso", use the least absolute shrinkage and selection operator (lasso) to directly select the best s models to enter the candidate model pool;

(4) Repeat (1)-(3) B times, and take all the distinct models in the candidate model pool as the candidate models.
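
A plain-R sketch (not the package internals) of the expansion used in (2)-(3): adding one SNP to M retained templates yields 3M+2 candidate haplotypes.

## hypothetical retained templates over "0", "1" and "X"
M_haps <- c("01X0XX", "1X010X", "XX1X00")
extended <- c(as.vector(outer(M_haps, c("0", "1", "X"), paste0)),
              paste0(strrep("X", nchar(M_haps[1])), c("0", "1")))
extended
length(extended)   # 3 * length(M_haps) + 2 = 11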

Step 2: select the best model using k-fold cross-validation.

(1) Randomly split the whole data into fold groups (10 by default) of approximately equal size, using stratified sampling when the outcome is binary;

(2) For each fold in turn, use that fold as the validation dataset, another fold as the test dataset, and the remaining folds as the training dataset. Then, fit all the candidate models on the training dataset, and use these fitted models to compute the additional variance explained by features (out-of-sample variance explained) in the validation and test datasets. Finally, select the candidate model with the largest average out-of-sample variance explained in the validation sets as the best model, and report the out-of-sample variance explained in the test sets.
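
A minimal sketch of a stratified split into folds for a binary outcome (illustrative only; the package performs this split internally, controlled by fold and kfoldseed):

## assign each individual to one of 10 folds, stratified by outcome
set.seed(123)
outcome <- rbinom(100, 1, 0.3)
fold_id <- integer(length(outcome))
for (g in unique(outcome)) {
  idx <- which(outcome == g)
  fold_id[idx] <- sample(rep(seq_len(10), length.out = length(idx)))
}
table(fold_id, outcome)   # each fold has a similar case/control balance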

Function do_cumulative_htrx_step1 performs Step 1 (1)-(3) described above. Function extend_haps is used to select haplotypes in Step 1 (2) described above. Function make_cumulative_htrx is used to generate the haplotype data (by adding a new SNP to the haplotypes), expanding M haplotypes into 3M+2 haplotypes, as described in Step 1 (2)-(3).
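
As noted under featuredata, the features passed to extend_haps can simply be SNPs; below is a hedged sketch using the example data (a sketch only, behaviour may vary with your data):

## select up to 4 SNPs by forward regression on a training subset
selected <- extend_haps(HTRX::example_data_nosnp[1:500, 1:3],
                        featuredata = HTRX::example_hap1[1:500, ],
                        train = 1:250,
                        featurecap = 4, usebinary = 1, gain = TRUE)
selected   # names of the selected features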

When investigating haplotypes with interactions between at most 2 SNPs, L is suggested to be no bigger than 10. When investigating haplotypes with interactions between at most 3 SNPs, L should not be bigger than 9. If haplotypes with interactions between more than 4 SNPs are investigated, L is suggested to be 6 (which is the default value).

Value

do_cumulative_htrx returns a list containing the best model selected, and the out-of-sample variance explained in each test set.

do_cumulative_htrx_step1 returns a list of the candidate models (nmodel of them, three by default) selected by a single simulation.

extend_haps returns a character vector of the names of the selected features.

make_cumulative_htrx returns a data frame of the haplotype matrix.

References

Yang Y, Lawson DJ. HTRX: an R package for learning non-contiguous haplotypes associated with a phenotype. Bioinformatics Advances 3.1 (2023): vbad038.

Barrie, W., Yang, Y., Irving-Pease, E.K. et al. Elevated genetic risk for multiple sclerosis emerged in steppe pastoralist populations. Nature 625, 321–328 (2024).

Efron, B. "Bootstrap methods: another look at the jackknife." The Annals of Statistics 7 (1979): 1-26.

Schwarz, Gideon. "Estimating the dimension of a model." The Annals of Statistics (1978): 461-464.

McFadden, Daniel. "Conditional logit analysis of qualitative choice behavior." (1973).

Akaike, Hirotugu. "A new look at the statistical model identification." IEEE Transactions on Automatic Control 19.6 (1974): 716-723.

Tibshirani, Robert. "Regression shrinkage and selection via the lasso." Journal of the Royal Statistical Society: Series B (Methodological) 58.1 (1996): 267-288.

Examples

## use dataset "example_hap1", "example_hap2" and "example_data_nosnp"
## "example_hap1" and "example_hap2" are
## both genomes of 8 SNPs for 5,000 individuals (diploid data)
## "example_data_nosnp" is a simulated dataset
## which contains the outcome (binary), sex, age and 18 PCs

## visualise the covariates data
## we will use only the first two covariates: sex and age in the example
head(HTRX::example_data_nosnp)

## visualise the genotype data for the first genome
head(HTRX::example_hap1)

## we perform cumulative HTRX on all the 8 SNPs using 2-step cross-validation
## to compute additional variance explained by haplotypes
## If the data is haploid, please set hap2=HTRX::example_hap1
## If you want to compute total variance explained, please set gain=FALSE
## For Linux/Mac users, we recommend setting runparallel=TRUE

cumu_CV_results <- do_cumulative_htrx(HTRX::example_data_nosnp[1:500,1:3],
                                      HTRX::example_hap1[1:500,],
                                      HTRX::example_hap2[1:500,],
                                      train_proportion=0.5,sim_times=1,
                                      featurecap=10,usebinary=1,
                                      randomorder=TRUE,method="stratified",
                                      criteria="BIC",gain=TRUE,
                                      runparallel=FALSE,verbose=TRUE)

## This result would be more precise with larger sim_times and featurecap
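
## you can also run a single Step 1 simulation directly
## (a sketch mirroring the call above, with a fixed split seed)

step1_results <- do_cumulative_htrx_step1(HTRX::example_data_nosnp[1:500,1:3],
                                          HTRX::example_hap1[1:500,],
                                          HTRX::example_hap2[1:500,],
                                          train_proportion=0.5,
                                          featurecap=10,usebinary=1,
                                          randomorder=TRUE,method="stratified",
                                          criteria="BIC",splitseed=123,
                                          gain=TRUE,runparallel=FALSE)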

[Package HTRX version 1.2.4 Index]