lmranks {csranks}                                              R Documentation

Regressions Involving Ranks

Description

Estimation and inference for regressions involving ranks, i.e. regressions in which the dependent and/or the independent variable has been transformed into ranks before running the regression.

Usage

lmranks(
  formula,
  data,
  subset,
  weights,
  na.action = stats::na.fail,
  method = "qr",
  model = TRUE,
  x = FALSE,
  qr = TRUE,
  y = FALSE,
  singular.ok = TRUE,
  contrasts = NULL,
  offset = offset,
  omega = 1,
  ...
)

## S3 method for class 'lmranks'
plot(x, which = 1, ...)

## S3 method for class 'lmranks'
predict(object, newdata, ...)

## S3 method for class 'lmranks'
summary(object, correlation = FALSE, symbolic.cor = FALSE, ...)

## S3 method for class 'lmranks'
vcov(object, complete = TRUE, ...)

Arguments

formula

An object of class "formula": a symbolic description of the model to be fitted, exactly like the formula for a linear model (lm), except that variables to be ranked can be indicated by r(). See Details and Examples below.

data

an optional data frame, list or environment (or object coercible by as.data.frame to a data frame) containing the variables in the model. If not found in data, the variables are taken from environment(formula), typically the environment from which lmranks is called.

subset

currently not supported.

weights

currently not supported.

na.action

currently not supported. The user is expected to handle NA values prior to calling this function.

method

the method to be used; for fitting, currently only method = "qr" is supported; method = "model.frame" returns the model frame (the same as with model = TRUE, see below).

model, y, qr

logicals. If TRUE the corresponding components of the fit (the model frame, the response, the QR decomposition) are returned.

x
  • For lmranks: Logical. Should model matrix be returned?

  • For plot method: An lmranks object.

singular.ok

logical. If FALSE (the default in S but not in R) a singular fit is an error.

contrasts

an optional list. See the contrasts.arg of model.matrix.default.

offset

this can be used to specify an a priori known component to be included in the linear predictor during fitting. This should be NULL or a numeric vector or matrix of extents matching those of the response. One or more offset terms can be included in the formula instead or as well, and if more than one are specified their sum is used. See model.offset.

omega

real number in the interval [0,1] defining how ties are handled (if there are any). The value of omega is passed to frank for computation of ranks. The default is 1 so that the rank of a realized value is defined as the empirical cdf evaluated at that realized value. See Details below.
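
For illustration, a minimal sketch of how omega affects the ranking of tied values via frank (the data below are placeholders, not taken from the package documentation):

z <- c(1, 2, 2, 3)                       # a small vector with a tie
frank(z, increasing = TRUE, omega = 1)   # default: rank equals the empirical cdf at each value
frank(z, increasing = TRUE, omega = 0)   # the other extreme of tie handling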

...

For lmranks(): additional arguments to be passed to the low-level regression fitting functions (see below).

which

As in plot.lm. Currently only plot no. 1 (residuals against fitted values) is available.

object

An lmranks object.

newdata

An optional data frame in which to look for variables with which to predict. If omitted, the fitted values are used.

correlation

logical; if TRUE, the correlation matrix of the estimated parameters is returned and printed.

symbolic.cor

logical. If TRUE, print the correlations in a symbolic form (see symnum) rather than as numbers.

complete

logical indicating if the full variance-covariance matrix should be returned also in case of an over-determined system where some coefficients are undefined and coef(.) contains NAs correspondingly. When complete = TRUE, vcov() is compatible with coef() also in this singular case.

Details

This function performs estimation and inference for regressions involving ranks. Suppose there is a dependent variable Y_i and independent variables X_i and W_i, where X_i is a scalar and W_i a vector (possibly including a constant). Instead of running a linear regression of Y_i on X_i and W_i, we want to first transform Y_i and/or X_i into ranks. Denote by R_i^Y the rank of Y_i and R_i^X the rank of X_i. Then, a rank-rank regression,

R_i^Y = \rho R_i^X + W_i'\beta + \varepsilon_i,

is run using the formula r(Y)~r(X)+W. Similarly, a regression of the raw dependent variable on the ranked regressor,

Y_i = \rho R_i^X + W_i'\beta + \varepsilon_i,

can be implemented by the formula Y~r(X)+W, and a regression of the ranked dependent variable on the raw regressors,

R_i^Y = W_i'\beta + \varepsilon_i,

can be implemented by the formula r(Y)~W.
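
The three specifications above can be sketched as follows (with simulated placeholder variables; W here is a single covariate for simplicity):

Y <- rnorm(100); X <- rnorm(100); W <- rnorm(100)
lmranks(r(Y) ~ r(X) + W)   # rank-rank regression
lmranks(Y ~ r(X) + W)      # raw response on the ranked regressor
lmranks(r(Y) ~ W)          # ranked response on raw regressors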

The function works, in many ways, just like lm for linear regressions. Apart from some minor details, there are two important differences: first, in lmranks, the mark r() can be used in formulas to indicate variables to be ranked before running the regression and, second, subsequent use of summary produces a summary table with the correct standard errors, t-values and p-values (while those of lm are not correct for regressions involving ranks). See Chetverikov and Wilhelm (2023) for more details.

Many other aspects of the function are similar to lm. For instance, . in a formula means 'all columns not otherwise in the formula', just as in lm. An intercept is included by default. In a model specified as r(Y)~r(X)+., both r(X) and X will be included in the model, just as they would be in lm with, say, log() in place of r(). One can exclude X with a -, i.e. r(Y)~r(X)+.-X. See formula for more about model specification.

The r() is a private alias for frank. The increasing argument, provided at the level of an individual regressor, specifies whether the ranks should increase or decrease as the regressor values increase. The omega argument of frank, provided at the level of the lmranks call, specifies how ties in variables are handled. For more details, see frank. By default, increasing is set to TRUE and omega is set equal to 1, which means that r() computes ranks by transforming a variable through its empirical cdf.
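
For instance, a brief sketch of passing these options (continuing the simulated variables above; increasing is given inside r() for the individual regressor, omega once for the whole call):

lmranks(r(Y) ~ r(X, increasing = FALSE) + W, omega = 0.5)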

Many functions defined for lm also work correctly with lmranks. These include coef, model.frame, model.matrix, resid, update and others. On the other hand, some would return incorrect results if they treated lmranks output in the same way as lm's. The central contribution of this package is the implementation of vcov, summary and confint using the correct asymptotic theory for regressions involving ranks.
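
For example (a minimal sketch continuing the simulated variables above):

fit <- lmranks(r(Y) ~ r(X) + W)
coef(fit)      # works as for lm
vcov(fit)      # covariance matrix using the rank-correct asymptotic theory
confint(fit)   # confidence intervals based on that covariance matrix
summary(fit)   # correct standard errors, t-values and p-values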

See the lm documentation for more.

Value

An object of class lmranks, inheriting (as much as possible) from class lm.

Additionally, it has an omega entry, corresponding to the omega argument, a ranked_response logical entry, and a rank_terms_indices entry, an integer vector with the indices of those entries of the terms.labels attribute of terms(formula) that correspond to ranked regressors.
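
These entries can be inspected directly; a brief sketch (the particular formula is only illustrative):

fit <- lmranks(r(mpg) ~ r(hp) + cyl, data = mtcars)
fit$omega               # value of the omega argument (1 by default)
fit$ranked_response     # TRUE, since the response is wrapped in r()
fit$rank_terms_indices  # indices of ranked regressors among terms.labels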

Rank-rank regressions with clusters

Sometimes, the data is divided into clusters (groups) and one is interested in running rank-rank regressions separately within each cluster, where the ranks are not computed within each cluster, but using all observations pooled across all clusters. Specifically, let G_i=1,\ldots,n_G denote a variable that indicates the cluster to which the i-th observation belongs. Then, the regression model of interest is

R_i^Y = \sum_{g=1}^{n_G} 1\{G_i=g\}(\rho_g R_i^X + W_i'\beta_g) + \varepsilon_i,

where \rho_g and \beta_g are now cluster-specific coefficients, but the ranks R_i^Y and R_i^X are computed as ranks among all observations Y_i and X_i, respectively. That means the rank of an observation is not computed among the other observations in the same cluster, but rather among all available observations across all clusters.

This type of regression is implemented in the lmranks function using interaction notation: r(Y)~(r(X)+W):G. Here, the variable G must be a factor.

Since the theory for clustered regression mixing grouped and ungrouped (in)dependent variables is not yet developed, such a model will raise an error. Also, by default the function includes a cluster-specific intercept, i.e. r(Y)~(r(X)+W):G is internally interpreted as r(Y)~(r(X)+W):G+G-1.

The contrasts of G must be of the contr.treatment kind, which is the default in R.
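
A brief sketch of this notation, reusing the simulated Y and X from the Details section (the explicit formula in the comment reflects the default interpretation described above):

G <- factor(rep(c("a", "b"), each = 50))   # cluster indicator, must be a factor
lmranks(r(Y) ~ r(X):G)                     # interpreted as r(Y) ~ r(X):G + G - 1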

Warning

As a consequence of the order in which model.frame applies operations, subset and na.action would be applied after the evaluation of r(). That would drop some rank values from the final model frame, and the returned coefficients and standard errors could then no longer be correct. The user must handle NA values and filter the data on their own prior to calling lmranks.

Wrapping r() in other functions (like log(r(x))) will prevent the mark from being recognized correctly (because it will not be caught by terms(formula, specials = "r")). The ranks will be calculated correctly, but their transformation will later be treated in lm as a regular regressor. This means that the corresponding regression coefficient will be calculated correctly, but the standard errors, statistics etc. will not.
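
A sketch of this pitfall (not recommended in practice; shown only to illustrate the warning):

Y <- rnorm(100); X <- rnorm(100)
lmranks(r(Y) ~ log(r(X)))   # rank of X is computed, but log(r(X)) is then treated
                            # as an ordinary regressor: coefficient fine, standard errors not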

r, .r_predict and .r_cache are special expressions used internally to interpret the r() mark correctly. Do not use them in the formula.

A number of methods defined for lm do not yield theoretically correct results when applied to lmranks objects; errors or warnings are raised in those instances. Also, the df.residual component is set to NA, since the notion of degrees of freedom for rank models is not theoretically established (as of the 1.2 release).
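
For instance (continuing the sketch above):

fit <- lmranks(r(Y) ~ r(X))
fit$df.residual   # NA: degrees of freedom are not defined for rank models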

References

Chetverikov and Wilhelm (2023), "Inference for Rank-Rank Regressions". arXiv preprint arXiv:2310.15512

See Also

lm for details about other arguments; frank.

Generic functions coef, effects, residuals, fitted, model.frame, model.matrix, update.

Examples

# rank-rank regression:
X <- rnorm(500)
Y <- X + rnorm(500)
rrfit <- lmranks(r(Y) ~ r(X))
summary(rrfit)

# naive version of the rank-rank regression:
RY <- frank(Y, increasing=TRUE, omega=1)
RX <- frank(X, increasing=TRUE, omega=1)
fit <- lm(RY ~ RX)
summary(fit)
# the coefficient estimates are the same as in the lmranks function, but
# the standard errors, t-values, p-values are incorrect

# support of `data` argument:
data(mtcars)
lmranks(r(mpg) ~ r(hp) + ., data = mtcars)
# Same as above, but use the `hp` variable only through its rank
lmranks(r(mpg) ~ r(hp) + . - hp, data = mtcars)

# rank-rank regression with clusters:
G <- factor(rep(LETTERS[1:4], each=nrow(mtcars) / 4))
lmr <- lmranks(r(mpg) ~ r(hp):G, data = mtcars)
summary(lmr)
model.matrix(lmr)
# Include all columns of mtcars as usual covariates:
lmranks(r(mpg) ~ (r(hp) + .):G, data = mtcars)

