stdPDIF {difR} | R Documentation |

Calculates standardized P-difference statistics for DIF detection.

stdPDIF(data, member, match = "score", anchor = 1:ncol(data), stdWeight = "focal")

`data` |
numeric: the data matrix (one row per subject, one column per item). |

`member` |
numeric: the vector of group membership, with zero and one entries only. See **Details**. |

`match` |
specifies the type of matching criterion. Can be either `"score"` (default), to compute the raw test score, or a vector of continuous or discrete numeric values to be used as the matching variable. See **Details**. |

`anchor` |
a vector of integer values specifying which items (all by default) are currently considered as anchor (DIF free) items. See **Details**. |

`stdWeight` |
character: the type of weights used for the standardized P-DIF statistic. Possible values are `"focal"` (default), `"reference"` and `"total"`. See **Details**. |

This command computes the standardized P-DIF statistic in the specific framework of differential item functioning (Dorans and Kulick, 1986). It forms the basic command of `difStd` and is specifically designed for that call. In addition, the standardized alpha values (Dorans, 1989) are also computed as a basis for effect size calculation.

The standardized P-DIF statistic is a weighted average of the difference between the proportions of success in the reference group and in the focal group. The average is computed across the test score strata. The weights can be of three kinds (Dorans, 1989; Dorans and Kulick, 1986) and are specified through the `stdWeight` argument: the proportion of focal group examinees within each stratum (`stdWeight = "focal"`), the proportion of reference group examinees within each stratum (`stdWeight = "reference"`), and the proportion of examinees from both groups within each stratum (`stdWeight = "total"`). By default, the weights are built from the focal group.
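As an illustration of this weighted average, here is a minimal base-R sketch for a single item with focal-group weights. This is not difR's implementation; the function name `std_pdif` is purely illustrative:

```r
# Minimal sketch of the standardized P-DIF statistic for one item,
# using focal-group weights (the stdWeight = "focal" convention).
# resp:   0/1 item responses; score: matching test score;
# member: 0 = reference group, 1 = focal group.
std_pdif <- function(resp, score, member) {
  strata <- sort(unique(score))
  num <- 0
  den <- 0
  for (s in strata) {
    ref <- resp[score == s & member == 0]
    foc <- resp[score == s & member == 1]
    if (length(ref) > 0 && length(foc) > 0) {
      w   <- length(foc)                       # focal-group count in this stratum
      num <- num + w * (mean(foc) - mean(ref))
      den <- den + w
    }
  }
  num / den  # weighted average of focal-minus-reference success rates
}
```

Replacing the weight `length(foc)` by `length(ref)` or by `length(ref) + length(foc)` mimics the `"reference"` and `"total"` weighting schemes, respectively.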

Similarly to the 'alpha' estimates of the common odds ratio for the Mantel-Haenszel method (see `mantelHaenszel`), the *standardized alpha values* can be computed as rough measures of effect size, after a transformation to the Delta scale (Holland, 1985). See Dorans (1989, p. 228, Eq. 15) for further details.
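The Delta-scale transformation of an alpha value is commonly written as Delta = -2.35 ln(alpha); a quick sketch (the function name `delta_scale` is illustrative, not part of difR):

```r
# Illustrative Delta-scale transformation of an odds-ratio-type alpha value:
# Delta = -2.35 * log(alpha), so alpha = 1 (no DIF) maps to Delta = 0.
delta_scale <- function(alpha) -2.35 * log(alpha)

delta_scale(1)  # alpha of 1 corresponds to a Delta of 0
```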

The data are passed through the `data` argument, with one row per subject and one column per item. Missing values are allowed but must be coded as `NA` values; they are discarded from the sum-score computation.
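For instance, a missing response simply drops out of the raw score:

```r
# Illustrative: NA responses are dropped when forming the sum score,
# as with base R's na.rm option.
resp <- c(1, NA, 0, 1)
sum(resp, na.rm = TRUE)  # raw score of 2
```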

The vector of group membership, specified through the `member` argument, must hold only zeros and ones, a value of zero corresponding to the reference group and a value of one to the focal group.

The matching criterion can be either the test score or any other continuous or discrete variable passed to the `stdPDIF` function. This is specified by the `match` argument. By default, it takes the value `"score"` and the test score (i.e. the raw score) is computed. The second option is to assign to `match` a vector of continuous or discrete numeric values, which acts as the matching criterion. Note that for consistency this vector should not belong to the `data` matrix.

The `anchor` option sets the items that are considered as anchor items for computing the standardized P-DIF statistics. Items other than the anchor items and the tested item are discarded. `anchor` must hold integer values specifying the column numbers of the corresponding anchor items. It is mainly designed to perform item purification.

A list with three components:

`resStd` |
the vector of the standardized P-DIF statistics. |

`resAlpha` |
the vector of standardized alpha values. |

`match` |
a character string, either `"score"` or `"matching variable"`, depending on the value of the `match` argument. |

Sebastien Beland

Collectif pour le Developpement et les Applications en Mesure et Evaluation (Cdame)

Universite du Quebec a Montreal

sebastien.beland.1@hotmail.com, http://www.cdame.uqam.ca/

David Magis

Department of Psychology, University of Liege

Research Group of Quantitative Psychology and Individual Differences, KU Leuven

David.Magis@uliege.be, http://ppw.kuleuven.be/okp/home/

Gilles Raiche

Collectif pour le Developpement et les Applications en Mesure et Evaluation (Cdame)

Universite du Quebec a Montreal

raiche.gilles@uqam.ca, http://www.cdame.uqam.ca/

Dorans, N. J. (1989). Two new approaches to assessing differential item functioning. Standardization and the Mantel-Haenszel method. *Applied Measurement in
Education, 2*, 217-233. doi: 10.1207/s15324818ame0203_3

Dorans, N. J. and Kulick, E. (1986). Demonstrating the utility of the standardization approach to assessing unexpected differential item performance on the Scholastic Aptitude Test. *Journal of Educational Measurement, 23*, 355-368. doi: 10.1111/j.1745-3984.1986.tb00255.x

Holland, P. W. (1985, October). *On the study of differential item performance without IRT*. Paper presented at the meeting of
Military Testing Association, San Diego (CA).

Magis, D., Beland, S., Tuerlinckx, F. and De Boeck, P. (2010). A general framework and an R package for the detection
of dichotomous differential item functioning. *Behavior Research Methods, 42*, 847-862. doi: 10.3758/BRM.42.3.847

`difStd`, `dichoDif`, `mantelHaenszel`

```r
## Not run:

# Loading of the verbal data
data(verbal)

# All items as anchor items
stdPDIF(verbal[, 1:24], verbal[, 26])

# All items as anchor items, reference group weights
stdPDIF(verbal[, 1:24], verbal[, 26], stdWeight = "reference")

# All items as anchor items, both groups' weights
stdPDIF(verbal[, 1:24], verbal[, 26], stdWeight = "total")

# Removing item 6 from the set of anchor items
stdPDIF(verbal[, 1:24], verbal[, 26], anchor = c(1:5, 7:24))

## End(Not run)
```

[Package *difR* version 5.1 Index]