Example of SIBTEST nonlinear regression correction applied to the logistic regression method of DIF detection (DeMars 2014, personal communication)

I am truly indebted to Christine E. DeMars (2009), who took the time to spell out the nasty formulas proposed by Jiang & Stout (1998). Using the regressed true score is preferable to using the mere observed score because, with less-than-perfect test reliability, the latent scores of the groups would not be perfectly matched even when their observed scores were. In that case the Type I error rate is inflated, with spurious DIF detected due to imperfect latent-score matching between the groups examined. Below is the procedure for applying the SIBTEST nonlinear correction, which is needed because the observed and true scores are not expected to be linearly related in the presence of guessing. Specifically, this is the example as applied to the Wordsum test (10 items, Words A through J). Here x = 10 denotes the level of ability indexed by total score.

The equations referenced below are from Jiang & Stout (1998).

First, let’s say you are studying item 2 for DIF. As a preliminary step, compute equation 12 (p. 295). Note that the items in the matching score used for equation 12 exclude item 2 (the item you are studying). Now you can find the breakpoint Sg for the regression line for each group for item 2, using equation 20 (p. 300).
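As a sketch of this setup (not the exact Equations 12 and 20, whose full forms are in the paper): the matching score excludes the studied item, and the regression correction shrinks it toward the group mean by an estimated reliability. The response data, the Kelley-style shrinkage form, and the reliability value are assumptions for illustration only.

```python
import numpy as np

# Hypothetical 0/1 response matrix: rows = examinees, columns = 10 Wordsum items.
rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(500, 10))

studied = 1  # 0-based index of item 2, the item being studied for DIF

# Matching score X: total over all items EXCEPT the studied item.
X = responses.sum(axis=1) - responses[:, studied]

def regressed_true_score(X, reliability):
    """Kelley-style regression toward the group mean -- an assumed stand-in
    for the exact Equation 12 correction (Jiang & Stout 1998, p. 295)."""
    return X.mean() + reliability * (X - X.mean())

tau_hat = regressed_true_score(X, reliability=0.80)
```

Shrinking toward the mean in this way leaves the group mean unchanged while pulling extreme observed scores inward, which is the point of the correction: it compensates for measurement error before groups are matched.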

Now, as another preliminary step, you have to use Equation 19 (p. 298). Equation 19 is really nasty. I’ll illustrate it for x = 10, although obviously you have to compute it for each x from 0 to J (J = one less than the number of items, because the matching score excludes item 2). Start with item 1 and calculate X_1 (the total score excluding item 1 as well as item 2, the studied item). For the first term, multiply the proportion of examinees with X_1 = 10 who got item 1 right by the proportion of examinees with X = 10 who got item 1 wrong (remember X excludes only item 2, whereas X_1 excludes both items 1 and 2). For the second term, multiply the proportion of examinees with X_1 = 9 who got item 1 right by the proportion of examinees with X = 10 who got item 1 right. Add these two terms. Then calculate X_3 and do the same multiplication and summing for item 3, and so on. Finally, sum all of these values to obtain Z(10). Do the same for Z(0), Z(1), etc. Also, although Equation 19 does not carry the subscript g, when you actually go to use these values in Equation 21 you will need them by group.
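The verbal recipe above can be translated directly into code. This is a sketch of that recipe, not a transcription of Equation 19 itself; the response matrix is hypothetical, and conditional proportions at empty score levels are set to zero by assumption.

```python
import numpy as np

def z_of_x(responses, studied, x):
    """Z(x) following the verbal recipe for Equation 19 (Jiang & Stout 1998).

    responses -- 0/1 array, rows = examinees, columns = items
    studied   -- column index of the studied item (excluded from all matching scores)
    x         -- matching-score level at which Z is evaluated
    """
    n_items = responses.shape[1]
    # X excludes only the studied item.
    X = responses.sum(axis=1) - responses[:, studied]

    def prop(mask, vals):
        # Proportion among examinees satisfying mask; 0 if none do (assumption).
        return vals[mask].mean() if mask.any() else 0.0

    total = 0.0
    for k in range(n_items):
        if k == studied:
            continue  # skip the studied item itself
        item = responses[:, k]
        X_k = X - item  # excludes both item k and the studied item
        p_right_xk   = prop(X_k == x, item)        # right, given X_k = x
        p_wrong_x    = prop(X == x, 1 - item)      # wrong, given X = x
        p_right_xkm1 = prop(X_k == x - 1, item)    # right, given X_k = x - 1
        p_right_x    = prop(X == x, item)          # right, given X = x
        total += p_right_xk * p_wrong_x + p_right_xkm1 * p_right_x
    return total
```

Looping `z_of_x` over x = 0, 1, …, J (and running it separately within each group) yields the Zg(x) values needed for Equation 21.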

So, after obtaining all of the Zg(x)’s for item 2, do a weighted least squares regression (Equation 21) for x’s <= Sg (the breakpoint for item 2 in group g), then another WLS regression for x’s > the breakpoint. For the first line, exclude x’s < Jc (where Jc is as in Equation 20, p. 300) from the regression. Use one of these two regression equations to estimate Vg(x) for every x except x = cutpoint and x = cutpoint + 1. Use linear interpolation between Vg(cutpoint - 1) and Vg(cutpoint + 2) to adjust these two points (last few lines on p. 301 and first lines on p. 302).
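The two-piece fit and the interpolation step might be sketched as follows. This is an illustration, not the paper’s exact Equation 21: using examinee counts as WLS weights is an assumption, and the code assumes the score levels are the consecutive integers 0..J so that a level doubles as an array index.

```python
import numpy as np

def piecewise_wls_v(x_levels, z_values, weights, breakpoint, jc):
    """Two-piece WLS fit of Z(x) with the cutpoint interpolation adjustment.

    x_levels, z_values, weights -- Zg(x) at each score level, with WLS weights
    breakpoint -- Sg from Equation 20; jc -- lower cutoff Jc from Equation 20
    """
    x = np.asarray(x_levels, float)
    z = np.asarray(z_values, float)
    w = np.asarray(weights, float)

    def wls_line(mask):
        # Weighted least squares fit z ~ a + b*x on the masked points.
        sw = np.sqrt(w[mask])
        A = np.column_stack([np.ones(mask.sum()), x[mask]]) * sw[:, None]
        coef, *_ = np.linalg.lstsq(A, z[mask] * sw, rcond=None)
        return coef  # (intercept, slope)

    lo = (x >= jc) & (x <= breakpoint)  # first line: Jc <= x <= Sg
    hi = x > breakpoint                 # second line: x > Sg
    a1, b1 = wls_line(lo)
    a2, b2 = wls_line(hi)

    # Predict V(x) from whichever line covers each x.
    v = np.where(x <= breakpoint, a1 + b1 * x, a2 + b2 * x)

    # Replace V at the cutpoint and cutpoint + 1 by linear interpolation
    # between V(cutpoint - 1) and V(cutpoint + 2), per pp. 301-302.
    c = int(breakpoint)
    if c - 1 >= 0 and c + 2 < len(x):
        v0, v3 = v[c - 1], v[c + 2]
        v[c] = v0 + (v3 - v0) / 3.0
        v[c + 1] = v0 + 2.0 * (v3 - v0) / 3.0
    return v
```

The interpolation smooths over the join of the two lines, where the fitted regression is least trustworthy.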
