Fisking the Haladyna Rules #20: Vary location of right answer

[Each day in October, I analyze one of the 31 item writing rules from Haladyna, Downing and Rodriguez (2002), the super-dominant list of item authoring guidelines.]

Writing the choices: Vary the location of the right answer according to the number of choices.

Yes. Totally. I mean, I think this could be written more clearly. I don’t really understand what “according to the number of choices” adds to this rule, but sure. Fine. I think that replacing that phrase with “randomly” might be better.

But “randomly” isn’t actually quite right. In our work, we have found that putting the correct answer option earlier in the list might lower the cognitive complexity of an item. If a test taker finds a truly good candidate early, they might not have to work all the way through the other answer options; they might be able to rule them out more quickly as inferior to that earlier option. The hunt for the right answer might be cognitively more complex if the test taker has to work harder, eliminating more answer options before finding a good one to go with.

Of course, if the correct answer option is always last or always later, that will reward guessing strategies, which is bad. The location of the correct answer option should be distributed equally across an entire form, precisely to fight that kind of construct-irrelevant strategy. We do not expect anyone to do the careful work of picking just the right items whose cognitive complexity should be raised this way, though we might dream.
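To make “distributed equally across an entire form” concrete, here is a minimal Python sketch of balanced key placement. This is my illustration, not a procedure from Haladyna et al., and the function name and parameters are hypothetical. It builds a balanced pool of key positions and shuffles it, rather than randomizing each item independently, which on short forms can drift away from an even split.

    import random

    def assign_key_positions(num_items, num_choices, seed=None):
        """Assign a key (correct-answer) position to each item so that every
        position appears as close to equally often as possible across the form."""
        rng = random.Random(seed)
        # Cycle through positions 0..num_choices-1 to build a balanced pool...
        positions = [i % num_choices for i in range(num_items)]
        # ...then shuffle so the balance carries no predictable pattern.
        rng.shuffle(positions)
        return positions

    # On a 40-item, 4-option form, each position appears exactly 10 times.
    keys = assign_key_positions(num_items=40, num_choices=4, seed=1)
    print([keys.count(p) for p in range(4)])   # -> [10, 10, 10, 10]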

You see, even this seemingly simple rule might not be so simple. But Haladyna and colleagues clearly do not dive deeply enough into the contents of items, or the cognition that items elicit, to recognize that. Instead, they look at this most quantifiable and testable of ideas (i.e., how many distractors?) and revel in how easily it can be quantified.

[Haladyna et al.’s exercise started with a pair of 1989 articles and continued in a 2004 book and a 2013 book. But the 2002 list is the easiest and cheapest to read (see the linked article, which is freely downloadable), and it is the only version that includes a well-formatted one-page version of the rules. Therefore, it is the central version that I am taking apart, rule by rule, pointing out how horrendously bad this list is and how little it helps actual item development. If we are going to have good standardized tests, the items need to be better, and this list’s place as the dominant item writing advice only makes that far less likely to happen.

Haladyna Lists and Explanations

  • Haladyna, T. M. (2004). Developing and validating multiple-choice test items. Routledge.

  • Haladyna, T. M., & Rodriguez, M. C. (2013). Developing and validating test items. Routledge.

  • Haladyna, T. M., Downing, S. M., & Rodriguez, M. C. (2002). A review of multiple-choice item-writing guidelines for classroom assessment. Applied Measurement in Education, 15(3), 309-334.

  • Haladyna, T. M., & Downing, S. M. (1989). A taxonomy of multiple-choice item-writing rules. Applied Measurement in Education, 2(1), 37-50.

  • Haladyna, T. M., & Downing, S. M. (1989). Validity of a taxonomy of multiple-choice item-writing rules. Applied Measurement in Education, 2(1), 51-78.

]