Fisking the Haladyna Rules #22: Choices not overlapping

[Each day in October, I analyze one of the 31 item-writing rules from Haladyna, Downing and Rodriguez (2002), the super-dominant list of item authoring guidelines.]

Writing the choices: Keep choices independent; choices should not be overlapping.

Fewer than one-third of their 2002 sources mention this rule at all, and they have never cited an empirical basis for it. It seems thin.

Their reasoning seems to be based on cluing and multiple correct answers, but there are already rules about cluing and about ensuring that each item has just one correct answer (i.e., is not multi-keyed). So, what does this rule add? Moreover, are those really inevitable results of overlapping answer options?

Any item aimed at identifying a set or range (e.g., which characters…, what are the symptoms of…, for what values of x…) would be made far easier, perhaps too easy, if those sets/ranges could not overlap. I can imagine an argument that these kinda turn into complex multiple choice (type K) items, and that concern was already addressed in Rule 9. So, that might be a better place to address it. But Haladyna et al. do not mention that concern in either article or either book. And overlapping ranges are simply not amenable to multiple-select or multiple true-false item types. So, this issue doesn’t seem to create a need for this rule.

I simply cannot follow the logic suggesting that overlapping answer options would clue the correct answer option. If the answer options are:

a. Something

b. Some subset of A

c. Something else

d. Something else else

Does that suggest that the answer must be b? Must be a? Cannot be a or b? There is a general idea that answer options should all be alike in some way, should all differ in that way, or (when there are four answer options) should come in two pairs in that way. The idea is that no single answer option should just jump out at test takers. But Haladyna et al. do not include this conventional wisdom in their rules. To be fair, I’ve never been quite sure about this wisdom. But would this set of answer options clue anything?

a. Something

b. Subset of A

c. Something else

d. Subset of B

I think not.

Which leaves the question of multi-keyed items. But we already know that multi-keyed items are bad (see Rule 19). Is there something wrong with overlapping answer options if they are not multi-keyed? I keep looking, and I cannot find anything other than obscurity. That is, complex multiple choice (type K) items can be needlessly confusing, so try to avoid that. But there are also times, particularly with math items, when attention to precision is part of the targeted cognition (e.g., asking for which values of x an inequality holds, with answer options that are overlapping intervals of which only the exactly correct one is the key). Precision in thinking and in communication is valuable in every content area, but math focuses on it more than most others. Should there really be a ban on items that lean into this skill?

I would note that this is not one of those rules that says “avoid.” One might interpret such rules as something less strict than complete bans. This rule, however, does not leave even that arguable wiggle room.

When this rule is not an actual obstacle to getting at important content, it is merely redundant. At best, it is useless.

[Haladyna et al.’s exercise started with a pair of 1989 articles and continued in a 2004 book and a 2013 book. But the 2002 list is the easiest and cheapest to read (see the linked article, which is freely downloadable), and it is the only version that includes a well-formatted one-page version of the rules. Therefore, it is the central version that I am taking apart, rule by rule, pointing out how horrendously bad this list is and how little it helps actual item development. If we are going to have good standardized tests, the items need to be better, and this list’s place as the dominant item-writing advice only makes that far less likely to happen.

Haladyna Lists and Explanations

  • Haladyna, T. M. (2004). Developing and validating multiple-choice test items. Routledge.

  • Haladyna, T. M., & Rodriguez, M. C. (2013). Developing and validating test items. Routledge.

  • Haladyna, T. M., & Downing, S. M. (1989). A taxonomy of multiple-choice item-writing rules. Applied Measurement in Education, 2(1), 37-50.

  • Haladyna, T. M., & Downing, S. M. (1989). Validity of a taxonomy of multiple-choice item-writing rules. Applied Measurement in Education, 2(1), 51-78.

  • Haladyna, T. M., Downing, S. M., & Rodriguez, M. C. (2002). A review of multiple-choice item-writing guidelines for classroom assessment. Applied Measurement in Education, 15(3), 309-333.

]