Fisking the Haladyna Rules #26: Avoid All of the above

[Each day in October, I analyze one of the 31 item writing rules from Haladyna, Downing and Rodriguez (2002), the super-dominant list of item authoring guidelines.]

Writing the choices: Avoid All-of-the-above.

This rule and Rule 25 (None-of-the-above should be used carefully) are the two rules on the Haladyna lists most opposed by their own cited sources, though the explicit opposition to this rule is half that of Rule 25. To be fair, 70% of their 2002 sources support this rule, even though Haladyna et al.’s offered reasoning seems a bit weak.

Their analysis of all of the above cites research finding that including this answer option makes items less difficult, while their analysis of Rule 25 (none of the above) expresses concern that it makes items more difficult. Were they simply reporting on the literature, these would just be different findings for different phrases. But because they offer these findings as the basis for their actual recommendations, guidelines, or rules, it is not even clear why a phrase’s impact on item difficulty automatically makes it objectionable.

The basis for this rule seems to be that when all of the above is included as an answer option, it is far too likely to be the correct answer option (i.e., the key). That is not a reason to avoid it, but rather a reason to use it as a distractor more often. Test takers, teachers, and test preparation tutors would quickly learn that it is no longer a dead giveaway, wisdom that I heard decades ago.

Their 2004 book suggests two ways to avoid all of the above. First, “ensure that there is one and only one correct answer option.” Yeah, duh. That might limit the nature of the content that could be included, so I don’t favor it. Their other advice is to turn the simple multiple-choice item into a multiple true-false (MTF) item. That is a much, much better idea. Provided that the testing platform allows for MTF items, they should probably be used more often. Yes, they can take more time than simple multiple-choice items, but they can delve deeper into various facets of an idea. Anything that helps selected response tests to assess more deeply is a very good thing.
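To make the conversion concrete, here is a made-up example of my own, not one from their book. A simple multiple-choice item like “Which of these is a primary color? A) Red B) Blue C) Yellow D) All of the above” becomes an MTF item by presenting each option as its own statement to be judged: “Red is a primary color (T/F). Blue is a primary color (T/F). Yellow is a primary color (T/F). Green is a primary color (T/F).” Each statement is scored on its own, so there is no single key for all of the above to give away, and the item can probe more facets of the idea.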

So, what do I think of this rule? I think greater use of MTF items would be a positive change. Otherwise, I would rather all of the above be used far more often as a distractor than it be abandoned for use as a key.

[Haladyna et al.’s exercise started with a pair of 1989 articles, and continued in a 2004 book and a 2013 book. But the 2002 list is the easiest and cheapest to read (see the linked article, which is freely downloadable), and it is the only version that includes a well-formatted one-page version of the rules. Therefore, it is the central version that I am taking apart, rule by rule, pointing out how horrendously bad this list is and how little it helps actual item development. If we are going to have good standardized tests, the items need to be better, and this list’s place as the dominant item-writing advice only makes that far less likely to happen.

Haladyna Lists and Explanations

  • Haladyna, T. M. (2004). Developing and validating multiple-choice test items. Routledge.

  • Haladyna, T. M., & Rodriguez, M. C. (2013). Developing and validating test items. Routledge.

  • Haladyna, T. M., Downing, S. M., & Rodriguez, M. C. (2002). A review of multiple-choice item-writing guidelines for classroom assessment. Applied Measurement in Education, 15(3), 309-333.

  • Haladyna, T. M., & Downing, S. M. (1989). A taxonomy of multiple-choice item-writing rules. Applied Measurement in Education, 2(1), 37-50.

  • Haladyna, T. M., & Downing, S. M. (1989). Validity of a taxonomy of multiple-choice item-writing rules. Applied Measurement in Education, 2(1), 51-78.

]