Fisking the Haladyna Rules #9: Use best item format

[Each day in October, I analyze one of the 31 item writing rules from Haladyna, Downing and Rodriguez (2002), the super-dominant list of item authoring guidelines.]

Formatting concerns: Use the question, completion, and best answer versions of the conventional MC, the alternate choice, true-false (TF), multiple true-false (MTF), matching, and the context-dependent item and item set formats, but AVOID the complex MC (Type K) format.

Though Haladyna et al. put this in the short formatting concerns section of their list, this rule is not about formatting. This rule is about the very structure of the item. True-false items are quite different from a classic three- or four-option multiple choice item. A matching item (i.e., matching each of 3-8 choices in column A with the appropriate option(s) in column B) is entirely different from either of the others. This is not merely “formatting”; these are all item types.

Context-dependent items and item sets are not merely a matter of formatting, either. They are items linked to a common stimulus or common topic. But they can each be of any item type.

So, this rule says that it is ok to use different item types? Oh, OK. It is ok to have item sets? Oh, OK.

What is this rule really saying? All it really says is do not use complex MC items. Those are the ones that ask a question, list some possible answers, and then give a list of combinations of answers to select from. For example:

Which of these rules are actually decent rules?

I. Keep vocabulary simple for the group of students being tested.

II. Avoid trick items.

III. Minimize the amount of reading in each item.

IV. Place choices in logical or numerical order.

a) I only

b) I and III only

c) II and IV only

d) II, III and IV only

e) I, II, III and IV

Yes, we grew up with this item type. Yes, this item type is needlessly confusing. But the rule should be something like “Replace complex MC (Type K) items with multiple true-false or multiple-select items.” Unfortunately, 80% of the rule is about other things, and the part of their rule that starts to get at this is buried at the end. Moreover, the rule itself does not say what to do about this problem, whereas our offered replacement is direct and helpful.

[Haladyna et al.’s exercise started with a pair of 1989 articles, and continued in a 2004 book and a 2013 book. But the 2002 list is the easiest and cheapest to read (see the linked article, which is freely downloadable) and it is the only version that includes a well-formatted one-page version of the rules. Therefore, it is the central version that I am taking apart, rule by rule, pointing out how horrendously bad this list is and how little it helps actual item development. If we are going to have good standardized tests, the items need to be better, and this list’s place as the dominant item writing advice only makes that far less likely to happen.

Haladyna Lists and Explanations

  • Haladyna, T. M. (2004). Developing and validating multiple-choice test items. Routledge.

  • Haladyna, T. M., & Rodriguez, M. C. (2013). Developing and validating test items. Routledge.

  • Haladyna, T. M., Downing, S. M., & Rodriguez, M. C. (2002). A review of multiple-choice item-writing guidelines for classroom assessment. Applied Measurement in Education, 15(3), 309-333.

  • Haladyna, T. M., & Downing, S. M. (1989). Taxonomy of multiple-choice item-writing rules. Applied Measurement in Education, 2(1), 37-50.

  • Haladyna, T. M., & Downing, S. M. (1989). Validity of a taxonomy of multiple-choice item-writing rules. Applied Measurement in Education, 2(1), 51-78.

]