Fisking the Haladyna Rules #2: Important, not trivial content

[Each day in October, I analyze one of the 31 item writing rules from Haladyna, Downing and Rodriguez (2002), the super-dominant list of item authoring guidelines.]

Base each item on important content to learn; avoid trivial content.

Oh, we hate this one. We think that this rule undermines any concept of item or test alignment, even though 42 of their 54 sources apparently support it.

It simply is not for item developers to decide what to test. Standards or other forms of domain modelling lay out what should be tested. It is the job of item developers to figure out how to assess that content, not whether to assess that target.

Furthermore, this rule verges on tautology. Obviously, given limited testing time (as is invariably the case), that time should be spent well. Yeah, only test things worth testing. Perhaps this rule does not quite reach the level of tautology, but it never gets beyond too obvious to need saying. As written, it is either too obvious or it begs the question. That is, what counts as trivial? What counts as important? Do they give any guidance on that?

We would prefer it to be phrased as, "Don't waste test takers' time." But should that really need to be said?

Now, there is a related point that they are not making here, but it is very, very important. That is, when creating an item aligned to some assessment target, aim for the core of that target. Aim for the most important part of the standard, the part that is most useful or most likely to be built upon later. Do not simply aim for the easiest part of the target. Yes, that would be easier. But it would not help test validity in any way, and would not help test takers or other users of tests.

But that is not what this rule is about.

[Haladyna et al.’s exercise started with a pair of 1989 articles, and continued in a 2004 book and a 2013 book. But the 2002 list is the easiest and cheapest to read (see the linked article, which is freely downloadable) and it is the only version that includes a well formatted one-page version of the rules. Therefore, it is the central version that I am taking apart, rule by rule, pointing out how horrendously bad this list is and how little it helps actual item development. If we are going to have good standardized tests, the items need to be better, and this list’s place as the dominant item writing advice only makes that far less likely to happen.

Haladyna Lists and Explanations

  • Haladyna, T. M. (2004). Developing and validating multiple-choice test items. Routledge.

  • Haladyna, T. M., & Rodriguez, M. C. (2013). Developing and validating test items. Routledge.

  • Haladyna, T. M., Downing, S. M., & Rodriguez, M. C. (2002). A review of multiple-choice item-writing guidelines for classroom assessment. Applied Measurement in Education, 15(3), 309-333.

  • Haladyna, T. M., & Downing, S. M. (1989). Taxonomy of multiple-choice item-writing rules. Applied Measurement in Education, 2(1), 37-50.

  • Haladyna, T. M., & Downing, S. M. (1989). Validity of a taxonomy of multiple-choice item-writing rules. Applied Measurement in Education, 2(1), 51-78.

]