Dimensionality Can Decrease Over Time

While it is glaringly obvious that the dominant psychometric models are incredibly poor matches for the multi-dimensional constructs specified in our domain models, it is less obvious that domain models sometimes understate the dimensionality of their contents. Sometimes. 

The fact is that dimensionality is not constant, even for a single group of students; it can decrease over time.

Yes, some domain models are so detailed that they describe learning sequences. In these cases, later learning standards may simply represent more advanced versions of earlier ones, constituting more difficult applications of the same skills. That is, a group of standards may truly lie on the same dimension. Some may represent more advanced cognition that sits further along the dimension, but cognition of the same sort nonetheless. One need not step far from the details of a domain model to see this.

But in other cases, one does need to think clearly about the details of learning to appreciate the true dimensionality of content. Those who work closely with domain models understand their dimensionality far better than those who do not. Similarly, those who work closely with students—who will eventually become test takers—understand that dimensionality is not necessarily invariant over time.

For example, with the distance of middle age, I can see that, among my peers, math calculation skills can be viewed as a unidimensional collective. Some of us are better than others, but it really is just one continuum. Those of us who are better at division are also better at addition. Those of us who are better at two-digit multiplication are also better at five-digit subtraction. There are different sub-skills, but they line up together in parallel.

On the other hand, those who work up close with third graders learning multiplication see multi-dimensionality. It is not simply that some kids are better at it than others. Rather, some kids are better at some of it, and other kids are better at other parts of it. One kid knows their 7’s but has trouble with 9’s, while another kid does well with 9’s but poorly with 7’s. They all know 2’s and 5’s, but some are better at 6’s and others are better at 8’s. They do not all line up sequentially, nor do they line up in parallel. 

And that does not even address the fact that some kids are better at straight memorization of the multiplication facts, other kids are better at the old algorithm for multi-digit multiplication, and still others are better at the regrouping strategies that appear to confuse so many adults. Yes, some students are great at all of them and some are poor at all of them, but that does not cover the entire classroom of students.

While adults may be far enough removed from learning that their collective of skills has settled into a unidimensional layout, as with calculation skills, this is not the case for those still learning. Students do not achieve mastery of the various skills in the same order as they develop their proficiencies. That is, what seems unidimensional to adults at a distance from the learning period is often composed of more dimensions for those still building their proficiencies.
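This contrast between settled and still-forming skill structures can be sketched with a toy simulation. Everything below is invented for illustration (the sample size, the factor structures, and the noise levels are assumptions, not real assessment data): when every sub-skill score is driven by one shared ability, nearly all the variance in the score matrix sits on its first principal dimension; when sub-skills develop semi-independently, the variance spreads across many dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_students, n_skills = 500, 6

def variance_explained(scores):
    """Fraction of total variance on each principal dimension,
    largest first, from the skills' correlation matrix."""
    corr = np.corrcoef(scores, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
    return eigvals / eigvals.sum()

# "Adults": every skill score reflects one shared ability plus a
# little skill-specific noise (hypothetical unidimensional case).
ability = rng.normal(size=(n_students, 1))
adults = ability + 0.3 * rng.normal(size=(n_students, n_skills))

# "Learners": each skill develops mostly on its own, with only a
# weak shared-ability component (hypothetical multidimensional case).
learners = rng.normal(size=(n_students, n_skills)) + 0.3 * ability

print(variance_explained(adults)[0])    # large: one dominant dimension
print(variance_explained(learners)[0])  # small: variance spread out
```

With these made-up parameters, the first dimension captures the bulk of the adults' variance but only a modest share of the learners', which is the shape of the argument above: the same set of skills can look unidimensional for one population and multidimensional for another.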

This may not matter for those at a distance from the students being tested and from the processes of teaching and learning. But for those who care about the students, are invested in their success, and/or have some responsibility for their learning (students, families, teachers, school leaders, curriculum specialists, school boards), the details of these differences really matter. “What is my child good at, and where are they struggling?” is a key question that educational assessment should be able to answer. “How are our curricular and pedagogical choices working and not working for our students?” is another.

It is a profound misjudgment to treat the dimensionality of adult understanding as determinative in educational assessment, one that risks missing the very purpose of educational measurement. Certainly, we should not design our assessments around the construct dimensionality of those who have always been exceptionally proficient in a content area. Rather, we should build our blueprints, develop our items, conduct our analyses, and report our results in ways that best describe the understandings of students still engaged in learning. After all, that is whom we claim our assessments are for.