How many times have math teachers told you to check your work? And how many times did you make the mistake of putting ⅓ of a student on a bus because you only did the calculation without thinking about what the story problem was actually asking?
Sadly, “like lemmings to the sea”, too many departments of education and educators are jumping onto the “value-added model” and “growth model” practices in vogue today. Conceptually, these make good sense. Of course we want to know whether teachers, schools, and districts are more effective than similar entities. Of course we want to make sure that each student is growing well. These are important questions.
Unfortunately, many of the practices used to answer these questions today amount to performing calculations without thinking about whether those calculations actually answer the question. And because most of these calculations use data from state accountability assessments, which were designed to meet the requirements of No Child Left Behind, they produce poor results.
Think about it. These accountability tests are designed to answer the question, “Is the student performing on grade level?” They are not designed to answer the question, “Where is the student performing?” The distinction is significant. By analogy, it is like asking whether fifth graders are between 55 and 57 inches tall. Some will be, but some will be shorter and some will be taller. If all we do is determine whether they fall in that set range, we won’t know where the rest actually stand. Because NCLB accountability tests only measure students against grade-level standards, this is precisely what happens when we assess student achievement. And, to take the analogy further, because the state standards for one grade are not the same as the standards for another, we are assessing things that may be related but are not the same, and in some cases are truly very different. It’s like measuring a child’s weight one year and height the next: while they may be related, they aren’t the same thing.
Importantly, states also use cut scores for performance descriptors that vary from grade to grade and over time, which complicates things further. It’s like using a yardstick in one case and a meter stick in another when comparing a student’s performance from one grade to the next. Even though we are comparing the same student, if we rely on descriptors like “meets” and “exceeds”, we may miss real growth or believe students made gains when they didn’t.
Overall, current state accountability tests are not effective measures for value-added or growth modeling.
The good news is that the Every Student Succeeds Act allows states to adopt measures that assess students off grade level, and some test publishers have already done good work that can be leveraged to support this approach. Some states are beginning to take advantage of this flexibility. And other measures already exist that do a much better job of measuring growth, like NWEA’s Measures of Academic Progress, and can be used right now to answer the questions above.
In the meantime, while we hope and wait for state accountability measures to improve, educators, state departments, and legislators need to become more assessment- and data-literate so we can all start getting the correct answer.