Too Much Emphasis on ABCs Data, Not Enough on Real Learning

Published 10/17/04

Asheville Citizen-Times


“For nearly two decades policy makers have been engaged in a massive and unprecedented social experiment on our schoolchildren, one with enormous costs and unproven benefits.” (Peter Sacks, writing in The School Administrator, December 2000.)

 

One of these “social experiments” is the North Carolina ABCs accountability program. (ABCs stands for accountability, basics, and local control.) This initiative holds educators accountable for improving student scores on End-of-Grade and End-of-Course tests.

 

Last September I wrote about the politics of this program.  This year I write about the statistical laws that govern the reporting of standardized test scores.  Citizens should understand how these laws of mathematics and logic limit the meaning of the ABCs Report Card data (AC-T, August 6).   

 

Two types of data are reported in the Report Card.  The North Carolina State Department of Public Instruction (SDPI) website explains, “In the ABCs, a school’s growth and performance are summarized using composite scores. There are two types of composite scores: growth, and the performance composite.”

 

A press release accompanying the first ABCs report, in 1997, quoted the State Board Chairman as saying, "Our goal was to give citizens and educators a way to know how each individual school was doing in terms of student achievement overall and in terms of growth." 

 

The Report Card “Growth/Gain” column indicates whether a school’s total student body achieved scores that met the expected growth rate the North Carolina State Department of Public Instruction (SDPI) establishes for each school using principles of mathematics, statistics, and logic.

 

My first point is that the “Growth/Gain” column on the Report Card does tell citizens whether a school met its growth projection. Because the scores of a school’s total student body are compared from one year to the next, North Carolina has stayed true to its purpose of charting school improvement.

 

To make my second point, about how ABCs data are misused, I present the following situation:  On the same exam, School A averaged 90%, School B averaged 80%, and School C averaged 70%. 

 

What conclusions can be drawn about how these schools compare to each other?  The answer is none.   All we can say is that each one had that average.   

 

No valid comparisons can be made among these scores until we know about the differences in the student bodies, in program quality, in quality of instruction, in the resources available to each school, and in anything else that affects student scores. In other words, until we account for that information, we cannot draw valid conclusions from the average scores of different groups of students.
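To see how this plays out, consider a toy simulation (my own sketch, with made-up numbers, not actual ABCs data or the state’s formulas). In it, every student’s score is simply incoming preparation plus an identical instructional effect plus chance. The three schools receive exactly the same instruction; their averages differ only because their student bodies do.

```python
import random

random.seed(1)

def average_score(prep_mean, instruction_boost, n=100):
    """Toy model: a student's score is incoming preparation plus an
    instructional effect plus chance (all numbers hypothetical)."""
    scores = [random.gauss(prep_mean, 5) + instruction_boost + random.gauss(0, 3)
              for _ in range(n)]
    return sum(scores) / n

# All three schools get the SAME instructional effect (+10 points);
# they differ only in the preparation their students arrive with.
for school, prep in [("School A", 80), ("School B", 70), ("School C", 60)]:
    print(school, round(average_score(prep, instruction_boost=10), 1))
```

Run it and the averages come out near 90, 80, and 70; yet anyone who credits or blames the schools’ teaching for those gaps is wrong by construction.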

 

Only after we have tested for a specific conclusion, with tests that rule out all other possible explanations, can we compare the average scores of one group of students to those of another. In the scientific method this is called “the law of the single significant variable.”
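Here is what controlling that single variable would look like in the same hypothetical model: hold student preparation, and everything else, fixed, and vary only the instructional effect. Only under that kind of control does a gap in averages say something about instruction.

```python
import random

random.seed(2)

def average_score(prep_mean, instruction_boost, n=100):
    # Same hypothetical model as above: preparation + instruction + chance.
    scores = [random.gauss(prep_mean, 5) + instruction_boost + random.gauss(0, 3)
              for _ in range(n)]
    return sum(scores) / n

# Identical student bodies (prep_mean=70); ONLY instruction differs.
stronger = average_score(prep_mean=70, instruction_boost=15)
weaker = average_score(prep_mean=70, instruction_boost=10)
print("gap attributable to instruction:", round(stronger - weaker, 1))
```

Real schools, of course, never offer us two identical student bodies, which is exactly the problem.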

 

How does this relate to the ABCs Report Card? 

 

The column labeled “Composite” has one meaning.  According to the SDPI website, “The performance composite summarizes the performance of students in the school with respect to attaining Achievement Level III. It tells the percent of student test scores at or above Achievement Level III (consistent mastery of subject/course content matter) in the subjects taught in the school and included in the accountability model.”
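The arithmetic behind that definition is straightforward. Here is a minimal sketch with made-up achievement levels; the real composite pools every tested subject in the school, but the calculation is just a percentage.

```python
# Hypothetical achievement levels (I-IV) for ten student test scores.
levels = [3, 2, 4, 3, 1, 3, 4, 2, 3, 3]

# The performance composite: percent of scores at Level III or above.
composite = 100 * sum(1 for lv in levels if lv >= 3) / len(levels)
print(f"performance composite: {composite:.1f}%")  # -> 70.0%
```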

 

Publishing these “composite” percentages in a chart suggests that they can be compared with each other.  To do so, however, is a misuse of the data. 

 

Unfortunately, many people misuse the data. Politicians, educational administrators, and teachers compare scores from different classrooms and schools all the time. Such comparisons violate “the law of the single significant variable”: they set the scores of different sets of students against one another without controlling for the variables that affect those scores.

 

Although politicians and educational administrators want citizens to believe that ABCs results reflect the quality of learning, programs, or instruction, none of these conclusions is warranted. Consequently, we are holding educators accountable on the basis of unwarranted conclusions, and we are wasting resources on an accountability program that serves only a political interest.

 

But this column is not about politics – it is about logic and common sense.  Before we engage in a political debate about holding educators accountable, citizens should understand that the “law of the single significant variable” makes it impossible to do this through a program of standardized testing like the ABCs.

 

We will never be able to control all the factors that affect student scores.  Therefore, the meanings we can give to these data are extremely limited.  Although we want these data to measure student learning, program quality, or instructional effectiveness, they do not. 

 

This realization should prompt citizens to ask, “If these data tell us so little, why is the state spending so much time, effort, and money on these tests?”

 

If enough citizens ask this question, someday we may be able to direct these resources toward improving learning, programs, and instruction. Right now that seems like a novel idea, even though it is exactly how educators improved learning, programs, and instruction before ABCs testing and accountability.

 

The logic is simple: “If you want your cow to grow, feed it; don’t weigh it.”