Eagle Forum
Education Reporter
NUMBER 174 THE NEWSPAPER OF EDUCATION RIGHTS JULY 2000

The NAEP is Losing Credibility
Until now, the National Assessment of Educational Progress (NAEP) has often been considered the tool of choice for comparing education programs among the states. Events of the past year and a half, however, raise serious questions about this federally administered testing program. Can NAEP really live up to its self-billing as "The Nation's Report Card"? Or, like many state tests, has NAEP fallen prey to politics and pitfalls involving students with learning disabilities?

Make no mistake, the NAEP is tremendously important. NAEP test results played a central role in California's mid-1990s decision to completely revamp its department of education and its curriculum. NAEP was also a major player in the 1998 demise of Kentucky's failed assessment, called the Kentucky Instructional Results Information System (KIRIS), and more recently President Clinton indicated that he wanted to use the NAEP for his "voluntary" national assessment.

Unfortunately, NAEP has run into a serious problem in at least 14 states, including Louisiana, Kentucky, North and South Carolina, Connecticut, and Maryland. Most state testing programs have a similar weakness - an inability to manage the assessment of learning disabled (LD) students.

The Kentucky Example  
Kentucky posted one of the biggest 4th-grade reading score gains of any state participating in the NAEP in both 1994 and 1998, but fully 10% of Kentucky's students were excluded from the 1998 NAEP results because of learning disabilities. Only 4% of Kentucky's students had been excluded from each of the 1992 and 1994 assessments - the 1998 exclusion rate was 2-1/2 times larger. Is it not reasonable to conclude that Kentucky's NAEP score increase could be due to the fact that many more weak students were barred from participation?
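The arithmetic behind that comparison is straightforward, assuming the 4% and 10% figures describe comparable samples of students:

\[ \frac{10\%}{4\%} = 2.5 \]

that is, the 1998 exclusion rate was roughly two and a half times the rate of the earlier assessments.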

The large number of exclusions was due to several factors. First, requirements for inclusion in the state's own high-stakes assessment, KIRIS, were extensive. Virtually all 4th graders in Kentucky's public schools had to participate in KIRIS, but the learning disabled were allowed very liberal testing accommodations. For example, all test questions were read aloud to many of these students, and about half of them even had an adult "scribe" to write down their answers.

Schools felt pressured to identify many more students as learning disabled so that weak students could qualify for one or more of these accommodations. The end result was dramatic. Kentucky's learning disabled population soared from 7% of the raw NAEP sample in 1992 to 13% in 1998 - an 86% increase! Testing with the accommodations described above definitely raised the learning disabled students' KIRIS scores, but it is unclear whether this actually meant that the students were better educated.
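For readers who want to check the percentage, the 86% figure follows directly from the two shares quoted above (it measures growth in the identified share of the sample, not any individual student's performance):

\[ \frac{13\% - 7\%}{7\%} = \frac{6}{7} \approx 0.86 = 86\% \]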


The second factor in the large increase in exclusions was a change in the 1997 reauthorization of the federal Individuals with Disabilities Education Act (IDEA). This change is currently interpreted to stipulate that, if a child's individualized education program (IEP) includes a requirement for testing accommodations, then those accommodations must be offered on all tests that generate individual scores.

The reauthorized IDEA was less than a year old when instructions for the 1998 NAEP reading assessment had to be finalized. NAEP administrators were anxious to maintain comparability with earlier state NAEP assessments, in which no accommodations for LD students had been allowed. However, administrators were uncertain of the legality of requiring those students to test without the accommodations formally listed in their IEPs. NAEP's 1998 guidelines therefore led schools to exclude students with testing accommodation requirements from the accountability sample.

These unfortunate circumstances resulted in a number of states experiencing a large jump in exclusions from the 1998 NAEP. The 1998 exclusion rates for LD students varied wildly - from a low of just 3% of the raw sample in several states to a high of 13% in Louisiana - a spread of 10 percentage points. In 1992, the spread was only 6 points, and the top exclusion was just 8%. Considering this, one wonders what sort of state-to-state comparisons can be fairly made with the 1998 NAEP results.
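Those spreads follow from the quoted rates; note that the 1992 figures (a top of 8% and a spread of 6 points) imply a low of about 2%, a number the report does not state directly:

\[ \text{1998: } 13\% - 3\% = 10 \text{ points}, \qquad \text{1992: } 8\% - 2\% = 6 \text{ points}. \]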

Kentucky's 1998 NAEP score was flawed to a degree that cannot even be calculated, which is basically what the Educational Testing Service (ETS) said when it issued the first report on this problem in May 1999. ETS should know, because ETS created the NAEP and its complex scoring system. ETS says we will never know how much error was introduced by the exclusion issue.

The ETS report angered radical reformers who were eager to use Kentucky as proof that their ideas were working. At their demand, a second report on the subject of LD student exclusions was created, claiming that the Kentucky NAEP results are essentially accurate as originally stated. This second report maintains that the exclusion factor was almost negligible, a contention that would "rescue" the NAEP from its awkward position.

The Wise Report  
To fully grasp what is happening to NAEP, one must understand the content and history of this second report, which was created by Dr. Lauress Wise. Dr. Wise has a contract with the Kentucky Department of Education to conduct research involving Kentucky's assessments and databases. He is also licensed by the National Center for Education Statistics (NCES) to access the NAEP databases. On the one hand, that made Wise seem like a reasonable choice for a research program involving both of these databases. On the other hand, the Kentucky Department of Education desperately wanted the NAEP scores upheld as originally posted, which certainly placed Dr. Wise in an awkward position.

Dr. Wise's study has several very serious flaws. First, his research essentially compares apples to oranges. He begins with the scores that Kentucky's LD students (who were excluded from the NAEP test) received on the KIRIS test, and converts them to "equivalent" NAEP scores. Even if we disregard the serious technical issues concerning the accuracy of converting any scores from one test to another, comparing KIRIS "reading" scores to NAEP reading scores is especially inappropriate, because there is good evidence that the KIRIS "reading" assessment was read aloud to approximately three out of four Kentucky students with learning disabilities - including the students excluded from the NAEP. For those students, KIRIS was at best merely a spoken word comprehension test. It is totally inappropriate to convert "spoken word comprehension" scores into equivalent scores for the NAEP, which measures real printed text reading ability.

The Wise report has other serious defects. It actually claims that Kentucky's weakest LD students greatly outscored the state's strongest LD students, which is simply not credible. This clearly impossible score inversion is additional evidence that Wise made an apples-to-oranges comparison.

Wise's research has yet another problem: the validity of the KIRIS results themselves. KIRIS was so clearly flawed that the Kentucky legislature scrapped the assessment in 1998, and the replacement test even discarded the KIRIS trend lines. Thus, aside from all its other problems, the Wise report's entire foundation is a seriously flawed test that was abandoned for cause.

There is no question that Kentucky experienced dramatic growth in the number of students identified as learning disabled. If Kentucky's education program is succeeding, as we have been told it is, why does the state show such a substantial increase in learning problems? Is this the sort of program other states should emulate? If not, of what use is an NAEP assessment that awarded Kentucky one of the largest score increases of any state that participated in 1994 and 1998? What can the public really learn from such scores?

The NCES was specifically warned about Wise's apples-to-oranges comparison long before his report was issued. The other problems are not hard to pinpoint either. Nevertheless, the NCES accepted the Wise report as the final word on the subject. By doing so, the NCES raised serious questions about the NAEP. If reports of the caliber of Dr. Wise's study are accepted as definitive proof of NAEP's accuracy, then what confidence can the public have in the managers of "The Nation's Report Card"? Considering that President Clinton recently nominated Dr. Wise to take over the NCES, what confidence can the public have that this situation will get better in the future?

Richard G. Innes of Villa Hills, Kentucky, has studied education reform since 1970, when he programmed the Air Force’s first automated instructional machines for pilots using a form of Outcome-Based Education. He has researched his state’s reform efforts since 1994, and publishes a newsletter called KERA Update about the Kentucky Education Reform Act.


 