By Rep. Steven Baldwin
When I became the chairman of the California State Assembly Education Committee
last year, I was the first Republican to chair that committee in about 30 years.
I quickly started to dig out documents from all the different education
bureaucracies, and we have 12 in California. As a longtime conservative
activist, I've always been very aware of what was going on in education, but I was
shocked to see with my own eyes some of the internal documents I was able to
obtain. I held hearings right away on many areas of interest, including Whole
Language and the New Math.
When the Democrats regained control of the legislature, I was removed as
chairman of the committee. But I continue to work hard on education issues in
California because we lead (unfortunately) in Whole Language, New Math, and use
of psychology in our schools. We are probably the leading state in fusing our
medical establishment into our school system, and we are paying the price for it.
We are last in the country in reading scores. Excuse me, we're tied for last
with Guam. We are nearly last in the country in math, and we have a large
percentage of students who cannot read or write by the time they get to college.
While 30% of the students entering college in Oregon, for example, must go
through remedial education, in California it is 50%. That is, 50% of all
students entering college do not have the basic knowledge of math or English and
must go through a half year, and maybe a full year, of remediation courses in
order to start their college courses.
As for assessments, California is infamous for an assessment test called the
CLAS test. That test created a national controversy. A group headed by Carolyn
Steinke, called Parents Involved in Education (PIE), generated literally tens of
thousands of phone calls and letters to the Governor in opposition to the test.
This issue dominated the media for weeks. Eventually, the Governor was forced to
back down, temporarily anyway, from the use of the CLAS test, which was what the
OBE (Outcome-Based Education) crowd calls a Performance-Based test.
Many parents don't understand that the reasons we test our children have
changed. We used to test them to find out where they were in terms of academic
progress. In some cases, we wanted to compare schools to see which ones were
doing better than others. These are no longer the purposes of testing. Tests
aren't called tests any more; the politically correct term is assessments.
What are the schools assessing? Well, they're not assessing academic
performance. That's a very minor part of tests nowadays. In fact, the OBE crowd
has circulated the myth that objective standardized tests, the norm-referenced
tests, the measurable tests that we were used to, are no longer valid. Without
a shred of evidence, the OBE crowd has rejected these tests, claiming that they
do not test students' progress, that they do not test "authentic knowledge,"
which is the new buzzword used in describing tests.
Performance-Based tests are supposedly "authentic" because they're "real." Yet,
when I challenge these people to show me any evidence whatsoever that
old-fashioned tests did not test students' knowledge, they're unable to show
me one research report of any kind that backs up their thesis that
multiple-choice tests, for example, or norm-referenced tests, are invalid.
When I challenge them to show me how a Performance-Based test is somehow the
magic wand that's going to help us assess a student's performance, they're
unable to show me that at all. In fact, they openly admit that Performance tests
are designed to drive curriculum, methodology, and teacher preparation. In other
words, the OBE crowd believes that open-ended tests lead to open-ended curricula.
A test that does not test for academic skills, but instead tests highly
subjective items, will eventually lead to a curriculum that is highly subjective.
Performance-Based tests are absolutely compatible with Whole Language and New
Math, and with the abolition of grades and accountability in our schools. They
are very much a part of the whole Outcome-Based Education movement.
A lot of people still don't understand what takes place in a Performance-Based
test. Some of the activities are "hands-on tasks": the student is asked to
assemble whatever is placed in front of him, and a couple of teachers sit around
and grade how he assembles it.
Portfolios are part of Performance-Based testing. Usually, a portfolio
includes a self-evaluation, where a student is asked to evaluate himself. Of
course, when I was a kid, I would have evaluated myself with straight A's. That's
a pretty dumb thing to ask kids to do.
Peer evaluation, when friends evaluate each other, is also part of the
portfolio. I can just see kids saying, "We'll make a deal: I'll evaluate you
in a favorable way and you do the same in return." So the portfolios include peer
evaluations and self-evaluations by the students.
Some actual portfolio documents that I have in my possession include the
following: lists of student-selected goals, photographs of students' work,
poetry, reactions to group activities (whatever that means), explanations of
political cartoons, and lists of books. The lists are supposed to show all the
books the students have read. Students don't have to do papers about any of the
books. They merely list the authors and the publishers, which supposedly
demonstrates that they have read the books. Other portfolio documents include holistic writing assignments. I don't
know what that means. I've been in education for 10 years and I still can't
understand half the terms used. Very few portfolio items are objective
measurements. Other parts of the Performance test have to do with group
activities; the group is graded rather than the individual.
Now let me quote from some of the actual ways that Performance tests are graded.
This is all taken from actual test documents. Does a student participate in
class discussions? Does the student share opinions? Does the student value other
perspectives? Does the student respect other class members? What is the degree
to which the writer's response reflects personal investment and expression? Does
the student work well in groups? This goes on and on and on, and it has very
little to do with academic performance.
There are several obvious problems with Performance-Based tests, and some are
very practical. College admission officers have told me they're having great
difficulty in judging whether or not a student is ready for college work based on
the results of Performance-Based tests because there's a complete lack of any
objective measurements. They specifically cite peer evaluations and the groups'
collaborative exercises. They are very suspicious of these tests as a way of
determining whether or not someone is ready for college work.
There are also some legal issues. Believe it or not, one of the federally
funded regional laboratories, the North Central Regional Educational Laboratory,
recently
published a paper detailing the susceptibility of Performance-Based tests to
lawsuits by parents who claim their child was downgraded due to bias. It's clear
that when you have so many subjective testing factors involved, it's easy for
parents to file a lawsuit claiming that the teacher's personal bias was the
reason their child was graded in a certain way.
It's harder to have grounds for a lawsuit if the test is objective. Apparently,
lawsuits are springing up around the country as a result of Performance-Based
tests. There are more practical reasons why Performance-Based tests are a
failure. Supposedly, they are designed to help employers.
Yet most employers I've talked to about portfolios and Performance-Based tests
tell me that they are too complex for them to understand; they don't have the
time to read through a big portfolio of a student's work. They want to know
whether or not the student can read, write, compose a paragraph, and understand
the rules of grammar. Instead, busy employers are given a stack of documents and asked to read through all this material to determine whether or not a person
should be hired. It just won't work.
Let me give you some examples of what has happened around the country with
Performance-Based tests. In 1993, Vermont became one of the pioneer states to use
Performance-Based tests. The whole education establishment was excited about the
Vermont portfolio test, the Performance-Based test, that went into effect there.
A few years after it was instituted, it created a large amount of controversy and
an outcry from parents.
Eventually, Vermont contracted with the Rand Institute to evaluate the test.
Here's what the Rand Institute concluded: "The reliability of portfolio scoring
was so low that most of the planned uses of performance data had to be abandoned.
There is limited evidence from other programs that reliable scoring of writing
portfolios is practical. Accountability was difficult to obtain. Our efforts to
assess validity for the 1991-92 program were hindered by a variety of factors
including the low reliability of scores." The panel of experts at the Rand
Institute recommended a return to traditional tests.
Since that time, Vermont has partially backed away from Performance-Based tests
and has put a multiple-choice test back into the package, but Performance tests
are still a part of the package.
In Kentucky, the same thing happened. Again, the education establishment wildly
applauded Kentucky for being on the cutting edge of education reform when its
Outcome-Based Education package was passed. Part of the package was a
Performance-Based test called CIRIS. Everyone was told that the students were
doing great because the results from the CIRIS test, the Performance-Based test,
showed dramatic improvements in academic performance by Kentucky students.
But then something happened. The NAEP (National Assessment of Educational
Progress) scores came out, and, even though the NAEP test has some elements of
Performance-Based testing, it is still a lot more academic-based than the tests
I am describing. The NAEP scores showed that Kentucky scores had dropped. This
made it clear that the CIRIS test was basically useless in determining where
Kentucky students scored in academic performance.
As a result of the outcry from Kentucky parents, a panel of six nationally renowned testing experts was assembled in 1995. Their report accused CIRIS of misleading the public into believing that
students were doing well. They said that CIRIS is seriously flawed and that
open-ended tests such as CIRIS would not be a good way to measure student
achievement, that scoring of portfolios remained too flawed for use in
assessment, and that group work with other students and teachers undermined the
validity of the test. The panel recommended a return to multiple-choice tests.
That same year, the CLAS test came out in California. Again, there was a mass
outcry and lawsuits filed by parents. This same pattern has happened in state
after state. The education establishment takes a step backwards, temporarily puts
the test on hold, or perhaps keeps the Performance test but adds some traditional
test questions. They do not care how many Performance-Based tests have generated
failure and controversy. Performance-Based testing continues to be promoted by
all elements of the education establishment, from the National Education
Association to the U.S. Department of Education to every state department of
education to all the major think tanks and foundations and, most importantly, to
the New Standards Project.
Marc Tucker is one of the co-founders of the New Standards Project, which is the
nation's leading advocate of Performance-Based tests. He has had more impact and
influence on education policy than any single individual I know of. The New
Standards Project, a private organization, is a spinoff of the National Center on
Education and the Economy (NCEE), which has been in the forefront of criticizing
traditional tests and has long been emphasizing Performance-Based tests as a means
of changing curriculum and methodology. This is openly admitted in NCEE
documents, which state that the three P's of assessment are (1) performance
tasks, (2) projects, and (3) portfolios. No mention is made of academic skills.
NCEE has assumed the role of creating national standards to accompany national
assessments because the Federal Government does not yet do so. NCEE has also
assumed the role of national clearinghouse for assessment tests and is
under contract with over 20 states. NCEE's goal is to implement the same
standards and the same kind of performance assessments in every state through
their contracts. NCEE was the contractor for the CLAS test in California, the
IPASS test in Indiana, the Vermont assessment test, and the Kentucky assessment
test. All these states had the New Standards Project as the main contractor
to design the test. You would think that, with all the controversy and failure,
these people would change their goals, but they have not. They've actually
accelerated the number of states under contract with the New Standards Project.
Their documents openly admit that their standards and Performance tests will be
aligned with the National Council of Teachers of Mathematics and the National
Council of Teachers of English. Those are the two major professional associations
of math teachers and English teachers and, of course, those organizations are
dominated by the New Math and Whole Language crowd. The alignment of assessments
and standards with those two organizations tells you where the New Standards
Project is heading.
Let me quote from the federal Office of Educational Research and Improvement
(OERI): "Performance assessment is part of a model of schooling which emphasizes
a constructivist approach to teaching and learning [that's Whole Language and
Whole Math] and cooperative and collaborative learning [that's group learning]."
The same document also states, "Performance tests will drive teachers to change
their instructional practices to place greater emphasis upon higher order
cognitive skills."
That may sound good, but the Whole Language movement, which has destroyed
American reading capabilities and put us nearly last in the industrialized world
in that area, was also considered a move toward higher order thinking skills, as
was the New Math approach. Even after the failures of Performance-Based testing
in so many states, every time we hear about higher order thinking skills, we
should recognize that this is usually code for shifting to a methodology that
undermines the teaching of basic academic skills.
In California, we suspect another test is coming. We have already filed a
public records act request because of leaks to my office about education
bureaucrats working on another Performance-Based test in California. This will
be one of the main battles because the testing area drives many parts of the
Outcome-Based Education agenda.
One California school district document stated, "Assessment is the Trojan horse
of restructuring." That statement probably summarizes what so-called school
"reform" is all about.