
Special Report
Assessment Opinion

My Assessment Problem: A Desk-Eye View

By Ilana Garon — March 05, 2014 | 8 min read

In September of last year, my students at the New York City public high school where I teach sat for a test called the Measures of Student Learning, or MOSL. The test was given for both math and English, in separate one-and-a-half-hour sessions that, for the sanity of both teachers and students, took place a week apart.

For English, my discipline, the students were given two reading passages, which they were told to use as the basis for an argumentative essay—in New York City Department of 91制片厂视频 parlance, that means an essay in which students state a thesis and use the articles to provide both claims and a counter-claim. And here's the rub: Per the Department's "Advance" teacher-evaluation initiative, 20 percent of a teacher's yearly "rating" (Ineffective, Developing, Effective, or Highly Effective) will be based on the students' collective growth on the MOSL assessment, as judged through seven different matrices. Thus, the purpose of the MOSL is not to learn about the students' strengths or weaknesses so much as to learn about their teachers'.

Unsurprisingly, my students did not find their teachers' evaluation to be a sufficiently compelling motivator to sit and take this test. The first question they asked was whether, if they took the MOSL, they'd still have to take the Regents—the New York State standardized exams—at the end of the year. When they were told that they would, they asked, "So why do we have to take this, too?" and "Does this test actually count for our grade?" When told that it would not, they either wrote one-paragraph responses to the essay question (a guaranteed failure, as far as the test score goes) or refused to take it altogether. In the room where I was proctoring, one of the kids called me over and politely inquired, "Why are we doing this? This test is a waste of class time."

She couldn't have articulated my own feelings more precisely. But it wasn't only that I was irked by the fact that the four class periods required to administer the math and English portions of the test could have been spent reading poetry, talking about novels, practicing writing conventions, or having discussions about world issues that would actually matter in the kids' lives. Nor was it simply the fact that this test was now going to be used to evaluate my colleagues and me as teachers. I also felt quite strongly that, separate from its dubious applications, the MOSL was, quite simply, a badly designed test.

Missing the Point

What do I mean by this? For starters, the articles chosen for the argumentative essay didn't pair well. The prompt asked the students to argue whether genius is innate or developed. One article, an excerpt from a Malcolm Gladwell book, examined a case study of violin students, finding that greater prestige in the profession corresponded to the number of hours per week the students spent practicing. The students who logged the most hours practicing, the study found, went on to play in world-renowned orchestras; the students who practiced least consistently went on to become—what else?—violin teachers. (It was hard for us teachers not to feel a bit affronted by the choice of this particular passage.) But the companion passage—which was, presumably, supposed to show that genius is innate (and not developed through hours of practice)—was only tangentially connected to the prompt. It recounted the noted animal scientist Temple Grandin's experience of living and working with autism, and contained only a seemingly throwaway line that one had to think a little bit "outside of the box" in order to accomplish great things. This, to me, didn't especially convey the supposed counterpoint to the Gladwell article; and I don't think my students got the connection either, since a number of the kids who bothered to respond at all wrote some very earnest essays on the subject of "why autism is a hard disease to have."

The problems stemming from the MOSL and other assessments of its ilk really break down into two categories of questions: Are these tests good for evaluating teachers? And what uses—if any—do standardized tests have as far as students are concerned? (The tertiary issue of whether all these tests are simply a means of lining the pockets of various test-corporation bigwigs is also a valid consideration, but not one I'll go into here.)

Evaluation Dysfunction

From my standpoint, there are a great many reasons why assessments like the MOSL are not good for evaluating teachers. One that I don't hear articulated enough is how little they can control for outside factors, irrespective of what proponents of the infamous "value-added" models might assert. Whether students do brilliantly or poorly, it's nearly impossible to attribute that performance solely to the one teacher they've had in a particular subject that year. Perhaps one English teacher's students did well on a test because they had a history teacher who consistently drilled them on essay writing. Or perhaps it was a writing teacher in an earlier grade who trained them particularly well. Whose pedagogical effect is the test measuring, exactly? Any group of scores is the result of a cumulative effect, not one single teacher's.

By the same token, what if a whole cohort of students does poorly on a given assessment because they are late-enrollers routed into "failing" public schools, as they all too often are, according to a report by the Annenberg Institute? Such students often come from families in extremely stressful situations, including recent immigration from non-English-speaking countries and bouncing between relatives' homes and homeless shelters, among other family crises. As a result, they tend to be the poorest performers on assessments.

I recently spoke with a group of researchers from the Human Resources Research Organization, a nonprofit that specializes in personnel management. They acknowledged the faulty link between a teacher's performance and students' scores on tests. They likened attempts to evaluate teachers in high-needs schools through their students' test scores to measuring the performance of an umbrella salesman in a desert: Even great "performance"—deep product knowledge, superb salesmanship, and a great personality—would not be "effective" in yielding umbrella sales in a dry zone, just as even great pedagogical performance (interesting lessons, rigorous assignments, a way with kids) in no way guarantees effectiveness in terms of a group of students' test scores hitting a certain mark.

In fact, what proponents of test-based teacher evaluation claim tests can show us about a teacher's effectiveness can probably be determined better in other ways. These include principal and peer-to-peer observations, student questionnaires, and examination of a teacher's curriculum of lessons with an eye for rigor, creativity, and variety. Using students' test scores to evaluate teachers attempts to put a number on something that is too unquantifiable, too nuanced, and too broadly influenced by outside factors to be identified through an exam alone.

Failing the Students

The student benefit of these state-mandated assessments is also questionable. The premise of the particular exam my students took—that the argumentative essay is somehow the basis of the critical thinking the students will need to reach that ever-elusive "college-readiness" benchmark—is in itself faulty. In my own preparation for college, a premium was placed on skills in expository writing, research, clear explanation of sources, and, perhaps most importantly, my ability to come up with my own unique interpretation of any given source or set of sources. The argumentative essay as given by the MOSL requires students simply to choose the one reading that seems more "correct" with respect to the prompt, summarize it, and then mention the points of the other one. It's laughable to assume that this single exercise could distill their critical-thinking skills better than any of the other tasks they perform over the course of their school year, or to believe that this test presents some "pure" example of critical thinking.

Apologists for "teaching to the test" (such as the author of a recent article defending the practice) might argue that tests like the MOSL promote critical thinking in that they require students to analyze passages of text for meaning and tone—and that teachers are simply too lazy to teach the difficult reading passages or the critical thinking these tests require. This argument oversimplifies the truth, which is that assessments such as the MOSL represent only one particular type of critical thinking, one that is neither universally relevant nor indicative of all the ways one might deduce, synthesize, and re-convey information. It is, however, painfully boring for the kids. And in schools where truancy is an issue, where teachers are making every effort to make learning exciting, engaging, and relevant simply so that kids will show up consistently, forcing students to take assessment on top of assessment in the hopes that something new will be revealed is downright detrimental to educational outcomes.

The best and only useful application of assessments is a limited one, in both scope and frequency: They may be used diagnostically, in class, at the beginning of the school year or of new curricular units, to help teachers (and parents) determine what strengths students already have and what weaknesses teachers need to address. They should be skill-specific, and not time-consuming.

The idea that an assessment can measure something as broad and nebulous as "critical thinking" or "college readiness" or "teacher efficacy" must be given up entirely, in favor of small-scale mini-assessments, oriented toward discrete skills or topics, that can help teachers target instruction effectively throughout the school year. Only when viewed this way can assessment through tests—state-mandated or otherwise—actually serve a useful purpose, both in guiding teachers' instruction in the classroom and in enabling students to set and attain achievable educational goals.

Coverage of policy efforts to improve the teaching profession is supported by a grant from the Joyce Foundation. 91制片厂视频 Week Teacher retains sole editorial control over the content of this coverage.
