First Person

The Trouble With Not Releasing State Test Items

First Rule of Fight Club: Do Not Talk about Fight Club

Second Rule of Fight Club: DO NOT TALK about Fight Club

Has the New York State Education Department watched too many Brad Pitt movies? Okay, that’s a rhetorical question, but one that might be posed to other state education agencies also engaged in the business of high-stakes testing. This week, students in grades 3 through 8 across the state of New York are taking mathematics exams aligned with the Common Core State Standards. Following on the heels of last week’s English Language Arts exams, the math exams also promise to be unusually challenging, reflecting the complex skills and knowledge inscribed in the Common Core standards.

Despite broad pronouncements from policymakers and the media about the inherent superiority of the Common Core standards and the assessments designed to measure mastery of them, the truth is that no one really knows whether the standards will lead to higher student achievement, or whether the assessments will be good measures of students’ readiness for college and careers. In New York, although this year’s assessments are the first to be aligned with the Common Core standards, they have a short shelf life: the state plans to administer the Partnership for Assessment of Readiness for College and Careers (PARCC) assessments in the spring of 2015, if those assessments are ready for prime time by then.

In the meantime, discussions about the content and quality of the assessments are hamstrung by New York’s decision not to release test items to the public. For educators, the issue is quite serious: Disclosure of secure test items by a teacher or school leader is considered a moral offense that can lead to disciplinary action, including loss of certification.

The strongest arguments in favor of keeping test questions and answers private are technical. It is desirable that different forms of a test, including those administered in different years, be scaled in such a way that a given score represents the same level of performance, regardless of the test form or year. Anchor items are used to link different forms of a test and equate them. Modern test theory uses the difficulty of test items, and their ability to differentiate higher and lower performers, as tools to estimate a test-taker’s performance. It’s important for anchor items to have a stable level of difficulty over time; if they become easier or harder over time, their ability to serve as a common anchor across test forms is compromised, as is our confidence that a given test score denotes the same level of performance over time. A change in the difficulty of a test item over time is referred to as item parameter drift.

Item parameter drift can occur due to changes in curriculum, teaching to a test, or practice effects. But the biggest risk is the widespread release of test items, whether unintentional, as in a security breach, or intentional. If a wide swath of the test-taking population knows the questions and the right answers in advance, the questions become easier, even if the test-takers are no more capable. It’s for this reason that questions and answers on educational tests frequently aren’t released to the public: disclosing test questions would limit their ability to be reused and to serve as anchor items.
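To make the mechanics concrete, here is a minimal sketch of the two-parameter logistic (2PL) model, one standard model in the item response theory the preceding paragraphs describe. The item parameters are invented for illustration; they are not drawn from any actual New York exam.

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL model: probability that a test-taker with ability `theta`
    answers an item of discrimination `a` and difficulty `b` correctly."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = 0.0            # an average-ability test-taker
a, b_year1 = 1.2, 0.5  # hypothetical anchor-item calibration in year one
b_year2 = -0.1         # same item after wide exposure: difficulty drifts down

print(round(p_correct(theta, a, b_year1), 2))  # ~0.35
print(round(p_correct(theta, a, b_year2), 2))  # ~0.53
```

If an anchor item drifts like this, the equating procedure misreads the inflated success rate as evidence that the new cohort is more capable, which is precisely the distortion that disclosure threatens.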

The National Assessment of Educational Progress is a case in point. The No Child Left Behind Act provides that the public shall have access to all assessment instruments used in NAEP, but that the Commissioner of the National Center for Education Statistics, which houses NAEP, may decline to make available test items that are intended for reuse for up to 10 years after their initial use.

Of course, one of the other features of the lovely NCLB law is that it prohibits the federal government from using NAEP to rank or punish individual students, teachers, schools or local education agencies. For this reason, NAEP is a low-stakes test — despite the ways in which pundits jump to draw broad policy inferences from comparisons of NAEP performance over time or across jurisdictions.

But one could argue that disclosure of test questions and answers may be justified when the test is used for high-stakes decisions such as student promotion, or the evaluation of teachers and/or schools. For most such high-stakes decisions, there are winners and losers, and when these decisions are made by agents of the government, the losers have a legitimate interest in whether the decisions were fair. One need look back no further than last week, when New York City announced that, due to a series of errors made by NCS Pearson, several thousand children were incorrectly classified as ineligible for gifted and talented programs.

Or, if you wish, reach back to last year, when the New York State Education Department discarded a series of items in the Grade 8 English Language Arts exam based on a passage involving a talking pineapple. Not too many people rose to defend the test items associated with this fable involving a hare and a pineapple, but Pearson, the firm contracted to develop and administer the exam, did. The choice of both the passage and the items, the company claimed, “was a sound decision in that ‘The Hare and the Pineapple’ and associated items had been field tested in New York State, yielded appropriate statistics for inclusion, and it was aligned to the appropriate NYS Standard.” Vetted by some teachers, too, I reckon. But with all of that, the passage and items were ludicrous.

One item following the passage asked which of the animals in the passage was the wisest: the moose, crow, hare or owl. Pearson claimed that it was unambiguous that the wisest animal was the owl, based on clues in the text. One such clue was that the owl declared that “Pineapples don’t have sleeves,” which, Pearson reported, was a factually accurate statement. So too, to the best of my knowledge, is the observation that owls don’t talk.

High-stakes tests administered by governmental agencies call for a heightened sense of procedural fairness, including the ability to interrogate the tests: how they were constructed, and what counts as a correct response. The point is not so much that bad test items get discarded — although that may be appropriate from time to time — as that the procedures are subject to scrutiny by those they affect. New York does not have a great recent track record on this. The technical reports on the construction of last year’s state English Language Arts and math tests have yet to be made public, even though we’re in the midst of this year’s testing. And the technical manual for New York’s statewide teacher rankings, a modified version of value-added modeling, was released months ago — before the manual for the tests on which those rankings were based. It’s hard to know how much to trust the growth percentiles or value-added models without more information on the tests themselves.

Moreover, it may be especially important to have open and public discussions about tests that are aligned with the Common Core standards, which are new to educators and the public. The point of these tests, especially in their earliest administrations, is really not “ripping the Band-Aid off,” as New York City Schools Chancellor Dennis Walcott has declared — nor is it to document just how few students will meet the new standards, as a vehicle for supporting one policy reform or another. Rather, it’s to engage educators, policymakers and the public in a conversation about what we want our students to know, and how we can move them toward the desired levels of knowledge and skill.

And one good way to frame that conversation is to ground it in the discussion of particular assessment questions. Might teachers disagree with one another about what the best answer to an assessment question is? If they do, shouldn’t they be talking about it? Will students have an opportunity to discuss why a response is incorrect, what a better response might be, and why? Or will they simply receive a scale score telling them, and their parents, that they are well below grade-level?

Much has been made of the notion that assessments aligned with the Common Core standards are to be “authentic,” with real-world content that parallels what students might experience in adult daily life. (Ideally, something more sophisticated than “If Johnny has $5.63 and is wearing a pair of Nike Free Run+ 3 shoes, how long will it take him to run to the 7-Eleven to buy a delicious Coca-Cola product?”) If the content is indeed authentic, and reflective of what we expect students to know and be able to do as productive adults, we should be discussing that content, not hiding it under a rock.

There is a middle ground between total nondisclosure of test items and answers, and complete disclosure. It’s possible to retain the security of anchor items while releasing items that won’t be used again. But it’s easier to do this when there’s a more extensive bank of assessment items with known properties, and such an item bank for the Common Core does not yet exist. It may not be the most popular conclusion, but perhaps we should be investing more in the development of good assessment items.

First Rule of High-Stakes Assessments: Talk about High-Stakes Assessments

Second Rule of High-Stakes Assessments: TALK about High-Stakes Assessments

This post also appeared on The Hechinger Report’s Eye on Education blog.

First Person

I’m a principal who thinks personalized learning shouldn’t be a debate.

PHOTO: Lisa Epstein, principal of Richard H. Lee Elementary, supports personalized learning

This is the first in what we hope will be a tradition of thoughtful opinion pieces—of all viewpoints—published by Chalkbeat Chicago. Have an idea? Send it to cburke@chalkbeat.org

As personalized learning takes hold throughout the city, Chicago teachers are wondering why a term so appealing has drawn so much criticism.

Until a few years ago, the school that I lead, Richard H. Lee Elementary on the Southwest Side, was on a path toward failing far too many of our students. We crafted curriculum and identified interventions to address gaps in achievement and the shifting sands of accountability. Our teachers were hardworking and committed. But our work seemed woefully disconnected from the demands we knew our students would face once they made the leap to postsecondary education.

We worried that our students were ill-equipped for today’s world of work and tomorrow’s jobs. Yet, we taught using the same model through which we’d been taught: textbook-based direct instruction.

How could we expect our learners to apply new knowledge to evolving facts, without creating opportunities for exploration? Where would they learn to chart their own paths, if we didn’t allow for agency at school? Why should our students engage with content that was disconnected from their experiences, values, and community?

We’ve read articles about a debate over personalized learning centered on Silicon Valley’s “takeover” of our schools. We hear that Trojan-horse technologies are coming for our jobs. But in our school, personalized learning has meant developing lessons informed by the cultural heritage and interests of our students. It has meant providing opportunities to pursue independent projects, and differentiating curriculum, instruction, and assessment so that our students can progress at their own pace. It has reflected a paradigm shift that is bottom-up and teacher-led.

And in a move that might have once seemed incomprehensible, it has meant getting rid of textbooks altogether. We’re not alone.

We are among hundreds of Chicago educators who would welcome critics to visit one of the 120 city schools implementing new models for learning – with and without technology. Because, as it turns out, Chicago is fast becoming a hub for personalized learning. And it is no coincidence that our academic growth rates are also among the highest in the nation.

Before personalized learning, we designed our classrooms around the educator. Decisions were made based on how educators preferred to teach, where they wanted students to sit, and what subjects they wanted to cover.

Personalized learning looks different in every classroom, but the common thread is that we now make decisions looking at the student. We ask them how they learn best and what subjects strike their passions. We use small group instruction and individual coaching sessions to provide each student with lesson plans tailored to their needs and strengths. We’re reimagining how we use physical space, and the layout of our classrooms. We worry less about students talking with their friends; instead, we ask whether collaboration and socialization will help them learn.

Our emphasis on growth shows in the way students approach each school day. I have, for example, developed a mentorship relationship with one of our middle school students who, despite being diligent and bright, always ended the year with average grades. Last year, when she entered our personalized learning program for eighth grade, I saw her outlook change. She was determined to finish the year with all As.

More than that, she was determined to show that she could master anything her teachers put in front of her. She started coming to me with graded assignments. We’d talk about where she could improve and what skills she should focus on. She was pragmatic about challenges and so proud of her successes. At the end of the year she finished with straight As—and she still wanted more. She wanted to get A-pluses next year. Her outlook had changed from one of complacence to one oriented towards growth.

Rather than undermining the potential of great teachers, personalized learning is creating opportunities for collaboration as teachers band together to leverage team-teaching and capitalize on their strengths and passions. For some classrooms, this means offering units and lessons based on the interests and backgrounds of the class. For a couple of classrooms, it has meant literally knocking down walls to combine classes from multiple grade levels into a single room that offers each student maximum choice over how they learn. For every classroom, it means allowing students to work at their own pace, because teaching to the middle will always fail to push some while leaving others behind.

For many teachers, this change sounded daunting at first. For years, I watched one of my teachers – a woman who thrives on structure and runs a tight ship – become less and less engaged in her profession. By the time we made the switch to personalized learning, I thought she might be done. We were both worried about whether she would be able to adjust to the flexibility of the new model. But she devised a way to maintain order in her classroom while still providing autonomy. She’s found that trusting students with the responsibility to be engaged and efficient is both more effective and far more rewarding than trying to force them into their roles. She now says that she would never go back to the traditional classroom structure, and has rediscovered her love for teaching. The difference is night and day.

The biggest change, though, is in the relationships between students and teachers. Gone is the traditional, authority-to-subordinate dynamic; instead, students see their teachers as mentors with whom they have a unique and individual connection, separate from the rest of the class. Students are actively involved in designing their learning plans, and are constantly challenged to articulate the skills they want to build and the steps that they must take to get there. They look up to their teachers, they respect their teachers, and, perhaps most important, they know their teachers respect them.

Along the way, we’ve found that students respond favorably when adults treat them as individuals. When teachers make important decisions for them, they see learning as a passive exercise. But, when you make it clear that their needs and opinions will shape each school day, they become invested in the outcome.

As our students take ownership over their learning, they earn autonomy, which means they know their teachers trust them. They see growth as the goal, so they no longer finish assignments just to be done; they finish assignments to get better. And it shows in their attendance rates – and test scores.

Lisa Epstein is the principal of Richard H. Lee Elementary School, a public school in Chicago’s West Lawn neighborhood serving 860 students from pre-kindergarten through eighth grade.

Editor’s note: This story has been updated to reflect that Richard H. Lee Elementary School serves 860 students, not 760 students.

First Person

I’ve spent years studying the link between SHSAT scores and student success. The test doesn’t tell you as much as you might think.

PHOTO: Robert Nickelsberg/Getty Images

Proponents of New York City’s specialized high school exam, the test the mayor wants to scrap in favor of a new admissions system, defend it as meritocratic. Opponents contend that when used without consideration of school grades or other factors, it’s an inappropriate metric.

One thing that’s been clear for decades about the exam, now used to admit students to eight top high schools, is that it matters a great deal.

Students admitted may receive not only a superior education, but also access to elite colleges and, eventually, better employment. That system has also led to an underrepresentation of Hispanic students, black students, and girls.

First as a doctoral student at The Graduate Center of the City University of New York in 2015, and then in the years since receiving my Ph.D., I have tried to understand how meritocratic the process really is.

First, that requires defining merit. Only New York City defines it as the score on a single test — other cities’ selective high schools use multiple measures, as do top colleges. There are certainly other potential criteria, such as artistic achievement or citizenship.

However, when merit is defined as achievement in school, the question of whether the test is meritocratic is an empirical question that can be answered with data.

To do that, I used SHSAT scores for nearly 28,000 students and school grades for all public school students in the city. (To be clear, the city changed the SHSAT itself somewhat last year; my analysis used scores on the earlier version.)

My analysis makes clear that the SHSAT does measure an ability that contributes to some extent to success in high school. Specifically, SHSAT scores predict 20 percent of the variability in freshman grade-point average among all public school students who took the exam. Students with extremely high SHSAT scores (greater than 650) generally also had high grades when they reached a specialized school.

However, for the vast majority of students who were admitted with lower SHSAT scores, from 486 to 600, freshman grade point averages ranged widely — from around 50 to 100. That indicates that the SHSAT was a very imprecise predictor of future success for students who scored near the cutoffs.

Course grades earned in the seventh grade, in contrast, predicted 44 percent of the variability in freshman year grades, making them a far better admissions criterion than SHSAT scores, at least for students near the score cutoffs.
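For readers curious where figures like “20 percent” and “44 percent of the variability” come from: with a single predictor, the share of variance explained is the squared Pearson correlation between predictor and outcome. Here is a minimal sketch, with hypothetical array names standing in for the study’s data, which are not public:

```python
import numpy as np

def variance_explained(predictor, outcome):
    """R-squared of a one-predictor linear fit: the squared Pearson
    correlation between the predictor and the outcome."""
    r = np.corrcoef(predictor, outcome)[0, 1]
    return r ** 2

# Per the article: variance_explained(shsat, freshman_gpa)        ~ 0.20,
# while variance_explained(seventh_grade_gpa, freshman_gpa)       ~ 0.44.
```

An R-squared of 0.20 corresponds to a correlation of roughly 0.45; 0.44 corresponds to roughly 0.66.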

It’s not surprising that a standardized test does not predict as well as past school performance. The SHSAT represents a two-and-a-half-hour sample of a limited range of skills and knowledge. In contrast, middle-school grades reflect a full year of student performance across the full range of academic subjects.

Furthermore, an exam that relies almost exclusively on one method of assessment, multiple-choice questions, may fail to measure abilities that are revealed by the variety of assessment methods that go into course grades. Additionally, middle school grades may capture something important that the SHSAT fails to capture: long-term motivation.

Based on his current plan, Mayor de Blasio seems to be pointed in the right direction. His focus on middle school grades and the Discovery Program, which admits students with scores below the cutoff, is well supported by the data.

In the cohort I looked at, five of the eight schools admitted some students with scores below the cutoff. The sample sizes were too small at four of them to make meaningful comparisons with regularly admitted students. But at Brooklyn Technical High School, the performance of the 35 Discovery Program students was equal to that of other students. Freshman year grade point averages for the two groups were essentially identical: 86.6 versus 86.7.

My research leads me to believe that it might be reasonable to admit a certain percentage of the students with extremely high SHSAT scores — over 600, where the exam is a good predictor — and admit the remainder using a combined index of seventh-grade GPA and SHSAT scores.
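A rough sketch of how such a two-stage rule could work follows; the equal weights and the exact cutoff here are illustrative placeholders, not the formula from my analysis:

```python
import numpy as np

def admit(shsat, gpa7, seats, cutoff=600):
    """Two-stage sketch: admit top SHSAT scorers outright, then fill the
    remaining seats by an equal-weight composite of standardized
    seventh-grade GPA and SHSAT score. Weights and cutoff are illustrative."""
    z = lambda x: (x - x.mean()) / x.std()
    composite = 0.5 * z(gpa7) + 0.5 * z(shsat)

    idx = np.arange(len(shsat))
    direct = idx[shsat > cutoff]                 # stage 1: extremely high scores
    rest = idx[shsat <= cutoff]
    ranked = rest[np.argsort(-composite[rest])]  # stage 2: composite ranking
    return np.concatenate([direct, ranked])[:seats]
```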

When I used that formula to simulate admissions, diversity increased somewhat. An additional 40 black students, 209 Hispanic students, and 205 white students would have been admitted, as well as an additional 716 girls. It’s worth pointing out that in my simulation, Asian students would still constitute the largest segment of students (49 percent) and would be admitted in numbers far exceeding their proportion of applicants.

Because middle school grades are better than test scores at predicting high school achievement, their use in the admissions process should not in any way dilute the quality of the admitted class, and could not be seen as discriminating against Asian students.

The success of the Discovery students should allay some of the concerns about whether students with SHSAT scores below the cutoffs can succeed at the specialized schools. There is no guarantee that similar results would be achieved in an expanded Discovery Program. But this finding certainly warrants larger-scale trials.

With consideration of additional criteria, it may be possible to select a group of students who will be more representative of the community the school system serves — and the pool of students who apply — without sacrificing the quality for which New York City’s specialized high schools are so justifiably famous.

Jon Taylor is a research analyst at Hunter College analyzing student success and retention.