Watch the movie, don’t just read the script: Teaching vs. curriculum
Fordham has produced “The Supplemental Curriculum Bazaar: Is What’s Online Any Good?” Worth reading!
Are popular materials offered on Teachers Pay Teachers and similar sites useful?
Not really, say the authors: the materials mostly suck. The study looked at English materials. One big concern was the lack of directions:
For example, one poorly rated writing task asks students to select five important events from their life and write an autobiography but offers little guidance beyond that.
Stacey Childress responds to Bazaar. She correctly cautions us to wait a minute. While she likes the idea of curriculum reviews—she provided seed funding to what became EdReports—she asks why teachers use this “low-quality” stuff. It’s because, she explains, teachers trust other teachers—not expert raters or principals.
So let’s dig in.
1. Why don’t teachers trust “the technocrats”?
One reason, which Robert Pondiscio points out in his most recent opus, is that “we” technocrats often lack humility.
Those of us—and I include myself—who seek more creation and use of high-quality curricula need to look in the mirror before we critique what individual teachers use and create.
The “Good Stuff”—those highly rated lesson plans? They don’t seem, when adopted by districts, to drive up student achievement.
Tom Kane’s recent study hit that point in math.
David Steiner’s recent essay reluctantly makes the same point more broadly. He writes:
I predict that in the coming months, we will see more such findings to the effect that new high-quality curricula aren’t achieving much of anything for our less prepared students—and the critics are already vocal.
The top-down approach to assigning expertly-designed-and-validated curricula hasn’t yet worked in real life. We haven’t exactly, as a field, copped to that failure. It doesn’t mean we shouldn’t try; just that we should admit where we are. If the stuff we say is “good” doesn’t actually drive student achievement, we should be really careful about saying somebody else’s stuff isn’t good.
Individual teachers may not know the data about a particular curriculum, but they do know that lots of “highly-rated things” have failed in their personal experience.
2. Engagement versus achievement
Tom Arnett chimes in with a recent paper for AEI. He argues that another reason teachers reject district-provided “high-quality curricula” is that they are often picking a different “Job To Be Done” than trying to vault achievement. Sometimes they just want to increase student engagement, or make their own teaching more enjoyable.
Tom is right. “The Job” many teachers are “hiring” curriculum supplements to do is student engagement. Not achievement. The Supplemental Curriculum Bazaar seems to reject that quest on its face; its authors aren’t even seeking to measure whether a lesson might increase engagement.
An analogy: As parents, we don’t only give kids healthy food (at least this parent doesn’t). We might say: “I want my kid to enjoy this museum. We just pulled into the parking lot. He’s hangry. Here are cheese crackers and a Capri Sun. Yum. Now he’s not hangry. We’re enjoying the museum.”
Curriculum “purists” essentially demand parents give that kid carrot sticks and organic apple juice, always, leading to a parking lot showdown. Those purists might tell American parents to be more like French parents. Just ignore the crying when it comes to food, and everything will work out, goes the French conventional wisdom.
I’m skeptical. Most teachers realize that when they download “Sight Word Scramble” (the fifth most popular item on TPT), it’s meant to be fun. They realize when they download the comic-book-illustrated “What if the World had 100 People?” (the fourth most popular lesson plan on TPT) that it appears more engaging than their district-approved social studies lesson.
Engagement is a fully legitimate goal! When reviewers see a movie with Jonah Hill or Seth Rogen, they know the Job To Be Done is stupid laughs—i.e., engagement. They have to evaluate whether the movie accomplishes that job. They can’t say, “Wow, really not even close to Citizen Kane.” Teachers want to know if material is good for its genre, not to be told that only one genre, achievement gain, is worthwhile.
Similarly, to address one of Bazaar’s rating criteria: when thousands of teachers repeatedly download lessons “without clear directions,” it may be because they’ve already got a workable set of directions in their heads.
New composers need cheat sheets for orchestration—the “clear directions” for things like pizzicato, overblowing, bowing at the bridge, flutter tongue. But experienced composers have that stuff in their heads already. Materials for experienced composers don’t need explicit directions, for the same reason.
Mike Petrilli and Amber Northern acknowledge both these points in the foreword to Bazaar:
They (teachers) may be finding value in these materials, using them to fill instructional gaps, meet the needs of both low and high achievers, foster student engagement, and save them time. They rarely use the materials as is.
But that message doesn’t quite get picked up. Bazaar got headlines like this from Hechinger Report: “Most English lessons on Teachers Pay Teachers and other sites are ‘mediocre’ or ‘not worth using,’ study finds.”
3. Watching movies versus reading scripts
Petrilli and Northern offer a great idea:
Just as the movie review site Rotten Tomatoes offers both critical evaluations and “audience scores” from viewers, these curriculum web sites should provide both experts’ opinions and information about the popularity of various lessons.
But one important distinction is worth noting.
The Rotten Tomatoes reviews are not written by people who read the script. They’ve watched the movie.
Consumer Reports does not rate a car solely by sitting inside it or studying its specs. Yes, they do that. But they also (and more importantly) actually drive the car.
Yet, typically, curriculum reviewers simply read lessons and imagine them in the mind’s eye. That is a very difficult way to gauge what happens in real life when a lesson is actually taught.
Hollywood and Broadway executives know this. Sure, they pay people to “evaluate scripts,” but that’s not the main way they decide what movies to greenlight.
I would urge curriculum reviewers to rate curricula by watching several teachers actually use them. They need to have the experience, as I have, of watching teachers do exactly what the lesson plan asks and seeing it suck.
I realize that watching lots of lessons sounds logistically implausible. But it’s not!
I worked at a large education organization in Africa, with several hundred schools in five countries. We created our own curriculum. Our original review process was to hire experts, such as department heads at ed schools, to examine it.
Yet when our team visited the actual schools, the most ambitious, “highest-rated-by-experts” lessons often fell flat with real-life teachers and students. Meanwhile, some of the simpler-seeming lessons actually led to better results.
Even though these schools operate with 1 percent of the per-pupil spending we have in the U.S., the organization’s CEO was willing to invest in creating a new qualitative observation system to improve the curriculum. We hired ten experienced local teachers across our five countries. Each traveled their country, visiting a different school each day. They’d observe class after class, sending detailed notes and ratings—not about what was in the lesson plan, but about what actually transpired in the classroom. We switched from inputs to outputs.
Little by little, we were able to grind our way to better lessons, both from teacher perspectives and from the achievement data. I have to believe that if that sort of feedback loop can be created in some of the poorest schools in Africa, back here in the U.S. we could figure out a way to provide EdReports and others with the money they’d need to have raters watch actual lessons across an array of schools and school types.
Then we’d have the chance for a “Tomatometer Expert” who is more trusted by teachers…because the feedback would be grounded in real-life outputs (how the class actually goes) rather than inputs.
More importantly, we’d create a path for a series of grind-it-out, long-term additive improvements—A/B testing of little things in lessons—that would perhaps allow us to one day see scholars publish studies of curriculum implementation that leads to huge gains for students.
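To make “A/B testing of little things in lessons” a bit more concrete, here is a minimal sketch, in Python, of the kind of comparison a curriculum team might run. Everything here is invented for illustration: the variant names lesson_a and lesson_b, the quiz scores, and the assumption that classrooms were randomly assigned to each version.

```python
# A minimal sketch of A/B testing one small lesson tweak.
# All names and numbers are invented for illustration; real scores
# would come from classrooms randomly assigned to each lesson variant.
from scipy import stats

# Hypothetical quiz scores (0-100) from classrooms using each variant.
lesson_a = [72, 68, 75, 80, 66, 74, 71, 69]  # original directions
lesson_b = [78, 82, 74, 85, 79, 77, 81, 76]  # revised directions

# Welch's t-test: is the difference in mean scores plausibly real?
t_stat, p_value = stats.ttest_ind(lesson_b, lesson_a, equal_var=False)

print(f"mean A = {sum(lesson_a) / len(lesson_a):.1f}")
print(f"mean B = {sum(lesson_b) / len(lesson_b):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

In practice you’d want many more classrooms per variant and true random assignment, but the loop itself (tweak a lesson, split classrooms, measure, keep the winner) is the grind-it-out process described above.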
Mike Goldstein disclosures: I am a volunteer board member for Match Education, which creates curriculum that is (highly) rated by EdReports. I am narrowly expressing my individual view here and speak for nobody else. I spoke briefly to the creators of Fordham’s Curriculum Bazaar study in its early stages. My kids do not eat enough vegetables.
Mike Goldstein is the founder of Match Education in Boston: a college prep charter school for low-income kids; an embedded Graduate School of Education; and a program to share best practices.