Vetting and understanding research

Linda Darling-Hammond

This op-ed was first published in the Stanford Daily.

On April 18, an op-ed by Wendy Kopp, president of Teach for America, or TFA, criticized The Daily’s coverage of a study presented the previous week at the American Educational Research Association meetings in Montreal (“Considering the social responsibility of academia and journalism”). Her concern was both The Daily’s headline and article (“Study Raises Questions about Teach for America,” April 15) and the study itself, which I conducted with colleagues here at Stanford.

Kopp alleged that the study had not been independently reviewed (not true), that the analyses might have had small sample sizes, which were not published (not true — the large sample sizes were published in the appendix) and that the study was at odds with another study of TFA by Mathematica Policy Research, which endorsed the program’s effectiveness (also not true, as described below).

Much larger than the Mathematica study (which looked for a single year at only 41 TFA teachers and about 57 other teachers spread across six different cities), our study looked at achievement gains on six tests over a six-year period for over 132,000 students taught by more than 4,400 teachers in Houston, Texas.

The study found that certified teachers are more effective than uncertified teachers in producing student achievement gains. Uncertified TFA teachers and others were less effective than fully prepared and certified teachers of comparable experience levels in similar schools.

This is not a very surprising finding. Nor should it be threatening to an organization that aims to serve inner city children well and has gone to some lengths to try to learn how to do so. It also is not a wholesale condemnation of TFA, whose recruits generally became certified over two or three years of teaching in Houston.

We were able to confirm a key finding of the Mathematica study and another study conducted by researchers at the Hoover Institution: Compared to other similarly experienced teachers in similar schools serving low-income and minority students, at least in some years and on some tests, TFA teachers were about as effective as other teachers in teaching reading and (on one test) a bit more effective in teaching math. Like the other studies, we found that this favorable comparison was driven mostly by the other underprepared teachers who predominate in these schools. In fact, in some years TFA teachers were more likely to be certified than other teachers in the same or similar schools.

However, we found that the positive results for TFA did not hold up in all years on all tests. When Houston's teachers as a whole were better prepared in later years, the comparison to TFA recruits became less favorable. Overall, uncertified TFA recruits had a significant negative effect on achievement on five of the six tests. Like the other studies, we also found that although TFA teachers became more effective once they became certified, virtually all of them left within three years, providing few Houston students with the benefits of their emerging competence.

It’s important to look beyond specific programs to ask the bigger policy question: Rather than pitting under-prepared teachers against others in comparisons of effectiveness, how can policies provide well-qualified teachers to low-income students of color? Urban districts that have staffed all their schools with well-prepared teachers have addressed salaries and working conditions as well as preparation and induction to ensure that teachers will be prepared to teach effectively and will want to stay in teaching. While a band-aid on a bleeding sore is helpful in a crisis, healing the wounds of inequality and poverty is also a policy problem worth solving.