I am generally leery of statements in education which begin "It's a fact…" I am even more so when these facts overwhelmingly support the organization making the claim, in a sort of self-congratulation (though this is extraordinarily common). So I turned to the recent commentary on the effectiveness of the Boettcher Teachers program in these pages with some caution.
IT’S A FACT: BOETTCHER TEACHERS PROGRAM GETS BETTER RESULTS
[…] Students in classrooms with Boettcher Teachers are scoring better on CSAP and district Measures of Academic Progress (MAP) tests than their non-Boettcher prepared peers, according to the evaluation, conducted by The Evaluation Center and the University of Colorado Denver School of Education and Human Development.
The text is far more modest than the headline (which implies the program produces the results), and unlike some others it does not quite claim causation when it witnesses correlation. But the claim here, even couched, is pretty clear: having a Boettcher teacher significantly improves student academic growth (by a factor of two or more, if one believes the chart).
Now I believe the Boettcher teachers program to be pretty good. I've met several Boettcher fellows, have been generally impressed, and I have seen first-hand the considerable impact of one teacher in particular. What I don't know is whether this teacher (and others) would have had equal results without the Boettcher training.
And while I am not familiar with the Evaluation Center, I would assume they understand this area in great depth. So what I really don’t get is why smart people from different areas lend their substantial work and reputation to such thin claims, especially when it would be easy to do a whole lot better.
To start, if you are going to tout the findings of a recent evaluation, release the whole evaluation. Sunlight works wonders: link to the report, or put it on your website. The Boettcher evaluation itself is available neither at the Boettcher site nor at The Evaluation Center. Nor are evaluations from earlier years available. Want us to believe? Let us read it directly.
Secondly, dig a little deeper, please. Were there any other variables considered? The most obvious is poverty, and at least a first cut of this would be easy to do — all the FRL numbers for these schools are available. Plug them in and see: are Boettcher teachers instructing a different group of kids than the average district teacher? That in itself could explain the difference, and at first blush it's a more plausible hypothesis, since the correlation between poverty and achievement has long been established. Or are there other factors that should be considered, from the very simple (teacher age) to the more complex (quality of educational institution), that might make a difference? Do we care so little that we don't even ask?
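To make concrete how cheap this first cut would be: a hypothetical sketch of the check described above, comparing the poverty profile (percent of students on free or reduced lunch) of Boettcher-staffed schools against the rest. The school names and FRL percentages below are invented for illustration; the real figures would come from the district's published FRL data.

```python
# First-cut confounder check: do schools with Boettcher teachers serve
# a different poverty population than other district schools?
# All data below is made up for the sake of the sketch.
from statistics import mean

# (school, percent free/reduced lunch, has a Boettcher teacher)
schools = [
    ("School A", 82.0, True),
    ("School B", 35.0, False),
    ("School C", 71.0, True),
    ("School D", 48.0, False),
    ("School E", 90.0, True),
    ("School F", 22.0, False),
]

boettcher_frl = mean(p for _, p, b in schools if b)
other_frl = mean(p for _, p, b in schools if not b)

print(f"Boettcher schools, mean FRL: {boettcher_frl:.1f}%")
print(f"Other schools, mean FRL:     {other_frl:.1f}%")
# A large gap either way would mean raw CSAP/MAP comparisons are
# confounded by poverty and need adjustment before crediting (or
# discounting) the program.
```

This is the crudest possible version — a proper evaluation would adjust student-level growth scores for FRL status rather than compare school averages — but even this would answer the basic question of whether the two groups of teachers are teaching comparable kids.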
The Boettcher program is highly competitive, with a rigorous selection process. Should an evaluation not account for any difference in ability between Boettcher teachers and their peers? Is it the people the program attracts that provide the difference, or is it the program itself? And is the right comparison group non-Boettcher teachers generally, or could an evaluation compare against other residency or teacher preparation programs (TFA, NTP or others)? Should an evaluation, particularly one by a prestigious third party, not be more rigorous?
The Boettcher annual report is not the most forthcoming document either, and there is no information on specific funding for the program (which is a collaboration, so it is not clear who should be reporting on it, if anyone). A reasonable assumption is annual costs of $25k per fellow for the two-year program, so with overhead the program probably costs in excess of $1M per year. If so, why do a $0.25 evaluation?
Because if it is really this simple, then we should take the Boettcher training program and expand it, a lot. Other activities and efforts (maybe some from the School of Ed) should probably be shut down and the resources moved. But few things are this simple. It’s one thing not to know, it’s another not to ask, but it’s a whole different kettle of fish to not ask and yet say you know.
Facts is facts, but without more transparency and better data, we are pretty much right where we started.