Maybe, just maybe, if more of the comments on our student evaluations looked like the following, they’d be worth it:
would take again.
I just realized that rhymed
And now I’m feeling sublime.
They call me Bertrand Russell
You know, I’m all about the hustle
Sense data is my life, it’s also yours, this ain’t a puzzle.
I get recognized as “Ayer”,
and “A true fuckin’ player”
but those terms are interchangeable my bad, ’twas a tautological err
They sometimes call me Quine.
I got bars, I got lines.
I’ll eradicate your whole existence, (What existence?) it’s not in our space and time.
I’m known round here as Kripke,
The king of New York City
They tried to bind me to their laws, too bad their laws weren’t explained to me.
Aristotle, Plato, Socrates, Machiavelli, Mephistopheles
Metaphysics is our business. Our swagger’s in our Philosophies!
That’s a comment Daniel Harris (Hunter College) once reported receiving, in the comments on a previous post about student evaluations. Though individual comments on student evaluations of teaching (SET) can be amusing and sometimes helpful (along with occasionally offensive and frequently dull), there have long been doubts about how informative and useful SETs are. (See, for example, this amusing, if predictable, finding.)
A new meta-analysis, published in the September 2017 issue of Studies in Educational Evaluation, drives another nail into the coffin. It shows that previously marshaled evidence of a correlation between student learning and student evaluations of teachers was an artifact of studies with small sample sizes and of publication bias. Here’s the abstract:
Student evaluation of teaching (SET) ratings are used to evaluate faculty’s teaching effectiveness based on a widespread belief that students learn more from highly rated professors. The key evidence cited in support of this belief are meta-analyses of multisection studies showing small-to-moderate correlations between SET ratings and student achievement (e.g., Cohen, 1980, 1981; Feldman, 1989). We re-analyzed previously published meta-analyses of the multisection studies and found that their findings were an artifact of small sample sized studies and publication bias. Whereas the small sample sized studies showed large and moderate correlation, the large sample sized studies showed no or only minimal correlation between SET ratings and learning. Our up-to-date meta-analysis of all multisection studies revealed no significant correlations between the SET ratings and learning. These findings suggest that institutions focused on student learning and career success may want to abandon SET ratings as a measure of faculty’s teaching effectiveness.
In short, the study, by psychologists Bob Uttl and Carmela A. White of the University of British Columbia, and Daniela Wong Gonzalez of the University of Windsor, concludes that “ratings are unrelated to student learning.”
The students might as well be using the random course evaluation generator.
In a post here about two and a half years ago, I asked how philosophy professors thought they should be assessed by their students. I think the topic is worth revisiting. Professors can distribute their own custom course evaluation forms at any time throughout the term, should they care to, and sometimes departments can have a say over which questions are asked of their students in the official term-end SETs.
This will involve figuring out what we want to know from our students. We should have no great expectation that our custom questionnaires will do a better job of reporting on teacher quality in general than the ones studied. But we can ask more specific questions that get at student understanding and at their impressions of us and our courses. We can emphasize substantive feedback rather than a numerical grade.
So, what do you want to learn from your students about your teaching? And what’s a good way to do so?
And if you already distribute your own custom course evaluation, please consider sharing it here (either by linking to an online version of it, cutting and pasting the text, or emailing me the document).