We’ve discussed student evaluations of teachers here before, focusing on the various problems associated with them. Yet the picture may be more complicated. Elizabeth Barre, assistant director of Rice University’s Center for Teaching Excellence, recently posted about her “deep dive” into the voluminous research on student evaluations—research which is typically left undisturbed by news reporters writing about the “latest studies.” Here is what she reports as “the six most surprising insights I took away from the formal research literature on student evaluations”:
- Yes, there are studies that have shown no correlation (or even inverse correlations) between the results of student evaluations and student learning. Yet there are just as many, and in fact many more, that show just the opposite.
- As with all social science, this research question is incredibly complex. And insofar as the research literature reflects this complexity, there are few straightforward answers to any questions. If you read anything that suggests otherwise (in either direction), be suspicious.
- Despite this complexity, there is wide agreement that a number of independent factors, easily but rarely controlled for, will bias the numerical results of an evaluation. These include, but are not limited to, student motivation, student effort, class size, and discipline (note that gender, grades, and workload are NOT included in this list).
- Even when we control for these known biases, the relationship between scores and student learning is not 1 to 1. Most studies have found correlations of around .5. This is a relatively strong positive correlation in the social sciences, but it is important to understand that it means there are still many factors influencing the outcome that we don’t yet understand. Put differently, student evaluations are a useful, but ultimately imperfect, measure of teaching effectiveness.
- Despite this recognition, we have not yet been able to find an alternative measure of teaching effectiveness that correlates as strongly with student learning. In other words, student evaluations may be imperfect measures, but they are also our best measures.
- Finally, if scholars of evaluations agree on anything, they agree that however useful student evaluations might be, they will be made more useful when used in conjunction with other measures of teaching effectiveness.
The whole post is here. Following up on point #6, it would be interesting to hear what other measures of teaching effectiveness philosophy professors (and students) would like to see deployed.