This summer has seen a series of guest posts by Elijah Millgram (Utah) on his new book, The Great Endarkenment: Philosophy for an Age of Hyperspecialization. One theme of the book is that there has been a steep increase in specialization that in some ways threatens knowledge. In the following post*, Millgram starts an exchange with Jerome Ravetz, author of Scientific Knowledge and Its Social Problems, among other works, about how specialization makes quality control in philosophy and other academic fields more difficult.
Keeping it Real (in Philosophy and Other Fields)
by Elijah Millgram and Jerome Ravetz
Millgram: Having posted on several themes from The Great Endarkenment, I’d like to wrap up by touching on the problem of quality assurance in specialized research. In other fields, it’s a matter of falsifying data; in philosophy, it’s rather a matter of merely going through the motions. I frequently find myself citing Jerome Ravetz’s Scientific Knowledge and Its Social Problems, so I thought I’d ask him to help me start a conversation about this.
I’ve talked to you in the past about quality control in research, which is an issue in philosophy in two ways. First, it comes up as a properly philosophical problem. Second, our failures on this front are eroding the practice of philosophy. While philosophers don’t fabricate data, one can’t help but notice the increasing volume and decreasing quality of our professional publications.
It’s struck me that you see this as in the first place a moral matter. I agree, but some moral issues are especially difficult because of other problems lurking in the background.
Scientific research drives specialization, but then specialization makes quality control harder. If outsiders could assess whether work in some specialized field was good, it would be visible when the field’s institutions weren’t doing their job, and research integrity wouldn’t be nearly the issue it’s become. However, you can’t apply standards you don’t understand to activities and results you don’t understand.
This means that once specialists stop doing the exacting work of enforcing their own standards locally, there’s no backup. Charlie Munger is enthusiastic about cash registers because they help retail employees be honest. The idea is that, without the level of monitoring cash registers provide, over time you’re actually eroding the integrity of your sales staff. Analogously, when outsiders can’t monitor the way a specialized discipline is running, it has no cash register; it can become corrupted very quickly.
If I’m right, the prior problem is cross-disciplinary assessment. We can’t just harvest metrics or assessments generated in-field—e.g., count publications or citations, or collect referee reports. If the field is becoming corrupted, they’ll tell us only that people are going through the motions. But outsiders can’t assess specialist activity as the specialists themselves do. We need to find a path between the horns of this dilemma.
I certainly can’t provide an answer to the dilemma, but I can try to help give it a shape. I start by observing the assumptions behind Hume’s classic separation of factual from value statements. These are that those making the statements are honest and competent, so that there is no problem of quality assessment in their acceptance and implementation. As soon as quality enters the picture, the distinction dissolves. For as knowledge is a social possession, its reproduction and implementation depend on trust, in many ways. If the agents prove to be untrustworthy in their claims, then the knowledge itself is vitiated, and the process and the agents all become, in their own ways, corrupt.
We note that this corruption need not be based on bad motivations. Everyone may be doing their best, yet be forced to act falsely in a pathological situation. This is the case when scientists or scholars act as quality-controllers and inevitably do so incompetently. Then epistemology and ethics are inseparable. Such considerations were totally foreign to two of the three founders of modern science, Descartes and Galileo. But Bacon had a fund of practical wisdom that the other two lacked. He identified the evils in the knowledge of his time, and described ‘vermiculate’ knowledge as prevalent. His response was twofold: methodology and morals. The former was worked out in great detail, and was very popular in the times and places where science was conceived as ‘inductive’. For the latter, he had only some aphorisms and private reflections, some very profound but none worked out.
If we are to begin to heal these social problems of scientific knowledge and prevent endarkenment, we will need to work our way out of the classical perfectionist paradigm. For this a useful start might be the paradoxical question: “What is the scientific explanation of why I should not cheat at doing science?”
I’ll append a proper response shortly, but this seems like a good point at which to open up the conversation to people following the blog; suggestions and dissent are both welcome.