Mini-Heap


Recent links…

  1. “Down one path is understanding the humanities foremost as knowledge work… Down the other path is understanding the humanities as a kind of pure activism committed to rejecting the values that govern institutional and civic credibility” — Aaron Hanlon (Colby) on the “credibility crisis” facing the humanities
  2. “We all must take a more active role as consumers in how these technologies are developed” — novelist-professor Sam Lipsyte (Columbia) writes amusingly about his trip to Vegas to hear a philosopher & a sex-technologist talk about sex robots, and maybe try out the tech himself
  3. Socrates said that studying philosophy was preparation for dying — at one (and only one) university in North America, undergraduates can cut out the middleman and just major in death
  4. “Many Russians are not to blame for the war or the atrocities. Living under a draconian authoritarian regime, they are manipulated by a powerful propaganda machine and they face harsh punishment if they protest” — still, sanctions that may harm them are justified, argue Avia Pasternak (UCL) & Zofia Stemplowska (Oxford)
  5. “To create a truly secure (and permanent) encryption method, we need a computational problem that’s hard enough to create a provably insurmountable barrier for adversaries.” How can we tell if such a problem exists? — the epistemology of the possibility of cryptography
  6. New: “In the CAVE: An Ethics Podcast”. It “explores some of the big ethical and philosophical issues facing contemporary societies” — from the Macquarie University Research Centre for Agency, Values and Ethics (CAVE), it’s on Spotify and other podcast platforms
  7. “Once you read these critiques, it becomes painfully obvious that the Dunning-Kruger effect is a statistical artifact. But to date, very few people know this fact” — a step-by-step explanation of the problem with one of psychology’s most famous findings

Mini-Heap posts usually appear when 7 or so new items accumulate in the Heap of Links, a collection of items from around the web that may be of interest to philosophers. Discussion welcome.

The Heap of Links consists partly of suggestions from readers; if you find something online that you think would be of interest to the philosophical community, please send it in for consideration for the Heap. Thanks!

8 Comments

Patrick S. O'Donnell
1 year ago

Re: 3, I vividly recall the course I took on “death and dying” as an undergraduate at UC Santa Barbara back in the early 1980s (it was offered in the Dept. of Religious Studies, although it was open to anyone). David Chidester was the instructor, and his charisma and conspicuous intelligence no doubt accounted in large measure for the extreme popularity of the course, which, at the time, surprised me (he was soon to leave for a teaching position in South Africa). Over twenty-five years later, that course moved me to put together a multidisciplinary compilation on the subject that some folks might find useful should they be doing research in this area: https://www.academia.edu/4843998/Death_and_Dying_bibliography

Guy
1 year ago

RE: 7 – I’m hoping someone can help me out. I read the article and I somewhat get the argument against the way Dunning/Kruger represented that data. But I can’t understand how that harms the larger point.

I can imagine a person, Smith, predicting how well she will do on some assessment, A. And I can imagine that her prediction is much higher than her actual performance on A. If Smith does this repeatedly over numerous iterations of A, then it seems safe to conclude that Smith overestimates her abilities.

I can also imagine finding this out about a large number of people, say 1,000. And I can also imagine finding an interesting pattern: the worse people perform on A, the more they tend to overestimate their performance.

I’m not saying that, in fact, this pattern exists. That’s an empirical matter. But I am saying that it seems perfectly sensible to me. I don’t see how what I’ve described is somehow saying the same thing twice (comparing X to X, as the article suggests). In other words, I don’t see how, in merely looking for that pattern in the way I’ve described, I have thereby begun a search for a statistical artifact. Am I wrong? I suspect I’m missing something simple here, or I’m interpreting Dunning/Kruger’s critics as claiming more than they are.

Can anyone shed some light or point me to a better source (I looked at a couple of the cites mentioned in 7 already)?

durval
Reply to Guy
1 year ago

I’m no expert by any means, but I think I understand why you’re puzzled.

What ‘7’ says is that Dunning/Kruger made a mistake (placing the same variable on both sides of the equation) in their manipulation of the data to generate the famous diagram. If this mistake is corrected, the resulting diagram fails to show the famous effect (as shown by the cited Nuhfer study).

And it also says that if you take completely random data, apply the same mistaken manipulation, and generate a diagram, it will show the same Dunning/Kruger effect; this shows the effect comes not from the data but from its (again, mistaken) manipulation.

This, according to ‘7’, is pure math/statistics and has nothing to do with the apparently very intuitive and “sensible” nature of the problem and its interpretation, which is what tends to get people hung up on the Dunning/Kruger “rhetoric” and to believe there’s actually an effect where there’s only statistical malpractice.

This is my interpretation of what ‘7’ says; I do not know whether its claims are true or not. Anyway, its claim about getting a similar effect from entirely random data (which for me would be the strongest evidence that the effect is a statistical artifact) should be easy enough to check myself, given a graphing spreadsheet and an afternoon to spend. If I ever get around to it, I will post my results here.
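I haven’t actually run it yet, but here is roughly the check I have in mind, sketched in Python/numpy rather than a spreadsheet (just a toy illustration; the numbers and variable names are mine, not the article’s):

```python
import numpy as np

# Draw actual scores and self-assessments completely independently,
# so by construction there is no real relationship between the two.
rng = np.random.default_rng(0)
n = 1000
actual = rng.uniform(0, 100, n)     # actual test performance
self_est = rng.uniform(0, 100, n)   # self-assessed performance, pure noise

# Convert both to percentile ranks, as in the famous diagram.
actual_pct = actual.argsort().argsort() / (n - 1) * 100
self_pct = self_est.argsort().argsort() / (n - 1) * 100

# Bin people into quartiles by ACTUAL performance and average both ranks.
quartile = np.digitize(actual_pct, [25, 50, 75])
for q in range(4):
    m = quartile == q
    print(f"Quartile {q + 1}: actual percentile ~ {actual_pct[m].mean():.1f}, "
          f"self-assessed percentile ~ {self_pct[m].mean():.1f}")
```

If the article is right, the bottom quartile should come out looking like big overestimators and the top quartile like underestimators (the self-assessed rank should hover around 50 in every bin), even though the self-assessments are random by construction.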

Guy
Reply to durval
1 year ago

Thanks, durval. I appreciate you taking the time.

Based on what you’re saying, I think I’m misinterpreting the critics, taking them to be saying something stronger than they are: that the effect itself is conceptually redundant or something like that. I hear you saying that it’s merely a method-of-measurement problem: there may very well be such an effect out there, but you’d need to analyze the data differently in order to detect it. If that’s all they’re saying, then okay.

I’m a little jarred by all this because in one class period of one of my courses, I do make heavy weather of the DK effect (though I don’t show the infamous graph); now I’m wondering if I should ditch it altogether.

durval
Reply to Guy
1 year ago

You are welcome, Guy. I’m also puzzled by that, and appreciate the dialogue.

Re: “there may very well be such an effect out there, but you’d need to analyze the data differently in order to detect it”, that’s exactly how I understand ‘7’: the author isn’t claiming the effect doesn’t exist, only attacking the method D/K used to supposedly detect/prove it.

Re: portraying the D/K effect in your classes, if I were in your shoes I would not go so far as ditching it; perhaps mentioning that there’s controversy about the statistical methods would make it more balanced and serve your students better.

Víctor
Reply to durval
1 year ago

I actually do take the author to say not only that the effect was found due to statistical malpractice, but also that there’s no Dunning-Kruger effect.

At the end of the article, the author is reviewing one of the critical papers on the Dunning-Kruger effect. There, Blair Fix says: “If the Dunning-Kruger effect were present, it would show up in Figure 11 as a downward trend in the data (similar to the trend in Figure 7). Such a trend would indicate that unskilled people overestimate their ability, and that this overestimate decreases with skill. Looking at Figure 11, there is no hint of a trend. Instead, the average assessment error (indicated by the green bubbles) hovers around zero. In other words, assessment bias is trivially small.” He takes this to be better research since they measure skill independently, using academic ranks.

Thomas
Reply to Víctor
1 year ago

I have to admit I don’t know the underlying literature, but I think it might be rash to call the original Kruger-Dunning paper a “mistake” or “malpractice” based solely on this blog post. The point made in that post about the randomized data is already acknowledged in the original paper: “Of course, this overestimation could be taken as a mathematical verity. If one has a low score, one has a better chance of overestimating one’s performance than underestimating it.” As they go on to say, “the real question in these studies is how much those who scored poorly would be miscalibrated with respect to their performance” (1123). (I didn’t read on to see how well they address this worry, but it’s not like they’re hiding the ball.)

Meanwhile, as far as I can tell, one of the main things going on in the contrast between the KD results and the Nuhfer et al. results is that the latter get subjects to assess how competent they are in absolute terms (to put it crudely, they have to guess their scores) whereas in the former subjects make a relative assessment (they guess the percentile of their scores). It could obviously be the case that people who do badly generally know that they got low scores but think they still did better than average. Of course, it’s a legitimate question which of these forms of self-assessment is more interesting in any given context, but focussing on relative assessment isn’t “malpractice”.
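To illustrate with a purely made-up toy model (nothing from these papers; just numbers I’m inventing to show that the two kinds of self-assessment can come apart): suppose everyone guesses their raw score quite accurately, but everyone also places themselves around the 65th percentile.

```python
import numpy as np

# Hypothetical toy model: accurate absolute self-assessment,
# "better than average" relative self-assessment.
rng = np.random.default_rng(2)
n = 1000
score = rng.uniform(0, 100, n)
guessed_score = np.clip(score + rng.normal(0, 5, n), 0, 100)   # close to the truth
guessed_pct = np.clip(rng.normal(65, 10, n), 0, 100)           # everyone thinks "above average"

actual_pct = score.argsort().argsort() / (n - 1) * 100
quartile = np.digitize(actual_pct, [25, 50, 75])
for q in range(4):
    m = quartile == q
    print(f"Q{q + 1}: score error ~ {(guessed_score - score)[m].mean():.1f}, "
          f"percentile error ~ {(guessed_pct - actual_pct)[m].mean():.1f}")
```

On this model the score errors stay near zero in every quartile while the percentile errors look exactly like the familiar Kruger-Dunning picture, which is why it matters which kind of self-assessment a study elicits.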

durval
Reply to Thomas
1 year ago

Thomas, the author’s malpractice/mistake claim against D-K rests on autocorrelation, which isn’t mentioned in the D-K paper.

If this is true, I think it’s fair to call the D-K effect the result of ‘statistical malpractice’.

But I agree with you that this should not be concluded based only on the author’s claim. More verification is needed.
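To make the autocorrelation point concrete, here is the smallest numeric check I can think of (again just a sketch in numpy, mine rather than anything from the article or the D-K paper): even when actual scores and self-assessments are completely independent, the apparent overestimation (self-assessment minus actual score) is strongly correlated with the actual score, simply because the actual score appears on both sides of the comparison.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 100, 10_000)   # actual performance
y = rng.uniform(0, 100, 10_000)   # self-assessment, independent of x

print(np.corrcoef(x, y)[0, 1])        # ~0.0: no genuine relationship
print(np.corrcoef(x, y - x)[0, 1])    # ~-0.71: "low scorers overestimate more"
```

That second correlation comes out around -1/sqrt(2) for any two independent variables with equal variance, so by itself it says nothing about anyone’s self-knowledge.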