Should We Stop Interviewing Job Candidates?


Recent research suggests that job interviews not only provide potential employers with irrelevant information, but actually “undercut… the impact of other, more valuable information about interviewees,” according to Jason Dana (Yale), in a recent column in The New York Times. How, if at all, should the hiring of philosophers be affected by these findings?

Here is one experiment that Dana ran:

[W]e had student subjects interview other students and then predict their grade point averages for the following semester. The prediction was to be based on the interview, the student’s course schedule and his or her past G.P.A. (We explained that past G.P.A. was historically the best predictor of future grades at their school.) In addition to predicting the G.P.A. of the interviewee, our subjects also predicted the performance of a student they did not meet, based only on that student’s course schedule and past G.P.A.

In the end, our subjects’ G.P.A. predictions were significantly more accurate for the students they did not meet. The interviews had been counterproductive.

It gets worse. Unbeknown to our subjects, we had instructed some of the interviewees to respond randomly to their questions. Though many of our interviewers were allowed to ask any questions they wanted, some were told to ask only yes/no or this/that questions. In half of these interviews, the interviewees were instructed to answer honestly. But in the other half, the interviewees were instructed to answer randomly. Specifically, they were told to note the first letter of each of the last two words of any question, and to see which category, A-M or N-Z, each letter fell into. If both letters were in the same category, the interviewee answered “yes” or took the “this” option; if the letters were in different categories, the interviewee answered “no” or took the “that” option.

Strikingly, not one interviewer reported noticing that he or she was conducting a random interview. More striking still, the students who conducted random interviews rated the degree to which they “got to know” the interviewee slightly higher on average than those who conducted honest interviews.
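
To make the random-response rule concrete, here is a minimal sketch in Python. It simply encodes the procedure described in the quoted passage; the function name and the example question are illustrative inventions, not taken from the study.

```python
def random_response(question: str, options=("yes", "no")) -> str:
    """Sketch of the random-answer rule described above: take the first letter
    of each of the question's last two words, classify each letter as A-M or
    N-Z, and give one answer if both letters fall in the same half of the
    alphabet and the other answer if they do not."""
    words = [w.strip(".,;:?!\"'") for w in question.split()]
    words = [w for w in words if w]
    first_letters = [w[0].upper() for w in words[-2:]]
    halves = {letter <= "M" for letter in first_letters}  # True = A-M, False = N-Z
    return options[0] if len(halves) == 1 else options[1]

# Made-up example: the last two words are "or" (N-Z) and "lectures" (A-M),
# so the letters fall in different halves and the interviewee takes "that".
print(random_response("Do you prefer teaching seminars or lectures?", ("this", "that")))
```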

Dana writes that “The key psychological insight here is that people have no trouble turning any information into a coherent narrative,” and that this is true even when the information is incorrect. Furthermore, Dana explains how unwilling people are to accept evidence of this sort.

How should this affect hiring in philosophy? Some departments already forgo official preliminary conference or video interviews and skip right to campus visits. What are the reasons, if any, for sticking with them? And should there be less interviewing during campus visits, too? Or perhaps none at all?

Related.

49 Comments
Fritz Warfield
7 years ago

It’s obvious, isn’t it? If it’s counterproductive for students to interview students in an attempt to predict future grades, then clearly no philosophy department should interview anyone for any job.

Here’s one not uncommon example of what I, cautiously of course in light of the noted studies, take to be relevant information I have learned doing convention interviews over the years. For a position advertised as “AOS: X” the question “what sort of majors level class would you teach in area X” has received answers on different occasions including “I’ve never thought about teaching a class of that sort,” “I would not want to teach in that area,” and “I’d start with Descartes” [where the near maximal disconnect between Descartes and the area in question was, ahem, clear]. I took these answers to be disqualifying of the interviewee. Perhaps I should have been more open to pursuing a candidate for a position who indicated in discussion that he “would not want to teach in [the advertised area]”? Or perhaps it’s best not to ask about such things so as to avoid the counterproductive information that one receives.

It’s possible, I know, that what I have learned that is relevant during interviews is outweighed by misleading information that has perhaps played an even larger role in evaluating candidates in part based on interviews. That student on student interviews are counterproductive when predicting grades doesn’t add much to the existing case for this possibility.

Tim
7 years ago

Two comments:
(1) The fact that you mention “convention interviews” lets me infer that you have (probably) not been on a search committee in the last five years. Let me then inform you that things are different now. If you want to hire someone in area X, you can restrict your search to people who have multiple publications in area X and you will still have a boatload of candidates. So the particular problem you raise is no longer a problem.

(2) But I do realize that you raised that particular problem as an example of something more general. And that general point seems to be this: “Well, yeah, undergrads are bad at this. But by god I’m a PHILOSOPHER!! I’m much smarter and better at avoiding implicit biases and bad inferences than a mere UNDERGRAD!”

My response to this is simple: no you’re not. It’s fun to think that as philosophers we have some magical skill at not falling into these traps, but we don’t. The sooner we’re honest about that the better for our profession.

Tim
Reply to  Tim
7 years ago

This was meant as a response to Fritz Warfield’s comment above. Obviously I failed to hit the “Reply” button. Oops.

Fritz Warfield
Reply to  Tim
7 years ago

My department, Notre Dame, searches nearly every year. I am regularly a part of our searching and have been for over 20 years. This includes interviewing from back when we did preliminary interviews at the Eastern and interviewing, as recently as this year, when we did initial interviews via video-conference. I’ve interviewed as part of open searches and as part of targeted area searches — for example, this year we did separate searches in “ethics” and “philosophy of biology” and I was one who interviewed ethics candidates.

I don’t know what your experience is interviewing. Perhaps you have many things to teach me. Do you really think one of those things you can teach me is that it’s possible to do a targeted area search that will attract many candidates with publications in the advertised area? Wow, I hadn’t noticed this in reading thousands of applications and interviewing hundreds of candidates. Thanks for the help.

What you say is my “general point” is not even a point I’m out to make. I do not at all believe that professional philosophers are somehow immune to various effects because we are not undergraduates. One point was that the inference from the counter-productivity discussed in this experiment to the conclusion that philosophers should not do the sort of interviewing we commonly do is a bit thin and doesn’t, so far as I can tell, add to already available information on that topic. You might notice that this claim is not equivalent to the claim that philosophers are immune from anything.

Lastly, please tell me: when I voted to reject a candidate for a position advertised in a specific area who said he was not interested in teaching a course in that area, are you classifying that as “implicit bias” on my part or as a “bad inference” that I drew?
Thanks again for the help.

Tim
Tim
Reply to  Fritz Warfield
7 years ago

On your first question: here’s how you attract candidates with multiple publications to apply for your job, no matter the AOS: you advertise it. If you are telling me that Notre Dame advertised a job and no candidates with publications in the job’s AOS applied, then I am going to go ahead and say I don’t believe you. I call shenanigans.

I *do* believe that if you first do other sorts of screening, you may end up, after that screening, with nobody who has publications in the advertised AOS remaining in your pile. If you are finding yourself in that position, then you need to revisit more of the details of your search procedures than just whether you’re going to do interviews.

As for whether the inference you drew was a good one I’ll say this: Candidates placed in unnatural positions will sometimes give infelicitous answers to seemingly easy questions. Drawing very nearly *any* inference from what you think you learned about someone in an interview is unwise.

Fritz Warfield
Reply to  Tim
7 years ago

Tim,
You write “If you are telling me that Notre Dame advertised a job and no candidates with publications in the job’s AOS applied, then I am going to go ahead and say I don’t believe you. I call shenanigans.”

I’ve given my real name, my real employer, and a great many people can verify my history of involvement in job searches conducted by Notre Dame philosophy. But I guess you can “call shenanigans” if you want.

I of course did not say “no candidates with publications in the job’s AOS applied” — where did you get that idea? As I said, I’ve been on search committees for over 20 years in a large department that searches every year. I know what the applicant pool looks like. Why are you just making this stuff up? I did not say anything about the publication record of the candidates who gave the answers I described nor about the applicant pool generally. Apparently you think I did. I’ll now say this: like any major department, when Notre Dame interviews applicants for assistant professor positions, the overwhelming majority have published in the advertised AOS. For example, in this year’s searches, every interviewed assistant professor candidate had multiple publications in the advertised AOS. A very high percentage of the 250+ *applicants* had publications in the AOS.
In my original comment, I gave examples from past years of answers to questions about *teaching* in the advertised AOS that I took to be disqualifying. Those answers came, in every case, from candidates with publications in the advertised AOS.

On the second point – I don’t at all agree that asking a candidate interviewing for a position advertised as “AOS: X” what sort of majors level class they would teach on X thereby puts the candidate, as you say, in “an unnatural position” … but I admit that I’ve offered no defense of my position.

mhl
7 years ago

A year ago or so, dailynous.com had a post about job announcements, and how a particular application review process was partially anonymized to keep the reviewer from seeing the applicant’s PhD-granting school. This was praised in the comments section, where commenters said that the information on paper (such as what school one attended) would bias the reviewers and not give quality applicants from programs outside the top ten a chance to impress them in an interview.

Now it seems that in a slightly different context, what is on paper (and only what is on paper) matters.

Can we just own up and say we aren’t very good at this – slightly better than random, maybe – and that’s OK?

pike
7 years ago

Perhaps there is one thing that can be measured by an interview: how much the candidate prepared for it. That is absolutely no indicator of the candidate’s future success, but it is a pretty okay indicator of how much they would like to have the job. Being able to talk about teaching X for a position advertised for X, or being able to talk about the school’s mission and teaching philosophy, means that they at least made the effort to look these things up.

Candidate
Reply to  pike
7 years ago

But not being very good at talking about those things in an interview setting is not an indicator of the candidate’s lack of interest either.

Candidate
Reply to  pike
7 years ago

I think it’s important that interview committees do not share Pike’s assumptions. Here’s why.
Suppose two candidates interview for a position with AOS X. Suppose further that both candidates are asked how they would teach X, as well as about the school’s boilerplate mission and teaching philosophy. Suppose Candidate 1 (C1) provides well thought out answers, and Candidate 2 (C2) does not. Pike’s theory is that C1 prepared more for the job and therefore wants the job more than C2 does. But here are some alternative theories.

C2 teaches a 3/3 load; C1 lives with his parents and has no teaching requirements. C1 therefore had more time to prepare for these questions. In equal circumstances, C2 would outperform C1, but circumstances are not equal. C2 also wants the job more than C1 (whatever that means). But Pike just made a hire based on the amount of responsibilities the candidates had outside of interviewing for a specific job.

Alternatively, C1 has been on the market before and has received exactly the same questions before. Therefore C1 has had practice answering exactly those questions in a live interview setting. But this is C2’s first interview. He has prepared answers to those questions, but he’s nervous and they don’t come out like he wanted. Pike just made a hire based on interviewing experience.

Alternatively, C2 just misunderstands what sort of answer counts as a good answer to boilerplate questions. C2 thinks he can provide an answer and then get a bit of Q&A. So he says an inviting one-liner, and the committee moves on to the next question. C2 has plenty of exciting things to say about the boilerplate mission but missed the chance to say them. Etc. Pike just made a hire based on how the candidates understand interviews.

Outside responsibilities, interview experience, understanding of interviews: these, I suspect, are no indicators of anything worth indicating.

David Wallace
7 years ago

I think the OP omits some relevant context from the article. Some salient quotes:

“Employers like to use free-form, unstructured interviews in an attempt to “get to know” a job candidate…”

“What can be done? One option is to structure interviews so that all candidates receive the same questions, a procedure that has been shown to make interviews more reliable and modestly more predictive of job success. Alternatively, you can use interviews to test job-related skills, rather than idly chatting or asking personal questions.”

“Realistically, unstructured interviews aren’t going away anytime soon. Until then, we should be humble about the likelihood that our impressions will provide a reliable guide to a candidate’s future performance.”

In other words, the article is saying that “free-form, unstructured interviews” are basically useless. At least at Oxford, interviews of both applicants and job candidates are pretty highly structured: an agreed list of questions in advance, asked to all candidates and with responsibility for asking and following up each line of questioning allocated to a specific member of the interview panel.

I don’t yet have enough experience of the US system to know if its interviews are in the “free-form” or “highly structured” category, but it seems pretty relevant to this discussion.

Brian Weatherson
Reply to  David Wallace
7 years ago

In any interview I’ve been part of in the US, it is much closer to the free-form category. I think this is bad, and if we interview at all, the interviews should be much more structured.

Shen-yi Liao
Reply to  Brian Weatherson
7 years ago

There’s a significant difference between the on-campus interview in the UK and the US. The UK ones tend to be highly structured because there are few interactions outside of the formal interview and the job-talk equivalent. The US ones tend to involve much informal interaction at various meals. Now, I’d love it if those interactions didn’t count as interviews and didn’t affect assessments at all, but I suspect they function much more like free-form interviews.

Todd
Reply to  David Wallace
7 years ago

At my regional state university here in the U.S., we have to come up with a standard set of questions well ahead of any video interviews; these questions must then be approved by HR. There is a similar procedure for the formal interview part of the on-campus visit as well, but obviously the on-campus visit allows for lots more unscripted questions and conversation. While the process of getting HR approval is tedious, and there are times when it seems like it would be really helpful to ask a question not on the list, I do think this is helpful in getting us closer to having apples-to-apples comparisons.

Though, I am also kind of sympathetic to the view that once we narrow it down to the top 20-30 candidates, we might do just as well picking names out of a hat.

CW
Reply to  David Wallace
7 years ago

Re David Wallace: At my US university, a regional state U, interviews are structured. Interview questions are not reviewed by HR, but search chairs are required to undergo some training before starting a search. This includes review of specific do’s and don’ts with respect to interview questions. All candidates are asked the same questions. Questions are allocated to specific search committee members. Follow-up questions are less structured. So far in my experience, they’ve been asked to draw out fuller responses to the structured questions.

jobseeker
Reply to  David Wallace
7 years ago

On the other side of the table, I had both more and less structured interviews this year (all in the US). I did not know about these kinds of considerations beforehand, and in what were probably the structured interviews I felt much more uncomfortable: the interviewers did not follow up where I thought they should have, and overall seemed much less interested in what I was saying, just because they weren’t allowed to interact freely.
I understand that structured interviews can be more objective. But perhaps it would be nice to tell candidates in advance that they are going to have a structured interview.

Marcus Arvan
7 years ago

This is not just one study. It is one of the most replicated findings in the field of Industrial-Organizational Psychology across several decades that algorithmic methods of hiring outperform “clinical methods” (i.e., interviews) across just about *every* predictand of job performance measured.

These findings have been replicated across many fields, including academia. For instance, a 2016 MIT study showed that candidates selected for academic jobs via an algorithmic model had a *30%* higher rate of getting tenure than those selected by “human judgment.”

https://www.insidehighered.com/news/2016/12/20/mit-professors-push-data-based-model-they-say-more-predictive-academics-future

This, again, is not an isolated finding. It has been replicated again and again across decades. Here is a passage from an abstract of one of the largest and most famous meta-analyses in the field: “Empirical comparisons of the accuracy of the two methods (136 studies over a wide range of predictands) show that the mechanical method is almost *invariably equal to or superior* to the clinical method: Common antiactuarial arguments are rebutted, possible causes of widespread resistance to the comparative research are offered, and policy implications of the statistical method’s superiority are discussed.”

http://psycnet.apa.org/journals/law/2/2/293/

John Jackson
Reply to  Marcus Arvan
7 years ago

Paul Meehl (your second link) had been making that argument at least since the 1950s, but not in regard to academic hiring (at least to my knowledge). It is not clear to me that the diagnoses he was discussing in psychology or psychiatry transfer to job hiring decisions and the kinds of things needed for “success” there. That it succeeds in the first situation does not mean it will succeed in the second. Nor does the fact that undergraduates have trouble predicting GPAs tell us much about hiring – in fact I think the suggestion that the initial study informs hiring decisions is bizarre.

As for the Higher Ed. article, it seems like it relies on a lot of qualitative judgments that are obscured by claims of it being an “objective” measure. What is a “top” journal? How do we weight one paper in a “top” journal vs. 3 in “second-tier” journals? Is a paper that is cited 10 times, but refuted or condemned each time it is cited, really “better” than one that is cited only 7 times but praised each time? What if the paper in the “top” journal is never cited but the paper in the “third-tier” journal is cited a lot? Aren’t these all qualitative judgments that must be made before the algorithm is run on the data? And, of course, this study is on only one aspect of a tenure decision, ignoring teaching, service, etc. How does it transfer to the proposition that we eliminate interviews for initial hires? And, of course, it is a tiny sample size in one field.

Finally, it seems as if the discussion is based on a false dilemma: it is either “completely objective statistics” or it is “free-form, uncontrolled impressionistic interviews.” The final decision can, of course, be based on both.

Robert A Gressis
Reply to  Marcus Arvan
7 years ago

Marcus, do you have any sense of how far this superiority of the mechanical method over the clinical method extends? Do we know only that it is superior for hiring decisions, or is it superior across more decisions, such as finding a mate; figuring out what kinds of restaurants, books, or movies I’m likeliest to enjoy; figuring out whom I should pick as my friends; etc.?

Marcus Arvan
Reply to  Robert A Gressis
7 years ago

Hi Rob: Unfortunately, I do not know the answer to those questions. I only know the research regarding the workplace, as this is my spouse’s area of specialization (she is a PhD candidate in a highly ranked Industrial-Organizational Psychology program). However, it would not surprise me one bit if data-driven methods worked in other domains too, given the very nature of data (which one can always mine for predictive relations to outcomes). Indeed, as I understand it, this is why social media sites (e.g., Facebook) and online advertisers use algorithms to predict which “news feed” items one is likely to like and which advertisements one is likely to respond to positively, given one’s online behavior.

Robert A Gressis
Reply to  Marcus Arvan
7 years ago

Then, would you imagine that marriages arranged via social media algorithms would work out better than the way we do things nowadays? Same too for picking a major, picking a dissertation topic, etc.?

Marcus Arvan
Reply to  Robert A Gressis
7 years ago

I would indeed *imagine* all of that–but whether any of it is actually true is a matter of what the data shows, and as I mentioned previously I just don’t know the science on those things! 🙂

Kenny Easwaran
Reply to  Robert A Gressis
6 years ago

There are many cultures that have argued that marriages arranged by negotiations among families work better than marriages arranged by in-person contact between the partners. I don’t know what the data actually show here, or whether one of the two processes has the possibility of better optimization. The growth in popularity of online dating sites suggests that a lot of people find it valuable to put more weight on the “on paper” features of an applicant early on than is traditional.

Michael Kremer
Reply to  Marcus Arvan
7 years ago

Marcus Arvan, your first study does not show what you said it shows. It’s about tenure decisions and correlation with future research productivity, and it has *nothing* to do with interviews at all. (And there are lots of issues with that study, not the least the idea that all we care about when we give someone tenure is their future research productivity.)

Marcus Arvan
Reply to  Michael Kremer
7 years ago

Michael: The examples I gave are merely intended to point to a much broader empirical literature that consistently shows the general superiority of data-driven models to “human judgment”, not just interviews.

Craig
Reply to  Michael Kremer
7 years ago

“For instance, a 2016 MIT study showed that candidates selected for academic jobs via an algorithmic model had a *30%* higher rate of getting tenure than those selected by ‘human judgment.’”

That’s not what the study showed, right? The study showed that model-selected profs had higher impact scores than non-model selected profs, but I don’t see anything about getting tenure.

I’m sympathetic to the idea that interviews should involve more data. But that assumes: 1) we have a good idea what counts as success, and 2) we have good quantitative ways to measure what matters for whatever counts as success. And I have a deep suspicion of the Industrial Ops people that they are so obsessed with the success of this method that they will artificially answer both of those questions so as to get a seat at the table. The citation of this study in this context does little to abate that suspicion.

Michael Kremer
Reply to  Craig
7 years ago

Craig: exactly. And the model-selected profs were selected at the tenure stage, not at the hiring stage.

Craig
Reply to  Michael Kremer
7 years ago

Indeed. And even if you liked the data, you might think that this same research project shows not that the data-driven model does it better but that the proponents of the data-driven model will be pushed to select poorly, since they will go for the easy-to-measure. I can easily imagine preferring a slightly less productive friendly colleague who gives me good feedback on my own work to a slightly more productive colleague who is a ghost in the department.

Margo
Reply to  Craig
7 years ago

Craig,
“Suspicions” can undermine the body of evidence of a field that has existed since WWI? I-O psychologists have been saying these things for decades (i.e., that unstructured interviews are useless, if not actively harmful, and that mechanical methods of combining “hiring predictor” variables are superior to clinical ones in predicting job performance); it’s merely that they have been largely ignored by organizations until now. So you think that every I-O psychologist who has found evidence on the uselessness of interviews has done so on mercenary grounds (i.e., to “get a seat at the table”)? Surely that doesn’t account for the many retired I-O psychology professors whose greatest annual income was only ever $70k. What a lousy reward to obtain for the cost of deliberately manipulating data and actively creating a pseudoscience, as you imply. Do you deny the legitimacy of other scientific fields as well? I suggest that you read a textbook on I-O psychology or do a little inquiry yourself using Google Scholar or Web of Science. It isn’t hard to find scores and scores of studies that support what Marcus is saying about the uselessness of unstructured interviews and how combining hiring predictors (e.g., publications, student ratings) via human judgment is inferior to doing so via empirical algorithmic methods.

Michael Kremer
Reply to  Marcus Arvan
7 years ago

Marcus, Your second example also seems not to be related to the topic at hand. The study concerns a comparison between two methods for evaluating a given set of data — using human judgment and consultation versus using algorithmic methods. This has nothing to do with interviews (which are a method for acquiring more data, not a method for evaluating existing data).

Marcus Arvan
Reply to  Michael Kremer
7 years ago

Yes, but see above. The rest of the standard hiring process in academia (judging CVs, research statements, teaching statements, and writing samples) does use clinical methods, not data-driven methods. This is why I shared those two studies. The problem isn’t just interviews: it is that our entire hiring process in academia privileges human judgers when the scientific evidence systematically indicates human judgers are inferior to more data-driven approaches.

Michael Kremer
Reply to  Marcus Arvan
7 years ago

Marcus, I’m sorry, I don’t see how the evidence you cite shows anything of the sort you’re claiming. Ultimately the data we use will rely on human judgers, else what will be the data you input to the algorithm — publications (subject to human review)? Impact factors of journals (determined by human decisions)? Course evaluations? Note that your second study *includes* among the data to be processed by the algorithmic processes “interviewer ratings” as the first example! So “interviews” aren’t part of the “clinical methods” that the algorithmic methods are supposed to be superior to in that study! (Similarly, grading writing samples could yield data. The study abstract also lists “self-descriptions” as possible data.)

In any case the question in the OP wasn’t whether to use data-driven methods (I really can’t imagine what such methods would look like, at least methods that I would consider remotely plausible) instead of “clinical” methods, but whether to include interviews or just use the other methods we use. You began your first post

Michael Kremer
Reply to  Michael Kremer
7 years ago

Sorry about the last bit which I meant to delete!

Laura Grams
7 years ago

Often the information we are trying to glean from an in-person interview is not the kind of thing that can be easily measured and predicted in another way. If I wanted to predict GPAs, why would an interview ever do that more accurately than information about past GPA and schedule difficulty? By contrast, if I want to predict whether a job candidate can communicate effectively with our students and is amiable with colleagues, I learn things from an in-person interview that I wouldn’t learn simply by examining data like past teaching evaluation scores or recommendation letters.

I have questions about some of the studies cited, too. One examined the performance of applicants to a top med school who already passed the first rounds of scrutiny. They are likely to have similar qualifications – top grades, high test scores – and because they’re all so well-qualified, the fact that one group selected via interview did not significantly outperform the next-best fifty among those who weren’t selected isn’t particularly surprising. Hiring in philosophy is similar. On paper we may have dozens of people expected to teach a terrific ethics course and publish interesting articles; perhaps some will eventually post higher numbers than others on some predictive analytics metric or other. Yet how does that help determine whose teaching would best fit the needs of a particular department and its students, or whose research plans and interests best suit its goals? Interviews tell us more about these things, among others. We could submit written Q&A, of course, but most of what we do transpires through some form of conversation, so why not have a conversation as part of the hiring?

Robert Gressis
Reply to  Laura Grams
7 years ago

“if I want to predict whether a job candidate can communicate effectively with our students and is amiable with colleagues, I learn things from an in-person interview that I wouldn’t learn simply by examining data like past teaching evaluation scores or recommendation letters.”
I think that’s precisely what Marcus is denying — even that stuff is better learned by examining data like past teaching evals, etc. I admit, I find it hard to believe, myself. But he’s very confident about it!

Brian Weatherson
Reply to  Robert Gressis
7 years ago

That’s not the relevant alternative to the status quo. The relevant alternative is to do what’s usual practice in the UK, and have (a large part of) the interview made up of standardised questions that are asked to every candidate. The theory (and arguably the evidence) is that having a common base of questions will make it easier to make fair comparisons between the candidates, especially about things like ability to communicate.

One could go further and argue that even standardised interviews are bad. But I don’t think the data fully supports that. I gather (though I’m not an expert on this) that the strongest evidence is against “free-flowing”, relatively unstructured interviews, of the type that are rare in the UK but common in the US.

Not Yet on the Job Market
Reply to  Brian Weatherson
7 years ago

Assuming interviews are still worthwhile, a base set of generally relevant (though perhaps customizable) questions seems best. But why not also inform the candidate in advance what at least some of these base-set questions will be? (Maybe this already happens, but it doesn’t seem so in many cases, judging by some of the discussions above.) I don’t see the harm in it, but I can see the good in it. Interviewees would more likely give answers that better reflect their considered views, helping out the less seasoned interviewees. And I’d think their considered views are what you *want* to judge. Maybe this’d help filter out noise from nerves, jitters, and interview inexperience. Maybe there’s a worry that candidates will come in coached and prepped or something. But interviews aren’t exams, I shouldn’t think; so I don’t see the problem. So, e.g., if every candidate for a particular job line knew they’d be asked about how they’d teach an upper division course in their AOS, you’d be less likely to get stammers and I’m-not-sures, and so on. And if you still did get an “I don’t want to teach this stuff”, like in Fritz’s case, you could be more confident that the best explanation for that answer is that, well, he/she doesn’t want to teach it; and so more confident that disqualification is appropriate (if your inclination is indeed disqualification in such a case).

Sara L. Uckelman
Reply to  Not Yet on the Job Market
7 years ago

I don’t know how it works in other UK universities, but on the two interview panels I’ve been on, the committee met and decided/assigned questions about 15 minutes before the first interview — so there wouldn’t really have been any time to let the candidates know the questions in advance. Now, one could argue that we should be better prepared/organized, but I think the reality is that this isn’t going to happen.

Not Yet on the Job Market
Reply to  Sara L. Uckelman
7 years ago

Sara, understood. Although, why wouldn’t your comment (no doubt based in solid experience) that “the reality is that this isn’t going to happen” also apply to pretty much any suggested improvements to interviewing procedures? It doesn’t seem too difficult to come up with a set of standard questions that the hiring committee knows will be asked (e.g., “Ok, well, we’re definitely going to ask each candidate how they’d go about teaching a course in their AOS…”), and then give these (or some of them) to candidates beforehand. If that really is too difficult to achieve (I’m not pointing at you; I’m just saying in general), then I guess I think pretty much any improvement in this area is too difficult. That’d be kind of a bummer.

Previous candidate
Reply to  Sara L. Uckelman
7 years ago

Having been to three UK job interviews as a candidate, I find the whole thing a bit useless to be honest. You can predict every question, and many of the questions each candidate will answer almost identically (especially about grants). I can’t think of anything I’ve been asked in an interview that couldn’t have been gleaned either from a CV, my cover letter, references, or sending a set of written questions. Now, if the point is to determine soft skills and it’s not the answers that matter, but the way they are delivered (communication, amicability, etc.) then fine. But this points to moving towards a more test-based model rather than interviews. At each of these places I’ve had to do a presentation plus Q&A. I can’t see how my communication skills and ability to think on my feet were not already obvious by the end of that session. If you want to see how well a person would fit in, have a lunch or coffee hour where people don’t talk about your presentation or work. Again, though, people will be on their best behaviour so I don’t see how you necessarily get a genuine idea of how well the person will fit in or how good a colleague they’ll be. I did get a job, by the way, so this is not sour grapes.

Marcus Arvan
Reply to  Robert Gressis
7 years ago

Rob: The quantitative methods are better because (A) candidates who are not normally amiable, etc., can “turn it on” for an interview (thus deceiving the hiring committee), and (B) interviews are known to systematically play into a wide variety of biases.

http://philosopherscocoon.typepad.com/blog/2015/02/more-news-on-the-interviews-are-worse-than-useless-front.html

Kenny Easwaran
Reply to  Robert Gressis
6 years ago

Another alternative that might work better is to still conduct interviews, but to use people not on the hiring committee, who write up their reports on the interview for the hiring committee. That way you still get the information from the interview, but the vividness of certain types of warmth and personality that often mislead people doesn’t get transmitted to the people who are actually making the decision.

Philip Kremer
7 years ago

One suggestion above: if interviews are done at all, the questions should be standardized. But in most interviews I’ve attended, many of the questions addressed highly specific issues arising out of the candidate’s writing sample and other publicly available work. “Here’s an objection to your argument for X, on pp. 14-16. How would you reply?” Is there a way to probe a candidate’s work in the same way with standardized questions?

Brian Weatherson
Reply to  Philip Kremer
7 years ago

The standardised interviews work best for searches on relatively specific subject matters. If you have that kind of search, you can have questions that are about the state of some debate that’s central to the area you’re searching in. And that will lead naturally to the candidate’s views and (assuming follow ups are allowed) you can grill them on the argument on pages 14-16 then.

It is, I think, harder to write standardised questions for a really open area search, though it would be interesting to hear from people who have experience doing this.

Philip Kremer
Reply to  Brian Weatherson
7 years ago

“The standardised interviews work best for searches on relatively specific subject matters. If you have that kind of search, you can have questions that are about the state of some debate that’s central to the area you’re searching in. ”

I like the idea, but still am unconvinced. Say you’re searching in Early Modern. You’re interviewing a Spinoza scholar, a Cavendish scholar, and a scholar of the second scholasticism of the 16th and 17th centuries. You might be hard pressed to come up with suitable questions that (1) are about the state of some debate that’s central to the area and (2) don’t unfairly advantage one of the candidates, e.g., the candidate working on stuff closest to the traditional and familiar centre of the area. (Maybe I’m displaying my lack of imagination here.) This might be especially hard if nobody in the department currently works in the area. I think that this problem might apply more or less across subdisciplines — maybe less so for narrower subdisciplines than broader ones.

Brian Weatherson
Reply to  Philip Kremer
7 years ago

I agree – there will be (realistic) cases where I can’t figure out the questions.

And I hadn’t been thinking enough about the point that common questions might create a bias towards the “traditional and familiar center”. That’s a really important counter consideration, and one I hadn’t been giving enough weight to.

Gordon
7 years ago

I only want to wade into this once, but a few thoughts based on my own interviewing experience and knowledge of data analytics:

1. We use structured interview questions in the initial screenings (always on Skype or comparable tech). We tell candidates this at the interview. They are somewhat open-ended in that candidates can respond how they want (and follow-ups are ok, within the limits of allotted time), but they do lead to relevant information about what courses the candidate would teach (and would want to teach), and things like that. It also signals how interested the candidate is in the job. I think that data point is a lot more relevant at the senior level, and we’ve had promising senior-level people show that they didn’t even bother to look up the department or university. It’s kind of a pain setting these up, but it definitely generates more reliable comparative information than free-flow chats.

2. We should probably distinguish between two kinds of analytics in hiring. A big literature says that all sorts of implicit bias make it into personnel processes (so when researchers distribute identical CVs to hiring managers or students at large midwestern universities, a candidate named “John Smith” gets a lot more callbacks than “Jamal Smith,” despite the literally identical CVs). It seems to me there is a very important role for data there, and we can use these results to try to minimize implicit bias problems.

I am much more skeptical of the use of data to predict “success,” because it’s not clear what the definition of that is. One widely discussed example in the analytics literature is a company that decided success was measured in part by duration of time at the job. (I think that’s probably not bad in the academy: it would serve as a rough proxy for getting reappointment and tenure, as well as retention. Searches are expensive and disruptive, so retention is a worthy goal.) It turned out that commute time was a significant predictor of duration at the job. But that had the effect of redlining minority neighborhoods, most of which were further from the job location.

You can also end up locking past discriminatory practices into the future: if you give the computer training data based on current employees, and most of those are white men, then you run the risk of teaching the computer that white maleness (or its proxies) is a predictor of success. There are always things you can do to be careful, but relying on data too fast is risky. We would at least need to define success carefully, and look very carefully at what prior data (e.g., teaching evaluations) select for (this is probably especially true for teaching evaluations, for which there is a large critical literature).
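
To illustrate the proxy worry in miniature, here is a purely synthetic sketch: every number, feature name, and the toy model below is invented for illustration and comes from none of the studies discussed above. It shows how a model fit to historical “success” labels that were partly shaped by discrimination puts weight on a proxy feature (here, commute time), even though the proxy has no causal role in the simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic population: a protected attribute the model never sees, a proxy
# feature correlated with it (commute time), and the ability we actually care about.
group = rng.integers(0, 2, n)                     # 0 or 1; hidden from the model
commute = 20 + 15 * group + rng.normal(0, 5, n)   # proxy correlated with group
ability = rng.normal(0, 1, n)

# Historical "success" scores reflect ability *plus* past discrimination against group 1.
past_success = ability - 1.0 * group + rng.normal(0, 0.5, n)

# Fit a simple linear model on seemingly legitimate features only: ability and commute.
X = np.column_stack([np.ones(n), ability, commute])
coef, *_ = np.linalg.lstsq(X, past_success, rcond=None)
print("intercept, ability, commute coefficients:", np.round(coef, 3))
# The commute coefficient comes out negative even though commute time affects nothing
# we care about in this simulation: the model has absorbed the historical bias
# through the proxy.
```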

Given that we don’t have a clear idea of what counts as “success” in philosophy or in a philosophy job, I would urge caution in getting too quantitative too fast. This is not necessarily to say the status quo does a good job. But it is to suggest that there are serious pitfalls in quantitative analyses that need to be confronted with open eyes, and that’s without even raising questions about the perils of audit culture and neoliberalism more generally.

asst prof
7 years ago

Ah, yes, thank goodness we have those bias-free evaluative metrics, like teaching evaluations! And GPAs! I mean, they are quantitative, which means, number-y, and things that involve numbers are objective! We should do away with subjective things like actually talking to people, and base our decisions entirely upon the impressions of nineteen year olds, converted into numbers … wouldn’t want to introduce any bias into this otherwise completely objective process.

David Ebrey
7 years ago

What you choose to evaluate in the job market determines what sort of candidate you are going to get. Evaluating the publication record is a reasonable predictor of how well this person will publish. Evaluating the writing sample is a reasonable predictor of how good their writing will be. The same goes for a job talk, teaching demonstration, or interview. Now obviously this is imperfect. A job talk is not like a normal talk, a writing sample is often not like a normal publication, and there are pressures that come from this happening in the context of the job market.

That said, most people aren’t looking to hire a good interviewer, whereas they do want someone who will publish well, write good papers, give good talks, and teach well. How many of the skills and traits people care about come out through interviews? Perhaps in some cases where an interview goes very well, it could indicate that someone has philosophical acumen and the ability to think on her feet that doesn’t come out in other ways. But I think that is better selected for in Q&A at a talk and in conversations on campus. Similarly, for the question of whether the candidate is likely to fit in well in the department. But we all know some people who think well on their feet and would fit well into a department and don’t interview well.

I really do think the interview is the odd man out here. If eliminating it could make room for bringing another person to campus, I think it is clear that one should do so. But even if not, I am skeptical that it helps the process. At best, I think it makes sense to use interviews to add one additional candidate who surprised the committee, rather than using them to eliminate candidates who originally looked like they should be flown out.