Eat, Drink and Be Hired? Problems with Unstructured Interviews in Law Firm Recruits
Anna Welch, University of Toronto Faculty of Law, Class of 2024, Volume 82 Forum Editor
The Pittsburgh airport test: most law students applying to firms through organized recruits have heard of it or one of its variations. When hiring lawyers choose their final candidates from a large pool of highly qualified applicants, the story goes that the determining factor is whether the lawyers would want to be stuck overnight in the Pittsburgh (or fill in another mid-size, post-industrial American city) airport with the student. In a more plausible Canadian alternative, it’s whether they’d want to be stuck on a deal with you well past midnight. As Aaron Baer, a former Aird & Berlis partner, puts it in his “Unofficial Guide to the 1L and 2L Recruit – Toronto”, the entire recruitment process “boils down to whether or not a firm could see themselves spending 18 hours straight with you working on a deal or preparing for trial” (at 13).
Whether the airport test and its use by hiring lawyers is more myth or reality is debatable. Firms’ human resources apparatuses appear to be trending towards sophistication and systemization. “Fit” has become a dirty word in the recruits, as firms are rightly concerned about subjectivity and unconscious bias in their interview processes. Commitment to diversity in hiring is top of mind and, at many firms, is at the top of the recruit agenda. However, firms’ continued use of unstructured conversational interviews, dinners, and cocktail hours as part of the recruit process means the airport test likely lives on.
These unstructured interactions, termed “social belonging tests” by former Bay Street lawyer and current researcher of inequity, race, gender, media and justice Hadiya Roderique in her essay “Black on Bay Street”, are striking for both their susceptibility to bias and possible lack of correlation to long-term job success, issues I’ll discuss below. And it’s not as though the stakes for firms in the recruit aren’t high. On the business side, firms’ default candidates for partnership are those hired into their student classes, a variation of the “Cravath System” that is nearly universally adopted. Firms make a vast investment in turning their students into effective lawyers. Morally, more and more firms acknowledge a need to hire classes that reflect society in its diversity of gender, sexual orientation, race, class, and ethnicity. On top of those two factors, firms are also experiencing a certain amount of pressure from clients to staff matters with teams that reflect societal diversity [1].
What’s wrong with just asking (unstructured) questions?
Research on unstructured interviews in hiring processes is extensive, though it’s usually not specific to the legal industry. In recent years, a consensus has emerged among researchers that unstructured interviews are generally not a good approach. In 2013, for example, Jason Dana, Robyn Dawes, and Nathaniel Peterson, researchers from Yale and Carnegie Mellon, concluded their paper on the topic with the “simple recommendation” (at 520) not to use unstructured interviews in employee screening decisions at all.
The main issue with unstructured interviews is the possibility that they make interviewers more susceptible to bias, or more likely to rely on factors that aren’t predictive of job performance, than other forms of evaluation. A pressing type of bias in a profession that is, in several ways, homogeneous is the ‘similar to me’ error. According to Dr. Robert Dipboye, an industrial and organizational psychologist, “[n]umerous studies in both the laboratory and the field have shown similar-to-me effects in which the interviewer gives more favourable evaluations to the extent that the applicant is similar to the interviewer on background characteristics, education, attitudes, and other factors” (at 86). Concerningly, this effect can be magnified by personal liking for an applicant, which can also bias judgement of qualifications.
Like other interviewers, hiring lawyers may also experience both hindsight and confirmatory bias. With hindsight bias, the outcomes from past hiring decisions are projected onto current candidates. In the law firm setting, for example, it’s possible interviewers who have seen lawyers from underrepresented groups leave the firm will be unjustifiably, and perhaps unconsciously, concerned that a student from an underrepresented group will do the same. Confirmation bias is the tendency to interpret data points as confirmation of one’s existing beliefs or theories, and it impacts the interview process as interviewers unconsciously seek to confirm their first impressions of a candidate. For example, interviewers may be less likely to ask probing questions if they are favourably impressed with an applicant than if they have negative impressions (Dipboye at 84). Further, Dougherty and Turban, in an empirical study of actual corporate interviews, found that when interviewers had favourable first impressions of a candidate, a number of behaviours followed, any of which could result in bias or unfairness, including: “a positive style of interviewing, selling the company, providing job information to applicants, less information-gathering from applicants, more confident and effective applicant behavior, and more rapport of applicants with interviewers” (at 663).
Another form of cognitive bias experienced by interviewers, both a consequence of and an antecedent to those mentioned above, is overconfidence. Unstructured interviews are a particularly pernicious source of this bias. For example, adding unstructured interview ratings to standardized test scores as predictors of job success can create overconfidence and result in an illusion of knowledge. In experimental findings published in 2016 in the journal “Organizational Behavior and Human Decision Processes”, experienced hiring managers who had the additional information of interview ratings to predict job performance were more overconfident than those who only assessed conscientiousness and general mental ability (GMA) scores. In a follow-up experiment, undergraduate students were asked to “bet” on candidates after predicting their job performance based on test scores alone or test scores and unstructured interview ratings. Those students who had access to unstructured interview information were (a) less accurate in their predictions of job performance, (b) more confident and more likely to bet more money on their preferred candidates, and (c) more overconfident in their choices. In their article “Belief in the unstructured interview: The persistence of an illusion”, mentioned above, Dana, Dawes, and Peterson found that participants believed so strongly in the predictive power of interviews in projecting future GPAs that they rated random interviews—in which they knew the interviewee made up answers to questions—as more helpful than no interview at all.
Related to the issue of overconfidence in the context of unstructured interviews is a possible dilution effect: in an unstructured interview, interviewers gather multiple data points about a candidate that have little or no known relation to future job success. This overload of information can cause them to ignore some good indicators, causing a dilution effect: “extraneous information reduces reliance on good information” (Dana, Dawes, and Peterson at 513). These two factors—overconfidence and dilution—are related: “[w]hile the accuracy of prediction declines with increases in the redundancy and amount of information available to those making the predictions[,…] the confidence of these individuals in their own predictions tends to increase” (Dipboye at 90).
Bringing this back to the recruits, adding an inherently unstructured dinner interview onto rounds of unstructured interview “chats” may have limited utility and could be leading to unsound decisions. The Yale and Carnegie Mellon researchers sum up the possible issues that interview dinners, as another form of unstructured interview, could raise:
Unfortunately, a feeling of understanding [drawn from unstructured interviews], while reassuring and confidence-inspiring, is neither sufficient nor necessary for making accurate assessments. Further, there is empirical evidence that confidence and accuracy are often poorly related in interpersonal prediction contexts and confidence has been shown to increase with information even in situations where accuracy does not. We suggest that people can feel confident in the validity of unstructured interview impressions even if they are worthless (at 514).
Given the risk of introducing different biases, it’s hard to see why the dinners continue. Below, I’ll raise some reasons why they do.
A last general issue with unstructured interviews is that the applicant’s responses are not directly comparable to other applicants’ responses, as the questions and depth of follow-up vary from interview to interview. According to the Wiley Blackwell Handbook of the Psychology of Recruitment, Selection and Employee Retention, this can significantly compromise the validity and reliability of the interviewers’ ratings of candidates (Goldstein et al, eds, at 183).
In the context of law firm student recruitment, there may be a compounding factor that makes unstructured interviews more susceptible to bias: a lack of formal job requirements and evaluative criteria. Researchers have observed that when unstructured procedures (i.e., those without formal metrics) force a reliance on personal beliefs about job requirements, the likely result is a deterioration in the validity and reliability of interviewer judgments (Dipboye at 83). Typically, those formal metrics are derived from a job analysis, or the process of studying a job to determine “which activities and responsibilities it includes, its relative importance to other jobs, the qualifications necessary for performance of the job and the conditions under which the work is performed.” It’s possible some firms have done their own internal job analysis, but I suspect most haven’t. Those that haven’t do not have much good external data to use. In 2013, Bill Henderson, a leader in the study of the legal profession, observed in his article “A Blueprint for Change” that “[t]here is a paucity of high quality empirical research on the factors that contribute to lawyer effectiveness” (at 498). To complicate matters further, some firms focus on hiring from top schools, while some US research suggests that academic factors are not very reliable proxies for future lawyering potential [2].
Legal exceptionalism?
As outlined above, research shows that using unstructured interviews is risky: they may introduce bias and reduce hirers’ ability to predict job performance. Two factors, however, may make unstructured interviews less problematic for firm hiring: firms’ long-term hiring objectives and the committee approach. First, given the client-facing nature of legal work, firms may be rightly hiring for personality traits, not just aptitude. Research has shown that unstructured interviews can be as accurate as standardized personality and integrity tests of job-related personality characteristics and constructs including conscientiousness, agreeableness, dependability, organizational citizenship and integrity (Goldstein et al at 186). Small talk can be highly predictive of a candidate’s personality (at 188). Furthermore, what can be viewed as inappropriate bias based on applicants’ style of self-presentation in other employment contexts may be exactly what firms are looking for: “[a]pplicants are viewed as more qualified if they show positive, responsive verbal and nonverbal behavior, like enthusiasm, warmth, good eye-contact, smiling, head nodding, voice modulation, energy, hand gestures, and vocal expressiveness” (Dipboye at 86). Where it becomes risky for firms (and candidates), and where the dilution effect comes in again, is that interview style is usually found to be more important than objective information on the applicant (at 86).
The second factor that may mitigate concerns about bias in the recruit process is group evaluation and decision-making (i.e., panel interviews and committee-based hiring processes). Many firms interview candidates with six or more interviewers, in panels of two, and final decisions are made by a larger committee. Researchers Frank Schmidt and Ryan Zimmerman have found that averaging three to four independent unstructured interviews provides the same level of validity for predicting job performance as a structured interview administered by a single interviewer. When several interviewers pool their ratings of a factor, the increase in accurate ratings is substantial (Goldstein et al at 187). Of course, this benefit would not flow if firms were not using ratings. In fairness to hiring lawyers, though, it is also worth noting that even in the context of an unstructured procedure, some interviewers are capable of highly valid judgements (though others remain incapable) (Dipboye at 91).
The other side of the table
Above, I’ve only considered issues from the firms’ side. Of course, students’ experience in the recruitment process is important as well. Research has found that a candidate’s perception of the fairness of the hiring process informs the applicant’s opinion of the organization regardless of what they thought of it pre-interview (Goldstein et al at 187). This influence is important even if the lack of diversity in the Canadian legal industry stems from retention and promotion, not hiring. Interviews matter because retention begins with the interview process, if not before. While there’s some research from before the turn of the millennium pointing towards a candidate preference for unstructured interviews (Dipboye at 102), Bill Henderson’s more recent project “Solving the Legal Profession’s Diversity Problem” points in the other direction. Candidates who experienced a structured panel interview (SPI) process were much more likely to accept an offer of employment. Using the SPI process, an Am Law 200 firm increased its yield from 33% to 48% over a three-year period despite becoming more selective in making offers. Yield rates among racialized minority candidates were significantly higher. In a follow-up process, those candidates revealed that they thought the process was more thorough and fair than alternatives, making them more confident that the firm had an overall plan for their professional development.
What’s to be done?
Firms could conclude that a total reliance on unstructured interviews does not support their twin objectives of hiring and retaining diverse candidates and hiring the students most likely to succeed at the firm. The obstacles to acting on this conclusion include human nature. In their discussion of why certain types of interviews remain so widely used in hiring despite the availability of more reliable and valid predictors, authors Allen Huffcutt and Satoris Culbertson observed that “[i]t is almost as if a part of the human make-up does not trust objective information completely even if it is accurate, the result of which is an underlying desire for personal verification” (at 185).
Any advocates for a reduced role for unstructured interviews will also face issues of power and politics. In his article “Structured and unstructured selection interviews: beyond the job-fit model”, Dr. Dipboye cites the opportunities that unstructured interviews provide for wielding power as a primary reason they are preferred over structured procedures: “power-seeking interviewers may prefer the looseness associated with unstructured interviews because it allows them to manage the uncertainty that surrounds selection decisions by persuading others to their candidates” (at 105). An unstructured approach gives decision-makers latitude to highlight factors about a candidate that support their preference and ignore others pointing in a different direction.
This same researcher also posits that unstructured approaches could provide political advantages, because they make evaluation of hiring decisions difficult. Within firms, unstructured interviews could have a role in shielding hiring lawyers’ selection practices from oversight by HR functions. A move away from unstructured interviews would place explicit demands on partner-owners, which, according to Bill Henderson, can be a challenge because those “partner-owners prize their autonomy and are given the greatest rewards for bringing in business”.
If these pressures against change can be overcome, firms have a number of options available to them. Like the Toronto litigation boutique Lenczner Slaght, they could end cocktail hours and consider ending interview dinners. When Rich Appiah started his own Toronto firm, he knew he didn’t want to continue the “wine and dine” parts of the recruitment process that made him feel insecure as a person of colour from a low-income family. Instead, he designed a four-step screening process that includes a question—core to the firm’s mission—asked consistently across all candidates and a fact-pattern-based evaluation.
If they haven’t already, firms can look internally for data and information that can inform the recruitment process. At least in the United States in 2016, according to Bill Henderson, most law firms’ recruitment systems were based primarily on tradition and past practice, and any data that was consulted was of varying quality. Job analysis could be a good place to start, as would consensus on the values and qualities the firm is hoping to hire for. Of course, both efforts are likely to encounter challenges in large corporate firms, given partner autonomy and the fact that different practice groups diverge in culture themselves.
A simpler first step is introducing structured, behavioural interviews. It should be noted, however, that those work best when hirers know what they’re hoping to hire for, necessitating the soul-searching flagged above.
I think it’s unlikely and unnecessary for any firm to completely scrap unstructured interviews. They have a place as the first step in socializing new hires and, as mentioned above, they may be a valid tool for assessing candidates’ personality. They also serve to provide information to candidates and give hiring lawyers an opportunity to sell the firm. Ultimately, shifting the balance towards structured interviews—and ending cocktail hours—could go a long way towards turning the Pittsburgh airport test into lore of the past, allowing law firms to contribute to incremental social change.
[1] See, for example, Bryan Seaman’s article “How client surveys are moving to the DEI needle at law firms” and Vivia Chen’s “Busting the myth that clients are driving big law diversity”.
[2] See the paper “Final Report – Identification, Development and Validation of Predictors for Successful Lawyering” by Marjorie Shultz and Sheldon Zedeck.