The use of rubrics to evaluate job candidates is touted as a way to reduce bias and improve diversity, equity, and inclusion in hiring. But few real-world studies exist to demonstrate that checklists of hiring criteria have the intended results, according to a group of sociologists and engineers at the University of California San Diego. To address that data gap, the researchers evaluated rubric use in four engineering faculty searches at one university that resulted in the hiring of three women and six men (Science 2022, DOI: 10.1126/science.abm2329).

They found that women and men were hired in more-equal numbers when a rubric was used but that gender bias was still evident in rubric scoring. Limited data prevented the analysis of rubrics' impact on hiring fairness related to race, ethnicity, and other aspects of diversity.

The researchers concluded that rubrics should include a calibration metric that can be verified independently, such as comparing a candidate's number of papers with the productivity scores given by evaluators, to detect bias in scoring. They also recommend that meetings of evaluators begin with one person presenting the results. "By opening with a neutral reading of the full set of both positive and negative rubric comments, the impact of any first speakers, often senior men, attempting to vociferously promote or shoot down a candidate was blunted," they say in the paper.