Created By: Jessica Shatzel and Rebecca Voulgarakis
August 23, 2023

The New York City AI Bias Law (Local Law 144) took effect in early July, ushering in what is sure to be further regulation of the use of AI in recruiting and hiring. The law regulates the use of automated employment decision tools (AEDTs) by employers and employment agencies, requiring that the use of such tools be disclosed to employees and candidates. AEDTs are also subject to bias audits, the results of which must be made public. Local Law 144 is a response to a growing understanding of the biases that exist in AI and the limitations of AI tools in human resources functions.

AI's Bias Tendency in Recruitment and Hiring

Bias can creep in at every stage of the recruiting and hiring process. 68% of hiring managers believe that AI can remove unconscious bias, and many are turning to AI tools to diversify their candidate pools and talent pipelines. However, relying on AI to identify key qualifications in a stack of resumes is likely producing the opposite effect. If the AI is set up to filter talent against a particular job description, there is a high probability that it is filtering out quality candidates and diverse talent who do not meet traditional benchmarks of success for that role.

Let's consider the following qualifications:

Years of Experience. It is common for job descriptions to list desired years of experience. However, filtering on this factor may introduce ageism bias: candidates with fewer years of relevant, high-quality experience are eliminated, while candidates who exceed the high end of the threshold are removed from consideration.

Higher Education and Advanced Degrees. Attending college and obtaining a degree is a privilege not accessible to all. Inequities in our educational system and unequal access to financial means are just a few of the barriers that low-income individuals and communities of color face in securing higher education opportunities. Furthermore, only 42% of working-age military veterans have a college degree. Setting AI up to filter for degrees, or even for specific institutions of higher education, will limit the diversity of your pool.

Title Keywords. Scanning resumes for title keywords that match the level you are hiring for will limit your ability to see and consider emerging leaders in your applicant pool. Systemic barriers to advancement for people of color have limited leadership opportunities and access to higher-level jobs; the Urban Institute's Nonprofit Trends and Impacts 2021 report revealed that just 21% of nonprofit executive positions are held by people of color. Considering candidates who have not yet held executive leadership roles but have the skills and expertise to advance to this level of their career will allow for increased diverse representation.

Utilizing AI to filter for protected classes (race, color, religion, sexual orientation, gender identity, national origin, age, and/or disability) – even with the best of intentions to increase diversity – is still discriminatory. These practices, too, are fueling the call for increased regulation of, and transparency into, how AI is used in hiring and recruiting processes. Employers must take ownership of and responsibility for the ways in which AI is used throughout their recruitment and hiring processes to ensure continued compliance with Equal Employment Opportunity laws. Ultimately, the human relying on AI, not the robot, will be held responsible for the results and possible repercussions of automating these processes.

AI's Value in Enhancing Recruitment and Hiring

At least for now, humans are still the best choice to lead the initial screening process due to their capacity for evaluating applicants holistically. This naturally raises the question, "Is there room for AI anywhere in the hiring process?" In short, yes. 40% of talent tech solutions include some sort of AI component.
This figure encompasses a wide array of products, and when used judiciously, some of them can bring an unprecedented depth and breadth of information to hiring. Think of them as new tools to add to your toolkit rather than replacements for all of your existing tools. Outside of screening, AI can prove valuable in other stages of hiring:

Job Description Development. Without realizing it, you may be introducing bias into your hiring process before the first application even reaches your inbox. Subtle language choices in job postings can discourage diverse talent from applying. Enter AI tools like Textio and Clovers, which are trained to identify and correct linguistic bias in job descriptions to attract more – and more diverse – applicants.

Sourcing. Unsatisfied with the size of your candidate pool? You can leverage AI sourcing in conjunction with your regular sourcing methods to cast a wider net. Consider Juicebox AI's PeopleGPT or the Workable ATS AI tool; both products comb public information for prospects who may be qualified for your open roles. Again, proceed with caution. While using AI to source new prospective candidates is distinct from using it to evaluate the applications of those who have already expressed interest, these tools are not infallible in either case and may exclude exceptional talent whose resumes do not match the exact keywords of your posting. However, AI may also surface diverse talent you otherwise would not have found.

Interviews and Reference Checks. You are likely already familiar with AI products that boast transcription or note-taking capabilities, such as Otter.ai. However, you may not have considered the ways these tools of convenience can also help to mitigate bias. Real-time note-taking often shows its value after interviews have concluded.
When forced to compare the content of candidates' answers – as opposed to vague impressions and unreliable memories – you may be surprised by the ways the written record supports or challenges your perception of an interviewee's performance. Reference calls can benefit from the same objectivity.

While there are many uses for AI, human resources remains a human function, and implementing AI within it requires careful consideration. Leverage AI's capacity for good by monitoring and evaluating outcomes to ensure appropriate and equitable utilization – and compliance with all existing and emerging laws.

Orr Group's talent team leverages technology and human expertise to support your organization's recruitment, retention, and DEI goals. Get in touch with us to learn more.

Jessica Shatzel is a Senior Director and Head of Talent Management at Orr Group, specializing in executive search, recruitment, and a variety of human resources support for our clients. Rebecca Voulgarakis is an Associate Director on the Talent team at Orr Group. She supports Orr Group's outsourced recruitment efforts, delivering peace of mind and identifying top talent for our nonprofit clients.