Nigerian Software Engineer or American Data Scientist? GitHub Profile Recruitment Bias in Large Language Models

📅 2024-09-19
🏛️ IEEE International Conference on Software Maintenance and Evolution
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study exposes implicit geographic and occupational role biases in large language models (LLMs) used for software engineering recruitment. To quantify these biases, the authors conduct the first counterfactual location-replacement experiment on 3,657 GitHub user profiles (2019–2023), combining ChatGPT-based automated resume screening with a team-formation simulation. Results reveal statistically significant LLM preferences for candidates from Europe and North America—particularly the United States—and systematic over-attribution of high-prestige roles (e.g., data scientist) to U.S.-based developers, demonstrating measurable sociotechnical bias. The key contributions are: (1) the first counterfactual evaluation framework specifically designed to assess LLM-induced bias in technical hiring; (2) empirical evidence of geographic and occupational stereotyping embedded in AI-mediated talent assessment; and (3) methodological and evidentiary groundwork for fairness governance in AI-driven recruitment systems.

📝 Abstract
Large Language Models (LLMs) have taken the world by storm, demonstrating their ability not only to automate tedious tasks, but also to show some degree of proficiency in completing software engineering tasks. A key concern with LLMs is their “black-box” nature, which obscures their internal workings and could lead to societal biases in their outputs. In the software engineering context, in this early results paper, we empirically explore how well LLMs can automate recruitment tasks for a geographically diverse software team. We use OpenAI's ChatGPT to conduct an initial set of experiments using GitHub User Profiles from four regions to recruit a six-person software development team, analyzing a total of 3,657 profiles over a five-year period (2019–2023). Results indicate that ChatGPT shows preference for some regions over others, even when swapping the location strings of two profiles (counterfactuals). Furthermore, ChatGPT was more likely to assign certain developer roles to users from a specific country, revealing an implicit bias. Overall, this study reveals insights into the inner workings of LLMs and has implications for mitigating such societal biases in these models.
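The counterfactual design described in the abstract—swapping only the location strings of two profiles and checking whether the model's preference flips—can be sketched as follows. The profile fields and the scoring functions here are illustrative assumptions for exposition; the paper's actual prompts and its ChatGPT interface are not reproduced.

```python
from copy import deepcopy


def make_counterfactuals(profile_a, profile_b):
    """Return copies of both profiles with ONLY their location strings
    swapped; every other field (repos, bio, etc.) is left untouched."""
    cf_a, cf_b = deepcopy(profile_a), deepcopy(profile_b)
    cf_a["location"], cf_b["location"] = profile_b["location"], profile_a["location"]
    return cf_a, cf_b


def location_preference_flips(score, profile_a, profile_b):
    """True if the pairwise preference between two candidates reverses
    when only their locations are swapped -- i.e., location rather than
    qualifications is driving the ranking. `score` stands in for the
    LLM's candidate assessment (an assumption, not the paper's API)."""
    cf_a, cf_b = make_counterfactuals(profile_a, profile_b)
    original = score(profile_a) > score(profile_b)
    counterfactual = score(cf_a) > score(cf_b)
    return original != counterfactual
```

A hypothetical scorer that rewards only a "United States" location would flip its preference under the swap, whereas a scorer based solely on, say, repository count would not—the kind of contrast the paper's experiments are designed to detect.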
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Recruitment Bias
Geographical and Role Preferences
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Bias in Recruitment
Decision Process Understanding