🤖 AI Summary
This study identifies a systemic amplification of structural biases, particularly the preferential representation of highly cited scholars, in academic co-authorship networks generated by large language models (LLMs), and attributes it to training-data memorization. Using DeepSeek R1, Llama 4 Scout, and Mixtral 8x7B, we apply scientometric methods to quantify these memory-driven biases across disciplines and world regions, to our knowledge for the first time. Results reveal pronounced overrepresentation of elite researchers globally and within most fields, yet comparatively balanced representation in Clinical Medicine and parts of Africa, suggesting that some portions of the training data reflect greater equity. Our work both uncovers an implicit bias mechanism in LLM-based scholarly assistance and shows that discipline- and region-specific calibration can serve as an effective lever for bias mitigation. These findings provide empirical grounding and design insights for developing more equitable and inclusive AI-powered academic tools.
📝 Abstract
Ongoing breakthroughs in Large Language Models (LLMs) are reshaping search and recommendation platforms at their core. While this shift unlocks powerful new scientometric tools, it also exposes critical fairness and bias issues that could erode the integrity of the information ecosystem. Moreover, as LLMs become more integrated into web-based scholarly search, their ability to generate research summaries from memorized training data introduces new dimensions to these challenges. The extent of memorization in LLMs can affect the accuracy and fairness of the co-authorship networks they produce, potentially reflecting and amplifying existing biases within the scientific community and across regions. This study critically examines the impact of LLM memorization on generated co-authorship networks. To this end, we assess memorization effects in three prominent models, DeepSeek R1, Llama 4 Scout, and Mixtral 8x7B, analyzing how memorization-driven outputs vary across academic disciplines and world regions. While our global analysis reveals a consistent bias favoring highly cited researchers, this pattern is not uniform: certain disciplines, such as Clinical Medicine, and certain regions, including parts of Africa, show more balanced representation, pointing to areas where LLM training data may reflect greater equity. These findings underscore both the risks and the opportunities of deploying LLMs for scholarly discovery.