🤖 AI Summary
This paper addresses the detection of implicit gender bias in pretrained masked language models (MLMs) for personality and trait evaluation, proposing the first statistically robust quantification framework. Methodologically, it innovatively integrates mixed-effects modeling with pseudo-perplexity weighting to account for random variability across templates and target concepts—overcoming key limitations of conventional template-based approaches, including neglect of variability, uniform weighting assumptions, and absence of effect-size estimation. It further presents the first systematic assessment of both binary and nonbinary (neo) gender bias, alongside cross-model comparisons across BERT, RoBERTa, and ALBERT. Results reveal pervasive small-to-moderate gender bias in MLMs: ALBERT exhibits the strongest neo bias, while RoBERTa-large shows the most pronounced binary gender bias. Notably, bias patterns in agreeableness and emotional stability align partially with empirical psychological findings—bridging a critical gap between computational models and human-centered psychological research.
📝 Abstract
There has been significant prior work using templates to study bias against demographic attributes in MLMs. However, these approaches have limitations: they overlook random variability in the templates and target concepts analyzed, assume all templates contribute equally, and lack quantification of bias magnitude. Addressing these, we propose a systematic statistical approach to assessing bias in MLMs: we use mixed models to account for random effects, weight sentences derived from templates by pseudo-perplexity, and quantify bias with statistical effect sizes. In replications of prior studies, our bias scores match in magnitude and direction, with small-to-medium effect sizes. Next, we explore the novel problem of gender bias in the context of *personality* and *character* traits across seven MLMs (base and large). We find that MLMs vary: ALBERT is unbiased for binary gender but the most biased for non-binary *neo*, while RoBERTa-large is the most biased for binary gender but shows small to no bias for *neo*. There is some alignment of MLM bias with findings in psychology (the human perspective): in *agreeableness* with RoBERTa-large and in *emotional stability* with BERT-large. There is general agreement for the remaining three personality dimensions: both sides observe at most small differences across gender. For character traits, human studies on gender bias are limited, so comparisons are not feasible.
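The two quantitative ingredients named above can be sketched in a few lines. This is a minimal illustration, not the paper's exact formulation: it assumes per-token masked log-probabilities for each template-derived sentence are already available (in practice they come from masking each token in turn under the MLM), and the inverse-pseudo-perplexity normalization in `sentence_weights` is an assumed weighting choice. `cohens_d` shows the standard effect-size computation used to interpret "small to medium" bias.

```python
import math

def pseudo_perplexity(token_log_probs):
    # Exponentiated negative mean of per-token masked log-probabilities:
    # the masked-LM analogue of perplexity (lower = more natural sentence).
    return math.exp(-sum(token_log_probs) / len(token_log_probs))

def sentence_weights(per_sentence_log_probs):
    # Assumed weighting scheme: sentences with lower pseudo-perplexity
    # receive higher weights, normalized to sum to 1.
    inverse = [1.0 / pseudo_perplexity(lp) for lp in per_sentence_log_probs]
    total = sum(inverse)
    return [w / total for w in inverse]

def cohens_d(group_a, group_b):
    # Cohen's d: standardized mean difference using the pooled sample SD.
    # |d| ~ 0.2 is conventionally "small", ~ 0.5 "medium".
    na, nb = len(group_a), len(group_b)
    ma, mb = sum(group_a) / na, sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd
```

The mixed-effects layer (random intercepts for templates and target concepts) would sit on top of these weighted scores; a library such as `statsmodels` or R's `lme4` is the usual tool for that step.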