🤖 AI Summary
This paper critiques the prevailing technocentric and individualistic framing of ethics in NLP research, particularly its narrow focus on “bias” and “harm”, which obscures the systemic, sociopolitical roots of discrimination. Through a critical qualitative review of papers from the 2022 ACL Anthology, combined with insights from science and technology studies (STS) and critical social theory, the authors deconstruct how discourses of discrimination are constructed within NLP and expose their implicit normative assumptions. The key contribution is a conceptual reframing: replacing “harm” with “injustice” as the foundational analytic category, thereby shifting the ethical agenda from algorithmic mitigation toward structural critique. This move transcends technical solutionism, enriching NLP ethics with deeper sociological grounding and broader theoretical scope. The resulting framework offers both conceptual foundations and actionable pathways for a more responsive, socially attuned approach to NLP governance.
📝 Abstract
How to avoid discrimination in the context of NLP technology is one of the major challenges in the field. We propose that a different, more substantiated framing of the problem could help identify more productive approaches. In the first part of the paper we report on a case study: a qualitative review of papers on the discriminatory behavior of NLP systems published in the ACL Anthology in 2022. We find that the field (i) still has a strong focus on technological fixes for algorithmic discrimination, and (ii) struggles to firmly ground its ethical or normative vocabulary. Furthermore, this vocabulary is very limited, centering mostly on the terms "harm" and "bias". In the second part of the paper we argue that addressing the latter problems might help with the former. The understanding of algorithmic discrimination as a technological problem is reflected in, and reproduced by, the vocabulary in use. The notions of "harm" and "bias" imply a narrow framing of discrimination as an issue located at the system-user interface. We argue that the debate should make "injustice", rather than "harm", its key notion. This would force us to understand algorithmic discrimination as a systemic problem, and thereby broaden our perspective on the complex interactions through which NLP technology participates in discrimination. With that gain in perspective, we can consider new angles for solutions.