AI Summary
This study addresses the challenges of credit scoring in Kenya's digital lending landscape, where data scarcity, institutional uncertainty, and pluralistic risk perceptions complicate algorithmic decision-making. Drawing on nine months of ethnographic fieldwork in Nairobi, the research integrates alternative data construction, algorithmic modeling, and regulatory compliance strategies to examine how practitioners negotiate definitions of risk and model performance across technical and political dimensions. The project introduces "alignment" as a bidirectional translation process, elucidating the ongoing co-constitution of credit scoring models and their socio-institutional environments across three dimensions: the epistemic, the modeling, and the contextual. By extending alignment theory from human-computer interaction into the high-uncertainty domain of financial technology in the Global South, this work offers a novel perspective on algorithmic governance under conditions of epistemic and institutional ambiguity.
Abstract
Credit scoring is an increasingly central and contested domain of data and AI governance, frequently framed as a neutral and objective method of assessing risk across diverse economic and political contexts. Based on a nine-month ethnography of credit scoring practices in Nairobi, Kenya, we examined the sociotechnical and institutional work of data science in digital lending. While established regional telcos and banks are leveraging proprietary data to develop digital loan products, algorithmic credit scoring is being transformed by new actors, techniques, and shifting regulations. Our findings show how practitioners construct alternative data using technical and legal workarounds, formulate risk through multiple interpretations, and negotiate model performance via technical and political means. We argue that algorithmic credit scoring is accomplished through the ongoing work of alignment that stabilizes risk under conditions of persistent uncertainty, taking epistemic, modeling, and contextual forms. Extending work on alignment in HCI, we show how it operates as a two-way translation, where models are made "safe for worlds" while those worlds are reshaped to be "safe for models."