🤖 AI Summary
This study addresses frequent yet unsubstantiated claims of "AI for good" by proposing the Impact-AI method, which operationalizes public interest and sustainability as dual pillars of AI impact assessment. Through qualitative interviews with diverse stakeholders, the approach systematically audits an AI project's governance structure, theory of change, data characteristics, and its social, environmental, and economic consequences. Impact-AI delivers a structured, reusable, civil society–oriented evaluation framework accompanied by standardized public reporting formats. By rendering the evaluation process transparent and participatory, the method strengthens the credibility of "AI for good" assertions and enables broader public deliberation on the societal implications of artificial intelligence.
📝 Abstract
The rapid increase in the use of artificial intelligence (AI) is linked to various initiatives that propose AI 'for good'. However, the goals of such projects often lack transparency, and their actual impacts on society and the planet remain unevaluated. We close this gap by proposing public interest and sustainability as a regulatory dual concept that together creates the necessary framework for a just and sustainable development and can be operationalized for the assessment of AI systems. Based on this framework, and building on existing work in auditing, we introduce the Impact-AI method, a qualitative audit method for evaluating concrete AI projects with respect to public interest and sustainability. The interview-based method captures a project's governance structure, its theory of change, its AI model and data characteristics, and its social, environmental, and economic impacts. We also propose a catalog of assessment criteria to rate the outcome of the audit and to produce an accessible output that can be debated broadly by civil society. The Impact-AI method, developed in a transdisciplinary research setting together with NGOs and a multi-stakeholder research council, is intended as a reusable blueprint that both informs public debate about AI 'for good' claims and supports transparency for AI systems that purport to contribute to a just and sustainable development.