Position: We need responsible, application-driven (RAD) AI research

📅 2025-05-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
How can AI research reconcile scientific rigor with societal impact? This paper proposes Responsibility- and Application-Driven AI (RAD-AI), a paradigm explicitly addressing the challenge of aligning ethical, legal, technical, and pluralistic value dimensions during AI’s societal embedding. Methodologically, it introduces an original three-stage framework: (1) interdisciplinary, human-centered modeling; (2) context-sensitive method and ethics metric design; and (3) iterative validation via staged testbeds and practice communities—integrating participatory action research, contextualized evaluation, and co-governance mechanisms. The core contribution is the first systematic definition and operationalization of RAD-AI, establishing novel standards for AI research across three dimensions: societal embeddedness, responsibility traceability, and value adaptability. This provides a reusable, rigorous methodology to advance trustworthy, sustainable, and human-centered AI ecosystems.

📝 Abstract
This position paper argues that achieving meaningful scientific and societal advances with artificial intelligence (AI) requires a responsible, application-driven approach (RAD) to AI research. As AI is increasingly integrated into society, AI researchers must engage with the specific contexts where AI is being applied. This includes being responsive to ethical and legal considerations, technical and societal constraints, and public discourse. We present the case for RAD-AI to drive research through a three-staged approach: (1) building transdisciplinary teams and people-centred studies; (2) addressing context-specific methods, ethical commitments, assumptions, and metrics; and (3) testing and sustaining efficacy through staged testbeds and a community of practice. We present a vision for the future of application-driven AI research to unlock new value through technically feasible methods that are adaptive to the contextual needs and values of the communities they ultimately serve.
Problem

Advancing AI research through responsible, application-driven approaches (RAD-AI)
Addressing ethical, legal, and societal constraints in AI applications
Developing context-specific AI methods with community-focused values
Innovation

Responsible, application-driven (RAD) approach to AI research
Transdisciplinary teams and people-centred studies
Staged testbeds and a community of practice to test and sustain efficacy