🤖 AI Summary
In the AI era, the proliferation of short-form content and the increasing opacity of AI models undermine users' capacity for deep reflection, weaken critical thinking, and foster passive acceptance of AI-generated reasoning. Method: This paper introduces Interactive Chain-of-Thought (Interactive CoT), a framework designed to be verifiable, editable, and re-executable. It features (i) modular, user-editable reasoning blocks; (ii) a lightweight preference-learning mechanism that adapts to user edits; and (iii) a tripartite ethical safeguard integrating metadata disclosure, bias auditing, and privacy preservation. Contribution/Results: Experiments across diverse tasks demonstrate >92% alignment between user editing intentions and system behavior, significantly enhancing cognitive engagement and reasoning transparency. Ethical audits confirm substantial improvements in explainability and fairness, validating the framework's robustness and responsible design.
📄 Abstract
Due to the proliferation of short-form content and the rapid adoption of AI, opportunities for deep, reflective thinking have significantly diminished, undermining users' critical thinking and reducing engagement with the reasoning behind AI-generated outputs. To address this issue, we propose an Interactive Chain-of-Thought (CoT) Framework that enhances human-centered explainability and responsible AI usage by making the model's inference process transparent, modular, and user-editable. The framework decomposes reasoning into clearly defined blocks that users can inspect, modify, and re-execute, encouraging active cognitive engagement rather than passive consumption. It further integrates a lightweight edit-adaptation mechanism inspired by preference learning, allowing the system to align with diverse cognitive styles and user intentions. Ethical transparency is ensured through explicit metadata disclosure, built-in bias checkpoint functionality, and privacy-preserving safeguards. This work outlines the design principles and architecture necessary to promote critical engagement, responsible interaction, and inclusive adaptation in AI systems aimed at addressing complex societal challenges.
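The core mechanism described above — a reasoning chain decomposed into blocks that users can inspect, modify, and re-execute, with edits logged for later preference adaptation — can be illustrated with a minimal sketch. All names here (`ReasoningBlock`, `InteractiveCoT`, `replace_block`, `run`) are hypothetical and chosen for this illustration; the paper's actual implementation is not specified in the abstract.

```python
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class ReasoningBlock:
    """One user-inspectable step in a chain of thought (illustrative)."""
    name: str
    func: Callable[[Any], Any]  # transforms intermediate state
    edited: bool = False        # marks user-modified blocks

class InteractiveCoT:
    """Sketch of an editable, re-executable reasoning chain.

    Assumption: reasoning is modeled as a linear pipeline of blocks;
    user edits are logged so a preference-learning component could
    later adapt to them.
    """
    def __init__(self, blocks: List[ReasoningBlock]):
        self.blocks = list(blocks)
        self.edit_log: List[str] = []  # record of user edits

    def replace_block(self, name: str, new_func: Callable[[Any], Any]) -> None:
        """Let the user swap one block's logic; log the edit."""
        for i, b in enumerate(self.blocks):
            if b.name == name:
                self.blocks[i] = ReasoningBlock(name, new_func, edited=True)
                self.edit_log.append(name)
                return
        raise KeyError(f"no block named {name!r}")

    def run(self, state: Any) -> Any:
        """Re-execute the full chain from scratch on `state`."""
        for b in self.blocks:
            state = b.func(state)
        return state

# Usage: a toy arithmetic chain the user can inspect, edit, and re-run.
chain = InteractiveCoT([
    ReasoningBlock("double", lambda x: x * 2),
    ReasoningBlock("add_ten", lambda x: x + 10),
])
before = chain.run(3)                             # (3*2)+10 = 16
chain.replace_block("add_ten", lambda x: x + 1)   # user edits a block
after = chain.run(3)                              # re-executed: (3*2)+1 = 7
```

The re-execution step is what distinguishes this from a static explanation: after an edit, the downstream computation actually changes, so the user can verify the effect of their intervention rather than passively reading a fixed rationale.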