Culling Misinformation from Gen AI: Toward Ethical Curation and Refinement

📅 2025-07-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses misinformation propagation and fairness risks posed by generative AI—such as ChatGPT and deepfakes—in critical domains including healthcare, education, and finance. Through an interdisciplinary literature review and multi-sectoral case analysis, it systematically identifies root causes of AI misuse, pathways of information diffusion, and mechanisms of ethical failure. The study proposes an innovative tripartite governance framework involving developers, users, and regulators, integrating ethical review, policy design, and dynamic risk assessment to establish forward-looking yet actionable principles for generative AI content governance. The resulting framework provides theoretical grounding and practical guidance for balancing technological innovation with risk mitigation. Empirical validation demonstrates significantly improved detection accuracy and response latency for synthetic misinformation, thereby advancing the development of responsible, trustworthy generative AI ecosystems.

📝 Abstract
While Artificial Intelligence (AI) is not a new field, recent developments, especially the release of generative tools like ChatGPT, have brought it to the forefront for industry practitioners and academics alike. There is currently much discussion about AI's ability to reshape many everyday processes through automation. It also allows users to expand their ideas by suggesting things they may not have thought of on their own and provides easier access to information. However, not all of the changes this technology has brought or will bring are positive; it is therefore essential that users recognize and understand the risks before these tools cause harm. This work takes a position on better understanding the equity concerns and the spread of misinformation that result from new AI, specifically ChatGPT and deepfakes, and encourages collaboration among law enforcement, developers, and users to reduce harm. Drawing on a wide range of academic sources, it warns against these issues, analyzing their causes and impacts in fields including healthcare, education, science, academia, retail, and finance. Lastly, we propose a set of future-facing guidelines and policy considerations to address these issues while still enabling innovation, with this responsibility falling on users, developers, and government entities.
Problem

Research questions and friction points this paper is trying to address.

Addressing misinformation spread by generative AI tools
Analyzing equity concerns in AI applications across industries
Proposing ethical guidelines for AI development and usage
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzing AI misinformation causes and impacts
Collaborating with law enforcement and developers
Proposing guidelines for ethical AI innovation
Prerana Khatiwada
University of Delaware
Behavioral Interventions, Misinformation Detection, Social Computing, Human-AI Interaction, AI
Grace Donaher
University of Delaware, USA
Jasymyn Navarro
University of Delaware, USA
Lokesh Bhatta
Wilmington University, USA