Explainable AI in Usable Privacy and Security: Challenges and Opportunities

📅 2025-04-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This position paper identifies key explanatory deficiencies of large language models (LLMs) deployed as automated "judges" in high-stakes privacy and security assessments: limited transparency, inconsistent judgments, weak faithfulness, and susceptibility to hallucination. Drawing on a prior 22-participant user study of PRISMe, an interactive LLM-based privacy policy assessment tool, the authors surface tensions between explanation quality and diverging user preferences for detail and engagement. They discuss mitigation strategies, including structured evaluation criteria, uncertainty estimation, and retrieval-augmented generation (RAG), and argue that LLM-as-a-judge explanations should adapt to different user profiles. The paper positions usable privacy and security as a promising application area where Human-Centered Explainable AI (HCXAI) can make an impact.
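To make the RAG idea concrete, below is a minimal sketch of how retrieved policy passages could ground a judge's verdict in verifiable text. The retriever, prompt wording, and function names are illustrative assumptions, not PRISMe's actual implementation; a real system would likely use embedding-based retrieval rather than keyword overlap.

```python
# Hypothetical RAG-grounding sketch; nothing here is PRISMe's code.
def top_passages(policy: str, query: str, k: int = 3) -> list[str]:
    """Rank policy paragraphs by naive keyword overlap with the query.
    Word overlap keeps the sketch self-contained; swap in embeddings
    for a real retriever."""
    paragraphs = [p.strip() for p in policy.split("\n\n") if p.strip()]
    query_terms = set(query.lower().split())
    return sorted(
        paragraphs,
        key=lambda p: len(query_terms & set(p.lower().split())),
        reverse=True,
    )[:k]

def grounded_prompt(policy: str, criterion: str) -> str:
    """Quote retrieved passages so the judge must tie its verdict to
    verifiable policy text instead of free-form recall."""
    evidence = "\n".join(f"- {p}" for p in top_passages(policy, criterion))
    return (
        f"Assess the criterion '{criterion}' using ONLY these excerpts:\n"
        f"{evidence}\n"
        "Cite the excerpt that supports each claim you make."
    )
```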

📝 Abstract
Large Language Models (LLMs) are increasingly used for automated evaluations and for explaining those evaluations. However, concerns about explanation quality, consistency, and hallucinations remain open research challenges, particularly in high-stakes contexts like privacy and security, where user trust and decision-making are at stake. In this paper, we investigate these issues in the context of PRISMe, an interactive privacy policy assessment tool that leverages LLMs to evaluate and explain website privacy policies. Based on a prior user study with 22 participants, we identify key concerns regarding LLM judgment transparency, consistency, and faithfulness, as well as variations in user preferences for explanation detail and engagement. We discuss potential strategies to mitigate these concerns, including structured evaluation criteria, uncertainty estimation, and retrieval-augmented generation (RAG). We identify a need for LLM-as-a-judge explanation strategies that adapt to different user profiles. Our goal is to showcase usable privacy and security as a promising application area in which Human-Centered Explainable AI (HCXAI) can make an impact.
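As an illustration of two of the strategies the abstract names, structured evaluation criteria and uncertainty estimation, the sketch below scores a policy per criterion several times and reports inter-sample agreement as a crude confidence signal. The `llm` stub, criteria list, and rubric wording are our assumptions for illustration; the paper does not prescribe this implementation.

```python
import json
from collections import Counter

# Hypothetical model hook: any callable mapping a prompt to a completion.
def llm(prompt: str, temperature: float = 0.7) -> str:
    raise NotImplementedError("plug in a concrete model call here")

# Illustrative criteria; the paper argues for structured criteria but
# does not fix this particular list.
CRITERIA = ["data_collection", "third_party_sharing", "user_rights", "retention"]

RUBRIC = (
    "Rate the privacy policy below on the criterion '{criterion}' "
    "from 1 (poor) to 5 (good). Reply with JSON only: "
    '{{"score": <int>, "evidence": "<verbatim policy quote>"}}\n\n'
    "Policy:\n{policy}"
)

def judge(policy: str, n_samples: int = 5) -> dict:
    """Score each criterion n_samples times at nonzero temperature and
    report the majority score plus an agreement ratio: low agreement
    signals an inconsistent judgment worth flagging to the user."""
    report = {}
    for criterion in CRITERIA:
        scores = [
            json.loads(llm(RUBRIC.format(criterion=criterion, policy=policy)))["score"]
            for _ in range(n_samples)
        ]
        top_score, top_count = Counter(scores).most_common(1)[0]
        report[criterion] = {"score": top_score, "agreement": top_count / n_samples}
    return report
```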
Problem

Research questions and friction points this paper is trying to address.

Assessing LLM explanation quality in privacy and security contexts
Addressing user concerns about LLM judgment transparency and consistency
Developing adaptive explanation strategies for diverse user preferences
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leveraging LLMs for privacy policy evaluation
Using retrieval-augmented generation (RAG) to ground judgments in policy text and reduce hallucinations
Adaptive explanation strategies for user profiles (see the sketch after this list)
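A minimal sketch of the adaptive-explanation idea, assuming a two-dimensional user profile (expertise level and desire for dialogue). The field names, levels, and prompt templates are our invention, loosely mirroring the study's finding that users differ in preferred detail and engagement; the paper identifies the need for adaptation without fixing a scheme.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    # Hypothetical profile dimensions; not drawn from the paper.
    expertise: str        # "novice" | "informed" | "expert"
    wants_dialogue: bool  # prefers interactive follow-up over a one-shot answer

def explanation_prompt(profile: UserProfile, finding: str) -> str:
    """Choose a prompt template matching the user's preferred granularity."""
    if profile.expertise == "novice":
        style = "Explain in two plain-language sentences, avoiding jargon."
    elif profile.expertise == "expert":
        style = "Give a detailed analysis that cites the exact policy clauses."
    else:
        style = "Summarize the key risk in one short paragraph."
    follow_up = " End by offering one follow-up question." if profile.wants_dialogue else ""
    return f"{style}{follow_up}\n\nFinding: {finding}"

# Example: a novice who wants dialogue gets a short, interactive explanation.
print(explanation_prompt(
    UserProfile("novice", True),
    "Policy shares location data with ad partners.",
))
```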