🤖 AI Summary
Existing threat modeling tools predominantly focus on technical vulnerabilities, overlooking interpersonal and non-technical harms such as privacy violations, psychological coercion, and social exclusion. To address this gap, we propose HARMS, the first human-centered threat modeling framework. HARMS employs a five-dimensional taxonomy (Human, Autonomy, Representation, Meaning, and Social) to systematically identify and prioritize human-oriented risks in IoT contexts. Developed through qualitative design, interdisciplinary taxonomy construction, empirical validation via a smart speaker case study, and expert co-evaluation, HARMS uncovers six categories of interpersonal harms entirely missed by conventional models. This improves threat detection completeness by over 40%, bridging theoretical and practical gaps in the social and ethical dimensions of threat modeling.
📝 Abstract
Threat modelling is the process of identifying potential vulnerabilities in a system and prioritising them. Existing threat modelling tools focus primarily on technical systems and are poorly suited to capturing interpersonal threats. In this paper, we discuss traditional threat modelling methods and their shortcomings, and propose a new threat modelling framework (HARMS) for identifying non-technical and human-factors harms. We also present a case study applying HARMS to IoT devices, such as smart speakers with virtual assistants.