An Information-Flow Perspective on Algorithmic Fairness

📅 2023-12-15
🏛️ AAAI Conference on Artificial Intelligence
📈 Citations: 0
Influential: 0
🤖 AI Summary
Verifying algorithmic fairness remains challenging due to the lack of a unified formal framework for characterizing disparate treatment and disparate impact. Method: This paper introduces a novel modeling paradigm grounded in secure information flow theory, treating protected attributes (e.g., race, gender) as “secret inputs” and constraining their information flow to outputs. It defines a quantitative fairness measure—*fairness spread*—and establishes its rigorous equivalence to counterfactual fairness within structural causal models, enabling formal fairness verification. Contributions: The approach unifies qualitative and quantitative information flow analysis with counterfactual reasoning and integrates existing tools (e.g., QUAIL, HYPERPROB). It supports interpretable, computationally tractable, and transferable verification of classical fairness notions—including demographic parity—as well as fairness spread, thereby advancing the formal foundations of algorithmic fairness verification.
📝 Abstract
This work presents insights gained by investigating the relationship between algorithmic fairness and the concept of secure information flow. The problem of enforcing secure information flow is well-studied in the context of information security: If secret information may "flow" through an algorithm or program in such a way that it can influence the program’s output, then that is considered insecure information flow as attackers could potentially observe (parts of) the secret. There is a strong correspondence between secure information flow and algorithmic fairness: if protected attributes such as race, gender, or age are treated as secret program inputs, then secure information flow means that these "secret" attributes cannot influence the result of a program. While most research in algorithmic fairness evaluation concentrates on studying the impact of algorithms (often treating the algorithm as a black-box), the concepts derived from information flow can be used both for the analysis of disparate treatment as well as disparate impact w.r.t. a structural causal model. In this paper, we examine the relationship between quantitative as well as qualitative information-flow properties and fairness. Moreover, based on this duality, we derive a new quantitative notion of fairness called fairness spread, which can be easily analyzed using quantitative information flow and which strongly relates to counterfactual fairness. We demonstrate that off-the-shelf tools for information-flow properties can be used in order to formally analyze a program's algorithmic fairness properties, including the new notion of fairness spread as well as established notions such as demographic parity.
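The abstract's core correspondence — protected attributes as "secret" inputs, with secure information flow meaning those inputs cannot influence the output — can be illustrated with a tiny sketch. This is not the paper's verification method (which uses formal tools such as QUAIL and HyperProb); it is only a brute-force noninterference check on a toy classifier, with all names below chosen for illustration:

```python
# Toy classifier: decides approval from a public input (income) and a
# protected attribute. Treating `protected` as a secret input, qualitative
# noninterference requires the output to be identical for every value of
# `protected` once the public input is fixed.
def classifier(income: int, protected: int) -> int:
    return 1 if income + 2 * protected >= 10 else 0  # leaks the protected bit


def violates_noninterference(f, incomes, protected_values) -> bool:
    """Return True if some public input yields different outputs
    when only the secret (protected) input changes."""
    for income in incomes:
        outputs = {f(income, p) for p in protected_values}
        if len(outputs) > 1:
            return True
    return False


print(violates_noninterference(classifier, range(20), [0, 1]))  # True: unfair
```

A classifier that ignores `protected` entirely would pass this check, which is exactly the disparate-treatment reading of noninterference described above; the paper's contribution is to make such checks formal and quantitative rather than exhaustive enumeration.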
Problem

Research questions and friction points this paper is trying to address.

Investigating the relationship between algorithmic fairness and secure information flow
Analyzing how protected attributes influence program outputs using information flow concepts
Developing quantitative fairness metrics through information flow properties
Innovation

Methods, ideas, or system contributions that make the work stand out.

Secure information flow for fairness analysis
Quantitative fairness spread concept
Application of off-the-shelf information-flow tools (e.g., QUAIL, HyperProb) to fairness verification
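The quantitative side of the contribution can also be sketched. The paper's exact definition of fairness spread is not reproduced on this page, so the `fairness_spread_proxy` below is only an illustrative stand-in: it measures, over all outputs, the largest gap in output probability across groups, which for a binary output coincides with the demographic-parity gap |P(Y=1 | A=0) − P(Y=1 | A=1)|:

```python
from collections import Counter


def output_dist(samples):
    """Empirical output distribution from (input, output) samples."""
    counts = Counter(y for _, y in samples)
    n = sum(counts.values())
    return {y: c / n for y, c in counts.items()}


def fairness_spread_proxy(samples_by_group) -> float:
    """Illustrative proxy (NOT the paper's formal definition): max over
    outputs of the spread in output probability across protected groups."""
    dists = [output_dist(s) for s in samples_by_group.values()]
    outputs = set().union(*(d.keys() for d in dists))
    return max(
        max(d.get(y, 0.0) for d in dists) - min(d.get(y, 0.0) for d in dists)
        for y in outputs
    )


# Hypothetical sample data: the classifier's threshold effectively shifts
# by 2 for group A=1, so the two groups approve at different rates.
group0 = [(x, 1 if x >= 10 else 0) for x in range(20)]      # A = 0: rate 0.5
group1 = [(x, 1 if x + 2 >= 10 else 0) for x in range(20)]  # A = 1: rate 0.6
print(fairness_spread_proxy({0: group0, 1: group1}))  # ≈ 0.1
```

A spread of zero would correspond to demographic parity on this sample; the paper instead computes such quantities formally on the program itself, via quantitative information-flow analysis, rather than from observed samples.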
Samuel Teuber
PhD Student @ Karlsruhe Institute of Technology
Bernhard Beckert
Karlsruhe Institute of Technology