Keywords
epistemic injustice, algorithmic fairness, predictive fairness, credibility, epistemic agency, distributive fairness indices, recommender systems, opinion dynamics, algorithmic auditing, epistemic harms
Authors
Camilla Quaresmini, Lisa Piccinin, Valentina Breschi
Abstract
Algorithmic systems increasingly function as epistemic infrastructures that govern the conditions of interpretative access and social belief. Yet mainstream auditing strategies operationalize fairness primarily in predictive terms (error rates, calibration, or group-level parity), leaving epistemic harms under-theorized and under-measured. We propose a quantitative framework for evaluating forms of epistemic injustice in algorithmic environments. First, we introduce a deficit-based template that models epistemic injustices as gaps between ideal and realized conditions across features such as credibility, uptake, and epistemic agency. We map these deficits to concrete stages of algorithmic mediation, showing how epistemic injustice can persist even when standard fairness constraints are satisfied. Drawing on distributive fairness indices, we distinguish two evaluation stances: resource inequality, where indices are applied directly to distributions of epistemic goods, and capability/rights inequity, where indices are applied to output-induced epistemic opportunity. We provide an epistemic translation of canonical indices, illustrating how they diagnose complementary signatures of unfairness, such as exclusionary tails and hierarchical concentration, and support longitudinal auditing under iterative deployment. We also present a simulation study of a recommender-mediated opinion dynamics setting, showing how the proposed indices capture the evolution of epistemic unfairness under repeated platform interventions. The result is a measurement framework that makes the epistemic dimension of algorithmic harms explicit for system design and evaluation.
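As a worked illustration of the deficit-based template, one minimal reading (the notation below is ours, not taken from the paper) models each epistemic feature f, evaluated for a group g, as the shortfall of its realized level below an ideal level:

```latex
% Hypothetical formalization of the deficit template (assumed notation):
% f^{*}(g) is the ideal level of feature f for group g, \hat{f}(g) the
% realized level under algorithmic mediation; the deficit is the shortfall.
\Delta_f(g) \;=\; \max\!\bigl(0,\; f^{*}(g) - \hat{f}(g)\bigr),
\qquad f \in \{\text{credibility},\ \text{uptake},\ \text{agency}\}
```

On this reading, a system can satisfy a standard predictive constraint, such as equalized error rates across groups, while still exhibiting a positive deficit for some group and feature, which is one way epistemic injustice can persist even when fairness constraints are met.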
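To make the resource-inequality stance and the longitudinal audit concrete, the sketch below is our own construction: the agent model, the parameters, and the choice of the Gini index are illustrative assumptions, not the paper's specification. It applies the Gini index directly to the distribution of exposure (an epistemic good) generated by a toy homophilous recommender acting on simple opinion dynamics, and tracks the index across repeated platform rounds:

```python
import numpy as np

def gini(x):
    """Gini index of a non-negative distribution (0 = equality, 1 = max concentration)."""
    x = np.sort(np.asarray(x, dtype=float))
    n, total = x.size, x.sum()
    if n == 0 or total == 0:
        return 0.0
    ranks = np.arange(1, n + 1)
    return float(2.0 * np.sum(ranks * x) / (n * total) - (n + 1.0) / n)

rng = np.random.default_rng(0)
n_agents, n_rounds, k = 100, 50, 10
opinions = rng.uniform(-1.0, 1.0, n_agents)
credibility = rng.uniform(0.2, 1.0, n_agents)  # assumed per-agent credibility scores

audit_trace = []  # longitudinal audit: one inequality reading per platform round
for _ in range(n_rounds):
    exposure = np.zeros(n_agents)  # epistemic good: how often each agent is surfaced
    updated = opinions.copy()
    for i in range(n_agents):
        # Toy recommender: surface the k agents with the most similar opinions,
        # then weight their influence by credibility (homophilous mediation).
        neighbors = np.argsort(np.abs(opinions - opinions[i]))[1:k + 1]
        weights = credibility[neighbors] / credibility[neighbors].sum()
        updated[i] = 0.7 * opinions[i] + 0.3 * (weights @ opinions[neighbors])
        exposure[neighbors] += 1.0
    opinions = updated
    # Resource-inequality stance: the index is applied to the good itself.
    audit_trace.append(gini(exposure))

print(f"exposure Gini, round 1 vs round {n_rounds}: "
      f"{audit_trace[0]:.3f} -> {audit_trace[-1]:.3f}")
```

A capability/rights variant would apply the same index not to exposure itself but to an opportunity measure induced by the outputs, for instance each agent's chance of receiving uptake; the per-round trace is what supports auditing under iterative deployment, since it shows whether repeated interventions concentrate or disperse the epistemic good over time.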