Algorithmic fairness in machine learning has recently garnered significant attention. However, several pressing challenges remain: (1) The fairness guarantees of existing fair classification methods often rely on specific distributional assumptions on the data and on large sample sizes, which can lead to fairness violations when the sample size is only moderate, a common situation in practice. (2) Due to legal and societal considerations, it may not always be feasible to use sensitive group attributes during decision-making; the setting in which these attributes are unavailable is referred to as the group-blind setting. (3) Modern machine learning models have complicated structures, making it challenging to develop theory that is useful in practice.
In this work, we quantify the impact of enforcing group fairness constraints and group-blindness in binary classification. Specifically, we propose a unified framework for fair classification that provides distribution-free, finite-sample fairness guarantees with controlled excess risk. This framework applies to various notions of group fairness in both the group-aware and group-blind settings. Furthermore, we establish a minimax lower bound on the excess risk, showing that our proposed algorithm is minimax optimal up to logarithmic factors. In addition, we will extend our analysis to fairness issues in other popular machine learning tasks, including conformal prediction, large language models, and computer vision, and develop theory with practical relevance.
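For concreteness, a standard way to formalize fair classification under a group fairness constraint (a generic illustration with demographic parity; not necessarily the exact formulation used in this work) is as a constrained risk minimization over features X, a binary sensitive attribute A, and label Y:

\[
\min_{f \in \mathcal{F}} \ \mathbb{P}\bigl(f(X) \neq Y\bigr)
\quad \text{subject to} \quad
\bigl|\mathbb{P}\bigl(f(X)=1 \mid A=0\bigr) - \mathbb{P}\bigl(f(X)=1 \mid A=1\bigr)\bigr| \le \delta ,
\]

where \(\delta \ge 0\) is the allowed fairness violation. The excess risk of an estimator \(\hat{f}\) is then measured against the best fair classifier, \(\mathcal{E}(\hat{f}) = \mathbb{P}(\hat{f}(X) \neq Y) - \inf_{f \in \mathcal{F}_{\mathrm{fair}}} \mathbb{P}(f(X) \neq Y)\). In the group-aware setting, \(f\) may use both X and A as inputs; in the group-blind setting, \(f\) is restricted to depend on X alone, even though the fairness constraint is still defined through A.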
Linjun Zhang is an Associate Professor in the Department of Statistics at Rutgers University. He obtained his Ph.D. in Statistics from the Wharton School, University of Pennsylvania, in 2019, and received the J. Parker Bursk Memorial Prize and the Donald S. Murray Prize for excellence in research and teaching, respectively, upon graduation. He also received the NSF CAREER Award and the Rutgers Presidential Teaching Award in 2024. His current research interests include algorithmic fairness, privacy-preserving data analysis, deep learning theory, and high-dimensional statistics.