Developing classification algorithms that are fair with respect to sensitive attributes of the data has become an important problem in machine learning research. Interest in this field has been motivated by concerns that machine learning algorithms may introduce significant bias with respect to certain sensitive attributes, e.g., bias against Black defendants in recidivism prediction and in NYPD stop-and-frisk data, or against women in job recommendations.

We develop a new meta-classification algorithm that takes as input a large class of fairness metrics, along with constraints on them, and returns an optimal classifier satisfying those constraints. Unlike previous work on fair classification, which focused on developing classifiers that are fair with respect to a single, specific fairness metric, our algorithm can handle multiple fairness metrics simultaneously, such as statistical parity and/or false discovery rate.

The theoretical and experimental details of the algorithm can be found in the paper "Classification with Fairness Constraints: A Meta-Algorithm with Provable Guarantees". We also provide an implementation of the algorithm for two fairness metrics: Statistical Parity and False Discovery Rate.

To facilitate comparison with other work and future research on this topic, we collaborated with IBM AIF360 to make our code compatible with their framework. Accordingly, our algorithm can be used through the AIF360 Python library or its code repository. To extend the algorithm to other fairness metrics, one only needs to implement the optimization problem corresponding to that metric (similar to the existing implementations for Statistical Rate and False Discovery Rate).
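As a rough illustration, the sketch below shows one way the algorithm might be invoked through AIF360's in-processing wrapper. It assumes the MetaFairClassifier class, the bundled Adult dataset, and the parameter names tau, sensitive_attr, and type, which may differ across AIF360 versions; treat it as a starting point rather than a definitive recipe.

```python
# Minimal usage sketch (assumes AIF360 is installed and the raw Adult data
# files have been placed in AIF360's data directory).
from aif360.datasets import AdultDataset
from aif360.algorithms.inprocessing import MetaFairClassifier
from aif360.metrics import ClassificationMetric

# Load the Adult dataset and split it into train/test BinaryLabelDatasets.
dataset = AdultDataset()
train, test = dataset.split([0.7], shuffle=True)

# tau controls the desired fairness level; type selects the fairness metric
# ('sr' = statistical rate / statistical parity, 'fdr' = false discovery rate).
clf = MetaFairClassifier(tau=0.8, sensitive_attr='sex', type='fdr')
clf.fit(train)

# Predict on the held-out split and evaluate accuracy and fairness.
pred = clf.predict(test)
metric = ClassificationMetric(
    test, pred,
    unprivileged_groups=[{'sex': 0}],
    privileged_groups=[{'sex': 1}])
print('Accuracy:', metric.accuracy())
print('Disparate impact:', metric.disparate_impact())
```

Varying tau trades off accuracy against the chosen fairness constraint, so in practice one would typically sweep it over a grid and inspect the resulting accuracy/fairness curve.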

If you have any comments on this work or questions about the implementation, please contact us.