A decision maker typically (i) incorporates training data to learn about the relative effectiveness of the treatments, and (ii) chooses an implementation mechanism that implies an "optimal" predicted outcome distribution according to some target functional. However, a discrimination-aware decision maker may not be satisfied with achieving this optimality at the cost of heavily discriminating against subgroups of the population, in the sense that the outcome distribution in a subgroup deviates strongly from the overall optimal outcome distribution. We study a framework that allows the decision maker to penalize such deviations, while permitting a wide range of target functionals and discrimination measures. We establish regret and consistency guarantees for empirical success policies with data-driven tuning parameters, and provide numerical results. Furthermore, we briefly illustrate the methods in two empirical settings.
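As a rough illustration of the kind of penalized objective described above (a sketch only; the symbols below are illustrative and not necessarily the paper's notation), write delta for a treatment rule, F_delta for the outcome distribution it induces, F_{delta,g} for the induced outcome distribution in subgroup g, T for the target functional, D for a discrimination measure comparing two distributions, and lambda >= 0 for a tuning parameter:

\max_{\delta} \; T\!\left(F_{\delta}\right) \;-\; \lambda \sum_{g=1}^{G} D\!\left(F_{\delta,g},\, F_{\delta}\right), \qquad \lambda \ge 0.

An empirical success policy of the type studied in the paper would then, roughly, replace the population quantities by sample analogues and select lambda in a data-driven way.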
Link to the underlying paper: https://arxiv.org/abs/2401.17909
Personal website of David Preinerstorfer