John Dickerson

Fair Polylog-Approximate Low-Cost Hierarchical Clustering

Nov 21, 2023
Marina Knittel, Max Springer, John Dickerson, MohammadTaghi Hajiaghayi

Research in fair machine learning, and particularly clustering, has been crucial in recent years given the many ethical controversies that modern intelligent systems have posed. Ahmadian et al. [2020] established the study of fairness in hierarchical clustering, a stronger, more structured variant of its well-known flat counterpart, though their proposed algorithm that optimizes for Dasgupta's [2016] famous cost function was highly theoretical. Knittel et al. [2023] then proposed the first practical fair approximation for cost; however, they were unable to break the polynomial-approximate barrier they posed as a hurdle of interest. We break this barrier, proposing the first truly polylogarithmic-approximate low-cost fair hierarchical clustering, thus greatly narrowing the gap between the best fair and vanilla hierarchical clustering approximations.
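
For reference, the objective being approximated is Dasgupta's [2016] cost function; a standard statement (the notation below is ours, not taken from the paper) is:

```latex
% Dasgupta's cost of a hierarchical clustering tree T of a weighted graph G = (V, E, w):
% each edge pays its weight times the size of the smallest cluster (subtree) that
% still contains both of its endpoints.
\[
  \mathrm{cost}_G(T) \;=\; \sum_{(i,j) \in E} w(i,j) \cdot \bigl|\,\mathrm{leaves}\bigl(T[i \vee j]\bigr)\,\bigr|,
\]
% where T[i \vee j] is the subtree rooted at the least common ancestor of leaves i and j.
```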

* Accepted to NeurIPS '23 (16 pages, 5 figures) 

Doubly Constrained Fair Clustering

May 31, 2023
John Dickerson, Seyed A. Esmaeili, Jamie Morgenstern, Claire Jie Zhang

The remarkable attention which fair clustering has received in the last few years has resulted in a significant number of different notions of fairness. Despite the fact that these notions are well-justified, they are often motivated and studied in a disjoint manner, where one fairness desideratum is considered in isolation from the others. This leaves the understanding of the relations between different fairness notions as an important open problem in fair clustering. In this paper, we take the first step in this direction. Specifically, we consider the two most prominent demographic representation fairness notions in clustering: (1) Group Fairness (GF), where the different demographic groups are supposed to have close to population-level representation in each cluster, and (2) Diversity in Center Selection (DS), where the selected centers are supposed to have close to population-level representation of each group. We show that given a constant approximation algorithm for one constraint (GF or DS only) we can obtain a constant approximation solution that satisfies both constraints simultaneously. Interestingly, we prove that any given solution that satisfies the GF constraint can always be post-processed at a bounded degradation to the clustering cost to additionally satisfy the DS constraint, while the reverse is not true. Furthermore, we show that both GF and DS are incompatible (having an empty feasibility set in the worst case) with a collection of other distance-based fairness notions. Finally, we carry out experiments to validate our theoretical findings.
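
As a rough illustration of the two constraints (the bounds $\alpha_h, \beta_h$ and this exact parameterization are assumptions for exposition; the paper's precise definitions may differ):

```latex
% Group Fairness (GF): every cluster roughly mirrors each group's population share.
\[
  \beta_h \;\le\; \frac{|C_\ell \cap h|}{|C_\ell|} \;\le\; \alpha_h
  \qquad \text{for every cluster } C_\ell \text{ and group } h .
\]
% Diversity in Center Selection (DS): the k chosen centers S roughly mirror those shares.
\[
  \beta_h \;\le\; \frac{|S \cap h|}{k} \;\le\; \alpha_h
  \qquad \text{for every group } h, \quad |S| = k .
\]
```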

Artificial Intelligence/Operations Research Workshop 2 Report Out

Apr 10, 2023
John Dickerson, Bistra Dilkina, Yu Ding, Swati Gupta, Pascal Van Hentenryck, Sven Koenig, Ramayya Krishnan, Radhika Kulkarni, Catherine Gill, Haley Griffin, Maddy Hunter, Ann Schwartz

This workshop Report Out focuses on the foundational elements of trustworthy AI and OR technology, and on how to ensure that all AI and OR systems implement these elements in their system designs. Four sessions on topics within Trustworthy AI were held: Fairness, Explainable AI/Causality, Robustness/Privacy, and Human Alignment and Human-Computer Interaction. Following discussions of each of these topics, workshop participants also brainstormed challenge problems which require the collaboration of AI and OR researchers and will result in the integration of basic techniques from both fields to eventually benefit societal needs.

Reckoning with the Disagreement Problem: Explanation Consensus as a Training Objective

Mar 23, 2023
Avi Schwarzschild, Max Cembalest, Karthik Rao, Keegan Hines, John Dickerson

As neural networks increasingly make critical decisions in high-stakes settings, monitoring and explaining their behavior in an understandable and trustworthy manner is a necessity. One commonly used type of explainer is post hoc feature attribution, a family of methods that give each feature in an input a score corresponding to its influence on a model's output. A major limitation of this family of explainers in practice is that they can disagree on which features are more important than others. Our contribution in this paper is a method of training models with this disagreement problem in mind. We do this by introducing a Post hoc Explainer Agreement Regularization (PEAR) loss term alongside the standard term corresponding to accuracy; this additional term measures the difference in feature attribution between a pair of explainers. We observe on three datasets that we can train a model with this loss term to improve explanation consensus on unseen data, and we see improved consensus between explainers other than those used in the loss term. We examine the trade-off between improved consensus and model performance, and finally we study the influence our method has on feature attribution explanations.
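
A minimal sketch of training with an explanation-consensus penalty, using two cheap differentiable explainers (plain input gradients and gradient-times-input) as stand-ins; the explainers, consensus metric, and loss weighting used in the paper may differ.

```python
# Hedged sketch of a consensus-regularized training loss (not the paper's exact PEAR term).
import torch
import torch.nn.functional as F

def agreement_loss(model, x, y, lam=0.1):
    x = x.clone().requires_grad_(True)
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)

    # Attribution target: summed logit of each example's predicted class.
    score = logits.gather(1, logits.argmax(dim=1, keepdim=True)).sum()
    grads = torch.autograd.grad(score, x, create_graph=True)[0]

    attr_saliency = grads            # explainer 1: input gradients
    attr_grad_x_input = grads * x    # explainer 2: gradient * input

    # Disagreement term: 1 - cosine similarity between the two attributions.
    cos = F.cosine_similarity(attr_saliency.flatten(1),
                              attr_grad_x_input.flatten(1), dim=1)
    disagreement = (1.0 - cos).mean()

    return task_loss + lam * disagreement
```

The returned scalar can be backpropagated as usual; `lam` trades off accuracy against explanation consensus.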

Neural Auctions Compromise Bidder Information

Feb 28, 2023
Alex Stein, Avi Schwarzschild, Michael Curry, Tom Goldstein, John Dickerson

Single-shot auctions are commonly used as a means to sell goods, for example when selling ad space or allocating radio frequencies; however, devising mechanisms for auctions with multiple bidders and multiple items can be complicated. It has been shown that neural networks can be used to approximate optimal mechanisms while satisfying the constraints that an auction be strategyproof and individually rational. We show that while such auctions maximize revenue, they do so at the cost of revealing private bidder information. While randomness is often used to build in privacy, in this context it comes with complications if done without care. Specifically, it can violate rationality and feasibility constraints, fundamentally change the incentive structure of the mechanism, and/or harm top-level metrics such as revenue and social welfare. We propose a method that employs stochasticity to improve privacy while meeting the requirements for auction mechanisms, with only a modest sacrifice in revenue. We analyze the cost to the auction house that comes with introducing varying degrees of privacy in common auction settings. Our results show that despite current neural auctions' ability to approximate optimal mechanisms, the resulting vulnerability that comes with relying on neural networks must be accounted for.
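
As a toy illustration of why naive randomization is delicate (this is not the paper's mechanism): in a single-item auction the allocation probabilities across bidders must sum to at most one, and additive noise can break that feasibility constraint unless it is repaired, while the repair itself shifts bidders' expected utilities and hence the incentives.

```python
# Toy illustration only: unconstrained noise on an allocation can violate feasibility.
import numpy as np

rng = np.random.default_rng(0)
alloc = np.array([0.7, 0.2, 0.1])            # one item, three bidders; sums to 1
noisy = alloc + rng.normal(scale=0.2, size=3)
print(noisy.sum())                           # can exceed 1 -> infeasible allocation

# One crude repair: clip to [0, 1] and renormalize if the total exceeds 1.
# Even this "fix" changes expected utilities, i.e., the incentive structure.
repaired = np.clip(noisy, 0.0, 1.0)
if repaired.sum() > 1.0:
    repaired = repaired / repaired.sum()
print(repaired, repaired.sum())
```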

Targets in Reinforcement Learning to solve Stackelberg Security Games

Nov 30, 2022
Saptarashmi Bandyopadhyay, Chenqi Zhu, Philip Daniel, Joshua Morrison, Ethan Shay, John Dickerson

Reinforcement Learning (RL) algorithms have been successfully applied to real-world situations such as illegal smuggling, poaching, deforestation, climate change, and airport security. These scenarios can be framed as Stackelberg security games (SSGs), where defenders and attackers compete to control target resources. An algorithm's competency is assessed by which agent ends up controlling the targets. This review investigates the modeling of SSGs in RL, with a focus on possible improvements to target representations in RL algorithms.
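
For context, the standard SSG expected utilities (the textbook formulation, not anything specific to this review) are:

```latex
% The defender commits to a coverage vector c (c_t = probability target t is protected);
% the attacker observes c and best-responds by attacking some target t.
\[
  U_d(t, c) \;=\; c_t \, U_d^{c}(t) + (1 - c_t)\, U_d^{u}(t), \qquad
  U_a(t, c) \;=\; c_t \, U_a^{c}(t) + (1 - c_t)\, U_a^{u}(t),
\]
% where superscripts c/u denote the payoff when the attacked target is covered or
% uncovered; the defender maximizes U_d anticipating the attacker's best response.
```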

* Appears in Proceedings of AAAI FSS-22 Symposium "Lessons Learned for Autonomous Assessment of Machine Abilities (LLAAMA)" 

Achieving Downstream Fairness with Geometric Repair

Mar 14, 2022
Kweku Kwegyir-Aggrey, Jessica Dai, John Dickerson, Keegan Hines

Consider a scenario where some upstream model developer must train a fair model, but is unaware of the fairness requirements of a downstream model user or stakeholder. In the context of fair classification, we present a technique that specifically addresses this setting by post-processing a regressor's scores such that they yield fair classifications for any downstream choice of decision threshold. To begin, we leverage ideas from optimal transport to show how this can be achieved for binary protected groups across a broad class of fairness metrics. Then, we extend our approach to address the setting where a protected attribute takes on multiple values, by recasting our technique as a convex optimization problem that leverages lexicographic fairness.
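
A minimal sketch of the kind of quantile-based score post-processing involved, for a single binary protected attribute; the function name, the pooled target distribution, and the interpolation parameter `lam` are illustrative assumptions, and the paper's lexicographic multi-group extension is not shown.

```python
# Hedged sketch: move each group's score distribution a fraction `lam` of the way
# toward the pooled score distribution via quantile mapping, so that thresholding
# the repaired scores treats the groups more evenly at any downstream threshold.
import numpy as np

def partial_repair(scores, group, lam=1.0):
    scores = np.asarray(scores, dtype=float)
    repaired = scores.copy()
    for g in np.unique(group):
        mask = group == g
        # Rank of each score within its own group, rescaled to [0, 1].
        ranks = np.argsort(np.argsort(scores[mask])) / max(mask.sum() - 1, 1)
        # Corresponding quantile of the pooled score distribution.
        target = np.quantile(scores, ranks)
        repaired[mask] = (1 - lam) * scores[mask] + lam * target
    return repaired
```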

Differentiable Economics for Randomized Affine Maximizer Auctions

Feb 06, 2022
Michael Curry, Tuomas Sandholm, John Dickerson

A recent approach to automated mechanism design, differentiable economics, represents auctions by rich function approximators and optimizes their performance by gradient descent. The ideal auction architecture for differentiable economics would be perfectly strategyproof, support multiple bidders and items, and be rich enough to represent the optimal (i.e. revenue-maximizing) mechanism. So far, such an architecture does not exist. There are single-bidder approaches (MenuNet, RochetNet) which are always strategyproof and can represent optimal mechanisms. RegretNet is multi-bidder and can approximate any mechanism, but is only approximately strategyproof. We present an architecture that supports multiple bidders and is perfectly strategyproof, but cannot necessarily represent the optimal mechanism. This architecture is the classic affine maximizer auction (AMA), modified to offer lotteries. By using the gradient-based optimization tools of differentiable economics, we can now train lottery AMAs, competing with or outperforming prior approaches in revenue.
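
For reference, the underlying affine maximizer auction family has the standard form below; in the paper's setting the weights $w_i$ and boosts $\lambda$ are the learned parameters, and the outcome space is extended so that outcomes $a$ may be lotteries over allocations.

```latex
% Affine maximizer auction with bidder weights w_i > 0 and outcome boosts \lambda(a):
\[
  a^{*}(b) \;=\; \arg\max_{a} \; \sum_i w_i\, b_i(a) + \lambda(a),
\]
% each bidder pays a weighted VCG-style payment:
\[
  p_i(b) \;=\; \frac{1}{w_i}\Bigl[\, \max_{a}\Bigl(\textstyle\sum_{j \ne i} w_j b_j(a) + \lambda(a)\Bigr)
            \;-\; \Bigl(\textstyle\sum_{j \ne i} w_j b_j(a^{*}) + \lambda(a^{*})\Bigr) \Bigr].
\]
% This rule is strategyproof for any fixed w and \lambda, which is why the learned
% mechanism remains exactly strategyproof.
```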

Data-Driven Methods for Balancing Fairness and Efficiency in Ride-Pooling

Oct 07, 2021
Naveen Raman, Sanket Shah, John Dickerson

Rideshare and ride-pooling platforms use artificial intelligence-based matching algorithms to pair riders and drivers. However, these platforms can induce inequality either through an unequal income distribution or through disparate treatment of riders. We investigate two methods to reduce forms of inequality in ride-pooling platforms: (1) incorporating fairness constraints into the objective function and (2) redistributing income to drivers to reduce income fluctuation and inequality. To evaluate our solutions, we use the New York City taxi data set. For the first method, we find that optimizing for driver-side fairness outperforms state-of-the-art models on the number of riders serviced, both in the worst-off neighborhood and overall, showing that optimizing for fairness can assist profitability in certain circumstances. For the second method, we explore income redistribution as a way to combat income inequality by having drivers keep an $r$ fraction of their income and contribute the rest to a redistribution pool. For certain values of $r$, most drivers earn near their Shapley value while remaining incentivized to maximize value, thereby avoiding the free-rider problem and reducing income variability. The first method can be extended to many definitions of fairness, and the second method provably improves fairness without affecting profitability.
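
A minimal sketch of the redistribution rule as described, assuming for illustration that the pooled $(1-r)$ share is split evenly among drivers; the paper's exact redistribution scheme may differ.

```python
# Illustrative income redistribution: each driver keeps an r fraction of their own
# earnings, and the remaining (1 - r) from everyone is pooled and split evenly.
# The even split of the pool is an assumption made for this sketch.
import numpy as np

def redistribute(incomes, r=0.8):
    incomes = np.asarray(incomes, dtype=float)
    pool = (1.0 - r) * incomes.sum()
    return r * incomes + pool / len(incomes)

print(redistribute([300.0, 100.0, 20.0], r=0.8))  # variance shrinks; the total is preserved
```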

Learning Revenue-Maximizing Auctions With Differentiable Matching

Jun 15, 2021
Michael J. Curry, Uro Lyi, Tom Goldstein, John Dickerson

We propose a new architecture to approximately learn incentive compatible, revenue-maximizing auctions from sampled valuations. Our architecture uses the Sinkhorn algorithm to perform a differentiable bipartite matching which allows the network to learn strategyproof revenue-maximizing mechanisms in settings not learnable by the previous RegretNet architecture. In particular, our architecture is able to learn mechanisms in settings without free disposal where each bidder must be allocated exactly some number of items. In experiments, we show our approach successfully recovers multiple known optimal mechanisms and high-revenue, low-regret mechanisms in larger settings where the optimal mechanism is unknown.
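
A minimal sketch of the Sinkhorn step at the heart of the differentiable matching, i.e., an entropy-regularized alternating normalization; the surrounding auction network, payments, and the handling of exact per-bidder allocation counts in the paper are not shown.

```python
# Hedged sketch: map a bidders-by-items score matrix to an (approximately) doubly
# stochastic allocation by alternating row/column normalization in log space.
import torch

def sinkhorn(log_scores, n_iters=20, tau=0.1):
    log_alpha = log_scores / tau  # smaller tau -> closer to a hard matching
    for _ in range(n_iters):
        log_alpha = log_alpha - torch.logsumexp(log_alpha, dim=1, keepdim=True)  # rows
        log_alpha = log_alpha - torch.logsumexp(log_alpha, dim=0, keepdim=True)  # cols
    return log_alpha.exp()
```

Because every step is differentiable, gradients of revenue and regret flow through the matching back into the network that produces `log_scores`.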
