Toby Walsh

Mechanisms that play a game, not toss a coin

Aug 21, 2023
Toby Walsh

Randomized mechanisms can have good normative properties compared to their deterministic counterparts. However, randomized mechanisms are problematic in several ways, such as in their verifiability. We propose here to derandomize such mechanisms by having agents play a game instead of tossing a coin. The game is designed so that an agent's best action is to play randomly, and this play then injects "randomness" into the mechanism. This derandomization retains many of the good normative properties of the original randomized mechanism but gives a mechanism that is deterministic and easy, for instance, to audit. We consider three related methods to derandomize randomized mechanisms in six different domains: voting, facility location, task allocation, school choice, peer selection, and resource allocation. We propose a number of novel derandomized mechanisms for these six domains with good normative properties. Each mechanism has a mixed Nash equilibrium in which agents play a modular arithmetic game with a uniform mixed strategy. In all but one mixed Nash equilibrium, agents report their preferences over the original problem sincerely. The derandomized mechanisms are thus "quasi-strategyproof". In one domain, we additionally show that a new and desirable normative property emerges as a result of derandomization.
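To make the idea concrete, here is a minimal sketch of the kind of modular arithmetic game described above (the function name and the example setup are illustrative assumptions, not taken from the paper): each agent picks an integer in {0, ..., m-1}, and the outcome is the sum modulo m. If any one agent mixes uniformly over its actions, the outcome is uniform regardless of how the others play, which is the "randomness" the derandomized mechanism harvests.

```python
def modular_game_outcome(actions, m):
    """Outcome of a simple modular arithmetic game:
    the sum of the agents' chosen integers, modulo m."""
    return sum(actions) % m

# Two opponents play deterministically; one agent sweeps through all
# of its m actions. The outcome cycles through every residue, so a
# uniform mixed strategy for that agent makes the outcome uniform
# no matter what the others do.
m = 4
fixed_opponents = [3, 1]  # arbitrary deterministic choices
outcomes = sorted(modular_game_outcome(fixed_opponents + [a], m) for a in range(m))
print(outcomes)  # [0, 1, 2, 3]
```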

Incentives to Offer Algorithmic Recourse

Jan 27, 2023
Matthew Olckers, Toby Walsh

Due to the importance of artificial intelligence (AI) in a variety of high-stakes decisions, such as loan approval, job hiring, and criminal bail, researchers in Explainable AI (XAI) have developed algorithms to provide users with recourse for an unfavorable outcome. We analyze the incentives for a decision-maker to offer recourse to a set of applicants. Does the decision-maker have the incentive to offer recourse to all rejected applicants? We show that the decision-maker only offers recourse to all applicants in extreme cases, such as when the recourse process is impossible to manipulate. Some applicants may be worse off when the decision-maker can offer recourse.

Proportional Fairness in Obnoxious Facility Location

Jan 11, 2023
Haris Aziz, Alexander Lam, Bo Li, Fahimeh Ramezani, Toby Walsh

We consider the obnoxious facility location problem (in which agents prefer the facility location to be far from them) and propose a hierarchy of distance-based proportional fairness concepts for the problem. These fairness axioms ensure that groups of agents at the same location are guaranteed to be a distance from the facility proportional to their group size. We consider deterministic and randomized mechanisms, and compute tight bounds on the price of proportional fairness. In the deterministic setting, not only are our proportional fairness axioms incompatible with strategyproofness, the Nash equilibria may not guarantee welfare within a constant factor of the optimal welfare. On the other hand, in the randomized setting, we identify proportionally fair and strategyproof mechanisms that give an expected welfare within a constant factor of the optimal welfare.
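The precise axioms in the hierarchy are not reproduced in the abstract; as an illustration only, a distance-based proportionality condition of the kind described can be checked as follows, assuming agents on a line and requiring each co-located group of size g (out of n agents) to be at distance at least alpha * g/n from the facility (the function name and the constant alpha are assumptions for this sketch, not the paper's definitions):

```python
from collections import Counter

def satisfies_proportional_distance(locations, facility, alpha=1.0):
    """Check an illustrative distance-based proportionality axiom:
    every group of agents sharing a location must be at distance
    at least alpha * (group size / n) from the facility."""
    n = len(locations)
    groups = Counter(locations)  # location -> number of agents there
    return all(abs(loc - facility) >= alpha * g / n
               for loc, g in groups.items())

# Two agents at 0 and one agent at 1, facility on the line [0, 1]:
print(satisfies_proportional_distance([0, 0, 1], 2 / 3))  # True
print(satisfies_proportional_distance([0, 0, 1], 1.0))    # False
```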

Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report

Oct 27, 2022
Michael L. Littman, Ifeoma Ajunwa, Guy Berger, Craig Boutilier, Morgan Currie, Finale Doshi-Velez, Gillian Hadfield, Michael C. Horowitz, Charles Isbell, Hiroaki Kitano, Karen Levy, Terah Lyons, Melanie Mitchell, Julie Shah, Steven Sloman, Shannon Vallor, Toby Walsh

In September 2021, the "One Hundred Year Study on Artificial Intelligence" project (AI100) issued the second report of its planned long-term periodic assessment of artificial intelligence (AI) and its impact on society. It was written by a panel of 17 study authors, each of whom is deeply rooted in AI research, chaired by Michael Littman of Brown University. The report, entitled "Gathering Strength, Gathering Storms," answers a set of 14 questions probing critical areas of AI development, addressing the major risks and dangers of AI, its effects on society, its public perception, and the future of the field. The report concludes that AI has made a major leap from the lab to people's lives in recent years, which increases the urgency to understand its potential negative effects. The questions were developed by the AI100 Standing Committee, chaired by Peter Stone of the University of Texas at Austin, consisting of a group of AI leaders with expertise in computer science, sociology, ethics, economics, and other disciplines.

* 82 pages, https://ai100.stanford.edu/gathering-strength-gathering-storms-one-hundred-year-study-artificial-intelligence-ai100-2021-study 
Manipulation and Peer Mechanisms: A Survey

Oct 05, 2022
Matthew Olckers, Toby Walsh

In peer mechanisms, the competitors for a prize also determine who wins. Each competitor may be asked to rank, grade, or nominate peers for the prize. Since the prize can be valuable, such as financial aid, course grades, or an award at a conference, competitors may be tempted to manipulate the mechanism. We survey approaches to prevent or discourage the manipulation of peer mechanisms. We conclude our survey by identifying several important research challenges.

Random Rank: The One and Only Strategyproof and Proportionally Fair Randomized Facility Location Mechanism

May 30, 2022
Haris Aziz, Alexander Lam, Mashbat Suzuki, Toby Walsh

Proportionality is an attractive fairness concept that has been applied to a range of problems including the facility location problem, a classic problem in social choice. In our work, we propose a concept called Strong Proportionality, which ensures that when there are two groups of agents at different locations, both groups incur the same total cost. We show that although Strong Proportionality is a well-motivated and basic axiom, there is no deterministic strategyproof mechanism satisfying the property. We then identify a randomized mechanism called Random Rank (which uniformly selects a number $k$ between $1$ and $n$ and locates the facility at the $k$'th highest agent location) which satisfies Strong Proportionality in expectation. Our main theorem characterizes Random Rank as the unique mechanism that achieves universal truthfulness, universal anonymity, and Strong Proportionality in expectation among all randomized mechanisms. Finally, we show via the AverageOrRandomRank mechanism that even stronger ex-post fairness guarantees can be achieved by weakening universal truthfulness to strategyproofness in expectation.
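The Random Rank mechanism is fully described in the abstract and is short to state in code; a direct sketch (the helper name is ours):

```python
import random

def random_rank(locations):
    """Random Rank: choose k uniformly from {1, ..., n} and place
    the facility at the k-th highest reported agent location."""
    n = len(locations)
    k = random.randint(1, n)  # uniform over 1..n
    return sorted(locations, reverse=True)[k - 1]

print(random_rank([0.2, 0.9, 0.5]))  # always one of the reported locations
```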

The Meta-Turing Test

May 11, 2022
Toby Walsh

We propose an alternative to the Turing test that removes the inherent asymmetry between humans and machines in Turing's original imitation game. In this new test, both humans and machines judge each other. We argue that this makes the test more robust against simple deceptions. We also propose a small number of refinements to further improve the test. These refinements could also be applied to Turing's original imitation game.

* Appeared in AAAI 2017 Workshop - Technical Report, San Francisco, California USA, pp. 132 - 137, presented at AAAI 2017 conference 
Fairness Amidst Non-IID Graph Data: A Literature Review

Feb 16, 2022
Wenbin Zhang, Jeremy C. Weiss, Shuigeng Zhou, Toby Walsh

Fairness in machine learning (ML), the process of understanding and correcting algorithmic bias, has gained increasing attention, with a growing literature that commonly assumes the underlying data are independent and identically distributed (IID). Graphs, on the other hand, are a ubiquitous data structure for capturing connections among individual units and are non-IID by nature. It is therefore of great importance to bridge the traditional fairness literature, designed for IID data, with ubiquitous non-IID graph representations in order to tackle bias in ML systems. In this survey, we review recent advances in fairness amidst non-IID graph data and identify datasets and evaluation metrics available for future research. We also point out the limitations of existing work as well as promising future directions.

Strategyproof and Proportionally Fair Facility Location

Nov 02, 2021
Haris Aziz, Alexander Lam, Barton E. Lee, Toby Walsh

We focus on a simple, one-dimensional collective decision problem (often referred to as the facility location problem) and explore issues of strategyproofness and proportional fairness. We present several characterization results for mechanisms that satisfy strategyproofness and varying levels of proportional fairness. We also characterize one of the mechanisms as the unique equilibrium outcome for any mechanism that satisfies natural fairness and monotonicity properties. Finally, we identify strategyproof and proportionally fair mechanisms that provide the best approximation of the optimal welfare among all mechanisms satisfying the corresponding fairness axiom.

Strategy Proof Mechanisms for Facility Location with Capacity Limits

Sep 17, 2020
Toby Walsh

An important feature of many real-world facility location problems is a capacity limit on the facilities. We show here how capacity constraints make it harder to design strategy proof mechanisms for facility location, but counter-intuitively can improve the guarantees on how well we can approximate the optimal solution.
