Ariel D. Procaccia

Generative Social Choice

Sep 03, 2023
Sara Fish, Paul Gölz, David C. Parkes, Ariel D. Procaccia, Gili Rusak, Itai Shapira, Manuel Wüthrich

Traditionally, social choice theory has been applicable only to choices among a few predetermined alternatives, not to more complex decisions such as collectively selecting a textual statement. We introduce generative social choice, a framework that combines the mathematical rigor of social choice theory with the capability of large language models to generate text and extrapolate preferences. This framework divides the design of AI-augmented democratic processes into two components: first, proving that the process satisfies rigorous representation guarantees when given access to oracle queries; second, empirically validating that these queries can be approximately implemented using a large language model. We illustrate the framework by applying it to the problem of generating a slate of statements that is representative of opinions expressed as free-form text, for instance in an online deliberative process.
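As a rough illustration of the two-component design, here is a minimal sketch of a slate-building loop that consumes two oracle queries, one generative and one for checking approval. The oracle signatures and the coverage-style selection rule are illustrative assumptions, not the paper's exact process; in the framework, these queries would be implemented approximately by a large language model.

```python
from typing import Callable, List

def build_slate(
    opinions: List[str],                    # free-form statements from participants
    slate_size: int,
    generate: Callable[[List[str]], str],   # oracle: statement representing a group
    approves: Callable[[str, str], bool],   # oracle: does this participant approve?
) -> List[str]:
    slate, uncovered = [], set(range(len(opinions)))
    for _ in range(slate_size):
        # Ask the generative oracle for a statement aimed at the
        # participants not yet represented by the slate.
        candidate = generate([opinions[i] for i in sorted(uncovered)])
        slate.append(candidate)
        # Retire every participant the new statement represents.
        uncovered -= {i for i in uncovered if approves(opinions[i], candidate)}
        if not uncovered:
            break
    return slate
```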

The Distortion of Binomial Voting Defies Expectation

Jun 27, 2023
Yannai A. Gonczarowski, Gregory Kehne, Ariel D. Procaccia, Ben Schiffer, Shirley Zhang

In computational social choice, the distortion of a voting rule quantifies the degree to which the rule overcomes limited preference information to select a socially desirable outcome. This concept has been investigated extensively, but only through a worst-case lens. Instead, we study the expected distortion of voting rules with respect to an underlying distribution over voter utilities. Our main contribution is the design and analysis of a novel and intuitive rule, binomial voting, which provides strong expected distortion guarantees for all distributions.
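Expected distortion can be made concrete with a short Monte Carlo sketch: draw cardinal utilities from a distribution, let a rule observe only ordinal information, and average the ratio of the optimal social welfare to the welfare of the rule's winner. The uniform utility model and the plurality stand-in rule are assumptions for illustration; binomial voting itself is defined in the paper.

```python
import random

def expected_distortion(num_voters=50, num_alts=5, trials=2000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # Cardinal utilities, which the voting rule never sees directly.
        utils = [[rng.random() for _ in range(num_alts)] for _ in range(num_voters)]
        # The rule observes only each voter's top choice (plurality here).
        counts = [0] * num_alts
        for u in utils:
            counts[max(range(num_alts), key=u.__getitem__)] += 1
        winner = max(range(num_alts), key=counts.__getitem__)
        welfare = [sum(u[a] for u in utils) for a in range(num_alts)]
        total += max(welfare) / welfare[winner]   # distortion of this draw
    return total / trials
```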

The Goal-Gradient Hypothesis in Stack Overflow

Feb 14, 2020
Nicholas Hoernle, Gregory Kehne, Ariel D. Procaccia, Kobi Gal

According to the goal-gradient hypothesis, people increase their efforts toward a reward as they close in on it. This hypothesis has recently been used to explain user behavior in online communities that award badges for completing specific activities. In such settings, users exhibit a "steering effect": a dramatic increase in activity as they approach a badge threshold, consistent with the predictions of the goal-gradient hypothesis. This paper provides a new probabilistic model of user behavior that captures users who exhibit different levels of steering. We apply this model to data from the popular Q&A site Stack Overflow and study users who achieve one of the badges available on the platform. Our results show that only a fraction (20%) of users strongly experience steering, whereas the activity of more than 40% of badge achievers appears unaffected by the badge. In particular, we find that for part of the population, increased activity around the badge acquisition date may reflect a statistical artifact rather than steering, contrary to what prior work assumed. These results are important for system designers who hope to motivate and guide their users toward certain actions, and they highlight the need for further studies investigating what motivates non-steered users to contribute to online communities.
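The contrast between steered and non-steered users can be pictured with a toy simulation (not the paper's probabilistic model): a steered user's daily activity rate ramps up with proximity to the badge threshold, while a non-steered user posts at a constant rate. All rates and the linear ramp are assumptions.

```python
import random

def simulate_user(steered: bool, threshold: int = 100,
                  base_rate: float = 2.0, boost: float = 3.0,
                  seed: int = 0) -> list:
    """Daily action counts until the badge threshold is reached."""
    rng = random.Random(seed)
    progress, daily_counts = 0, []
    while progress < threshold:
        # Per the goal-gradient hypothesis, steered users intensify
        # effort in proportion to their proximity to the badge.
        rate = base_rate + (boost * progress / threshold if steered else 0.0)
        actions = max(0, round(rng.gauss(rate, 1.0)))
        daily_counts.append(actions)
        progress += actions
    return daily_counts
```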

Learning and Planning in Feature Deception Games

May 13, 2019
Zheyuan Ryan Shi, Ariel D. Procaccia, Kevin S. Chan, Sridhar Venkatesan, Noam Ben-Asher, Nandi O. Leslie, Charles Kamhoua, Fei Fang

Today's high-stakes adversarial interactions feature attackers who constantly breach ever-improving security measures. Deception mitigates the defender's loss by misleading the attacker into making suboptimal decisions. To reason formally about deception, we introduce the feature deception game (FDG), a domain-independent game-theoretic model, and present a learning and planning framework. We make the following contributions. (1) We show that we can uniformly learn the adversary's preferences using data from a modest number of deception strategies. (2) We prove that finding the optimal deception strategy is NP-hard and propose an approximation algorithm. (3) We perform extensive experiments that empirically validate our methods and results.
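To make the planning half concrete, below is a toy brute-force sketch under one common modeling assumption: the attacker's (learned) utility is linear in the observable features, and the attacker best-responds by hitting the highest-utility target. The linear form, the exhaustive search, and all names are illustrative; an approximation algorithm is needed precisely because this search is intractable.

```python
from itertools import product

def plan_deception(true_losses, feature_options, attacker_weights):
    """Choose one feature vector per target so that the attacker's
    best response minimizes the defender's loss (brute force)."""
    best_cfg, best_loss = None, float("inf")
    for cfg in product(feature_options, repeat=len(true_losses)):
        # Attacker utility for each target under this configuration.
        utilities = [sum(w * f for w, f in zip(attacker_weights, feats))
                     for feats in cfg]
        attacked = max(range(len(cfg)), key=utilities.__getitem__)
        if true_losses[attacked] < best_loss:
            best_loss, best_cfg = true_losses[attacked], cfg
    return best_cfg, best_loss
```

Here feature_options would be a small set of feasible observable configurations per target, e.g. [(0, 1), (1, 0), (1, 1)].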

Envy-Free Classification

Sep 23, 2018
Maria-Florina Balcan, Travis Dick, Ritesh Noothigattu, Ariel D. Procaccia

In classic fair division problems such as cake cutting and rent division, envy-freeness requires that each individual (weakly) prefer their allocation to anyone else's. On a conceptual level, we argue that envy-freeness also provides a compelling notion of fairness for classification tasks. Our technical focus is the generalizability of envy-free classification, i.e., understanding whether a classifier that is envy-free on a sample would, with high probability, be almost envy-free with respect to the underlying distribution. Our main result establishes that a small sample suffices to achieve such guarantees when the classifier in question is a mixture of deterministic classifiers belonging to a family of low Natarajan dimension.
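The sample-level notion is easy to state in code: individual i envies j if i's expected utility for the randomized prediction assigned to j exceeds i's utility for their own by more than a slack alpha. The utility interface and data layout below are assumptions for illustration.

```python
def empirical_envy(X, predict_dist, utility, alpha=0.0):
    """Return the pairs (i, j) such that individual i envies j.

    predict_dist(x) -> dict mapping outcome -> probability
    utility(x, outcome) -> individual x's value for that outcome
    """
    def expected_util(xi, xj):
        # i's expected utility for the prediction that j receives.
        return sum(p * utility(xi, o) for o, p in predict_dist(xj).items())

    return [(i, j)
            for i, xi in enumerate(X)
            for j, xj in enumerate(X)
            if i != j and expected_util(xi, xj) > expected_util(xi, xi) + alpha]
```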

Choosing How to Choose Papers

Aug 27, 2018
Ritesh Noothigattu, Nihar B. Shah, Ariel D. Procaccia

It is common to see a handful of reviewers reject a highly novel paper because they view, say, extensive experiments as far more important than novelty, whereas the community as a whole would have embraced the paper. More generally, the disparate mapping of criteria scores to final recommendations by different reviewers is a major source of inconsistency in peer review. In this paper we present a framework, based on $L(p,q)$-norm empirical risk minimization, for learning the community's aggregate mapping. We draw on computational social choice to identify desirable values of $p$ and $q$; specifically, we characterize $p=q=1$ as the only choice satisfying three natural axiomatic properties. Finally, we implement and apply our approach to reviews from IJCAI 2017.
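For the axiomatically favored case $p=q=1$, the objective is simply the sum over papers of the sum over each paper's reviews of absolute error. The sketch below fits a shared linear mapping under that objective; the linear model class and data layout are assumptions, not the paper's full framework.

```python
import numpy as np
from scipy.optimize import minimize

def fit_community_mapping(reviews):
    """reviews: list of papers, each a list of
    (criteria_scores: np.ndarray, recommendation: float) pairs."""
    dim = len(reviews[0][0][0])

    def l11_risk(w):
        # Inner L1 norm over a paper's reviews, outer L1 over papers.
        return sum(sum(abs(scores @ w - rec) for scores, rec in paper)
                   for paper in reviews)

    result = minimize(l11_risk, x0=np.zeros(dim), method="Nelder-Mead")
    return result.x
```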

Fairly Allocating Many Goods with Few Queries

Jul 30, 2018
Hoon Oh, Ariel D. Procaccia, Warut Suksompong

We investigate the query complexity of the fair allocation of indivisible goods. For two agents with arbitrary monotonic valuations, we design an algorithm that computes an allocation satisfying envy-freeness up to one good (EF1), a relaxation of envy-freeness, using a logarithmic number of queries. We show that the logarithmic query complexity bound also holds for three agents with additive valuations. These results suggest that it is possible to fairly allocate goods in practice even when the number of goods is extremely large. By contrast, we prove that computing an allocation satisfying envy-freeness and another of its relaxations, envy-freeness up to any good (EFX), requires a linear number of queries even when there are only two agents with identical additive valuations.
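The flavor of the two-agent result can be conveyed with a cut-and-choose sketch in the bundle value-query model: binary-search a cut point over a fixed order of the goods (logarithmically many queries to agent 1), then let agent 2 pick her preferred side. This is a loose adaptation assuming additive valuations, not the paper's exact algorithm or guarantee.

```python
def cut_and_choose(m, value1, value2):
    """m goods 0..m-1; each call value_i(bundle) is one oracle query."""
    goods = list(range(m))
    # Smallest k with value1(first k goods) >= value1(rest). The
    # predicate is monotone in k, so binary search uses O(log m) queries.
    lo, hi = 0, m
    while lo < hi:
        mid = (lo + hi) // 2
        if value1(goods[:mid]) >= value1(goods[mid:]):
            hi = mid
        else:
            lo = mid + 1
    left, right = goods[:lo], goods[lo:]
    # Agent 2 chooses her preferred side (two more queries); the cut
    # point balances agent 1's values up to a single boundary good.
    if value2(left) >= value2(right):
        return {"agent2": left, "agent1": right}
    return {"agent1": left, "agent2": right}
```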

Strategyproof Linear Regression in High Dimensions

May 27, 2018
Yiling Chen, Chara Podimata, Ariel D. Procaccia, Nisarg Shah

This paper is part of an emerging line of work at the intersection of machine learning and mechanism design, which aims to avoid noise in training data by correctly aligning the incentives of data sources. Specifically, we focus on the ubiquitous problem of linear regression, where strategyproof mechanisms have previously been identified in two dimensions. In our setting, agents have single-peaked preferences and can manipulate only their response variables. Our main contribution is the discovery of a family of group strategyproof linear regression mechanisms in any number of dimensions, which we call generalized resistant hyperplane mechanisms. The game-theoretic properties of these mechanisms (and, in fact, their very existence) are established through a connection to a discrete version of the Ham Sandwich Theorem.

In the Proceedings of the 19th ACM Conference on Economics and Computation (EC), 2018 (to appear)
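In one dimension, a classic median-based fit in this resistant spirit is Tukey's resistant line, which depends on the data only through medians; intuitively, that is what blocks an agent from dragging the fit toward her peak by misreporting her response. The sketch below is an illustrative stand-in of this kind, not necessarily the paper's mechanism.

```python
import statistics

def resistant_line(points):
    """points: list of (x, y) pairs; returns (slope, intercept)."""
    pts = sorted(points)                      # sort by x-coordinate
    third = max(1, len(pts) // 3)
    left, right = pts[:third], pts[-third:]
    # Medians of the outer thirds determine the slope.
    mx_l = statistics.median(x for x, _ in left)
    my_l = statistics.median(y for _, y in left)
    mx_r = statistics.median(x for x, _ in right)
    my_r = statistics.median(y for _, y in right)
    slope = (my_r - my_l) / (mx_r - mx_l)
    # The median residual fixes the intercept.
    intercept = statistics.median(y - slope * x for x, y in points)
    return slope, intercept
```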

The Provable Virtue of Laziness in Motion Planning

Oct 11, 2017
Nika Haghtalab, Simon Mackenzie, Ariel D. Procaccia, Oren Salzman, Siddhartha S. Srinivasa

The Lazy Shortest Path (LazySP) class consists of motion-planning algorithms that evaluate only edges lying on shortest paths between the source and the target. These algorithms were designed to minimize the number of edge evaluations in settings where edge evaluation dominates the algorithm's running time; but how close to optimal are LazySP algorithms with respect to this objective? Our main result is an analytical upper bound, in a probabilistic model, on the number of edge evaluations required by LazySP algorithms; a matching lower bound shows that these algorithms are asymptotically optimal in the worst case.
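The LazySP template itself is compact: replan a shortest path under optimistic weights, evaluate unevaluated edges along it until one fails, and repeat until a fully valid path survives. The sketch below, using a plain Dijkstra and the "forward" edge-selection policy, is one illustrative instantiation; the graph encoding is an assumption.

```python
import heapq

def lazy_sp(graph, source, target, evaluate):
    """graph: {u: {v: weight}}; evaluate(u, v) -> bool (is the edge valid?).
    Returns a fully evaluated shortest path, or None if none exists."""
    known = {}                                # (u, v) -> validity, once checked
    while True:
        # Dijkstra over all edges not yet known to be invalid.
        dist, parent = {source: 0.0}, {}
        pq = [(0.0, source)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue
            for v, w in graph.get(u, {}).items():
                if known.get((u, v)) is False:
                    continue                  # edge already failed evaluation
                if d + w < dist.get(v, float("inf")):
                    dist[v], parent[v] = d + w, u
                    heapq.heappush(pq, (d + w, v))
        if target not in dist:
            return None                       # no candidate path remains
        path, node = [target], target
        while node != source:
            node = parent[node]
            path.append(node)
        path.reverse()
        # Evaluate unevaluated edges along the path until one fails;
        # if they all pass, the path is certified and we are done.
        for u, v in zip(path, path[1:]):
            if (u, v) not in known:
                known[(u, v)] = evaluate(u, v)
                if not known[(u, v)]:
                    break
        else:
            return path
```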

A Voting-Based System for Ethical Decision Making

Sep 20, 2017
Ritesh Noothigattu, Snehalkumar 'Neil' S. Gaikwad, Edmond Awad, Sohan Dsouza, Iyad Rahwan, Pradeep Ravikumar, Ariel D. Procaccia

We present a general approach to automating ethical decisions, drawing on machine learning and computational social choice. In a nutshell, we propose to learn a model of societal preferences, and, when faced with a specific ethical dilemma at runtime, efficiently aggregate those preferences to identify a desirable choice. We provide a concrete algorithm that instantiates our approach; some of its crucial steps are informed by a new theory of swap-dominance efficient voting rules. Finally, we implement and evaluate a system for ethical decision making in the autonomous vehicle domain, using preference data collected from 1.3 million people through the Moral Machine website.
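The runtime step can be pictured as ordinary voting over the dilemma's feasible alternatives, with each learned preference model acting as a virtual voter. Borda scoring below is an illustrative aggregator; the paper's actual rule is grounded in its theory of swap-dominance efficient voting rules.

```python
def decide(alternatives, voter_models):
    """alternatives: list of choices; voter_models: learned score
    functions alternative -> float, one per virtual voter."""
    m = len(alternatives)
    borda = {a: 0 for a in alternatives}
    for score in voter_models:
        # Each virtual voter ranks the alternatives by predicted utility.
        ranking = sorted(alternatives, key=score, reverse=True)
        for pos, alt in enumerate(ranking):
            borda[alt] += m - 1 - pos         # standard Borda points
    return max(alternatives, key=borda.get)
```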
