Weiran Shen

LTP-MMF: Towards Long-term Provider Max-min Fairness Under Recommendation Feedback Loops

Aug 11, 2023
Chen Xu, Xiaopeng Ye, Jun Xu, Xiao Zhang, Weiran Shen, Ji-Rong Wen

Multi-stakeholder recommender systems involve various roles, such as users and providers. Previous work pointed out that max-min fairness (MMF) is a better metric for supporting weak providers. However, because the features and parameters of these roles vary over time, ensuring long-term provider MMF is a significant challenge. We observe that recommendation feedback loops (RFL) strongly influence provider MMF in the long term. RFL means that the recommender system can only receive feedback on exposed items from users and updates the recommendation model incrementally based on this feedback. When utilizing the feedback, the model regards unexposed items as negative. As a result, tail providers do not get the opportunity to be exposed, and their items are always treated as negative samples. This phenomenon becomes increasingly serious under RFL. To alleviate the problem, this paper proposes an online ranking model named Long-Term Provider Max-min Fairness (LTP-MMF). Theoretical analysis shows that the long-term regret of LTP-MMF enjoys a sub-linear bound. Experimental results on three public recommendation benchmarks demonstrate that LTP-MMF outperforms the baselines in the long term.

* arXiv admin note: text overlap with arXiv:2303.06660 
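
To make the feedback-loop effect concrete, here is a small self-contained simulation (our own illustrative sketch, not the LTP-MMF algorithm): only exposed items receive click feedback, and a max-min-style boost for items from the least-exposed providers keeps tail providers from being locked out of exposure. The function names, the boost rule, and all hyperparameters are assumptions made for illustration.

import numpy as np

def rerank_with_mmf(scores, providers, exposure, k, lam=0.5):
    # Pick top-k items by relevance plus a boost for items whose provider has
    # received little exposure so far. This boost rule only illustrates the
    # max-min idea; it is not the LTP-MMF objective.
    max_exp = exposure.max() + 1e-9
    boost = lam * (1.0 - exposure[providers] / max_exp)
    return np.argsort(-(scores + boost))[:k]

rng = np.random.default_rng(0)
n_items, n_providers, k, rounds = 200, 10, 5, 50
providers = rng.integers(0, n_providers, size=n_items)
true_pref = rng.random(n_items)        # hidden user preferences
est_scores = np.full(n_items, 0.5)     # the model's current estimates
exposure = np.zeros(n_providers)

for _ in range(rounds):
    chosen = rerank_with_mmf(est_scores, providers, exposure, k)
    # Feedback loop: only the exposed items receive feedback and score updates.
    clicks = rng.random(k) < true_pref[chosen]
    est_scores[chosen] += 0.1 * (clicks - est_scores[chosen])
    np.add.at(exposure, providers[chosen], 1.0)

print("min / max provider exposure:", exposure.min(), exposure.max())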

P-MMF: Provider Max-min Fairness Re-ranking in Recommender System

Mar 12, 2023
Chen Xu, Sirui Chen, Jun Xu, Weiran Shen, Xiao Zhang, Gang Wang, Zhenghua Dong

In this paper, we address the issue of recommending fairly from the providers' perspective, which has become increasingly essential in multi-stakeholder recommender systems. Existing studies on provider fairness usually focus on designing proportion fairness (PF) metrics that consider system-wide fairness first. However, sociological research shows that max-min fairness (MMF) is a better metric for making the market more stable. The main reason is that MMF preferentially improves the utility of the worst-off providers, guiding the system to support providers in weak market positions. When applying MMF to recommender systems, balancing user preferences and provider fairness in an online recommendation scenario remains a challenging problem. In this paper, we propose an online re-ranking model named Provider Max-min Fairness Re-ranking (P-MMF) to tackle the problem. Specifically, P-MMF formulates provider-fair recommendation as a resource allocation problem, where the exposure slots are the resources to be allocated to providers and max-min fairness is used as the regularizer during the process. We show that the problem can be further represented as a regularized online optimization problem and solved efficiently in its dual space. During the online re-ranking phase, a momentum gradient descent method is designed to conduct the dynamic re-ranking. Theoretical analysis shows that the regret of P-MMF can be bounded. Experimental results on four public recommendation datasets demonstrate that P-MMF outperforms the state-of-the-art baselines. The results also show that P-MMF keeps computational costs small even on a corpus with a large number of items.

* Accepted in WWW23 
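
The dual-space re-ranking idea can be sketched roughly as follows (an illustrative approximation, not the paper's exact algorithm): each provider carries a dual variable that raises the scores of its items when the provider falls behind its exposure target, and the duals are updated with a momentum gradient step. The uniform target, step size, and momentum coefficient are assumed values.

import numpy as np

def pmmf_style_rerank(scores, providers, mu, k):
    # Dual-adjusted ranking: items whose provider has a large dual variable
    # (i.e. is lagging behind its exposure target) are promoted. Illustrative only.
    return np.argsort(-(scores + mu[providers]))[:k]

rng = np.random.default_rng(1)
n_items, n_providers, k, rounds = 300, 12, 10, 200
providers = rng.integers(0, n_providers, size=n_items)
target = np.full(n_providers, k / n_providers)   # per-round exposure target (assumed uniform)

mu = np.zeros(n_providers)        # one dual variable per provider
velocity = np.zeros(n_providers)  # momentum buffer
eta, beta = 0.05, 0.9             # step size and momentum coefficient (assumed)

for _ in range(rounds):
    scores = rng.random(n_items)                 # relevance scores for the current user
    chosen = pmmf_style_rerank(scores, providers, mu, k)
    got = np.bincount(providers[chosen], minlength=n_providers)
    velocity = beta * velocity + (1 - beta) * (target - got)   # momentum gradient step
    mu = np.maximum(mu + eta * velocity, 0.0)                  # duals stay non-negative

print("final dual variables:", np.round(mu, 3))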

Deep Generative Modeling on Limited Data with Regularization by Nontransferable Pre-trained Models

Aug 30, 2022
Yong Zhong, Hongtao Liu, Xiaodong Liu, Fan Bao, Weiran Shen, Chongxuan Li

Deep generative models (DGMs) are data-hungry. Essentially, this is because learning a complex model on limited data suffers from large variance and easily overfits. Inspired by the bias-variance dilemma, we propose regularized deep generative model (Reg-DGM), which leverages a nontransferable pre-trained model to reduce the variance of generative modeling with limited data. Formally, Reg-DGM optimizes a weighted sum of a certain divergence between the data distribution and the DGM and the expectation of an energy function defined by the pre-trained model w.r.t. the DGM. Theoretically, we characterize the existence and uniqueness of the global minimum of Reg-DGM in the nonparametric setting and rigorously prove the statistical benefits of Reg-DGM w.r.t. the mean squared error and the expected risk in a simple yet representative Gaussian-fitting example. Empirically, Reg-DGM is quite flexible in the choice of both the DGM and the pre-trained model. In particular, with a ResNet-18 classifier pre-trained on ImageNet and a data-dependent energy function, Reg-DGM consistently improves the generation performance of strong DGMs, including StyleGAN2 and ADA, on several benchmarks with limited data and achieves results competitive with the state-of-the-art methods.
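
Written out, the objective described in the abstract takes the form below (our paraphrase in LaTeX), where p_data is the data distribution, p_theta the DGM, E_f the energy function defined by the frozen pre-trained model f, and lambda the trade-off weight; the specific divergence D and energy function depend on how Reg-DGM is instantiated:

\min_{\theta}\; \mathcal{D}\big(p_{\mathrm{data}},\, p_{\theta}\big) \;+\; \lambda\, \mathbb{E}_{x \sim p_{\theta}}\big[\, E_f(x) \,\big]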

Learning to Clear the Market

Jun 04, 2019
Weiran Shen, Sébastien Lahaie, Renato Paes Leme

The problem of market clearing is to set a price for an item such that quantity demanded equals quantity supplied. In this work, we cast the problem of predicting clearing prices into a learning framework and use the resulting models to perform revenue optimization in auctions and markets with contextual information. The economic intuition behind market clearing allows us to obtain fine-grained control over the aggressiveness of the resulting pricing policy, grounded in theory. To evaluate our approach, we fit a model of clearing prices over a massive dataset of bids in display ad auctions from a major ad exchange. The learned prices outperform other modeling techniques in the literature in terms of revenue and efficiency trade-offs. Because of the convex nature of the clearing loss function, the convergence rate of our method matches that of linear regression.
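
As a rough illustration of how a convex "clearing"-style loss over contextual prices can be fit with simple gradient methods, here is a sketch using a two-sided hinge surrogate that we chose ourselves; it is not the clearing loss defined in the paper, and the slopes a and b only mimic the aggressiveness control mentioned in the abstract.

import numpy as np

def clearing_surrogate(price, lo, hi, a=1.0, b=1.0):
    # Two-sided hinge: penalize predicted prices below lo (slope a) and above hi
    # (slope b). A convex stand-in for a clearing loss, illustrative only.
    return a * np.maximum(0.0, lo - price) + b * np.maximum(0.0, price - hi)

rng = np.random.default_rng(2)
n, d = 1000, 5
X = rng.normal(size=(n, d))
lo = np.abs(X @ rng.normal(size=d)) + 0.5    # synthetic per-auction clearing interval
hi = lo + 1.0
w, lr = np.zeros(d), 0.05

for _ in range(200):                         # plain subgradient descent on a convex objective
    p = X @ w
    grad = (-1.0 * (p < lo) + 1.0 * (p > hi)) @ X / n
    w -= lr * grad

print("mean surrogate loss:", clearing_surrogate(X @ w, lo, hi).mean())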

Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks

May 09, 2018
Weiran Shen, Pingzhong Tang, Song Zuo

Using AI approaches to automatically design mechanisms has been a central research mission at the interface of AI and economics [Conitzer and Sandholm, 2002]. Previous approaches that attempt to design revenue-optimal auctions for multi-dimensional settings fall short in at least one of three aspects: 1) representation: searching in a space that probably does not even contain the optimal mechanism; 2) exactness: finding a mechanism that is either not truthful or far from optimal; 3) domain dependence: needing a different design for each environment setting. To resolve these three difficulties, in this paper we put forward a unified neural-network-based framework that automatically learns to design revenue-optimal mechanisms. Our framework consists of a mechanism network that takes an input distribution for training and outputs a mechanism, as well as a buyer network that takes a mechanism as input and outputs an action. Such a separation in design mitigates the difficulty of imposing incentive compatibility (IC) constraints on the mechanism, by making truthful behavior a rational choice of the buyer. As a result, our framework easily overcomes the previously mentioned difficulty in incorporating IC constraints and always returns exactly incentive-compatible mechanisms. We then apply our framework to a number of multi-item revenue-optimal design settings, for a few of which the theoretically optimal mechanisms are unknown, and we go on to theoretically prove that the mechanisms found by our framework are indeed optimal.
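
A minimal sketch of the two-network separation described in the abstract; the architectures, layer sizes, and interfaces below are our own assumptions rather than the paper's construction.

import torch
import torch.nn as nn

class MechanismNet(nn.Module):
    # Maps a sample from the value distribution to mechanism parameters
    # (here: allocation logits plus a price). Architecture is illustrative.
    def __init__(self, n_items=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_items, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_items + 1))

    def forward(self, value_sample):
        return self.net(value_sample)

class BuyerNet(nn.Module):
    # Maps mechanism parameters plus the buyer's private value to a report (an action).
    def __init__(self, n_items=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * n_items + 1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_items))

    def forward(self, mech_params, value):
        return self.net(torch.cat([mech_params, value], dim=-1))

Training would then plausibly alternate between updating the buyer network to maximize buyer utility against the current mechanism (so that truthful behavior becomes the buyer's rational choice) and updating the mechanism network to maximize expected revenue.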

Optimal Vehicle Dispatching Schemes via Dynamic Pricing

Mar 01, 2018
Mengjing Chen, Weiran Shen, Pingzhong Tang, Song Zuo

Over the past few years, ride-sharing has emerged as an effective way to relieve traffic congestion. A key problem for these platforms is to come up with a revenue-optimal (or GMV-optimal) pricing scheme and an induced vehicle dispatching policy that incorporate geographic and temporal information. In this paper, we aim to tackle this problem via an economic approach. Modeled naively, the underlying optimization problem may be non-convex and thus hard to compute. To this end, we use a so-called "ironing" technique to convert the problem into an equivalent convex optimization problem via a clean Markov decision process (MDP) formulation, where the states are the driver distributions and the decision variables are the prices for each pair of locations. Our main finding is an efficient algorithm that computes the exact revenue-optimal (or GMV-optimal) randomized pricing schemes. We characterize the optimal solution of the MDP by a primal-dual analysis of a corresponding convex program. We also conduct empirical evaluations of our solution using real data from a major ride-sharing platform and show its advantages over fixed pricing schemes as well as several prevalent surge-based pricing schemes.
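
To illustrate the kind of dynamics being optimized, here is a toy sketch (our own simplification, not the paper's MDP or ironing procedure) in which the "state" is the driver distribution over locations and the decision variables are prices for each origin-destination pair; the exponential demand model and all constants are assumptions.

import numpy as np

def induced_demand(price, base_demand, elasticity=1.0):
    # Hypothetical demand model: requested trips for each origin-destination pair
    # decay with the posted price. Illustration only.
    return base_demand * np.exp(-elasticity * price)

def step(driver_dist, price, base_demand):
    # One transition of the driver-distribution "state": trips are capped by the
    # drivers available at each origin, and completed trips move drivers to their
    # destinations. Returns the new distribution and the revenue collected.
    demand = induced_demand(price, base_demand)
    scale = np.minimum(1.0, driver_dist / (demand.sum(axis=1) + 1e-9))
    trips = demand * scale[:, None]
    revenue = float((price * trips).sum())
    new_dist = driver_dist - trips.sum(axis=1) + trips.sum(axis=0)
    return new_dist, revenue

n_loc = 4
rng = np.random.default_rng(3)
drivers = np.full(n_loc, 25.0)
base_demand = rng.uniform(1.0, 10.0, size=(n_loc, n_loc))
prices = rng.uniform(0.5, 2.0, size=(n_loc, n_loc))
drivers, rev = step(drivers, prices, base_demand)
print("revenue:", round(rev, 2), "drivers:", np.round(drivers, 1))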
