
Yinchuan Li


Bridging the Gap: Neural Collapse Inspired Prompt Tuning for Generalization under Class Imbalance

Jun 29, 2023
Didi Zhu, Yinchuan Li, Min Zhang, Junkun Yuan, Jiashuo Liu, Zexi Li, Kun Kuang, Chao Wu


Large-scale vision-language (V-L) models have demonstrated remarkable generalization capabilities on downstream tasks through prompt tuning. However, their performance degrades significantly under class imbalance, a common issue in real-world scenarios. In this paper, we investigate the effects of class imbalance on the generalization performance of V-L models and extend the Neural Collapse phenomenon to these models, revealing the geometric reasons why class imbalance harms their generalization ability. To address this problem, we propose Neural Collapse based Prompt Tuning (NPT), a novel method that optimizes prompts so that both text and image features satisfy the same simplex equiangular tight frame (ETF) structure. NPT incorporates two regularization terms, geometric de-biasing and multi-modal isomorphism, to enhance the robustness of V-L models under class imbalance while maintaining their generalization capabilities. Our comprehensive experiments show that NPT outperforms existing prompt learning techniques across 11 diverse image recognition datasets, achieving an absolute average gain of 2.63% for novel classes and 2.47% for the harmonic mean on imbalanced data.
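As a concrete picture of the target geometry (this is the standard simplex ETF definition from the Neural Collapse literature; the notation is ours, not the paper's): a simplex equiangular tight frame of $K$ class prototypes in $\mathbb{R}^d$ is the column set of

$$\mathbf{M} \;=\; \sqrt{\tfrac{K}{K-1}}\;\mathbf{U}\Big(\mathbf{I}_K - \tfrac{1}{K}\mathbf{1}_K\mathbf{1}_K^{\top}\Big),$$

where $\mathbf{U} \in \mathbb{R}^{d \times K}$ satisfies $\mathbf{U}^{\top}\mathbf{U} = \mathbf{I}_K$. All prototypes are unit-norm and maximally separated, with $\langle \mathbf{m}_i, \mathbf{m}_j \rangle = -\tfrac{1}{K-1}$ for $i \neq j$; as described in the abstract, NPT's two regularizers push the imbalance-skewed text and image class features back toward one shared structure of this form.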


Meta Generative Flow Networks with Personalization for Task-Specific Adaptation

Jun 16, 2023
Xinyuan Ji, Xu Zhang, Wei Xi, Haozhi Wang, Olga Gadyatskaya, Yinchuan Li


Multi-task reinforcement learning and meta-reinforcement learning have been developed to adapt quickly to new tasks, but they tend to focus on tasks with higher rewards and more frequent occurrences, leading to poor performance on tasks with sparse rewards. To address this issue, GFlowNets can be integrated into meta-learning algorithms (GFlowMeta), leveraging the advantages of GFlowNets on sparse-reward tasks. However, GFlowMeta suffers performance degradation when encountering heterogeneous transitions from distinct tasks. To overcome this challenge, this paper proposes a personalized approach named pGFlowMeta, which combines task-specific personalized policies with a meta policy. Each personalized policy balances the loss on its own task against its deviation from the meta policy, while the meta policy minimizes the average loss over all tasks. Theoretical analysis shows that the algorithm converges at a sublinear rate. Extensive experiments demonstrate that the proposed algorithm outperforms state-of-the-art reinforcement learning algorithms in discrete environments.
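A minimal way to write down the stated trade-off (our notation, not the paper's): with meta policy parameters $w$, personalized parameters $\theta_i$, and task-$i$ loss $\mathcal{L}_i$,

$$\min_{\theta_i}\; \mathcal{L}_i(\theta_i) + \frac{\lambda}{2}\,\lVert \theta_i - w \rVert^2, \qquad \min_{w}\; \frac{1}{T}\sum_{i=1}^{T} \mathcal{L}_i(\theta_i),$$

where $T$ is the number of tasks and the proximal term $\frac{\lambda}{2}\lVert\theta_i - w\rVert^2$ is one common way to penalize a personalized policy's deviation from the meta policy; larger $\lambda$ pulls the personalized policies closer to the shared one.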

* journal 

GFlowNets with Human Feedback

May 11, 2023
Yinchuan Li, Shuang Luo, Yunfeng Shao, Jianye Hao


We propose the GFlowNets with Human Feedback (GFlowHF) framework to improve exploration when training AI models. For tasks where the reward is unknown, we fit the reward function from human evaluations of different trajectories. The goal of GFlowHF is to learn a policy whose sampling probability is strictly proportional to human ratings, instead of focusing only on the highest-rated trajectories as RLHF does. Experiments show that GFlowHF achieves better exploration than RLHF.
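A toy sketch of the stated objective, contrasting reward-proportional sampling with reward maximization (all names and the reward model are hypothetical simplifications; the actual GFlowHF training loop is far more involved):

```python
import numpy as np

def fit_reward_from_ratings(trajectories, ratings):
    """Toy reward model: average human rating per terminal state."""
    scores = {}
    for traj, r in zip(trajectories, ratings):
        scores.setdefault(traj[-1], []).append(r)
    return {s: float(np.mean(v)) for s, v in scores.items()}

def sample_terminal_states(reward, rng, n=1000):
    """GFlowNet-style target P(s) proportional to R(s), vs. RLHF's argmax."""
    states = list(reward)
    p = np.array([reward[s] for s in states])
    return rng.choice(states, size=n, p=p / p.sum())

rng = np.random.default_rng(0)
R = fit_reward_from_ratings([["a", "x"], ["b", "y"], ["c", "z"]],
                            [1.0, 3.0, 6.0])
samples = sample_terminal_states(R, rng)
print({s: (samples == s).mean() for s in R})  # "z" ~ 0.6, never only "z"
```

The design point is the last line: every positively rated trajectory keeps nonzero sampling mass, which is where the claimed exploration advantage over argmax-style RLHF comes from.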


Generalized Universal Domain Adaptation with Generative Flow Networks

May 08, 2023
Didi Zhu, Yinchuan Li, Yunfeng Shao, Jianye Hao, Fei Wu, Kun Kuang, Jun Xiao, Chao Wu

We introduce a new problem in unsupervised domain adaptation, termed Generalized Universal Domain Adaptation (GUDA), which aims to achieve precise prediction of all target labels, including unknown categories. GUDA bridges the gap between the label-distribution-shift-based and label-space-mismatch-based variants, categorizing them as a unified problem and leading to a comprehensive framework that covers all the variants. The key challenge of GUDA is discovering and identifying novel target categories while estimating the target label distribution. To address this problem, we take advantage of the powerful exploration capability of generative flow networks and propose an active domain adaptation algorithm named GFlowDA, which selects diverse samples with probabilities proportional to a reward function. To enhance exploration and effectively perceive the target label distribution, we tailor the states and rewards and introduce an efficient solution for parent exploration and state transitions. We also propose a training paradigm for GUDA called the Generalized Universal Adversarial Network (GUAN), which involves collaborative optimization between GUAN and the GFlowNet. Theoretical analysis highlights the importance of exploration, and extensive experiments on benchmark datasets demonstrate the superiority of GFlowDA.
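As a rough illustration of the selection rule only (our own sketch, not the paper's implementation; in GFlowDA the reward would encode diversity and the estimated target label distribution rather than a fixed vector):

```python
import numpy as np

def select_batch(rewards, budget, rng):
    """Draw `budget` candidate indices with P(i) proportional to rewards[i],
    without replacement, so high-reward samples dominate but never repeat."""
    remaining = list(range(len(rewards)))
    chosen = []
    for _ in range(budget):
        r = rewards[remaining]
        pick = rng.choice(len(remaining), p=r / r.sum())
        chosen.append(remaining.pop(pick))
    return chosen

rng = np.random.default_rng(0)
print(select_batch(np.array([0.1, 0.5, 2.0, 0.4]), budget=2, rng=rng))
```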


Generative Flow Networks for Precise Reward-Oriented Active Learning on Graphs

Apr 24, 2023
Yinchuan Li, Zhigang Li, Wenqian Li, Yunfeng Shao, Yan Zheng, Jianye Hao


Many score-based active learning methods have been successfully applied to graph-structured data, aiming to reduce the number of labels required while improving the performance of graph neural networks via predefined score functions. However, these algorithms struggle to learn policy distributions that are proportional to rewards and have limited exploration capabilities. In this paper, we formulate the graph active learning problem as a generative process, named GFlowGNN, which generates samples through sequential actions with probabilities precisely proportional to a predefined reward function. Furthermore, we propose the concepts of flow nodes and flow features to efficiently model graphs as flows based on generative flow networks, where the policy network is trained with specially designed rewards. Extensive experiments on real datasets show that the proposed approach has good exploration capability and transferability, outperforming various state-of-the-art methods.
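The "precisely proportional" property comes from the generic GFlowNet flow matching condition, which GFlowGNN instantiates on graph states (generic form from the GFlowNet literature, not this paper's notation): for every non-terminal state $s$,

$$\sum_{(s',a)\,:\,T(s',a)=s} F(s',a) \;=\; \sum_{a'} F(s,a'),$$

i.e., total inflow equals total outflow, with the outgoing side replaced by the reward $R(s)$ at terminal states. A policy that samples actions in proportion to edge flows then reaches terminal states with probability proportional to $R$.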


Multi-agent Policy Reciprocity with Theoretical Guarantee

Apr 12, 2023
Haozhi Wang, Yinchuan Li, Qing Wang, Yunfeng Shao, Jianye Hao


Modern multi-agent reinforcement learning (RL) algorithms hold great potential for solving a variety of real-world problems. However, they do not fully exploit cross-agent knowledge to reduce sample complexity and improve performance. Although transfer RL supports knowledge sharing, it is hyperparameter-sensitive and complex. To solve this problem, we propose a novel multi-agent policy reciprocity (PR) framework, in which each agent can fully exploit cross-agent policies even in mismatched states. We then define an adjacency space for mismatched states and design a plug-and-play module for value iteration that enables agents to infer more precise returns. To improve the scalability of PR, we further propose deep PR for continuous control tasks. Moreover, theoretical analysis shows that agents can asymptotically reach consensus through their individually perceived rewards and converge to an optimal value function, which establishes the stability and effectiveness of PR. Experimental results on discrete and continuous environments demonstrate that PR outperforms various existing RL and transfer RL methods.
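A toy sketch of the plug-and-play idea as we read it (all data structures hypothetical; this is not the paper's algorithm): during value iteration, an agent can augment its next-state estimate with peers' values over an adjacency neighborhood of mismatched states.

```python
import numpy as np

def pr_backup(V_self, V_peers, adj, s, a, P, R, gamma=0.99):
    """One Bellman backup where next-state values may borrow from peers.

    V_self, V_peers[i]: dicts mapping states to value estimates.
    adj[s2]: states a peer treats as adjacent (mismatched but nearby) to s2.
    P[(s, a)]: dict of next-state transition probabilities; R[(s, a)]: reward.
    Purely illustrative.
    """
    q = 0.0
    for s2, prob in P[(s, a)].items():
        estimates = [V_self[s2]]
        for Vp in V_peers:
            # a peer may only know states near s2, not s2 itself
            near = [Vp[n] for n in adj[s2] if n in Vp]
            if near:
                estimates.append(np.mean(near))
        q += prob * (R[(s, a)] + gamma * np.mean(estimates))
    return q
```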


Federated Learning via Variational Bayesian Inference: Personalization, Sparsity and Clustering

Mar 08, 2023
Xu Zhang, Wenpeng Li, Yunfeng Shao, Yinchuan Li


Federated learning (FL) is a promising framework for distributed machine learning that protects the privacy of clients. However, FL suffers performance degradation from heterogeneous and limited data. To alleviate this degradation, we present a novel personalized Bayesian FL approach named pFedBayes. Using the trained global distribution from the server as the prior distribution of each client, each client adjusts its own distribution by minimizing the sum of the reconstruction error over its personalized data and the KL divergence from the downloaded global distribution. We then propose a sparse personalized Bayesian FL approach named sFedBayes. To overcome extreme heterogeneity in non-i.i.d. data, we further propose a clustered Bayesian FL model named cFedbayes, which learns different prior distributions for different clients. Theoretical analysis gives the generalization error bounds of the three approaches and shows that their generalization error convergence rates achieve minimax optimality up to a logarithmic factor; the analysis also shows that cFedbayes has a tighter generalization error rate than pFedBayes. Extensive experiments demonstrate that the proposed approaches outperform other advanced personalized methods with private models in the presence of heterogeneous and limited data.
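The client-side objective reads like a standard variational trade-off; here is a minimal sketch for diagonal-Gaussian weight posteriors (names, shapes, and the weighting parameter are our own assumptions, not the paper's code):

```python
import torch

def client_loss(recon_error, mu_q, logvar_q, mu_p, logvar_p, zeta=1.0):
    """pFedBayes-style objective: data fit + KL(q_client || p_global).

    q is the client's Gaussian weight posterior, p the downloaded global
    distribution; zeta (hypothetical) weighs fit against the global prior.
    """
    kl = 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
        - 1.0
    )
    return recon_error + zeta * kl
```

The closed-form KL between diagonal Gaussians is what makes this objective cheap to evaluate on-device; the server then aggregates the clients' posteriors into the next global distribution.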

* 17 pages, 19 figures 

DAG Matters! GFlowNets Enhanced Explainer For Graph Neural Networks

Mar 04, 2023
Wenqian Li, Yinchuan Li, Zhigang Li, Jianye Hao, Yan Pang


Uncovering the rationales behind predictions of graph neural networks (GNNs) has received increasing attention in recent years. The existing literature mainly focuses on selecting a subgraph, through combinatorial optimization, that provides a faithful explanation. However, the exponential number of candidate subgraphs limits the applicability of state-of-the-art methods to large-scale GNNs. We take a different approach: by proposing a generative structure, the GFlowNets-based GNN Explainer (GFlowExplainer), we turn the optimization problem into a step-by-step generative problem. GFlowExplainer aims to learn a policy that generates a distribution of subgraphs in which the probability of a subgraph is proportional to its reward. The proposed approach eliminates the influence of node sequence and thus does not need any pre-training strategy. We also propose a new cut vertex matrix to efficiently explore parent states for the GFlowNets structure, making our approach applicable in large-scale settings. We conduct extensive experiments on both synthetic and real datasets, and both qualitative and quantitative results show the superiority of GFlowExplainer.
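The cut vertex idea can be pictured as follows (our reconstruction using networkx; the paper maintains an incremental cut vertex matrix rather than recomputing from scratch): a parent of a connected subgraph state S is S with one non-cut vertex removed, since removing a cut vertex would disconnect S and so could not have been the last addition.

```python
import networkx as nx

def parent_states(G: nx.Graph, S: set) -> list[set]:
    """Parents of subgraph state S: S minus any vertex whose removal
    keeps the induced subgraph connected (i.e. any non-cut vertex)."""
    sub = G.subgraph(S)
    cut = set(nx.articulation_points(sub))
    return [S - {v} for v in S if v not in cut and len(S) > 1]

G = nx.path_graph(5)                 # 0-1-2-3-4
print(parent_states(G, {1, 2, 3}))   # only endpoints 1 and 3 are removable
```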

* ICLR 2023 

CFlowNets: Continuous Control with Generative Flow Networks

Mar 04, 2023
Yinchuan Li, Shuang Luo, Haozhi Wang, Jianye Hao


Generative flow networks (GFlowNets) are an emerging technique that can serve as an alternative to reinforcement learning for exploratory control tasks. A GFlowNet aims to generate a distribution proportional to the rewards over terminating states and to sample diverse candidates in an active learning fashion. GFlowNets need to form a DAG and compute the flow matching loss by traversing the inflows and outflows of each node in the trajectory, and no prior work has shown that they can handle continuous tasks. In this paper, we propose generative continuous flow networks (CFlowNets), which can be applied to continuous control tasks. First, we present the theoretical formulation of CFlowNets. Then, we propose a training framework for CFlowNets, including the action selection process, the flow approximation algorithm, and the continuous flow matching loss function. Afterward, we theoretically prove an error bound on the flow approximation, which decreases rapidly as the number of flow samples increases. Finally, experimental results on continuous control tasks demonstrate the performance advantages of CFlowNets over many reinforcement learning methods, especially in exploration ability.
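Schematically, the continuous flow matching loss replaces the per-edge sums of the discrete case with integrals over the action space $\mathcal{A}$, estimated by Monte Carlo sampling (our notation, based on the description above, not the paper's exact formulation):

$$\mathcal{L}(s) = \Big(\log\big[\epsilon + \widehat{F}_{\mathrm{in}}(s)\big] - \log\big[\epsilon + R(s) + \widehat{F}_{\mathrm{out}}(s)\big]\Big)^{2}, \qquad \widehat{F}_{\mathrm{out}}(s) = \frac{\mu(\mathcal{A})}{K}\sum_{k=1}^{K} F(s, a_k),\;\; a_k \sim \mathrm{Unif}(\mathcal{A}),$$

with $\widehat{F}_{\mathrm{in}}$ estimated analogously from $K$ sampled parent transitions and $R(s) = 0$ at non-terminal states; the error bound mentioned above shrinks as the number of flow samples $K$ grows.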


GFlowCausal: Generative Flow Networks for Causal Discovery

Oct 15, 2022
Wenqian Li, Yinchuan Li, Shengyu Zhu, Yunfeng Shao, Jianye Hao, Yan Pang


Causal discovery aims to uncover the causal structure among a set of variables. Score-based approaches mainly focus on searching for the best Directed Acyclic Graph (DAG) according to a predefined score function, but most of them do not scale due to the vast search space. Inspired by the active learning ability of generative flow networks, we propose GFlowCausal, a novel approach to learning a DAG from observational data. It converts the graph search problem into a generation problem in which directed edges are added gradually. GFlowCausal aims to learn the best policy to generate high-reward DAGs by sequential actions with probabilities proportional to predefined rewards. We propose a plug-and-play module based on transitive closure to ensure efficient sampling. Theoretical analysis shows that this module guarantees acyclicity and the consistency between final states and fully-connected graphs. We conduct extensive experiments on both synthetic and real datasets, and the results show that the proposed approach is superior and also performs well in large-scale settings.
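A small sketch of the transitive closure idea as described (our implementation, not the paper's code): maintain a boolean reachability matrix so that checking whether a candidate edge would create a cycle is a single lookup, with the closure updated incrementally after each accepted edge.

```python
import numpy as np

class ClosureDAG:
    """Grow a DAG edge by edge; reach[i, j] is True iff j is reachable
    from i, so adding u -> v is acyclic iff u is not reachable from v."""

    def __init__(self, d: int):
        self.reach = np.eye(d, dtype=bool)   # every node reaches itself

    def can_add(self, u: int, v: int) -> bool:
        return not self.reach[v, u]

    def add_edge(self, u: int, v: int) -> None:
        assert self.can_add(u, v)
        # every ancestor of u now reaches every descendant of v
        anc = self.reach[:, u]
        self.reach[np.ix_(anc, self.reach[v])] = True

dag = ClosureDAG(4)
dag.add_edge(0, 1); dag.add_edge(1, 2)
print(dag.can_add(2, 0))  # False: 0 -> 1 -> 2 -> 0 would close a cycle
```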

* NeurIPS 2022 