Qian Li

Incorporating Neuro-Inspired Adaptability for Continual Learning in Artificial Intelligence

Aug 29, 2023
Liyuan Wang, Xingxing Zhang, Qian Li, Mingtian Zhang, Hang Su, Jun Zhu, Yi Zhong

Continual learning aims to empower artificial intelligence (AI) with strong adaptability to the real world. For this purpose, a desirable solution should properly balance memory stability with learning plasticity, and acquire sufficient compatibility to capture the observed distributions. Existing advances mainly focus on preserving memory stability to overcome catastrophic forgetting, but struggle to flexibly accommodate incremental changes as biological intelligence (BI) does. By modeling a robust Drosophila learning system that actively regulates forgetting with multiple learning modules, here we propose a generic approach that appropriately attenuates old memories in parameter distributions to improve learning plasticity, and accordingly coordinates a multi-learner architecture to ensure solution compatibility. Through extensive theoretical and empirical validation, our approach not only clearly enhances the performance of continual learning, especially over synaptic regularization methods in task-incremental settings, but also potentially advances the understanding of neurological adaptive mechanisms, serving as a novel paradigm to progress AI and BI together.
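To make the core idea concrete, here is a minimal sketch written in the spirit of the synaptic regularization methods the abstract compares against, not the authors' actual algorithm: a quadratic penalty anchoring parameters to previous tasks, whose accumulated importance is attenuated by a decay factor each task so that old memories fade and plasticity is regained. The class name, hyperparameters, and Fisher-style importance estimate are illustrative assumptions.

import torch
import torch.nn as nn

# Hypothetical illustration: a synaptic-regularization penalty whose anchor to old
# parameters is attenuated by a factor gamma each task, trading stability for plasticity.
class AttenuatedRegularizer:
    def __init__(self, model, gamma=0.9, lam=100.0):
        self.gamma = gamma          # attenuation of old memories (assumed hyperparameter)
        self.lam = lam              # regularization strength
        self.anchors = {}           # parameter snapshots from previous tasks
        self.importance = {}        # per-parameter importance (e.g. a Fisher diagonal estimate)
        self.model = model

    def consolidate(self, importance):
        # After finishing a task: decay the previous importance, then add the new estimate.
        for name, p in self.model.named_parameters():
            old = self.importance.get(name, torch.zeros_like(p))
            self.importance[name] = self.gamma * old + importance[name]
            self.anchors[name] = p.detach().clone()

    def penalty(self):
        # Quadratic pull toward the (attenuated) old solution, added to the task loss.
        loss = 0.0
        for name, p in self.model.named_parameters():
            if name in self.anchors:
                loss = loss + (self.importance[name] * (p - self.anchors[name]) ** 2).sum()
        return 0.5 * self.lam * loss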

Privileged Anatomical and Protocol Discrimination in Trackerless 3D Ultrasound Reconstruction

Aug 20, 2023
Qi Li, Ziyi Shen, Qian Li, Dean C. Barratt, Thomas Dowrick, Matthew J. Clarkson, Tom Vercauteren, Yipeng Hu

Three-dimensional (3D) freehand ultrasound (US) reconstruction without any additional external tracking device has seen recent advances with deep neural networks (DNNs). In this paper, we first investigate two contributing factors of the learned inter-frame correlation that enable DNN-based reconstruction: anatomy and protocol. We propose to incorporate the ability to represent these two factors - readily available during training - as privileged information to improve existing DNN-based methods. This is implemented in a new multi-task method, where anatomical and protocol discrimination are used as auxiliary tasks. We further develop a differentiable network architecture to optimise the branching location of these auxiliary tasks, which controls the ratio between shared and task-specific network parameters, maximising the benefit from the two auxiliary tasks. Experimental results, on a dataset of 38 forearms from 19 volunteers acquired with 6 different scanning protocols, show that 1) both anatomical and protocol variances are enabling factors for DNN-based US reconstruction; and 2) with the proposed algorithm, learning to discriminate different subjects (anatomical variance) and predefined types of scanning paths (protocol variance) significantly improves frame prediction accuracy and volume reconstruction overlap, and reduces accumulated tracking error and final drift.
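The multi-task idea can be sketched as follows; this is a hedged illustration under assumed names and dimensions, not the paper's architecture: a shared encoder over a pair of frames feeds a main head that regresses inter-frame motion plus two auxiliary heads that discriminate subject (anatomy, 19 classes here) and scanning protocol (6 classes), with the auxiliary labels used only at training time as privileged information.

import torch
import torch.nn as nn

class PrivilegedRecon(nn.Module):
    # Illustrative sketch: shared encoder, one main regression head, two auxiliary heads.
    def __init__(self, n_subjects=19, n_protocols=6, feat_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.transform_head = nn.Linear(feat_dim, 6)            # e.g. 6-DoF inter-frame motion
        self.subject_head = nn.Linear(feat_dim, n_subjects)     # auxiliary: anatomical discrimination
        self.protocol_head = nn.Linear(feat_dim, n_protocols)   # auxiliary: protocol discrimination

    def forward(self, frame_pair):                              # frame_pair: [B, 2, H, W]
        z = self.encoder(frame_pair)
        return self.transform_head(z), self.subject_head(z), self.protocol_head(z)

def loss_fn(pred, target_motion, subject_id, protocol_id, w_aux=0.1):
    # Multi-task loss: main regression plus weighted auxiliary cross-entropies.
    motion, subj_logits, proto_logits = pred
    return (nn.functional.mse_loss(motion, target_motion)
            + w_aux * nn.functional.cross_entropy(subj_logits, subject_id)
            + w_aux * nn.functional.cross_entropy(proto_logits, protocol_id))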

* Accepted to Advances in Simplifying Medical UltraSound (ASMUS) workshop at MICCAI 2023 

Hard Adversarial Example Mining for Improving Robust Fairness

Aug 03, 2023
Chenhao Lin, Xiang Ji, Yulong Yang, Qian Li, Chao Shen, Run Wang, Liming Fang

Adversarial training (AT) is widely considered the state-of-the-art technique for improving the robustness of deep neural networks (DNNs) against adversarial examples (AEs). Nevertheless, recent studies have revealed that adversarially trained models are prone to unfairness problems, restricting their applicability. In this paper, we empirically observe that this limitation may be attributed to serious adversarial confidence overfitting, i.e., certain adversarial examples being predicted with overconfidence. To alleviate this problem, we propose HAM, a straightforward yet effective framework based on adaptive Hard Adversarial example Mining. HAM concentrates on mining hard adversarial examples while discarding the easy ones in an adaptive fashion. Specifically, HAM identifies hard AEs in terms of the step sizes needed to cross the decision boundary when calculating the loss value. In addition, an early-dropping mechanism discards the easy examples at the initial stages of AE generation, resulting in efficient AT. Extensive experimental results on CIFAR-10, SVHN, and Imagenette demonstrate that HAM achieves significant improvements in robust fairness while reducing computational cost compared to several state-of-the-art adversarial training methods. The code will be made publicly available.
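A hedged sketch of the mining idea follows (assumed hyperparameters and loop structure, not the authors' released code): during iterative AE generation, examples that already cross the decision boundary within the first few steps are treated as easy and dropped early, and only the remaining hard examples are kept for adversarial training.

import torch
import torch.nn.functional as F

def mine_hard_adv_examples(model, x, y, eps=8/255, alpha=2/255, steps=10, drop_after=2):
    # PGD-style generation with early dropping of easy examples (illustrative only).
    x_adv = x.clone().detach()
    active = torch.ones(x.size(0), dtype=torch.bool, device=x.device)
    for t in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
            if t + 1 == drop_after:
                # Examples already misclassified this early crossed the boundary with a
                # small perturbation budget: mark them as easy and drop them.
                crossed = model(x_adv).argmax(1) != y
                active &= ~crossed
    return x_adv[active], y[active]   # hard examples kept for adversarial training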

Exploring Antitrust and Platform Power in Generative AI

Jul 10, 2023
Konrad Kollnig, Qian Li

The concentration of power in a few digital technology companies has become a subject of increasing interest in both academic and non-academic discussions. One of the most noteworthy contributions to the debate is Lina Khan's Amazon's Antitrust Paradox. In this work, Khan contends that Amazon has systematically exerted its dominance in online retail to eliminate competitors and subsequently charge above-market prices. This work contributed to Khan's appointment as the chair of the US Federal Trade Commission (FTC), one of the most influential antitrust organisations. Today, several ongoing antitrust lawsuits in the US and Europe involve major technology companies like Apple, Google/Alphabet, and Facebook/Meta. In the realm of generative AI, we are once again witnessing the same companies taking the lead in technological advancements, leaving little room for others to compete. This article examines the market dominance of these corporations in the technology stack behind generative AI from an antitrust law perspective.

* Accepted by the Workshop on Generative AI and Law (GenLaw '23) of ICML '23 

Counterfactual Explanation for Fairness in Recommendation

Jul 10, 2023
Xiangmeng Wang, Qian Li, Dianer Yu, Qing Li, Guandong Xu

Fairness-aware recommendation eliminates discrimination issues to build trustworthy recommendation systems. Explaining the causes of unfair recommendations is critical, as it promotes fairness diagnostics and thus secures users' trust in recommendation models. Existing fairness explanation methods suffer from high computational burdens due to the large-scale search space and the greedy nature of the explanation search process. Moreover, they perform score-based optimizations with continuous values, which are not applicable to discrete attributes such as gender and race. In this work, we adopt the paradigm of counterfactual explanation from causal inference to explore how minimal alterations in explanations change model fairness, abandoning the greedy search for explanations. We use real-world attributes from Heterogeneous Information Networks (HINs) to empower counterfactual reasoning on discrete attributes. We propose Counterfactual Explanation for Fairness (CFairER), a novel framework that generates attribute-level counterfactual explanations from HINs for recommendation fairness. CFairER conducts off-policy reinforcement learning to seek high-quality counterfactual explanations, with an attentive action pruning that reduces the search space of candidate counterfactuals. The counterfactual explanations provide rational and proximate explanations for model fairness, while the attentive action pruning narrows the search space of attributes. Extensive experiments demonstrate that our proposed model can generate faithful explanations while maintaining favorable recommendation performance.
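As a toy illustration of what an attribute-level counterfactual explanation evaluates - deliberately using the exhaustive search that CFairER's off-policy reinforcement learning and attentive action pruning are designed to avoid - the sketch below looks for the smallest set of candidate attributes whose removal brings a fairness metric below a threshold. The function and argument names are placeholders, not the paper's API.

import itertools

def counterfactual_explanation(fairness_gap, candidate_attrs, recs, threshold, max_size=3):
    # fairness_gap(recs, exclude=...) is a placeholder metric over recommendations.
    for size in range(1, max_size + 1):                  # prefer minimal explanations
        for subset in itertools.combinations(candidate_attrs, size):
            gap = fairness_gap(recs, exclude=set(subset))
            if gap <= threshold:                          # fairness restored without these attributes
                return set(subset)                        # attribute-level counterfactual explanation
    return None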

Causal Neural Graph Collaborative Filtering

Jul 10, 2023
Xiangmeng Wang, Qian Li, Dianer Yu, Wei Huang, Guandong Xu

Graph collaborative filtering (GCF) has gained considerable attention in recommendation systems by leveraging graph learning techniques to enhance collaborative filtering (CF) models. One classical approach in GCF is to learn user and item embeddings by modeling complex graph relations and utilizing these embeddings for CF models. However, the quality of the embeddings significantly impacts the recommendation performance of GCF models. In this paper, we argue that existing graph learning methods are insufficient in generating satisfactory embeddings for CF models. This is because they aggregate neighboring node messages directly, which can result in incorrect estimations of user-item correlations. To overcome this limitation, we propose a novel approach that incorporates causal modeling to explicitly encode the causal effects of neighboring nodes on the target node. This approach enables us to identify spurious correlations and uncover the root causes of user preferences. We introduce Causal Neural Graph Collaborative Filtering (CNGCF), the first causality-aware graph learning framework for CF. CNGCF integrates causal modeling into the graph representation learning process, explicitly coupling causal effects between node pairs into the core message-passing process of graph learning. As a result, CNGCF yields causality-aware embeddings that promote robust recommendations. Our extensive experiments demonstrate that CNGCF provides precise recommendations that align with user preferences. Therefore, our proposed framework can address the limitations of existing GCF models and offer a more effective solution for recommendation systems.
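A hedged sketch of causality-aware message passing under assumed layer names (not the paper's exact operator): each neighbor's message is scaled by a learned pairwise coefficient standing in for its causal effect on the target node, instead of being aggregated uniformly.

import torch
import torch.nn as nn

class CausalMessagePassing(nn.Module):
    # Illustrative layer: per-edge effect scores weight neighbor messages before aggregation.
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)
        self.effect = nn.Sequential(nn.Linear(2 * dim, 1), nn.Sigmoid())  # pairwise effect score

    def forward(self, x, edge_index):
        src, dst = edge_index                      # edges as (source, target) index tensors
        pair = torch.cat([x[src], x[dst]], dim=-1)
        w = self.effect(pair)                      # per-edge weight in (0, 1)
        msg = w * self.linear(x[src])
        out = torch.zeros_like(x)
        out.index_add_(0, dst, msg)                # aggregate weighted messages at target nodes
        return torch.relu(out + x)                 # residual keeps the node's own signal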

BPNet: Bézier Primitive Segmentation on 3D Point Clouds

Jul 08, 2023
Rao Fu, Cheng Wen, Qian Li, Xiao Xiao, Pierre Alliez

This paper proposes BPNet, a novel end-to-end deep learning framework for learning Bézier primitive segmentation on 3D point clouds. Existing works treat different primitive types separately, thus limiting them to finite shape categories. To address this issue, we seek a generalized primitive segmentation on point clouds. Taking inspiration from Bézier decomposition on NURBS models, we adapt it to guide point cloud segmentation independent of primitive types. A joint optimization framework is proposed to learn Bézier primitive segmentation and geometric fitting simultaneously on a cascaded architecture. Specifically, we introduce a soft voting regularizer to improve primitive segmentation and propose an auto-weight embedding module to cluster point features, making the network more robust and generic. We also introduce a reconstruction module in which multiple CAD models with different primitives are processed simultaneously. We conducted extensive experiments on the synthetic ABC dataset and real-scan datasets to validate and compare our approach with different baseline methods. Experiments show superior performance over previous work in terms of segmentation, with a substantially faster inference speed.
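As a loose illustration of soft primitive assignment (assumed module names, with a simple entropy term standing in for the paper's soft voting regularizer, not BPNet itself): per-point features are embedded and softly assigned to a fixed budget of primitive slots, and low assignment entropy indicates a confident segmentation.

import torch
import torch.nn as nn

class SoftPrimitiveAssignment(nn.Module):
    # Illustrative sketch: embed points, softly assign each point to one of K primitive slots.
    def __init__(self, feat_dim=128, max_primitives=20):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU(),
                                   nn.Linear(feat_dim, feat_dim))
        self.assign = nn.Linear(feat_dim, max_primitives)

    def forward(self, points):                                   # points: [N, 3]
        feats = self.embed(points)
        member = torch.softmax(self.assign(feats), dim=-1)       # [N, K] soft membership
        # Entropy of each point's membership: low entropy = confident assignment.
        entropy = -(member * torch.log(member + 1e-8)).sum(-1).mean()
        return member, entropy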

Dual-Gated Fusion with Prefix-Tuning for Multi-Modal Relation Extraction

Jun 19, 2023
Qian Li, Shu Guo, Cheng Ji, Xutan Peng, Shiyao Cui, Jianxin Li

Multi-Modal Relation Extraction (MMRE) aims to identify the relation between two entities in texts that contain visual clues. Rich visual content is valuable for the MMRE task, but existing works cannot model the finer associations among modalities well, failing to capture the truly helpful visual information and thus limiting relation extraction performance. In this paper, we propose a novel MMRE framework, termed DGF-PT, to better capture the deeper correlations among text, entity pair, and image/objects, so as to mine more helpful information for the task. We first propose a prompt-based autoregressive encoder, which builds intra-modal and inter-modal associations related to the task via entity-oriented and object-oriented prefixes, respectively. To better integrate helpful visual information, we design a dual-gated fusion module that distinguishes the importance of the image and its objects and further enriches text representations. In addition, a generative decoder with an entity-type restriction on relations is introduced to better filter out candidate relations. Extensive experiments on the benchmark dataset show that our approach achieves excellent performance compared to strong competitors, even in the few-shot setting.
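A hedged sketch of a dual-gated fusion step under assumed dimensions (not the paper's implementation): one gate weights the whole-image feature and another weights pooled object-level features before both enrich the text representation.

import torch
import torch.nn as nn

class DualGatedFusion(nn.Module):
    # Illustrative module: two sigmoid gates control how much visual signal enters the text side.
    def __init__(self, dim=768):
        super().__init__()
        self.img_gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.obj_gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.out = nn.Linear(dim, dim)

    def forward(self, text, image, objects):
        # text: [B, dim], image: [B, dim], objects: [B, M, dim]
        obj = objects.mean(dim=1)                          # pool object-level features
        g_img = self.img_gate(torch.cat([text, image], dim=-1))
        g_obj = self.obj_gate(torch.cat([text, obj], dim=-1))
        fused = text + g_img * image + g_obj * obj         # gated visual enrichment of the text
        return self.out(fused)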
