Chen Liang

Logic-induced Diagnostic Reasoning for Semi-supervised Semantic Segmentation

Aug 24, 2023
Chen Liang, Wenguan Wang, Jiaxu Miao, Yi Yang

Recent advances in semi-supervised semantic segmentation have been heavily reliant on pseudo labeling to compensate for limited labeled data, disregarding the valuable relational knowledge among semantic concepts. To bridge this gap, we devise LogicDiag, a brand new neural-logic semi-supervised learning framework. Our key insight is that conflicts within pseudo labels, identified through symbolic knowledge, can serve as strong yet commonly ignored learning signals. LogicDiag resolves such conflicts via reasoning with logic-induced diagnoses, enabling the recovery of (potentially) erroneous pseudo labels, ultimately alleviating the notorious error accumulation problem. We showcase the practical application of LogicDiag in the data-hungry segmentation scenario, where we formalize the structured abstraction of semantic concepts as a set of logic rules. Extensive experiments on three standard semi-supervised semantic segmentation benchmarks demonstrate the effectiveness and generality of LogicDiag. Moreover, LogicDiag highlights the promising opportunities arising from the systematic integration of symbolic reasoning into the prevalent statistical, neural learning approaches.
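
The abstract describes detecting rule violations among pseudo labels and repairing them before self-training. Purely as an illustration of that idea, the sketch below filters pseudo labels with a single hand-written co-occurrence rule; the class list, the rule, and the "drop on conflict" repair are hypothetical simplifications, not LogicDiag's actual rule set or diagnosis procedure.

```python
# Illustrative sketch only: flag pseudo-label conflicts against a hand-written
# relational rule and keep only consistent pixels for self-training. The rule,
# classes, and repair step are hypothetical, not LogicDiag's.
import numpy as np

CLASSES = ["road", "rider", "bicycle", "car"]
# Hypothetical rule: "rider" pixels are only plausible if the image also
# contains "bicycle" pixels (a crude stand-in for relational knowledge).
RULES = [("rider", "bicycle")]

def diagnose(probs, conf_thresh=0.9):
    """probs: (H, W, C) softmax output -> (pseudo labels, validity mask)."""
    pseudo = probs.argmax(-1)
    conf = probs.max(-1)
    valid = conf > conf_thresh                      # usual confidence filtering
    present = {c: bool(np.any(pseudo == i)) for i, c in enumerate(CLASSES)}
    for concept, required in RULES:
        if not present[required]:
            # Conflict with the rule: distrust these pseudo labels instead of
            # feeding them back into training.
            valid &= pseudo != CLASSES.index(concept)
    return pseudo, valid

probs = np.random.dirichlet(np.ones(len(CLASSES)), size=(4, 4))
labels, mask = diagnose(probs)
print(labels, mask, sep="\n")
```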

* Accepted to ICCV 2023; Code: https://github.com/leonnnop/LogicDiag 

Towards Ubiquitous Intelligent Hand Interaction

Aug 21, 2023
Chen Liang

The development of ubiquitous computing and sensing devices has brought about novel interaction scenarios such as mixed reality and IoT (e.g., the smart home), which pose new demands on the next generation of natural user interfaces (NUI). The human hand, with its large number of degrees of freedom, serves as a medium through which people interact with the external world in their daily lives, and is therefore regarded as the main entry point of NUI. Unfortunately, current hand tracking systems are largely confined to first-person vision-based solutions, which suffer from optical artifacts and are impractical in ubiquitous environments. In my thesis, I rethink this problem by analyzing its underlying logic in terms of sensors, behavior, and semantics, which together constitute a research framework for achieving ubiquitous intelligent hand interaction. I then summarize my previous research topics and outline future research directions based on this framework.

LoSparse: Structured Compression of Large Language Models based on Low-Rank and Sparse Approximation

Jun 26, 2023
Yixiao Li, Yifan Yu, Qingru Zhang, Chen Liang, Pengcheng He, Weizhu Chen, Tuo Zhao

Transformer models have achieved remarkable results in various natural language tasks, but they are often prohibitively large, requiring massive memories and computational resources. To reduce the size and complexity of these models, we propose LoSparse (Low-Rank and Sparse approximation), a novel model compression technique that approximates a weight matrix by the sum of a low-rank matrix and a sparse matrix. Our method combines the advantages of both low-rank approximations and pruning, while avoiding their limitations. Low-rank approximation compresses the coherent and expressive parts in neurons, while pruning removes the incoherent and non-expressive parts in neurons. Pruning enhances the diversity of low-rank approximations, and low-rank approximation prevents pruning from losing too many expressive neurons. We evaluate our method on natural language understanding, question answering, and natural language generation tasks. We show that it significantly outperforms existing compression methods.
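
Since the abstract centers on writing a weight matrix as a low-rank term plus a sparse term, a static NumPy decomposition may help make the structure concrete. This is a minimal sketch under simplifying assumptions (truncated SVD plus unstructured magnitude thresholding); LoSparse itself learns the factors during compression and prunes the sparse part in a structured way.

```python
# Minimal sketch of the low-rank + sparse structure: W ~= U @ V (low rank)
# plus S (sparse residual). LoSparse learns these factors during compression;
# this static decomposition only illustrates the shape of the approximation.
import numpy as np

def low_rank_plus_sparse(W, rank=8, sparsity=0.95):
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]        # best rank-r fit
    residual = W - low_rank
    # Keep only the largest-magnitude residual entries (unstructured here;
    # LoSparse prunes structured components instead).
    thresh = np.quantile(np.abs(residual), sparsity)
    S = np.where(np.abs(residual) >= thresh, residual, 0.0)
    return low_rank, S

W = np.random.randn(256, 256)
L, S = low_rank_plus_sparse(W)
err = np.linalg.norm(W - (L + S)) / np.linalg.norm(W)
print(f"relative error: {err:.3f}, nonzeros in S: {np.count_nonzero(S)}")
```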

Contrastive Shapelet Learning for Unsupervised Multivariate Time Series Representation Learning

Jun 02, 2023
Zhiyu Liang, Jianfeng Zhang, Chen Liang, Hongzhi Wang, Zheng Liang, Lujia Pan

Recent studies have shown great promise in unsupervised representation learning (URL) for multivariate time series, because URL can learn generalizable representations for many downstream tasks without using inaccessible labels. However, existing approaches usually adopt models originally designed for other domains (e.g., computer vision) to encode the time series data and rely on strong assumptions to design learning objectives, which limits their ability to perform well. To address these problems, we propose a novel URL framework for multivariate time series that learns time-series-specific, shapelet-based representations through the popular contrastive learning paradigm. To the best of our knowledge, this is the first work to explore shapelet-based embeddings for unsupervised, general-purpose representation learning. A unified shapelet-based encoder and a novel learning objective with multi-grained contrasting and multi-scale alignment are specifically designed to achieve our goal, and a data augmentation library is employed to improve generalization. We conduct extensive experiments on tens of real-world datasets to assess the representation quality on many downstream tasks, including classification, clustering, and anomaly detection. The results demonstrate the superiority of our method over not only URL competitors, but also techniques specially designed for downstream tasks. Our code has been made publicly available at https://github.com/real2fish/CSL.
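
For readers unfamiliar with shapelets, the toy sketch below shows the classic shapelet-transform idea the encoder builds on: embed a series by its minimal sliding-window distance to each shapelet. The shapelets here are random and fixed; in CSL they are learned jointly with the contrastive objective, and the real encoder handles multivariate inputs at multiple scales.

```python
# Toy shapelet embedding: represent a univariate series by its smallest
# sliding-window distance to each shapelet. Shapelets are random here; CSL
# learns them with a contrastive objective.
import numpy as np

def shapelet_embedding(series, shapelets):
    """series: (T,), shapelets: list of (L,) arrays -> (len(shapelets),) vector."""
    features = []
    for s in shapelets:
        windows = np.lib.stride_tricks.sliding_window_view(series, len(s))
        dists = np.linalg.norm(windows - s, axis=1) / np.sqrt(len(s))
        features.append(dists.min())        # best match anywhere in the series
    return np.array(features)

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 8 * np.pi, 200)) + 0.1 * rng.standard_normal(200)
shapelets = [rng.standard_normal(L) for L in (10, 20, 40)]
print(shapelet_embedding(series, shapelets))
```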

UniTS: A Universal Time Series Analysis Framework with Self-supervised Representation Learning

Mar 24, 2023
Zhiyu Liang, Chen Liang, Zheng Liang, Hongzhi Wang

Machine learning has emerged as a powerful tool for time series analysis. Existing methods are usually customized for different analysis tasks and face challenges in tackling practical problems such as partial labeling and domain shift. To achieve universal analysis and address these problems, we develop UniTS, a novel framework that incorporates self-supervised representation learning (or pre-training). The components of UniTS are designed with sklearn-like APIs to allow flexible extensions. We demonstrate how users can easily perform an analysis task through the user-friendly GUIs, and show the superior performance of UniTS over traditional task-specific methods without self-supervised pre-training, on five mainstream tasks and two practical settings.
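
The abstract highlights sklearn-like APIs as the extension mechanism. Purely to illustrate what that style of extension looks like, here is a minimal sklearn-compatible component; the class and method bodies below are invented for illustration and are not the actual UniTS interface.

```python
# Hypothetical illustration of an sklearn-style component; this class is
# invented for illustration and is NOT the actual UniTS API.
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin

class MeanStdEncoder(BaseEstimator, TransformerMixin):
    """Trivial feature extractor: per-channel mean and std of each series."""
    def fit(self, X, y=None):                 # X: (n_samples, n_channels, length)
        return self

    def transform(self, X):
        return np.concatenate([X.mean(-1), X.std(-1)], axis=1)

X = np.random.randn(8, 3, 128)
Z = MeanStdEncoder().fit_transform(X)
print(Z.shape)   # (8, 6); such components compose with sklearn pipelines and model selection
```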

* 4 pages 

DR-Label: Improving GNN Models for Catalysis Systems by Label Deconstruction and Reconstruction

Mar 06, 2023
Bowen Wang, Chen Liang, Jiaze Wang, Furui Liu, Shaogang Hao, Dong Li, Jianye Hao, Guangyong Chen, Xiaolong Zou, Pheng-Ann Heng

Attaining the equilibrium state of a catalyst-adsorbate system is key to fundamentally assessing its effective properties, such as adsorption energy. Machine learning methods with finer supervision strategies have been applied to boost and guide the relaxation process of an atomic system and better predict its properties at the equilibrium state. In this paper, we present DR-Label, a novel supervision and prediction strategy for graph neural networks (GNNs). The method enhances the supervision signal, reduces the multiplicity of solutions in edge representation, and encourages the model to provide node predictions that are robust to graph structural variations. DR-Label first deconstructs finer-grained equilibrium-state information for the model by projecting the node-level supervision signal onto each edge. Conversely, the model reconstructs a more robust equilibrium-state prediction by transforming edge-level predictions back to the node level with a sphere-fitting algorithm. We applied the DR-Label strategy to three radically distinct models, each of which displayed consistent performance enhancements. Building on DR-Label, we further propose DRFormer, which achieves new state-of-the-art performance on the Open Catalyst 2020 (OC20) dataset and the Cu-based single-atom-alloyed CO adsorption (SAA) dataset. We expect that our work will highlight crucial steps toward more accurate models for equilibrium-state property prediction of catalysis systems.
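
A small sketch may make the deconstruction/reconstruction step concrete: project a node-level 3D target onto the unit directions of its incident edges, then recover a node-level vector from those edge scalars. The reconstruction below uses ordinary least squares only to keep the sketch short; the paper uses a sphere-fitting algorithm, and the real method operates inside a GNN rather than on a single isolated node.

```python
# Sketch of DR-Label's deconstruct/reconstruct idea on one node: edge targets
# are projections of the node vector onto edge directions; a node vector is
# then recovered from edge-level predictions. Least squares stands in for the
# paper's sphere-fitting reconstruction.
import numpy as np

def deconstruct(node_vec, edge_dirs):
    """node_vec: (3,); edge_dirs: (E, 3) unit vectors -> (E,) edge-level targets."""
    return edge_dirs @ node_vec

def reconstruct(edge_scalars, edge_dirs):
    """Recover the node vector whose projections best match the edge scalars."""
    sol, *_ = np.linalg.lstsq(edge_dirs, edge_scalars, rcond=None)
    return sol

rng = np.random.default_rng(1)
force = rng.standard_normal(3)                       # e.g., a per-atom force vector
dirs = rng.standard_normal((6, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # unit edge directions
edge_targets = deconstruct(force, dirs)
print(np.allclose(reconstruct(edge_targets, dirs), force))   # True with >= 3 edges
```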

* 11 pages, 3 figures 

HomoDistil: Homotopic Task-Agnostic Distillation of Pre-trained Transformers

Feb 19, 2023
Chen Liang, Haoming Jiang, Zheng Li, Xianfeng Tang, Bin Yin, Tuo Zhao

Knowledge distillation has been shown to be a powerful model compression approach to facilitate the deployment of pre-trained language models in practice. This paper focuses on task-agnostic distillation. It produces a compact pre-trained model that can be easily fine-tuned on various tasks with small computational costs and memory footprints. Despite the practical benefits, task-agnostic distillation is challenging. Since the teacher model has a significantly larger capacity and stronger representation power than the student model, it is very difficult for the student to produce predictions that match the teacher's over a massive amount of open-domain training data. Such a large prediction discrepancy often diminishes the benefits of knowledge distillation. To address this challenge, we propose Homotopic Distillation (HomoDistil), a novel task-agnostic distillation approach equipped with iterative pruning. Specifically, we initialize the student model from the teacher model, and iteratively prune the student's neurons until the target width is reached. Such an approach maintains a small discrepancy between the teacher's and student's predictions throughout the distillation process, which ensures the effectiveness of knowledge transfer. Extensive experiments demonstrate that HomoDistil achieves significant improvements on existing baselines.
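
As a rough, runnable schematic of the recipe described above (student initialized as a copy of the teacher, distillation interleaved with gradual neuron pruning), the sketch below uses a single linear layer, a weight-norm importance score, and an arbitrary pruning schedule; these specifics are placeholder assumptions, not the paper's criteria.

```python
# Schematic of the HomoDistil recipe: initialize the student as a copy of the
# teacher, then alternate distillation steps with small pruning steps until the
# target width is reached. Importance scores and schedules are placeholders.
import copy
import torch
import torch.nn.functional as F

teacher = torch.nn.Linear(64, 64)
student = copy.deepcopy(teacher)            # zero initial teacher-student gap
mask = torch.ones(64)                       # which student output neurons survive
target_width, prune_per_step = 32, 4
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(100):
    x = torch.randn(16, 64)
    with torch.no_grad():
        target = teacher(x) * mask          # teacher is frozen
    # Match the teacher on the surviving coordinates (a real student would be
    # re-packed into a genuinely narrower layer).
    loss = F.mse_loss(student(x) * mask, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    if int(mask.sum()) > target_width and step % 10 == 9:
        # Prune the least important active neurons (here: smallest weight norm).
        scores = student.weight.detach().norm(dim=1) + (1 - mask) * 1e9
        mask[scores.argsort()[:prune_per_step]] = 0.0

print("remaining neurons:", int(mask.sum()))
```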

Symbolic Discovery of Optimization Algorithms

Feb 17, 2023
Xiangning Chen, Chen Liang, Da Huang, Esteban Real, Kaiyuan Wang, Yao Liu, Hieu Pham, Xuanyi Dong, Thang Luong, Cho-Jui Hsieh, Yifeng Lu, Quoc V. Le

We present a method to formulate algorithm discovery as program search, and apply it to discover optimization algorithms for deep neural network training. We leverage efficient search techniques to explore an infinite and sparse program space. To bridge the large generalization gap between proxy and target tasks, we also introduce program selection and simplification strategies. Our method discovers a simple and effective optimization algorithm, $\textbf{Lion}$ ($\textit{Evo$\textbf{L}$ved S$\textbf{i}$gn M$\textbf{o}$me$\textbf{n}$tum}$). It is more memory-efficient than Adam as it only keeps track of the momentum. Different from adaptive optimizers, its update has the same magnitude for each parameter calculated through the sign operation. We compare Lion with widely used optimizers, such as Adam and Adafactor, for training a variety of models on different tasks. On image classification, Lion boosts the accuracy of ViT by up to 2% on ImageNet and saves up to 5x the pre-training compute on JFT. On vision-language contrastive learning, we achieve 88.3% $\textit{zero-shot}$ and 91.1% $\textit{fine-tuning}$ accuracy on ImageNet, surpassing the previous best results by 2% and 0.1%, respectively. On diffusion models, Lion outperforms Adam by achieving a better FID score and reducing the training compute by up to 2.3x. For autoregressive, masked language modeling, and fine-tuning, Lion exhibits a similar or better performance compared to Adam. Our analysis of Lion reveals that its performance gain grows with the training batch size. It also requires a smaller learning rate than Adam due to the larger norm of the update produced by the sign function. Additionally, we examine the limitations of Lion and identify scenarios where its improvements are small or not statistically significant. The implementation of Lion is publicly available.
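
Because the abstract only alludes to the update rule, here is a plain-NumPy sketch of Lion's step as given in the paper: the update direction is the sign of an interpolation between the momentum and the current gradient, decoupled weight decay is added, and the momentum is the only optimizer state. The hyperparameter values are illustrative defaults, not a tuning recommendation.

```python
# Lion step in plain NumPy: sign of an interpolation between momentum and
# gradient, plus decoupled weight decay; the momentum m is the only state kept.
import numpy as np

def lion_step(w, grad, m, lr=1e-4, beta1=0.9, beta2=0.99, weight_decay=0.0):
    update = np.sign(beta1 * m + (1 - beta1) * grad)
    w = w - lr * (update + weight_decay * w)
    m = beta2 * m + (1 - beta2) * grad          # the only state carried over
    return w, m

# Smoke test: minimize f(w) = ||w||^2 / 2, whose gradient is w itself.
w, m = np.ones(5), np.zeros(5)
for _ in range(1000):
    w, m = lion_step(w, grad=w, m=m, lr=1e-2)
print(w)   # stays within ~lr of zero, since every coordinate update has magnitude lr
```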

* 29 pages, we clarified the recommended learning rate and added some references 

Unified Functional Hashing in Automatic Machine Learning

Feb 10, 2023
Ryan Gillard, Stephen Jonany, Yingjie Miao, Michael Munn, Connal de Souza, Jonathan Dungay, Chen Liang, David R. So, Quoc V. Le, Esteban Real

The field of Automatic Machine Learning (AutoML) has recently attained impressive results, including the discovery of state-of-the-art machine learning solutions, such as neural image classifiers. This is often done by applying an evolutionary search method, which samples multiple candidate solutions from a large space and evaluates the quality of each candidate through a long training process. As a result, the search tends to be slow. In this paper, we show that large efficiency gains can be obtained by employing a fast unified functional hash, especially through the functional equivalence caching technique, which we also present. The central idea is to detect by hashing when the search method produces equivalent candidates, which occurs very frequently, and in this way avoid their costly re-evaluation. Our hash is "functional" in that it identifies equivalent candidates even if they were represented or coded differently, and it is "unified" in that the same algorithm can hash arbitrary representations, e.g., compute graphs, imperative code, or lambda functions. As evidence, we show dramatic improvements on multiple AutoML domains, including neural architecture search and algorithm discovery. Finally, we consider the effect of hash collisions, evaluation noise, and search distribution through empirical analysis. Altogether, we hope this paper may serve as a guide to hashing techniques in AutoML.
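
The core caching idea can be illustrated in a few lines: hash a candidate by the outputs it produces on a fixed set of probe inputs, so equivalent candidates written differently collide on purpose and skip re-evaluation. This toy sketch assumes simple numeric functions; the paper's unified hash additionally handles compute graphs, imperative code, and lambda functions, as well as evaluation noise and collisions.

```python
# Toy functional-equivalence caching: hash a candidate by its outputs on fixed
# probe inputs; equivalent candidates share a cache entry and skip re-evaluation.
import hashlib
import numpy as np

PROBES = np.linspace(-2.0, 2.0, 16)

def functional_hash(fn, decimals=6):
    outs = np.round(np.array([fn(x) for x in PROBES]), decimals)
    return hashlib.sha256(outs.tobytes()).hexdigest()

cache = {}
def evaluate_with_cache(fn, expensive_eval):
    key = functional_hash(fn)
    if key not in cache:
        cache[key] = expensive_eval(fn)        # only pay for novel behavior
    return cache[key]

f1 = lambda x: 2 * x + 1
f2 = lambda x: x + x + 1                       # equivalent, written differently
score = lambda fn: sum(fn(x) ** 2 for x in PROBES)   # stand-in for a long training run
print(evaluate_with_cache(f1, score), evaluate_with_cache(f2, score), len(cache))
```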
