Chen Liang

UniTS: A Universal Time Series Analysis Framework with Self-supervised Representation Learning

Mar 24, 2023
Zhiyu Liang, Chen Liang, Zheng Liang, Hongzhi Wang

DR-Label: Improving GNN Models for Catalysis Systems by Label Deconstruction and Reconstruction

Mar 06, 2023
Bowen Wang, Chen Liang, Jiaze Wang, Furui Liu, Shaogang Hao, Dong Li, Jianye Hao, Guangyong Chen, Xiaolong Zou, Pheng-Ann Heng

HomoDistil: Homotopic Task-Agnostic Distillation of Pre-trained Transformers

Feb 19, 2023
Chen Liang, Haoming Jiang, Zheng Li, Xianfeng Tang, Bin Yin, Tuo Zhao

Symbolic Discovery of Optimization Algorithms

Feb 17, 2023
Xiangning Chen, Chen Liang, Da Huang, Esteban Real, Kaiyuan Wang, Yao Liu, Hieu Pham, Xuanyi Dong, Thang Luong, Cho-Jui Hsieh, Yifeng Lu, Quoc V. Le

Unified Functional Hashing in Automatic Machine Learning

Feb 10, 2023
Ryan Gillard, Stephen Jonany, Yingjie Miao, Michael Munn, Connal de Souza, Jonathan Dungay, Chen Liang, David R. So, Quoc V. Le, Esteban Real

Less is More: Task-aware Layer-wise Distillation for Language Model Compression

Oct 05, 2022
Chen Liang, Simiao Zuo, Qingru Zhang, Pengcheng He, Weizhu Chen, Tuo Zhao

GMMSeg: Gaussian Mixture based Generative Semantic Segmentation Models

Oct 05, 2022
Chen Liang, Wenguan Wang, Jiaxu Miao, Yi Yang

Multi-Task Mixture Density Graph Neural Networks for Predicting Cu-based Single-Atom Alloy Catalysts for CO2 Reduction Reaction

Sep 15, 2022
Chen Liang, Bowen Wang, Shaogang Hao, Guangyong Chen, Pheng-Ann Heng, Xiaolong Zou

PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance

Jun 25, 2022
Qingru Zhang, Simiao Zuo, Chen Liang, Alexander Bukharin, Pengcheng He, Weizhu Chen, Tuo Zhao

MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation

Apr 28, 2022
Simiao Zuo, Qingru Zhang, Chen Liang, Pengcheng He, Tuo Zhao, Weizhu Chen