Liang Ding

Random Smoothing Regularization in Kernel Gradient Descent Learning

May 05, 2023
Liang Ding, Tianyang Hu, Jiahang Jiang, Donghao Li, Wenjia Wang, Yuan Yao


Representing Additive Gaussian Processes by Sparse Matrices

Apr 29, 2023
Lu Zou, Haoyuan Chen, Liang Ding


Prompt-Learning for Cross-Lingual Relation Extraction

Apr 20, 2023
Chiaming Hsu, Changtong Zan, Liang Ding, Longyue Wang, Xiaoting Wang, Weifeng Liu, Fu Lin, Wenbin Hu


On Efficient Training of Large-Scale Deep Learning Models: A Literature Review

Apr 07, 2023
Li Shen, Yan Sun, Zhiyuan Yu, Liang Ding, Xinmei Tian, Dacheng Tao


Error Analysis Prompting Enables Human-Like Translation Evaluation in Large Language Models: A Case Study on ChatGPT

Mar 24, 2023
Qingyu Lu, Baopu Qiu, Liang Ding, Liping Xie, Dacheng Tao


Towards Making the Most of ChatGPT for Machine Translation

Mar 24, 2023
Keqin Peng, Liang Ding, Qihuang Zhong, Li Shen, Xuebo Liu, Min Zhang, Yuanxin Ouyang, Dacheng Tao


Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT

Mar 02, 2023
Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao


AdaSAM: Boosting Sharpness-Aware Minimization with Adaptive Learning Rate and Momentum for Training Deep Neural Networks

Mar 01, 2023
Hao Sun, Li Shen, Qihuang Zhong, Liang Ding, Shixiang Chen, Jingwei Sun, Jing Li, Guangzhong Sun, Dacheng Tao


OmniForce: On Human-Centered, Large Model Empowered and Cloud-Edge Collaborative AutoML System

Mar 01, 2023
Chao Xue, Wei Liu, Shuai Xie, Zhenfang Wang, Jiaxing Li, Xuyang Peng, Liang Ding, Shanshan Zhao, Qiong Cao, Yibo Yang, Fengxiang He, Bohua Cai, Rongcheng Bian, Yiyan Zhao, Heliang Zheng, Xiangyang Liu, Dongkai Liu, Daqing Liu, Li Shen, Chang Li, Shijin Zhang, Yukang Zhang, Guanpu Chen, Shixiang Chen, Yibing Zhan, Jing Zhang, Chaoyue Wang, Dacheng Tao


FedSpeed: Larger Local Interval, Less Communication Round, and Higher Generalization Accuracy

Feb 21, 2023
Yan Sun, Li Shen, Tiansheng Huang, Liang Ding, Dacheng Tao
