Xin Jiang

Deformation Control of a Deformable Object Based on Visual and Tactile Feedback

May 30, 2021
Yuhao Guo, Xin Jiang, Yunhui Liu

Improved OOD Generalization via Adversarial Training and Pre-training

May 24, 2021
Mingyang Yi, Lu Hou, Jiacheng Sun, Lifeng Shang, Xin Jiang, Qun Liu, Zhi-Ming Ma

PanGu-α: Large-scale Autoregressive Pretrained Chinese Language Models with Auto-parallel Computation

Apr 26, 2021
Wei Zeng, Xiaozhe Ren, Teng Su, Hui Wang, Yi Liao, Zhiwei Wang, Xin Jiang, ZhenZhang Yang, Kaisheng Wang, Xiaoda Zhang, Chen Li, Ziyan Gong, Yifan Yao, Xinjing Huang, Jun Wang, Jianfeng Yu, Qi Guo, Yue Yu, Yan Zhang, Jin Wang, Hengtao Tao, Dasen Yan, Zexuan Yi, Fang Peng, Fangqing Jiang, Han Zhang, Lingfeng Deng, Yehong Zhang, Zhe Lin, Chao Zhang, Shaojie Zhang, Mingyue Guo, Shanzhi Gu, Gaojun Fan, Yaowei Wang, Xuefeng Jin, Qun Liu, Yonghong Tian

Extract then Distill: Efficient and Effective Task-Agnostic BERT Distillation

Apr 24, 2021
Cheng Chen, Yichun Yin, Lifeng Shang, Zhi Wang, Xin Jiang, Xiao Chen, Qun Liu

An Approach to Improve Robustness of NLP Systems against ASR Errors

Mar 25, 2021
Tong Cui, Jinghui Xiao, Liangyou Li, Xin Jiang, Qun Liu

Reweighting Augmented Samples by Minimizing the Maximal Expected Loss

Mar 16, 2021
Mingyang Yi, Lu Hou, Lifeng Shang, Xin Jiang, Qun Liu, Zhi-Ming Ma

LightMBERT: A Simple Yet Effective Method for Multilingual BERT Distillation

Mar 11, 2021
Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, Qun Liu

Training Multilingual Pre-trained Language Model with Byte-level Subwords

Jan 23, 2021
Junqiu Wei, Qun Liu, Yinpeng Guo, Xin Jiang

Red Alarm for Pre-trained Models: Universal Vulnerabilities by Neuron-Level Backdoor Attacks

Jan 19, 2021
Zhengyan Zhang, Guangxuan Xiao, Yongwei Li, Tian Lv, Fanchao Qi, Zhiyuan Liu, Yasheng Wang, Xin Jiang, Maosong Sun

BinaryBERT: Pushing the Limit of BERT Quantization

Dec 31, 2020
Haoli Bai, Wei Zhang, Lu Hou, Lifeng Shang, Jing Jin, Xin Jiang, Qun Liu, Michael Lyu, Irwin King
