Weixin Liu

East: Efficient and Accurate Secure Transformer Framework for Inference

Aug 19, 2023
Yuanchao Ding, Hua Guo, Yewei Guan, Weixin Liu, Jiarong Huo, Zhenyu Guan, Xiyong Zhang


ERNIE 3.0 Tiny: Frustratingly Simple Method to Improve Task-Agnostic Distillation Generalization

Jan 09, 2023
Weixin Liu, Xuyi Chen, Jiaxiang Liu, Shikun Feng, Yu Sun, Hao Tian, Hua Wu


Learning Weakly-Supervised Contrastive Representations

Feb 18, 2022
Yao-Hung Hubert Tsai, Tianqin Li, Weixin Liu, Peiyuan Liao, Ruslan Salakhutdinov, Louis-Philippe Morency


ERNIE 3.0 Titan: Exploring Larger-scale Knowledge Enhanced Pre-training for Language Understanding and Generation

Dec 23, 2021
Shuohuan Wang, Yu Sun, Yang Xiang, Zhihua Wu, Siyu Ding, Weibao Gong, Shikun Feng, Junyuan Shang, Yanbin Zhao, Chao Pang, Jiaxiang Liu, Xuyi Chen, Yuxiang Lu, Weixin Liu, Xi Wang, Yangfan Bai, Qiuliang Chen, Li Zhao, Shiyong Li, Peng Sun, Dianhai Yu, Yanjun Ma, Hao Tian, Hua Wu, Tian Wu, Wei Zeng, Ge Li, Wen Gao, Haifeng Wang


ERNIE 3.0: Large-scale Knowledge Enhanced Pre-training for Language Understanding and Generation

Jul 05, 2021
Yu Sun, Shuohuan Wang, Shikun Feng, Siyu Ding, Chao Pang, Junyuan Shang, Jiaxiang Liu, Xuyi Chen, Yanbin Zhao, Yuxiang Lu, Weixin Liu, Zhihua Wu, Weibao Gong, Jianzhong Liang, Zhizhou Shang, Peng Sun, Wei Liu, Xuan Ouyang, Dianhai Yu, Hao Tian, Hua Wu, Haifeng Wang


Integrating Auxiliary Information in Self-supervised Learning

Jun 05, 2021
Yao-Hung Hubert Tsai, Tianqin Li, Weixin Liu, Peiyuan Liao, Ruslan Salakhutdinov, Louis-Philippe Morency


ERNIE-Tiny: A Progressive Distillation Framework for Pretrained Transformer Compression

Jun 04, 2021
Weiyue Su, Xuyi Chen, Shikun Feng, Jiaxiang Liu, Weixin Liu, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang
