Ze-Feng Gao

AI-accelerated Discovery of Altermagnetic Materials

Nov 13, 2023
Ze-Feng Gao, Shuai Qu, Bocheng Zeng, Yang Liu, Ji-Rong Wen, Hao Sun, Peng-Jie Guo, Zhong-Yi Lu


Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study

Jul 26, 2023
Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen


Scaling Pre-trained Language Models to Deeper via Parameter-efficient Architecture

Apr 11, 2023
Peiyu Liu, Ze-Feng Gao, Yushuo Chen, Wayne Xin Zhao, Ji-Rong Wen


Parameter-Efficient Mixture-of-Experts Architecture for Pre-trained Language Models

Mar 02, 2022
Ze-Feng Gao, Peiyu Liu, Wayne Xin Zhao, Zhong-Yi Lu, Ji-Rong Wen


Enabling Lightweight Fine-tuning for Pre-trained Language Model Compression based on Matrix Product Operators

Jun 04, 2021
Peiyu Liu, Ze-Feng Gao, Wayne Xin Zhao, Z. Y. Xie, Zhong-Yi Lu, Ji-Rong Wen


A Model Compression Method with Matrix Product Operators for Speech Enhancement

Oct 10, 2020
Xingwei Sun, Ze-Feng Gao, Zhong-Yi Lu, Junfeng Li, Yonghong Yan


Compressing deep neural networks by matrix product operators

Apr 11, 2019
Ze-Feng Gao, Song Cheng, Rong-Qiang He, Z. Y. Xie, Hui-Hai Zhao, Zhong-Yi Lu, Tao Xiang
