
Zhewei Yao

FP6-LLM: Efficiently Serving Large Language Models Through FP6-Centric Algorithm-System Co-Design

Jan 25, 2024

ZeroQuant(4+2): Redefining LLMs Quantization with a New FP6-Centric Strategy for Diverse Generative Tasks

Dec 18, 2023

ZeroQuant-HERO: Hardware-Enhanced Robust Optimized Post-Training Quantization Framework for W8A8 Transformers

Oct 26, 2023

DeepSpeed-VisualChat: Multi-Round Multi-Image Interleave Chat via Multi-Modal Causal Attention

Sep 29, 2023

RenAIssance: A Survey into AI Text-to-Image Generation in the Era of Large Model

Sep 02, 2023

DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales

Aug 02, 2023

ZeroQuant-FP: A Leap Forward in LLMs Post-Training W4A8 Quantization Using Floating-Point Formats

Jul 20, 2023

Selective Guidance: Are All the Denoising Steps of Guided Diffusion Important?

May 16, 2023

A Comprehensive Study on Post-Training Quantization for Large Language Models

Mar 16, 2023

Scaling Vision-Language Models with Sparse Mixture of Experts

Mar 13, 2023