Lizhong Chen

Simul-LLM: A Framework for Exploring High-Quality Simultaneous Translation with Large Language Models

Dec 12, 2023
Victor Agostinelli, Max Wild, Matthew Raffel, Kazi Ahmed Asif Fuad, Lizhong Chen

Implicit Memory Transformer for Computationally Efficient Simultaneous Speech Translation

Jul 03, 2023
Matthew Raffel, Lizhong Chen

Shiftable Context: Addressing Training-Inference Context Mismatch in Simultaneous Speech Translation

Jul 03, 2023
Matthew Raffel, Drew Penney, Lizhong Chen

Partitioning-Guided K-Means: Extreme Empty Cluster Resolution for Extreme Model Compression

Jun 24, 2023
Tianhong Huang, Victor Agostinelli, Lizhong Chen

Improving Autoregressive NLP Tasks via Modular Linearized Attention

Apr 24, 2023
Victor Agostinelli, Lizhong Chen

RAPID: Enabling Fast Online Policy Learning in Dynamic Public Cloud Environments

Apr 10, 2023
Drew Penney, Bin Li, Lizhong Chen, Jaroslaw J. Sydir, Anna Drewek-Ossowicka, Ramesh Illikkal, Charlie Tai, Ravi Iyer, Andrew Herdrich

PROMPT: Learning Dynamic Resource Allocation Policies for Edge-Network Applications

Jan 19, 2022
Drew Penney, Bin Li, Jaroslaw Sydir, Charlie Tai, Eoin Walsh, Thomas Long, Stefan Lee, Lizhong Chen

A Survey of Machine Learning Applied to Computer Architecture Design

Sep 26, 2019
Drew D. Penney, Lizhong Chen
