Alvin Cheung

Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks

Mar 07, 2024
Linyuan Gong, Sida Wang, Mostafa Elhoushi, Alvin Cheung

AST-T5: Structure-Aware Pretraining for Code Generation and Understanding

Jan 05, 2024
Linyuan Gong, Mostafa Elhoushi, Alvin Cheung

Online Speculative Decoding

Oct 17, 2023
Xiaoxuan Liu, Lanxiang Hu, Peter Bailis, Ion Stoica, Zhijie Deng, Alvin Cheung, Hao Zhang

Spatialyze: A Geospatial Video Analytics System with Spatial-Aware Optimizations

Aug 08, 2023
Chanwut Kittivorawong, Yongming Ge, Yousef Helal, Alvin Cheung

SlimFit: Memory-Efficient Fine-Tuning of Transformer-based Models Using Training Dynamics

May 29, 2023
Arash Ardakani, Altan Haan, Shangyin Tan, Doru Thom Popovici, Alvin Cheung, Costin Iancu, Koushik Sen

Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers

May 21, 2023
Linyuan Gong, Chenyan Xiong, Xiaodong Liu, Payal Bajaj, Yiqing Xie, Alvin Cheung, Jianfeng Gao, Xia Song

What is the State of Memory Saving for Model Training?

Mar 26, 2023
Xiaoxuan Liu, Siddharth Jha, Chuyan Zhu, Zhuohan Li, Alvin Cheung

ADELT: Transpilation Between Deep Learning Frameworks

Mar 07, 2023
Linyuan Gong, Jiayi Wang, Alvin Cheung
