Tianyi Zhou

Fast Heavy Inner Product Identification Between Weights and Inputs in Neural Network Training
Nov 19, 2023
Lianke Qin, Saayan Mitra, Zhao Song, Yuanyuan Yang, Tianyi Zhou

Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time
Oct 26, 2023
Zichang Liu, Jue Wang, Tri Dao, Tianyi Zhou, Binhang Yuan, Zhao Song, Anshumali Shrivastava, Ce Zhang, Yuandong Tian, Christopher Re, Beidi Chen

HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision), LLaVA-1.5, and Other Multi-modality Models
Oct 23, 2023
Fuxiao Liu, Tianrui Guan, Zongxia Li, Lichang Chen, Yaser Yacoob, Dinesh Manocha, Tianyi Zhou

Merging Experts into One: Improving Computational Efficiency of Mixture of Experts
Oct 22, 2023
Shwai He, Run-Ze Fan, Liang Ding, Li Shen, Tianyi Zhou, Dacheng Tao

Reflection-Tuning: Data Recycling Improves LLM Instruction-Tuning
Oct 18, 2023
Ming Li, Lichang Chen, Jiuhai Chen, Shwai He, Heng Huang, Jiuxiang Gu, Tianyi Zhou

Superiority of Softmax: Unveiling the Performance Edge Over Linear Attention
Oct 18, 2023
Yichuan Deng, Zhao Song, Tianyi Zhou

NLPBench: Evaluating Large Language Models on Solving NLP Problems
Oct 08, 2023
Linxin Song, Jieyu Zhang, Lechao Cheng, Pengyuan Zhou, Tianyi Zhou, Irene Li

Module-wise Adaptive Distillation for Multimodality Foundation Models
Oct 06, 2023
Chen Liang, Jiahui Yu, Ming-Hsuan Yang, Matthew Brown, Yin Cui, Tuo Zhao, Boqing Gong, Tianyi Zhou

When to Learn What: Model-Adaptive Data Augmentation Curriculum
Sep 30, 2023
Chengkai Hou, Jieyu Zhang, Tianyi Zhou