Yunyang Xiong

MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases

Feb 22, 2024
Zechun Liu, Changsheng Zhao, Forrest Iandola, Chen Lai, Yuandong Tian, Igor Fedorov, Yunyang Xiong, Ernie Chang, Yangyang Shi, Raghuraman Krishnamoorthi, Liangzhen Lai, Vikas Chandra


SqueezeSAM: User friendly mobile interactive segmentation

Dec 11, 2023
Balakrishnan Varadarajan, Bilge Soran, Forrest Iandola, Xiaoyu Xiang, Yunyang Xiong, Lemeng Wu, Chenchen Zhu, Raghuraman Krishnamoorthi, Vikas Chandra


EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything

Dec 01, 2023
Yunyang Xiong, Bala Varadarajan, Lemeng Wu, Xiaoyu Xiang, Fanyi Xiao, Chenchen Zhu, Xiaoliang Dai, Dilin Wang, Fei Sun, Forrest Iandola, Raghuraman Krishnamoorthi, Vikas Chandra


MiniGPT-v2: large language model as a unified interface for vision-language multi-task learning

Oct 26, 2023
Jun Chen, Deyao Zhu, Xiaoqian Shen, Xiang Li, Zechun Liu, Pengchuan Zhang, Raghuraman Krishnamoorthi, Vikas Chandra, Yunyang Xiong, Mohamed Elhoseiny


Mixture-of-Supernets: Improving Weight-Sharing Supernet Training with Architecture-Routed Mixture-of-Experts

Jun 08, 2023
Ganesh Jawahar, Haichuan Yang, Yunyang Xiong, Zechun Liu, Dilin Wang, Fei Sun, Meng Li, Aasish Pappu, Barlas Oguz, Muhammad Abdul-Mageed, Laks V. S. Lakshmanan, Raghuraman Krishnamoorthi, Vikas Chandra


Self-positioning Point-based Transformer for Point Cloud Understanding

Mar 29, 2023
Jinyoung Park, Sanghyeok Lee, Sihyeon Kim, Yunyang Xiong, Hyunwoo J. Kim


PathFusion: Path-consistent Lidar-Camera Deep Feature Fusion

Dec 12, 2022
Lemeng Wu, Dilin Wang, Meng Li, Yunyang Xiong, Raghuraman Krishnamoorthi, Qiang Liu, Vikas Chandra


Fast Point Cloud Generation with Straight Flows

Dec 04, 2022
Lemeng Wu, Dilin Wang, Chengyue Gong, Xingchao Liu, Yunyang Xiong, Rakesh Ranjan, Raghuraman Krishnamoorthi, Vikas Chandra, Qiang Liu


Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference

Nov 18, 2022
Haoran You, Yunyang Xiong, Xiaoliang Dai, Bichen Wu, Peizhao Zhang, Haoqi Fan, Peter Vajda, Yingyan Lin
