Trishul Chilimbi

Multi-modal Alignment using Representation Codebook

Mar 28, 2022
Jiali Duan, Liqun Chen, Son Tran, Jinyu Yang, Yi Xu, Belinda Zeng, Trishul Chilimbi

Vision-Language Pre-Training with Triple Contrastive Learning

Mar 03, 2022
Jinyu Yang, Jiali Duan, Son Tran, Yi Xu, Sampath Chanda, Liqun Chen, Belinda Zeng, Trishul Chilimbi, Junzhou Huang

Magic Pyramid: Accelerating Inference with Early Exiting and Token Pruning

Oct 30, 2021
Xuanli He, Iman Keivanloo, Yi Xu, Xiang He, Belinda Zeng, Santosh Rajagopalan, Trishul Chilimbi

MLIM: Vision-and-Language Model Pre-training with Masked Language and Image Modeling

Sep 24, 2021
Tarik Arici, Mehmet Saygin Seyfioglu, Tal Neiman, Yi Xu, Son Tran, Trishul Chilimbi, Belinda Zeng, Ismail Tutar

Tiering as a Stochastic Submodular Optimization Problem

May 16, 2020
Hyokun Yun, Michael Froh, Roshan Makhijani, Brian Luc, Alex Smola, Trishul Chilimbi
