Trishul Chilimbi
VidLA: Video-Language Alignment at Scale

Mar 21, 2024
Mamshad Nayeem Rizve, Fan Fei, Jayakrishnan Unnikrishnan, Son Tran, Benjamin Z. Yao, Belinda Zeng, Mubarak Shah, Trishul Chilimbi

Robust Multi-Task Learning with Excess Risks

Feb 14, 2024
Yifei He, Shiji Zhou, Guojun Zhang, Hyokun Yun, Yi Xu, Belinda Zeng, Trishul Chilimbi, Han Zhao

Graph-Aware Language Model Pre-Training on a Large Graph Corpus Can Help Multiple Graph Applications

Jun 05, 2023
Han Xie, Da Zheng, Jun Ma, Houyu Zhang, Vassilis N. Ioannidis, Xiang Song, Qing Ping, Sheng Wang, Carl Yang, Yi Xu, Belinda Zeng, Trishul Chilimbi

Understanding and Constructing Latent Modality Structures in Multi-modal Representation Learning

Mar 10, 2023
Qian Jiang, Changyou Chen, Han Zhao, Liqun Chen, Qing Ping, Son Dinh Tran, Yi Xu, Belinda Zeng, Trishul Chilimbi

SMILE: Scaling Mixture-of-Experts with Efficient Bi-level Routing

Dec 10, 2022
Chaoyang He, Shuai Zheng, Aston Zhang, George Karypis, Trishul Chilimbi, Mahdi Soltanolkotabi, Salman Avestimehr

MICO: Selective Search with Mutual Information Co-training

Sep 09, 2022
Zhanyu Wang, Xiao Zhang, Hyokun Yun, Choon Hui Teo, Trishul Chilimbi

Efficient and effective training of language and graph neural network models

Jun 22, 2022
Vassilis N. Ioannidis, Xiang Song, Da Zheng, Houyu Zhang, Jun Ma, Yi Xu, Belinda Zeng, Trishul Chilimbi, George Karypis

DynaMaR: Dynamic Prompt with Mask Token Representation

Jun 07, 2022
Xiaodi Sun, Sunny Rajagopalan, Priyanka Nigam, Weiyi Lu, Yi Xu, Belinda Zeng, Trishul Chilimbi

Vision-Language Pre-Training with Triple Contrastive Learning

Mar 28, 2022
Jinyu Yang, Jiali Duan, Son Tran, Yi Xu, Sampath Chanda, Liqun Chen, Belinda Zeng, Trishul Chilimbi, Junzhou Huang
