Belinda Zeng
VidLA: Video-Language Alignment at Scale

Mar 21, 2024
Mamshad Nayeem Rizve, Fan Fei, Jayakrishnan Unnikrishnan, Son Tran, Benjamin Z. Yao, Belinda Zeng, Mubarak Shah, Trishul Chilimbi

Robust Multi-Task Learning with Excess Risks

Feb 14, 2024
Yifei He, Shiji Zhou, Guojun Zhang, Hyokun Yun, Yi Xu, Belinda Zeng, Trishul Chilimbi, Han Zhao

Better Representations via Adversarial Training in Pre-Training: A Theoretical Perspective

Jan 26, 2024
Yue Xing, Xiaofeng Lin, Qifan Song, Yi Xu, Belinda Zeng, Guang Cheng

ForeSeer: Product Aspect Forecasting Using Temporal Graph Embedding

Oct 07, 2023
Zixuan Liu, Gaurush Hiranandani, Kun Qian, Eddie W. Huang, Yi Xu, Belinda Zeng, Karthik Subbian, Sheng Wang

Graph-Aware Language Model Pre-Training on a Large Graph Corpus Can Help Multiple Graph Applications

Jun 05, 2023
Han Xie, Da Zheng, Jun Ma, Houyu Zhang, Vassilis N. Ioannidis, Xiang Song, Qing Ping, Sheng Wang, Carl Yang, Yi Xu, Belinda Zeng, Trishul Chilimbi

Understanding and Constructing Latent Modality Structures in Multi-modal Representation Learning

Mar 10, 2023
Qian Jiang, Changyou Chen, Han Zhao, Liqun Chen, Qing Ping, Son Dinh Tran, Yi Xu, Belinda Zeng, Trishul Chilimbi

Efficient and effective training of language and graph neural network models

Jun 22, 2022
Vassilis N. Ioannidis, Xiang Song, Da Zheng, Houyu Zhang, Jun Ma, Yi Xu, Belinda Zeng, Trishul Chilimbi, George Karypis

DynaMaR: Dynamic Prompt with Mask Token Representation

Jun 07, 2022
Xiaodi Sun, Sunny Rajagopalan, Priyanka Nigam, Weiyi Lu, Yi Xu, Belinda Zeng, Trishul Chilimbi

Vision-Language Pre-Training with Triple Contrastive Learning

Mar 28, 2022
Jinyu Yang, Jiali Duan, Son Tran, Yi Xu, Sampath Chanda, Liqun Chen, Belinda Zeng, Trishul Chilimbi, Junzhou Huang
