Qiying Yu

EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters

Feb 06, 2024

Generative Multimodal Models are In-Context Learners

Dec 20, 2023

CapsFusion: Rethinking Image-Text Data at Scale

Nov 02, 2023

Unified Molecular Modeling via Modality Blending

Jul 12, 2023

Generative Pretraining in Multimodality

Jul 11, 2023

Multimodal Federated Learning via Contrastive Representation Ensemble

Feb 17, 2023

Adversarial Contrastive Learning via Asymmetric InfoNCE

Jul 18, 2022