Zhiqiu Lin

The Neglected Tails of Vision-Language Models

Feb 02, 2024
Shubham Parashar, Zhiqiu Lin, Tian Liu, Xiangjue Dong, Yanan Li, Deva Ramanan, James Caverlee, Shu Kong

Prompting Scientific Names for Zero-Shot Species Recognition

Oct 15, 2023
Shubham Parashar, Zhiqiu Lin, Yanan Li, Shu Kong

Language Models as Black-Box Optimizers for Vision-Language Models

Sep 25, 2023
Shihong Liu, Samuel Yu, Zhiqiu Lin, Deepak Pathak, Deva Ramanan

VisualGPTScore: Visio-Linguistic Reasoning with Multimodal Generative Pre-Training Scores

Jun 02, 2023
Zhiqiu Lin, Xinyue Chen, Deepak Pathak, Pengchuan Zhang, Deva Ramanan

Multimodality Helps Unimodality: Cross-Modal Few-Shot Learning with Multimodal Models

Jan 18, 2023
Zhiqiu Lin, Samuel Yu, Zhiyi Kuang, Deepak Pathak, Deva Ramanan

Learning with an Evolving Class Ontology

Oct 12, 2022
Zhiqiu Lin, Deepak Pathak, Yu-Xiong Wang, Deva Ramanan, Shu Kong

The CLEAR Benchmark: Continual LEArning on Real-World Imagery

Jan 17, 2022
Zhiqiu Lin, Jia Shi, Deepak Pathak, Deva Ramanan

Streaming Self-Training via Domain-Agnostic Unlabeled Images

Apr 07, 2021
Zhiqiu Lin, Deva Ramanan, Aayush Bansal
