Xu Sun

Towards Multimodal Video Paragraph Captioning Models Robust to Missing Modality

Mar 28, 2024

TempCompass: Do Video LLMs Really Understand Videos?

Mar 01, 2024

Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents

Feb 17, 2024

TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding

Dec 04, 2023

VITATECS: A Diagnostic Dataset for Temporal Concept Understanding of Video-Language Models

Nov 29, 2023

RECALL: A Benchmark for LLMs Robustness against External Counterfactual Knowledge

Nov 14, 2023

FETV: A Benchmark for Fine-Grained Evaluation of Open-Domain Text-to-Video Generation

Nov 08, 2023

TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding

Oct 29, 2023

Incorporating Pre-trained Model Prompting in Multimodal Stock Volume Movement Prediction

Sep 11, 2023

MultiCapCLIP: Auto-Encoding Prompts for Zero-Shot Multilingual Visual Captioning

Aug 25, 2023