Tiancheng Zhao

How to Evaluate the Generalization of Detection? A Benchmark for Comprehensive Open-Vocabulary Detection

Aug 25, 2023

RS5M: A Large Scale Vision-Language Dataset for Remote Sensing Vision-Language Foundation Model

Jun 20, 2023

OmDet: Language-Aware Object Detection with Large-scale Vision-Language Multi-dataset Pre-training

Sep 10, 2022

Understanding the Effect of Data Augmentation in Self-supervised Anomaly Detection

Aug 30, 2022

VL-CheckList: Evaluating Pre-trained Vision-Language Models with Objects, Attributes and Relations

Jul 01, 2022

SF-QA: Simple and Fair Evaluation Library for Open-domain Question Answering

Jan 06, 2021

VisualSparta: Sparse Transformer Fragment-level Matching for Large-scale Text-to-Image Search

Jan 01, 2021

SPARTA: Efficient Open-Domain Question Answering via Sparse Transformer Matching Retrieval

Sep 28, 2020

Report from the NSF Future Directions Workshop, Toward User-Oriented Agents: Research Directions and Challenges

Jun 10, 2020

"None of the Above": Measure Uncertainty in Dialog Response Retrieval

Apr 04, 2020