
Qing Yu

Unsolvable Problem Detection: Evaluating Trustworthiness of Vision Language Models

Mar 29, 2024
Atsuyuki Miyai, Jingkang Yang, Jingyang Zhang, Yifei Ming, Qing Yu, Go Irie, Yixuan Li, Hai Li, Ziwei Liu, Kiyoharu Aizawa

Can Pre-trained Networks Detect Familiar Out-of-Distribution Data?

Oct 12, 2023
Atsuyuki Miyai, Qing Yu, Go Irie, Kiyoharu Aizawa

CPR-Coach: Recognizing Composite Error Actions based on Single-class Training

Sep 21, 2023
Shunli Wang, Qing Yu, Shuaibing Wang, Dingkang Yang, Liuzhen Su, Xiao Zhao, Haopeng Kuang, Peixuan Zhang, Peng Zhai, Lihua Zhang

Open-Set Domain Adaptation with Visual-Language Foundation Models

Jul 30, 2023
Qing Yu, Go Irie, Kiyoharu Aizawa

LoCoOp: Few-Shot Out-of-Distribution Detection via Prompt Learning

Jun 10, 2023
Atsuyuki Miyai, Qing Yu, Go Irie, Kiyoharu Aizawa

Noisy Universal Domain Adaptation via Divergence Optimization for Visual Recognition

Apr 20, 2023
Qing Yu, Atsushi Hashimoto, Yoshitaka Ushiku

Zero-Shot In-Distribution Detection in Multi-Object Settings Using Vision-Language Foundation Models

Apr 10, 2023
Atsuyuki Miyai, Qing Yu, Go Irie, Kiyoharu Aizawa

Rethinking Rotation in Self-Supervised Contrastive Learning: Adaptive Positive or Negative Data Augmentation

Oct 23, 2022
Atsuyuki Miyai, Qing Yu, Daiki Ikami, Go Irie, Kiyoharu Aizawa

A Survey of Video-based Action Quality Assessment

Apr 20, 2022
Shunli Wang, Dingkang Yang, Peng Zhai, Qing Yu, Tao Suo, Zhan Sun, Ka Li, Lihua Zhang

Noisy Annotation Refinement for Object Detection

Oct 20, 2021
Jiafeng Mao, Qing Yu, Yoko Yamakata, Kiyoharu Aizawa
