"Information": models, code, and papers

Multi-Contextual Predictions with Vision Transformer for Video Anomaly Detection

Jun 17, 2022
Joo-Yeon Lee, Woo-Jeoung Nam, Seong-Whan Lee

Tree-constrained Pointer Generator with Graph Neural Network Encodings for Contextual Speech Recognition

Jul 02, 2022
Guangzhi Sun, Chao Zhang, Philip C. Woodland

Outpainting by Queries

Jul 12, 2022
Kai Yao, Penglei Gao, Xi Yang, Kaizhu Huang, Jie Sun, Rui Zhang

Towards Highly Expressive Machine Learning Models of Non-Melanoma Skin Cancer

Jul 09, 2022
Simon M. Thomas, James G. Lefevre, Glenn Baxter, Nicholas A. Hamilton

Eliciting and Learning with Soft Labels from Every Annotator

Jul 02, 2022
Katherine M. Collins, Umang Bhatt, Adrian Weller

Deep Learning to See: Towards New Foundations of Computer Vision

Jun 30, 2022
Alessandro Betti, Marco Gori, Stefano Melacci

Building Korean Sign Language Augmentation (KoSLA) Corpus with Data Augmentation Technique

Jul 12, 2022
Changnam An, Eunkyung Han, Dongmyeong Noh, Ohkyoon Kwon, Sumi Lee, Hyunshim Han

Language-Based Causal Representation Learning

Jul 12, 2022
Blai Bonet, Hector Geffner

Using UAS Imagery and Computer Vision to Support Site-Specific Weed Control in Corn

Jun 02, 2022
Ranjan Sapkota, Paulo Flores

Weakly Supervised Grounding for VQA in Vision-Language Transformers

Jul 05, 2022
Aisha Urooj Khan, Hilde Kuehne, Chuang Gan, Niels Da Vitoria Lobo, Mubarak Shah
