Avinash Ravichandran

Learning Expressive Prompting With Residuals for Vision Transformers

Mar 27, 2023
Rajshekhar Das, Yonatan Dukler, Avinash Ravichandran, Ashwin Swaminathan

WinCLIP: Zero-/Few-Shot Anomaly Classification and Segmentation

Mar 26, 2023
Jongheon Jeong, Yang Zou, Taewan Kim, Dongqing Zhang, Avinash Ravichandran, Onkar Dabeer

Introspective Cross-Attention Probing for Lightweight Transfer of Pre-trained Models

Mar 07, 2023
Yonatan Dukler, Alessandro Achille, Hao Yang, Varsha Vivek, Luca Zancato, Ben Bowman, Avinash Ravichandran, Charless Fowlkes, Ashwin Swaminathan, Stefano Soatto

A Meta-Learning Approach to Predicting Performance and Data Requirements

Mar 02, 2023
Achin Jain, Gurumurthy Swaminathan, Paolo Favaro, Hao Yang, Avinash Ravichandran, Hrayr Harutyunyan, Alessandro Achille, Onkar Dabeer, Bernt Schiele, Ashwin Swaminathan, Stefano Soatto

ComplETR: Reducing the cost of annotations for object detection in dense scenes with vision transformers

Sep 13, 2022
Achin Jain, Kibok Lee, Gurumurthy Swaminathan, Hao Yang, Bernt Schiele, Avinash Ravichandran, Onkar Dabeer

Semi-supervised Vision Transformers at Scale

Aug 11, 2022
Zhaowei Cai, Avinash Ravichandran, Paolo Favaro, Manchen Wang, Davide Modolo, Rahul Bhotika, Zhuowen Tu, Stefano Soatto

Masked Vision and Language Modeling for Multi-modal Representation Learning

Aug 03, 2022
Gukyeong Kwon, Zhaowei Cai, Avinash Ravichandran, Erhan Bas, Rahul Bhotika, Stefano Soatto

Rethinking Few-Shot Object Detection on a Multi-Domain Benchmark

Jul 22, 2022
Kibok Lee, Hao Yang, Satyaki Chakraborty, Zhaowei Cai, Gurumurthy Swaminathan, Avinash Ravichandran, Onkar Dabeer

X-DETR: A Versatile Architecture for Instance-wise Vision-Language Tasks

Apr 12, 2022
Zhaowei Cai, Gukyeong Kwon, Avinash Ravichandran, Erhan Bas, Zhuowen Tu, Rahul Bhotika, Stefano Soatto

Class-Incremental Learning with Strong Pre-trained Models

Apr 07, 2022
Tz-Ying Wu, Gurumurthy Swaminathan, Zhizhong Li, Avinash Ravichandran, Nuno Vasconcelos, Rahul Bhotika, Stefano Soatto
