Ameya Prabhu

kNN-CLIP: Retrieval Enables Training-Free Segmentation on Continually Expanding Large Vocabularies

Apr 15, 2024

Wu's Method can Boost Symbolic AI to Rival Silver Medalists and AlphaGeometry to Outperform Gold Medalists at IMO Geometry

Apr 11, 2024

No "Zero-Shot" Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance

Apr 08, 2024

Lifelong Benchmarks: Efficient Model Evaluation in an Era of Rapid Progress

Feb 29, 2024

Corrective Machine Unlearning

Feb 21, 2024

RanDumb: A Simple Approach that Questions the Efficacy of Continual Representation Learning

Feb 13, 2024

From Categories to Classifier: Name-Only Continual Learning by Exploring the Web

Nov 19, 2023

Inverse Scaling: When Bigger Isn't Better

Jun 15, 2023

Rapid Adaptation in Online Continual Learning: Are We Evaluating It Right?

May 16, 2023

Online Continual Learning Without the Storage Constraint

May 16, 2023