Mayug Maniparambil

TopoBench: Benchmarking LLMs on Hard Topological Reasoning

Mar 12, 2026

Underrepresented in Foundation Model Pretraining Data? A One-Shot Probe

Mar 04, 2026

Hold-One-Shot-Out (HOSO) for Validation-Free Few-Shot CLIP Adapters

Mar 04, 2026

Pinpoint Counterfactuals: Reducing social bias in foundation models via localized counterfactual generation

Dec 12, 2024

From Unimodal to Multimodal: Scaling up Projectors to Align Modalities

Sep 28, 2024

Test-Time Adaptation with SaLIP: A Cascade of SAM and CLIP for Zero shot Medical Image Segmentation

Apr 09, 2024

Do Vision and Language Encoders Represent the World Similarly?

Jan 10, 2024

Enhancing CLIP with GPT-4: Harnessing Visual Descriptions as Prompts

Aug 08, 2023

The STOIC2021 COVID-19 AI challenge: applying reusable training methodologies to private data

Jun 25, 2023

An Ensemble Deep Learning Approach for COVID-19 Severity Prediction Using Chest CT Scans

May 17, 2023