Puneet K. Dokania

What Makes and Breaks Safety Fine-tuning? A Mechanistic Study
Jul 16, 2024

On Calibration of Object Detectors: Pitfalls, Evaluation and Baselines
May 30, 2024

Placing Objects in Context via Inpainting for Out-of-distribution Segmentation
Feb 26, 2024

RanDumb: A Simple Approach that Questions the Efficacy of Continual Representation Learning
Feb 13, 2024

Segment, Select, Correct: A Framework for Weakly-Supervised Referring Segmentation
Oct 23, 2023

MoCaE: Mixture of Calibrated Experts Significantly Improves Object Detection
Sep 27, 2023

Fine-tuning can cripple your foundation model; preserving features may be the solution
Aug 25, 2023

Towards Building Self-Aware Object Detectors via Reliable Uncertainty Quantification and Calibration
Jul 03, 2023

Graph Inductive Biases in Transformers without Message Passing
May 27, 2023