
Bishwamittra Ghosh

In Agents We Trust, but Who Do Agents Trust? Latent Source Preferences Steer LLM Generations

Feb 17, 2026

Rote Learning Considered Useful: Generalizing over Memorized Data in LLMs

Jul 29, 2025

Revisiting Privacy, Utility, and Efficiency Trade-offs when Fine-Tuning Large Language Models

Feb 18, 2025

Logical Consistency of Large Language Models in Fact-checking

Dec 20, 2024

Active Fourier Auditor for Estimating Distributional Properties of ML Models

Oct 10, 2024

Understanding Memorisation in LLMs: Dynamics, Influencing Factors, and Implications

Jul 27, 2024

Towards Reliable Latent Knowledge Estimation in LLMs: In-Context Learning vs. Prompting Based Factual Knowledge Extraction

Apr 19, 2024

Don't Forget What I did?: Assessing Client Contributions in Federated Learning

Mar 11, 2024

How Biased is Your Feature?: Computing Fairness Influence Functions with Global Sensitivity Analysis

Jun 01, 2022

Efficient Learning of Interpretable Classification Rules

May 14, 2022