Moninder Singh

Alignment Studio: Aligning Large Language Models to Particular Contextual Regulations

Mar 08, 2024
Swapnaja Achintalwar, Ioana Baldini, Djallel Bouneffouf, Joan Byamugisha, Maria Chang, Pierre Dognin, Eitan Farchi, Ndivhuwo Makondo, Aleksandra Mojsilovic, Manish Nagireddy, Karthikeyan Natesan Ramamurthy, Inkit Padhi, Orna Raz, Jesus Rios, Prasanna Sattigeri, Moninder Singh, Siphiwe Thwala, Rosario A. Uceda-Sosa, Kush R. Varshney

Ranking Large Language Models without Ground Truth

Feb 21, 2024
Amit Dhurandhar, Rahul Nair, Moninder Singh, Elizabeth Daly, Karthikeyan Natesan Ramamurthy

SocialStigmaQA: A Benchmark to Uncover Stigma Amplification in Generative Language Models

Dec 27, 2023
Manish Nagireddy, Lamogha Chiazor, Moninder Singh, Ioana Baldini

Function Composition in Trustworthy Machine Learning: Implementation Choices, Insights, and Questions

Feb 17, 2023
Manish Nagireddy, Moninder Singh, Samuel C. Hoffman, Evaline Ju, Karthikeyan Natesan Ramamurthy, Kush R. Varshney

On the Safety of Interpretable Machine Learning: A Maximum Deviation Approach

Nov 02, 2022
Dennis Wei, Rahul Nair, Amit Dhurandhar, Kush R. Varshney, Elizabeth M. Daly, Moninder Singh

Anomaly Attribution with Likelihood Compensation

Aug 23, 2022
Tsuyoshi Idé, Amit Dhurandhar, Jiří Navrátil, Moninder Singh, Naoki Abe

Write It Like You See It: Detectable Differences in Clinical Notes By Race Lead To Differential Model Recommendations

May 08, 2022
Hammaad Adam, Ming Ying Yang, Kenrick Cato, Ioana Baldini, Charles Senteio, Leo Anthony Celi, Jiaming Zeng, Moninder Singh, Marzyeh Ghassemi

Ground-Truth, Whose Truth? -- Examining the Challenges with Annotating Toxic Text Datasets

Dec 07, 2021
Kofi Arhin, Ioana Baldini, Dennis Wei, Karthikeyan Natesan Ramamurthy, Moninder Singh

An Empirical Study of Accuracy, Fairness, Explainability, Distributional Robustness, and Adversarial Robustness

Sep 29, 2021
Moninder Singh, Gevorg Ghalachyan, Kush R. Varshney, Reginald E. Bryant
