
Publications by Mrinmaya Sachan


Poor Man's Quality Estimation: Predicting Reference-Based MT Metrics Without the Reference

Jan 28, 2023
Vilém Zouhar, Shehzaad Dhuliawala, Wangchunshu Zhou, Nico Daheim, Tom Kocmi, Yuchen Eleanor Jiang, Mrinmaya Sachan

Opportunities and Challenges in Neural Dialog Tutoring

Jan 24, 2023
Jakub Macina, Nico Daheim, Lingzhi Wang, Tanmay Sinha, Manu Kapur, Iryna Gurevych, Mrinmaya Sachan

Understanding Stereotypes in Language Models: Towards Robust Measurement and Zero-Shot Debiasing

Dec 20, 2022
Justus Mattern, Zhijing Jin, Mrinmaya Sachan, Rada Mihalcea, Bernhard Schölkopf

Distilling Multi-Step Reasoning Capabilities of Large Language Models into Smaller Models via Semantic Decompositions

Dec 01, 2022
Kumar Shridhar, Alessandro Stolfo, Mrinmaya Sachan

Automatic Generation of Socratic Subquestions for Teaching Math Word Problems

Nov 23, 2022
Kumar Shridhar, Jakub Macina, Mennatallah El-Assady, Tanmay Sinha, Manu Kapur, Mrinmaya Sachan

Autoregressive Structured Prediction with Language Models

Nov 17, 2022
Tianyu Liu, Yuchen Jiang, Nicholas Monath, Ryan Cotterell, Mrinmaya Sachan

Beyond prompting: Making Pre-trained Language Models Better Zero-shot Learners by Clustering Representations

Oct 29, 2022
Yu Fei, Ping Nie, Zhao Meng, Roger Wattenhofer, Mrinmaya Sachan

Differentially Private Language Models for Secure Data Sharing

Oct 26, 2022
Justus Mattern, Zhijing Jin, Benjamin Weggenmann, Bernhard Schoelkopf, Mrinmaya Sachan
