Preksha Nema

STOAT: Structured Data to Analytical Text With Controls

May 19, 2023
Deepanway Ghosal, Preksha Nema, Aravindan Raghuveer

T-STAR: Truthful Style Transfer using AMR Graph as Intermediate Representation

Dec 03, 2022
Anubhav Jangra, Preksha Nema, Aravindan Raghuveer

A Framework for Rationale Extraction for Deep QA models

Oct 09, 2021
Sahana Ramnath, Preksha Nema, Deep Sahni, Mitesh M. Khapra

The heads hypothesis: A unifying statistical approach towards understanding multi-headed attention in BERT

Jan 22, 2021
Madhura Pande, Aakriti Budhraja, Preksha Nema, Pratyush Kumar, Mitesh M. Khapra

Towards Interpreting BERT for Reading Comprehension Based QA

Oct 18, 2020
Sahana Ramnath, Preksha Nema, Deep Sahni, Mitesh M. Khapra

On the Importance of Local Information in Transformer Based Models

Aug 13, 2020
Madhura Pande, Aakriti Budhraja, Preksha Nema, Pratyush Kumar, Mitesh M. Khapra

Towards Transparent and Explainable Attention Models

Apr 29, 2020
Akash Kumar Mohankumar, Preksha Nema, Sharan Narasimhan, Mitesh M. Khapra, Balaji Vasan Srinivasan, Balaraman Ravindran

Let's Ask Again: Refine Network for Automatic Question Generation

Aug 31, 2019
Preksha Nema, Akash Kumar Mohankumar, Mitesh M. Khapra, Balaji Vasan Srinivasan, Balaraman Ravindran

Frustratingly Poor Performance of Reading Comprehension Models on Non-adversarial Examples

Apr 04, 2019
Soham Parikh, Ananya B. Sai, Preksha Nema, Mitesh M. Khapra

ElimiNet: A Model for Eliminating Options for Reading Comprehension with Multiple Choice Questions

Apr 04, 2019
Soham Parikh, Ananya B. Sai, Preksha Nema, Mitesh M. Khapra
