Mahsa Yarmohammadi

Improving Neural Biasing for Contextual Speech Recognition by Early Context Injection and Text Perturbation

Jul 14, 2024

MultiMUC: Multilingual Template Filling on MUC-4

Jan 29, 2024

MegaWika: Millions of reports and their sources across 50 diverse languages

Jul 13, 2023

Multilingual Coreference Resolution in Multiparty Dialogue

Aug 02, 2022

Everything Is All It Takes: A Multipronged Strategy for Zero-Shot Cross-Lingual Information Extraction

Sep 14, 2021

Gradual Fine-Tuning for Low-Resource Domain Adaptation

Mar 03, 2021

CopyNext: Explicit Span Copying and Alignment in Sequence to Sequence Models

Oct 28, 2020