Kentaro Inui

Lower Perplexity is Not Always Human-Like

Jun 02, 2021
Tatsuki Kuribayashi, Yohei Oseki, Takumi Ito, Ryo Yoshida, Masayuki Asahara, Kentaro Inui

SyGNS: A Systematic Generalization Testbed Based on Natural Language Semantics

Jun 02, 2021
Hitomi Yanaka, Koji Mineshima, Kentaro Inui

Learning to Learn to be Right for the Right Reasons

Apr 23, 2021
Pride Kavumba, Benjamin Heinzerling, Ana Brassard, Kentaro Inui

A Comparative Study on Collecting High-Quality Implicit Reasonings at a Large-scale

Apr 16, 2021
Keshav Singh, Paul Reisert, Naoya Inoue, Kentaro Inui

Pseudo Zero Pronoun Resolution Improves Zero Anaphora Resolution

Apr 15, 2021
Ryuto Konno, Shun Kiyono, Yuichiroh Matsubayashi, Hiroki Ouchi, Kentaro Inui

Two Training Strategies for Improving Relation Extraction over Universal Graph

Feb 12, 2021
Qin Dai, Naoya Inoue, Ryo Takahashi, Kentaro Inui

Exploring Transitivity in Neural NLI Models through Veridicality

Jan 26, 2021
Hitomi Yanaka, Koji Mineshima, Kentaro Inui

Efficient Estimation of Influence of a Training Instance

Dec 08, 2020
Sosuke Kobayashi, Sho Yokoi, Jun Suzuki, Kentaro Inui

An Empirical Study of Contextual Data Augmentation for Japanese Zero Anaphora Resolution

Nov 04, 2020
Ryuto Konno, Yuichiroh Matsubayashi, Shun Kiyono, Hiroki Ouchi, Ryo Takahashi, Kentaro Inui

PheMT: A Phenomenon-wise Dataset for Machine Translation Robustness on User-Generated Contents

Nov 04, 2020
Ryo Fujii, Masato Mita, Kaori Abe, Kazuaki Hanawa, Makoto Morishita, Jun Suzuki, Kentaro Inui
