Masaaki Nagata

Answering while Summarizing: Multi-task Learning for Multi-hop QA with Evidence Extraction

May 29, 2019
Kosuke Nishida, Kyosuke Nishida, Masaaki Nagata, Atsushi Otsuka, Itsumi Saito, Hisako Asano, Junji Tomita

Direct Output Connection for a High-Rank Language Model

Aug 31, 2018
Sho Takase, Jun Suzuki, Masaaki Nagata

Source-side Prediction for Neural Headline Generation

Dec 22, 2017
Shun Kiyono, Sho Takase, Jun Suzuki, Naoaki Okazaki, Kentaro Inui, Masaaki Nagata

Input-to-Output Gate to Improve RNN Language Models

Sep 28, 2017
Sho Takase, Jun Suzuki, Masaaki Nagata

Cutting-off Redundant Repeating Generations for Neural Abstractive Summarization

Feb 13, 2017
Jun Suzuki, Masaaki Nagata

Reading Comprehension using Entity-based Memory Network

Feb 01, 2017
Xun Wang, Katsuhito Sudoh, Masaaki Nagata, Tomohide Shibata, Daisuke Kawahara, Sadao Kurohashi

Enumeration of Extractive Oracle Summaries

Jan 06, 2017
Tsutomu Hirao, Masaaki Nishino, Jun Suzuki, Masaaki Nagata
