Kosuke Nishida

Robust Text-driven Image Editing Method that Adaptively Explores Directions in Latent Spaces of StyleGAN and CLIP

Apr 03, 2023
Tsuyoshi Baba, Kosuke Nishida, Kyosuke Nishida

SlideVQA: A Dataset for Document Visual Question Answering on Multiple Images

Jan 12, 2023
Ryota Tanaka, Kyosuke Nishida, Kosuke Nishida, Taku Hasegawa, Itsumi Saito, Kuniko Saito

Self-Adaptive Named Entity Recognition by Retrieving Unstructured Knowledge

Oct 14, 2022
Kosuke Nishida, Naoki Yoshinaga, Kyosuke Nishida

Improving Few-Shot Image Classification Using Machine- and User-Generated Natural Language Descriptions

Jul 07, 2022
Kosuke Nishida, Kyosuke Nishida, Shuichi Nishioka

Towards Interpretable and Reliable Reading Comprehension: A Pipeline Model with Unanswerability Prediction

Nov 18, 2021
Kosuke Nishida, Kyosuke Nishida, Itsumi Saito, Sen Yoshida

Task-adaptive Pre-training of Language Models with Word Embedding Regularization

Sep 17, 2021
Kosuke Nishida, Kyosuke Nishida, Sen Yoshida

Abstractive Summarization with Combination of Pre-trained Sequence-to-Sequence and Saliency Models

Mar 29, 2020
Itsumi Saito, Kyosuke Nishida, Kosuke Nishida, Junji Tomita

Length-controllable Abstractive Summarization by Guiding with Summary Prototype

Jan 21, 2020
Itsumi Saito, Kyosuke Nishida, Kosuke Nishida, Atsushi Otsuka, Hisako Asano, Junji Tomita, Hiroyuki Shindo, Yuji Matsumoto

Unsupervised Domain Adaptation of Language Models for Reading Comprehension

Nov 25, 2019
Kosuke Nishida, Kyosuke Nishida, Itsumi Saito, Hisako Asano, Junji Tomita

Answering while Summarizing: Multi-task Learning for Multi-hop QA with Evidence Extraction

May 29, 2019
Kosuke Nishida, Kyosuke Nishida, Masaaki Nagata, Atsushi Otsuka, Itsumi Saito, Hisako Asano, Junji Tomita
