Shunsuke Kitada

Majority or Minority: Data Imbalance Learning Method for Named Entity Recognition

Jan 21, 2024
Sota Nemoto, Shunsuke Kitada, Hitoshi Iyatomi

Improving Prediction Performance and Model Interpretability through Attention Mechanisms from Basic and Applied Research Perspectives

Mar 24, 2023
Shunsuke Kitada

Feedback is Needed for Retakes: An Explainable Poor Image Notification Framework for the Visually Impaired

Nov 17, 2022
Kazuya Ohata, Shunsuke Kitada, Hitoshi Iyatomi

DM$^2$S$^2$: Deep Multi-Modal Sequence Sets with Hierarchical Modality Attention

Sep 07, 2022
Shunsuke Kitada, Yuki Iwazaki, Riku Togashi, Hitoshi Iyatomi

Expressions Causing Differences in Emotion Recognition in Social Networking Service Documents

Aug 30, 2022
Tsubasa Nakagawa, Shunsuke Kitada, Hitoshi Iyatomi

Making Attention Mechanisms More Robust and Interpretable with Virtual Adversarial Training for Semi-Supervised Text Classification

Apr 18, 2021
Shunsuke Kitada, Hitoshi Iyatomi

Text Classification through Glyph-aware Disentangled Character Embedding and Semantic Sub-character Augmentation

Nov 09, 2020
Takumi Aoki, Shunsuke Kitada, Hitoshi Iyatomi

Attention Meets Perturbations: Robust and Interpretable Attention with Adversarial Training

Sep 25, 2020
Shunsuke Kitada, Hitoshi Iyatomi

AraDIC: Arabic Document Classification using Image-Based Character Embeddings and Class-Balanced Loss

Jun 20, 2020
Mahmoud Daif, Shunsuke Kitada, Hitoshi Iyatomi

Conversion Prediction Using Multi-task Conditional Attention Networks to Support the Creation of Effective Ad Creative

May 17, 2019
Shunsuke Kitada, Hitoshi Iyatomi, Yoshifumi Seki
