Kazushi Ikeda

Two-Stage Triplet Loss Training with Curriculum Augmentation for Audio-Visual Retrieval

Oct 20, 2023
Donghuo Zeng, Kazushi Ikeda

Topic-switch adapted Japanese Dialogue System based on PLATO-2

Feb 22, 2023
Donghuo Zeng, Jianming Wu, Yanan Wang, Kazunori Matsumoto, Gen Hattori, Kazushi Ikeda

Do I Have Your Attention: A Large Scale Engagement Prediction Dataset and Baselines

Feb 01, 2023
Monisha Singh, Ximi Hoque, Donghuo Zeng, Yanan Wang, Kazushi Ikeda, Abhinav Dhall

Complete Cross-triplet Loss in Label Space for Audio-visual Cross-modal Retrieval

Nov 07, 2022
Donghuo Zeng, Yanan Wang, Jianming Wu, Kazushi Ikeda

Compositionality-Aware Graph2Seq Learning

Jan 28, 2022
Takeshi D. Itoh, Takatomi Kubo, Kazushi Ikeda

Multi-Level Attention Pooling for Graph Neural Networks: Unifying Graph Representations with Multiple Localities

Mar 02, 2021
Takeshi D. Itoh, Takatomi Kubo, Kazushi Ikeda

Detecting Unknown Behaviors by Pre-defined Behaviours: An Bayesian Non-parametric Approach

Dec 11, 2019
Jin Watanabe, Takatomi Kubo, Fan Yang, Kazushi Ikeda

A Hierarchical Mixture Density Network

Oct 23, 2019
Fan Yang, Jaymar Soriano, Takatomi Kubo, Kazushi Ikeda

Towards Generation of Visual Attention Map for Source Code

Aug 13, 2019
Takeshi D. Itoh, Takatomi Kubo, Kiyoka Ikeda, Yuki Maruno, Yoshiharu Ikutani, Hideaki Hata, Kenichi Matsumoto, Kazushi Ikeda
