Koji Inoue

Multilingual Turn-taking Prediction Using Voice Activity Projection

Mar 14, 2024
Koji Inoue, Bing'er Jiang, Erik Ekstedt, Tatsuya Kawahara, Gabriel Skantze

Evaluation of a semi-autonomous attentive listening system with takeover prompting

Feb 21, 2024
Haruki Kawai, Divesh Lala, Koji Inoue, Keiko Ochi, Tatsuya Kawahara

Acknowledgment of Emotional States: Generating Validating Responses for Empathetic Dialogue

Feb 20, 2024
Zi Haur Pang, Yahui Fu, Divesh Lala, Keiko Ochi, Koji Inoue, Tatsuya Kawahara

An Analysis of User Behaviors for Objectively Evaluating Spoken Dialogue Systems

Jan 23, 2024
Koji Inoue, Divesh Lala, Keiko Ochi, Tatsuya Kawahara, Gabriel Skantze

Real-time and Continuous Turn-taking Prediction Using Voice Activity Projection

Jan 10, 2024
Koji Inoue, Bing'er Jiang, Erik Ekstedt, Tatsuya Kawahara, Gabriel Skantze

An Analysis of User Behaviours for Objectively Evaluating Spoken Dialogue Systems

Jan 10, 2024
Koji Inoue, Divesh Lala, Keiko Ochi, Tatsuya Kawahara, Gabriel Skantze

Towards Objective Evaluation of Socially-Situated Conversational Robots: Assessing Human-Likeness through Multimodal User Behaviors

Aug 21, 2023
Koji Inoue, Divesh Lala, Keiko Ochi, Tatsuya Kawahara, Gabriel Skantze

Reasoning before Responding: Integrating Commonsense-based Causality Explanation for Empathetic Response Generation

Jul 28, 2023
Yahui Fu, Koji Inoue, Chenhui Chu, Tatsuya Kawahara

I Know Your Feelings Before You Do: Predicting Future Affective Reactions in Human-Computer Dialogue

Mar 17, 2023
Yuanchao Li, Koji Inoue, Leimin Tian, Changzeng Fu, Carlos Ishi, Hiroshi Ishiguro, Tatsuya Kawahara, Catherine Lai
