Angelica Lim

MotionScript: Natural Language Descriptions for Expressive 3D Human Motions

Dec 19, 2023
Payam Jome Yazdian, Eric Liu, Li Cheng, Angelica Lim

Emotional Theory of Mind: Bridging Fast Visual Processing with Slow Linguistic Reasoning

Oct 30, 2023
Yasaman Etesam, Ozge Nilay Yalcin, Chuxuan Zhang, Angelica Lim

An MCTS-DRL Based Obstacle and Occlusion Avoidance Methodology in Robotic Follow-Ahead Applications

Sep 28, 2023
Sahar Leisiazar, Edward J. Park, Angelica Lim, Mo Chen

Contextual Emotion Estimation from Image Captions

Sep 22, 2023
Vera Yang, Archita Srivastava, Yasaman Etesam, Chuxuan Zhang, Angelica Lim

Towards Inclusive HRI: Using Sim2Real to Address Underrepresentation in Emotion Expression Recognition

Aug 15, 2022
Saba Akhyani, Mehryar Abbasi Boroujeni, Mo Chen, Angelica Lim

Read the Room: Adapting a Robot's Voice to Ambient and Social Contexts

May 10, 2022
Emma Hughson, Paige Tuttosi, Akihiro Matsufuji, Angelica Lim

Data-driven emotional body language generation for social robotics

May 02, 2022
Mina Marmpena, Fernando Garcia, Angelica Lim, Nikolas Hemion, Thomas Wennekers

The Many Faces of Anger: A Multicultural Video Dataset of Negative Emotions in the Wild (MFA-Wild)

Dec 10, 2021
Roya Javadi, Angelica Lim

Developing a Data-Driven Categorical Taxonomy of Emotional Expressions in Real World Human Robot Interactions

Mar 07, 2021
Ghazal Saheb Jam, Jimin Rhim, Angelica Lim

SFU-Store-Nav: A Multimodal Dataset for Indoor Human Navigation

Oct 28, 2020
Zhitian Zhang, Jimin Rhim, Taher Ahmadi, Kefan Yang, Angelica Lim, Mo Chen
