Hyeokhyen Kwon

IMUGPT 2.0: Language-Based Cross Modality Transfer for Sensor-Based Human Activity Recognition
Feb 01, 2024
Zikang Leng, Amitrajit Bhattacharjee, Hrudhai Rajasekhar, Lizhe Zhang, Elizabeth Bruda, Hyeokhyen Kwon, Thomas Plötz

On the Benefit of Generative Foundation Models for Human Activity Recognition
Oct 18, 2023
Zikang Leng, Hyeokhyen Kwon, Thomas Plötz

Indoor Localization and Multi-person Tracking Using Privacy Preserving Distributed Camera Network with Edge Computing
May 08, 2023
Hyeokhyen Kwon, Chaitra Hedge, Yashar Kiarashi, Venkata Siva Krishna Madala, Ratan Singh, ArjunSinh Nakum, Robert Tweedy, Leandro Miletto Tonetto, Craig M. Zimring, Gari D. Clifford

Generating Virtual On-body Accelerometer Data from Virtual Textual Descriptions for Human Activity Recognition
May 04, 2023
Zikang Leng, Hyeokhyen Kwon, Thomas Plötz

Fine-grained Human Activity Recognition Using Virtual On-body Acceleration Data
Nov 02, 2022
Zikang Leng, Yash Jain, Hyeokhyen Kwon, Thomas Plötz

IMUTube: Automatic extraction of virtual on-body accelerometry from video for human activity recognition
May 29, 2020
Hyeokhyen Kwon, Catherine Tong, Harish Haresamudram, Yan Gao, Gregory D. Abowd, Nicholas D. Lane, Thomas Ploetz