Pan Hui

VR PreM+ : An Immersive Pre-learning Branching Visualization System for Museum Tours

Nov 01, 2023
Ze Gao, Xiang Li, Changkun Liu, Xian Wang, Anqi Wang, Liang Yang, Yuyang Wang, Pan Hui, Tristan Braud

We present VR PreM+, an innovative VR system designed to enhance web exploration beyond traditional computer screens. Unlike static 2D displays, VR PreM+ leverages 3D environments to create an immersive pre-learning experience. Through keyword-based information retrieval, it allows users to manage and connect various content sources in a dynamic 3D space, improving communication and data comparison. We conducted preliminary and user studies that demonstrated efficient information retrieval, increased user engagement, and a greater sense of presence. These findings yielded three design guidelines for future VR information systems: display, interaction, and user-centric design. VR PreM+ bridges the gap between traditional web browsing and immersive VR, offering an interactive and comprehensive approach to information acquisition. It holds promise for research, education, and beyond.
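
The keyword-based retrieval the abstract describes could, in its simplest form, rank content sources by keyword overlap with the query. The following sketch is purely illustrative; the `retrieve` helper, the corpus, and the scoring are assumptions, not the paper's implementation:

```python
# Minimal keyword-based retrieval: rank content sources by how many
# query keywords appear in them. A toy stand-in for the general idea.
def retrieve(query, sources, top_k=3):
    keywords = set(query.lower().split())
    scored = []
    for title, text in sources.items():
        score = len(keywords & set(text.lower().split()))  # keyword overlap
        if score > 0:
            scored.append((score, title))
    scored.sort(key=lambda s: -s[0])  # stable sort: ties keep dict order
    return [title for _, title in scored[:top_k]]

sources = {
    "exhibit-a": "bronze vessels from the shang dynasty",
    "exhibit-b": "impressionist paintings of the nineteenth century",
    "exhibit-c": "ritual bronze mirrors and vessels",
}
print(retrieve("bronze vessels", sources))  # ['exhibit-a', 'exhibit-c']
```

A production system would use stemming, TF-IDF weighting, or embeddings, but the ranking-by-relevance structure is the same.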

* Accepted for publication at The Eleventh International Symposium of Chinese CHI (Chinese CHI 2023), Bali 

A Satellite Imagery Dataset for Long-Term Sustainable Development in United States Cities

Aug 01, 2023
Yanxin Xi, Yu Liu, Tong Li, Jintao Ding, Yunke Zhang, Sasu Tarkoma, Yong Li, Pan Hui

Cities play an important role in achieving the sustainable development goals (SDGs) by promoting economic growth and meeting social needs. Satellite imagery, in particular, is a promising data source for studying sustainable urban development. However, a comprehensive dataset covering multiple U.S. cities, years, scales, and indicators for SDG monitoring has been lacking. To support research on SDGs in U.S. cities, we develop a satellite imagery dataset using deep learning models for five SDGs, containing 25 sustainable development indicators. The dataset covers the 100 most populated U.S. cities and the corresponding Census Block Groups from 2014 to 2023. Specifically, we collect satellite imagery and identify objects with state-of-the-art object detection and semantic segmentation models to observe cities from a bird's-eye view. We further gather population, nighttime light, survey, and built environment data to depict SDGs regarding poverty, health, education, inequality, and living environment. We anticipate the dataset will help urban policymakers and researchers advance SDG-related studies, especially in applying satellite imagery to monitor long-term and multi-scale SDGs in cities.
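
A pipeline like the one described ends with an aggregation step that turns per-image detections into per-region indicators. The sketch below illustrates that step only; the class names, regions, and normalization are hypothetical, not the paper's actual indicator definitions:

```python
# Illustrative aggregation: convert object detections into a per-region
# indicator (here, detections of a target class per 1,000 residents).
from collections import defaultdict

def indicator_per_region(detections, population, target_class="tree"):
    counts = defaultdict(int)
    for region, cls in detections:  # one (region, class) row per detection
        if cls == target_class:
            counts[region] += 1
    return {r: 1000 * counts[r] / population[r] for r in population}

detections = [("bg-1", "tree"), ("bg-1", "car"), ("bg-1", "tree"), ("bg-2", "tree")]
population = {"bg-1": 500, "bg-2": 2000}
print(indicator_per_region(detections, population))
# {'bg-1': 4.0, 'bg-2': 0.5}
```

Normalizing by population makes indicators comparable across Census Block Groups of very different sizes, which is what enables the multi-scale monitoring the abstract mentions.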

* 20 pages, 5 figures 

Efficient Task Offloading Algorithm for Digital Twin in Edge/Cloud Computing Environment

Jul 13, 2023
Ziru Zhang, Xuling Zhang, Guangzhi Zhu, Yuyang Wang, Pan Hui

In the era of the Internet of Things (IoT), the Digital Twin (DT) is envisioned to empower various areas as a bridge between physical objects and the digital world. Through virtualization and simulation techniques, multiple functions can be achieved by leveraging computing resources. In this process, Mobile Cloud Computing (MCC) and Mobile Edge Computing (MEC) have become two key enablers of real-time feedback. However, existing works consider only edge servers or only cloud servers in their DT system models, and these models ignore DTs fed by more than one data source. In this paper, we propose a new DT system model for a heterogeneous MEC/MCC environment, in which each DT is maintained on one of the servers via multiple data collection devices. We also consider the offloading decision-making problem and propose a new offloading scheme based on Distributed Deep Learning (DDL). Simulation results demonstrate that our proposed algorithm effectively and efficiently decreases the system's average latency and energy consumption, achieving significant improvement over the baselines under the dynamic environment of DTs.
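
The latency/energy trade-off behind such offloading decisions can be sketched as a greedy baseline: estimate the cost of each candidate server and pick the cheapest. The DDL scheme in the paper learns this decision; the cost model and all numbers below are illustrative assumptions:

```python
# Greedy offloading baseline: weighted sum of latency and transmission
# energy per candidate server. Parameter values are made up.
def offload_cost(task_bits, cpu_cycles, server, w_latency=0.5, w_energy=0.5):
    t_tx = task_bits / server["bandwidth"]    # transmission time (s)
    t_cpu = cpu_cycles / server["cpu_freq"]   # computation time (s)
    e_tx = server["tx_power"] * t_tx          # transmission energy (J)
    return w_latency * (t_tx + t_cpu) + w_energy * e_tx

servers = {
    "edge":  {"bandwidth": 1e7, "cpu_freq": 5e9,  "tx_power": 0.1},
    "cloud": {"bandwidth": 1e6, "cpu_freq": 2e10, "tx_power": 0.5},
}
task_bits, cpu_cycles = 8e6, 1e9
best = min(servers, key=lambda s: offload_cost(task_bits, cpu_cycles, servers[s]))
print(best)  # edge
```

The cloud's faster CPU loses here because the uplink dominates; a learned policy improves on such a static rule by adapting the decision to changing network states.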


Lightweight Modeling of User Context Combining Physical and Virtual Sensor Data

Jun 28, 2023
Mattia Giovanni Campana, Dimitris Chatzopoulos, Franca Delmastro, Pan Hui

The multitude of data generated by sensors available on users' mobile devices, combined with advances in machine learning techniques, supports context-aware services in recognizing the current situation of a user (i.e., the physical context) and optimizing the system's personalization features. However, context-awareness performance mainly depends on the accuracy of the context inference process, which is strictly tied to the availability of large-scale labeled datasets. In this work, we present a framework developed to collect datasets containing heterogeneous sensing data from personal mobile devices. The framework was used by 3 volunteer users for two weeks, generating a dataset with more than 36K samples and 1331 features. We also propose a lightweight approach to modeling the user context that can efficiently perform the entire reasoning process on the user's mobile device. To this end, we evaluated six dimensionality reduction techniques to optimize the context classification. Experimental results on the generated dataset show a 10x speed-up and a feature reduction of more than 90% while keeping the accuracy loss below 3%.
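
To make the feature-reduction step concrete, the sketch below applies one common dimensionality reduction technique, a PCA-style projection via SVD, to synthetic data. The paper compares six techniques; this is a generic stand-in, and the threshold and data are illustrative only:

```python
# PCA-style feature reduction: keep enough principal components to
# explain 95% of the variance. Synthetic data, illustrative threshold.
import numpy as np

def reduce_features(X, var_threshold=0.95):
    Xc = X - X.mean(axis=0)                       # center the features
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = np.cumsum(S**2) / np.sum(S**2)    # cumulative variance ratio
    k = int(np.searchsorted(explained, var_threshold)) + 1
    return Xc @ Vt[:k].T, k                       # projected data, components kept

rng = np.random.default_rng(0)
# 200 samples, 50 raw features, but only ~5 underlying factors
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 50))
X_red, k = reduce_features(X)
print(k)  # a handful of components instead of 50
```

Collapsing 1331 raw features this way is what makes on-device reasoning feasible: the classifier afterwards operates on a few dozen components instead of the full sensor feature set.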


Towards Computational Architecture of Liberty: A Comprehensive Survey on Deep Learning for Generating Virtual Architecture in the Metaverse

Apr 30, 2023
Anqi Wang, Jiahua Dong, Jiachuan Shen, Lik-Hang Lee, Pan Hui

3D shape generation techniques based on deep learning are attracting increasing attention from both the computer vision and architectural design communities. This survey investigates and compares the latest approaches to 3D object generation with deep generative models (DGMs), including Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), 3D-aware images, and diffusion models. We discuss 187 articles (80.7% published between 2018 and 2022) to review the generative possibilities for architecture in virtual environments, restricted to architectural form. We provide an overview of architectural research, virtual environments, and related technical approaches, followed by a review of recent trends in discrete voxel generation, 3D models generated from 2D images, and conditional parameters. We highlight under-explored issues in 3D generation and parameterized control that are worth further investigation. Moreover, we speculate that four research agendas, namely data limitation, editability, evaluation metrics, and human-computer interaction, are important enablers of ubiquitous interaction with immersive systems in architecture for computer-aided design. Our work contributes to researchers' understanding of the current potential and future needs of deep learning in generating virtual architecture.

* 35 pages, 14 figures, and 3 tables 

Can ChatGPT Reproduce Human-Generated Labels? A Study of Social Computing Tasks

Apr 22, 2023
Yiming Zhu, Peixian Zhang, Ehsan-Ul Haq, Pan Hui, Gareth Tyson

The release of ChatGPT has uncovered a range of possibilities whereby large language models (LLMs) can substitute for human intelligence. In this paper, we seek to understand whether ChatGPT has the potential to reproduce human-generated label annotations in social computing tasks. Such an achievement could significantly reduce the cost and complexity of social computing research. To this end, we use ChatGPT to relabel five seminal datasets covering stance detection (2x), sentiment analysis, hate speech, and bot detection. Our results highlight that ChatGPT does have the potential to handle these data annotation tasks, although a number of challenges remain. Overall, ChatGPT obtains an average accuracy of 0.609. Performance is highest on the sentiment analysis dataset, where ChatGPT correctly annotates 64.9% of tweets. Yet, we show that performance varies substantially across individual labels. We believe this work can open up new lines of analysis and act as a basis for future research into the exploitation of ChatGPT for human annotation tasks.
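
The evaluation the abstract reports, overall accuracy plus a per-label breakdown, amounts to comparing model labels against the human gold labels. The toy labels below are made up; the paper's 0.609 figure is not reproduced here:

```python
# Agreement between model labels and human gold labels, overall and
# broken down per gold label, as in the abstract's analysis.
from collections import defaultdict

def agreement(human, model):
    overall = sum(h == m for h, m in zip(human, model)) / len(human)
    per_label = defaultdict(lambda: [0, 0])  # label -> [correct, total]
    for h, m in zip(human, model):
        per_label[h][0] += int(h == m)
        per_label[h][1] += 1
    return overall, {lbl: c / t for lbl, (c, t) in per_label.items()}

human = ["pos", "neg", "pos", "neutral", "neg", "pos"]
model = ["pos", "neg", "neg", "neutral", "pos", "pos"]
overall, per_label = agreement(human, model)
print(round(overall, 3))  # 0.667
```

The per-label view is what exposes the variance the abstract highlights: an annotator can look accurate on average while failing badly on a minority label.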


One Small Step for Generative AI, One Giant Leap for AGI: A Complete Survey on ChatGPT in AIGC Era

Apr 04, 2023
Chaoning Zhang, Chenshuang Zhang, Chenghao Li, Yu Qiao, Sheng Zheng, Sumit Kumar Dam, Mengchun Zhang, Jung Uk Kim, Seong Tae Kim, Jinwoo Choi, Gyeong-Moon Park, Sung-Ho Bae, Lik-Hang Lee, Pan Hui, In So Kweon, Choong Seon Hong

OpenAI has recently released GPT-4 (a.k.a. ChatGPT Plus), which is demonstrated to be one small step for generative AI (GAI), but one giant leap for artificial general intelligence (AGI). Since its official release in November 2022, ChatGPT has quickly attracted numerous users, with extensive media coverage. Such unprecedented attention has also motivated numerous researchers to investigate ChatGPT from various aspects. According to Google Scholar, there are more than 500 articles with ChatGPT in their titles or mentioning it in their abstracts. Considering this, a review is urgently needed, and our work fills this gap. Overall, this work is the first to survey ChatGPT with a comprehensive review of its underlying technology, applications, and challenges. Moreover, we present an outlook on how ChatGPT might evolve to realize general-purpose AIGC (a.k.a. AI-generated content), which would be a significant milestone in the development of AGI.

* A Survey on ChatGPT and GPT-4, 29 pages. Feedback is appreciated (chaoningzhang1990@gmail.com) 

Bi-directional Digital Twin and Edge Computing in the Metaverse

Nov 16, 2022
Jiadong Yu, Ahmad Alhilal, Pan Hui, Danny H. K. Tsang

The Metaverse has emerged to extend our lifestyle beyond physical limitations. As essential components of the Metaverse, digital twins (DTs) are the digital replicas of physical items. End users access the Metaverse using a variety of mostly lightweight devices, e.g., head-mounted devices (HMDs). Multi-access edge computing (MEC) and edge networks provide responsive services to these end users, enabling an immersive Metaverse experience. Although physical objects, end users, and edge computing systems are anticipated to be represented as DTs in the Metaverse, the construction of these DTs and the interplay between them have not been investigated. In this paper, we discuss the bidirectional reliance between the DT and the MEC system and investigate the creation of DTs of objects and users on MEC servers, as well as DT-assisted edge computing (DTEC). We also study the interplay between DTs and DTEC to allocate resources fairly and adequately and thus provide an immersive experience in the Metaverse. Owing to dynamic network states (e.g., channel states) and user mobility, we further discuss the interplay between local DTECs (on local MEC servers) and the global DTEC (on a cloud server) to cope with handover among MEC servers and avoid intermittent Metaverse services.


Towards Reproducible Evaluations for Flying Drone Controllers in Virtual Environments

Jul 29, 2022
Zheng Li, Yiming Huang, Yui-Pan Yau, Pan Hui, Lik-Hang Lee

Research attention on natural user interfaces (NUIs) for drone flight is rising. Nevertheless, NUIs are highly diversified and are primarily evaluated in different physical environments, making performance hard to compare across solutions. To address this issue, we propose a virtual environment, VRFlightSim, enabling comparative evaluations with enriched drone flight details. We first replicated a state-of-the-art (SOTA) interface and designed two tasks (crossing and pointing) in our virtual environment. Two user studies with 13 participants then demonstrate the necessity of VRFlightSim and further highlight the potential of open-data interface designs.

* Accepted in IROS 2022 