Harold Soh

Probable Object Location (POLo) Score Estimation for Efficient Object Goal Navigation

Nov 14, 2023
Jiaming Wang, Harold Soh

To advance the field of autonomous robotics, particularly in object search tasks within unexplored environments, we introduce a novel framework centered around the Probable Object Location (POLo) score. Utilizing a 3D object probability map, the POLo score allows the agent to make data-driven decisions for efficient object search. We further enhance the framework's practicality by introducing POLoNet, a neural network trained to approximate the computationally intensive POLo score. Our approach addresses critical limitations of both end-to-end reinforcement learning methods, which suffer from memory decay over long-horizon tasks, and traditional map-based methods that neglect visibility constraints. Our experiments, involving the first phase of the OVMM 2023 challenge, demonstrate that an agent equipped with POLoNet significantly outperforms a range of baseline methods, including end-to-end RL techniques and prior map-based strategies. To provide a comprehensive evaluation, we introduce new performance metrics that offer insights into the efficiency and effectiveness of various agents in object goal navigation.

* Under review 
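
Below is a minimal sketch of the idea behind a probable-object-location style score: candidate viewpoints are ranked by how much not-yet-observed object probability they would reveal from a 3D probability map. The function name and the crude range-only visibility check are hypothetical simplifications for illustration; the paper's actual POLo score handles visibility constraints more carefully and is approximated by POLoNet rather than computed exhaustively.

```python
import numpy as np

def polo_style_score(prob_map, observed, viewpoint, max_range=5.0):
    """Score a candidate viewpoint by the unobserved object probability
    within sensor range (a toy stand-in for the POLo score)."""
    zs, ys, xs = np.indices(prob_map.shape)
    coords = np.stack([zs, ys, xs], axis=-1).astype(float)
    dists = np.linalg.norm(coords - np.asarray(viewpoint, dtype=float), axis=-1)
    visible = (dists <= max_range) & (~observed)   # crude visibility: range only
    return float(prob_map[visible].sum())

# Toy 3D object probability map over a 10x10x10 voxel grid.
rng = np.random.default_rng(0)
prob_map = rng.random((10, 10, 10))
prob_map /= prob_map.sum()                         # normalise to a probability map
observed = np.zeros_like(prob_map, dtype=bool)     # nothing observed yet

candidates = [(5, 5, 5), (0, 0, 0), (9, 9, 9)]
scores = {c: polo_style_score(prob_map, observed, c) for c in candidates}
best = max(scores, key=scores.get)
print(scores, "-> navigate towards", best)
```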

GRaCE: Optimizing Grasps to Satisfy Ranked Criteria in Complex Scenarios

Oct 02, 2023
Tasbolat Taunyazov, Kelvin Lin, Harold Soh

This paper addresses the multi-faceted problem of robot grasping, where multiple criteria may conflict and differ in importance. We introduce Grasp Ranking and Criteria Evaluation (GRaCE), a novel approach that employs hierarchical rule-based logic and a rank-preserving utility function to optimize grasps based on various criteria such as stability, kinematic constraints, and goal-oriented functionalities. Additionally, we propose GRaCE-OPT, a hybrid optimization strategy that combines gradient-based and gradient-free methods to effectively navigate the complex, non-convex utility function. Experimental results in both simulated and real-world scenarios show that GRaCE requires fewer samples to achieve comparable or superior performance relative to existing methods. The modular architecture of GRaCE allows for easy customization and adaptation to specific application needs.
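
As a rough illustration of a rank-preserving utility, the toy function below (hypothetical, with one criterion per rank) weights criteria geometrically by rank so that fully satisfying a higher-ranked criterion always outweighs every lower-ranked criterion combined; GRaCE's actual hierarchical rule-based formulation is more general than this sketch.

```python
import numpy as np

def rank_preserving_utility(satisfactions, ranks):
    """Toy rank-preserving utility: with one criterion per rank and scores in
    [0, 1], fully satisfying a criterion of rank r always outweighs satisfying
    every strictly lower-ranked criterion combined. Rank 0 is most important."""
    satisfactions = np.asarray(satisfactions, dtype=float)
    ranks = np.asarray(ranks)
    n_ranks = ranks.max() + 1
    weights = 2.0 ** (n_ranks - 1 - ranks)   # geometric weights enforce the hierarchy
    return float(np.sum(weights * satisfactions))

# Grasp A: stable and reachable but poorly suited to the task.
# Grasp B: task-suited only. Stability (rank 0) dominates, so A scores higher.
criteria_ranks = [0, 1, 2]   # stability, kinematic feasibility, task suitability
print(rank_preserving_utility([0.9, 0.8, 0.10], criteria_ranks))   # ~5.3
print(rank_preserving_utility([0.2, 0.3, 0.95], criteria_ranks))   # ~2.35
```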


Refining 6-DoF Grasps with Context-Specific Classifiers

Aug 14, 2023
Tasbolat Taunyazov, Heng Zhang, John Patrick Eala, Na Zhao, Harold Soh

In this work, we present GraspFlow, a refinement approach for generating context-specific grasps. We formulate the problem of grasp synthesis as a sampling problem: we seek to sample from a context-conditioned probability distribution of successful grasps. However, this target distribution is unknown. As a solution, we devise a discriminator gradient-flow method to evolve grasps obtained from a simpler distribution in a manner that mimics sampling from the desired target distribution. Unlike existing approaches, GraspFlow is modular, allowing grasps that satisfy multiple criteria to be obtained simply by incorporating the relevant discriminators. It is also simple to implement, requiring minimal code given existing auto-differentiation libraries and suitable discriminators. Experiments show that GraspFlow generates stable and executable grasps on a real-world Panda robot for a diverse range of objects. In particular, in 60 trials on 20 different household objects, the first attempted grasp was successful 94% of the time, and 100% grasp success was achieved by the second grasp. Moreover, incorporating a functional discriminator for robot-human handover improved the functional aspect of the grasp by up to 33%.

* IROS 2023, Code and Datasets are available at https://github.com/tasbolat1/graspflow 
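
The sketch below conveys the flavour of discriminator-guided grasp refinement: stand-in classifiers score grasp parameter vectors, and candidates are nudged up the summed log-probabilities with a little noise. The networks, shapes, and step sizes are arbitrary placeholders, and this simplified noisy gradient ascent is not the exact gradient-flow update used in GraspFlow.

```python
import torch

# Hypothetical stand-ins for learned context-specific classifiers; each maps a
# grasp parameter vector (e.g. 6-DoF pose features) to a success probability.
stability_clf = torch.nn.Sequential(torch.nn.Linear(6, 32), torch.nn.ReLU(),
                                    torch.nn.Linear(32, 1), torch.nn.Sigmoid())
handover_clf = torch.nn.Sequential(torch.nn.Linear(6, 32), torch.nn.ReLU(),
                                   torch.nn.Linear(32, 1), torch.nn.Sigmoid())

def refine_grasps(grasps, discriminators, steps=50, eta=0.05, noise=0.01):
    """Simplified discriminator-guided refinement: ascend the summed
    log-probabilities of all discriminators, with Langevin-style noise."""
    g = grasps.clone().requires_grad_(True)
    for _ in range(steps):
        score = sum(torch.log(d(g) + 1e-8).sum() for d in discriminators)
        (grad,) = torch.autograd.grad(score, g)
        with torch.no_grad():
            g += eta * grad + noise * torch.randn_like(g)
    return g.detach()

initial = torch.randn(16, 6)                       # 16 candidate grasps
refined = refine_grasps(initial, [stability_clf, handover_clf])
print(refined.shape)
```

Adding another criterion amounts to appending its classifier to the list passed to `refine_grasps`, which mirrors the modularity claim above.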

Latent Emission-Augmented Perspective-Taking (LEAPT) for Human-Robot Interaction

Aug 12, 2023
Kaiqi Chen, Jing Yu Lim, Kingsley Kuan, Harold Soh

Perspective-taking is the ability to perceive or understand a situation or concept from another individual's point of view, and is crucial in daily human interactions. Enabling robots to perform perspective-taking remains an unsolved problem; existing approaches that use deterministic or handcrafted methods are unable to accurately account for uncertainty in partially-observable settings. This work proposes to address this limitation via a deep world model that enables a robot to perform both perceptual and conceptual perspective-taking, i.e., the robot is able to infer what a human sees and believes. The key innovation is a decomposed multi-modal latent state space model that is able to generate and augment fictitious observations/emissions. Optimizing the ELBO that arises from this probabilistic graphical model enables the learning of uncertainty in latent space, which facilitates uncertainty estimation from high-dimensional observations. We tasked our model to predict human observations and beliefs on three partially-observable HRI tasks. Experiments show that our method significantly outperforms existing baselines and is able to infer the visual observations available to the other agent as well as their internal beliefs.
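
A loose sketch of the perspective-taking idea: sample latent states from the robot's belief, decode fictitious "emissions" of what the human would observe, and read uncertainty off the spread across samples. The decoder, dimensions, and Gaussian posterior below are hypothetical placeholders; LEAPT's decomposed multi-modal state space model and its ELBO training are not reproduced here.

```python
import torch

# Hypothetical decoder that maps a shared latent state to what the *human*
# partner would observe (an "emission" from their viewpoint).
human_view_decoder = torch.nn.Sequential(torch.nn.Linear(16, 64), torch.nn.ReLU(),
                                         torch.nn.Linear(64, 32))

def imagine_human_observation(mu, logvar, n_samples=64):
    """Monte-Carlo perspective-taking sketch: sample latent states from the
    robot's belief, decode fictitious human-view emissions, and use the
    spread across samples as an uncertainty estimate."""
    std = (0.5 * logvar).exp()
    eps = torch.randn(n_samples, *mu.shape)
    z = mu + std * eps                      # reparameterised latent samples
    emissions = human_view_decoder(z)       # what the human might be seeing
    return emissions.mean(0), emissions.var(0)

mu, logvar = torch.zeros(16), torch.zeros(16)   # stand-in posterior belief
pred, uncertainty = imagine_human_observation(mu, logvar)
print(pred.shape, uncertainty.mean().item())
```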


Towards Regulatable AI Systems: Technical Gaps and Policy Opportunities

Jun 22, 2023
Xudong Shen, Hannah Brown, Jiashu Tao, Martin Strobel, Yao Tong, Akshay Narayan, Harold Soh, Finale Doshi-Velez

There is increasing attention being given to how to regulate AI systems. As governing bodies grapple with what values to encapsulate into regulation, we consider the technical half of the question: To what extent can AI experts vet an AI system for adherence to regulatory requirements? We investigate this question through two public sector procurement checklists, identifying what we can do now, what we should be able to do with technical innovation in AI, and what requirements necessitate a more interdisciplinary approach.


Selective Amnesia: A Continual Learning Approach to Forgetting in Deep Generative Models

May 17, 2023
Alvin Heng, Harold Soh

The recent proliferation of large-scale text-to-image models has led to growing concerns that such models may be misused to generate harmful, misleading, and inappropriate content. Motivated by this issue, we derive a technique inspired by continual learning to selectively forget concepts in pretrained deep generative models. Our method, dubbed Selective Amnesia, enables controllable forgetting where a user can specify how a concept should be forgotten. Selective Amnesia can be applied to conditional variational likelihood models, which encompass a variety of popular deep generative frameworks, including variational autoencoders and large-scale text-to-image diffusion models. Experiments across different models demonstrate that our approach induces forgetting on a variety of concepts, from entire classes in standard datasets to celebrity and nudity prompts in text-to-image models. Our code is publicly available at https://github.com/clear-nus/selective-amnesia.
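
The following toy sketch illustrates the continual-learning flavour of the objective: fit a user-chosen surrogate for the concept to be forgotten while a Fisher-weighted (EWC-style) penalty keeps the rest of the model close to its original weights. The Gaussian toy model, placeholder Fisher estimates, and hyperparameters are illustrative stand-ins for the conditional variational likelihood models (VAEs, text-to-image diffusion models) the method actually targets.

```python
import torch

class ToyConditionalModel(torch.nn.Module):
    """Stand-in conditional likelihood model: a unit-variance Gaussian per class."""
    def __init__(self, n_classes=3, dim=2):
        super().__init__()
        self.means = torch.nn.Parameter(torch.randn(n_classes, dim))

    def nll(self, x, c):
        # Negative log-likelihood up to an additive constant.
        return 0.5 * (x - self.means[c]).pow(2).sum(-1)

def selective_amnesia_loss(model, x_surr, c_forget, fisher, theta_star, lam=100.0):
    """Sketch of a Selective-Amnesia-style objective: fit a surrogate distribution
    for the forgotten concept while an EWC-like Fisher-weighted penalty keeps the
    parameters close to the original model (so other concepts are preserved)."""
    forget_term = model.nll(x_surr, c_forget).mean()
    retain_term = sum((f * (p - p0).pow(2)).sum()
                      for p, p0, f in zip(model.parameters(), theta_star, fisher))
    return forget_term + lam * retain_term

model = ToyConditionalModel()
theta_star = [p.detach().clone() for p in model.parameters()]   # original weights
fisher = [torch.ones_like(p) for p in model.parameters()]       # placeholder Fisher estimates
x_surr = torch.randn(32, 2)                                     # surrogate samples for class 0
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    selective_amnesia_loss(model, x_surr, 0, fisher, theta_star).backward()
    opt.step()
```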


Generative Modeling with Flow-Guided Density Ratio Learning

Mar 07, 2023
Alvin Heng, Abdul Fatir Ansari, Harold Soh

We present Flow-Guided Density Ratio Learning (FDRL), a simple and scalable approach to generative modeling which builds on the stale (time-independent) approximation of the gradient flow of entropy-regularized f-divergences introduced in DGflow. In DGflow, the intractable time-dependent density ratio is approximated by a stale estimator given by a GAN discriminator. This is sufficient in the case of sample refinement, where the source and target distributions of the flow are close to each other. However, this assumption is invalid for generation and a naive application of the stale estimator fails due to the large chasm between the two distributions. FDRL proposes to train a density ratio estimator such that it learns from progressively improving samples during the training process. We show that this simple method alleviates the density chasm problem, allowing FDRL to generate images of dimensions as high as $128\times128$, as well as outperform existing gradient flow baselines on quantitative benchmarks. We also show the flexibility of FDRL with two use cases. First, unconditional FDRL can be easily composed with external classifiers to perform class-conditional generation. Second, FDRL can be directly applied to unpaired image-to-image translation with no modifications needed to the framework. Code is publicly available at https://github.com/ajrheng/FDRL.
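
A compact 2-D sketch of the idea: a logistic density-ratio estimator is trained to distinguish the current particles from data, and the particles are then flowed along the gradient of the estimated log ratio, so the estimator keeps learning from progressively improving samples. Architectures, step sizes, and the noise scale below are arbitrary choices for illustration, not the paper's settings.

```python
import torch

# Toy setup: transport Gaussian noise towards a shifted target distribution in 2-D.
target = torch.randn(512, 2) + torch.tensor([4.0, 4.0])
particles = torch.randn(512, 2)                        # samples from the simple source

ratio_net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.ReLU(),
                                torch.nn.Linear(64, 1))   # logits approximate log density ratio
opt = torch.optim.Adam(ratio_net.parameters(), lr=1e-3)
bce = torch.nn.BCEWithLogitsLoss()

for step in range(500):
    # (1) Train the density-ratio estimator to tell current particles from data.
    logits = torch.cat([ratio_net(target), ratio_net(particles)])
    labels = torch.cat([torch.ones(len(target), 1), torch.zeros(len(particles), 1)])
    opt.zero_grad(); bce(logits, labels).backward(); opt.step()

    # (2) Flow the particles along the gradient of the estimated log ratio, so the
    #     estimator keeps seeing progressively improving samples next iteration.
    x = particles.clone().requires_grad_(True)
    (grad,) = torch.autograd.grad(ratio_net(x).sum(), x)
    particles = (particles + 0.05 * grad + 0.03 * torch.randn_like(particles)).detach()

print(particles.mean(0))   # with enough steps, drifts towards the target mean (~4, 4)
```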


Large Language Models as Zero-Shot Human Models for Human-Robot Interaction

Mar 06, 2023
Bowen Zhang, Harold Soh

Human models play a crucial role in human-robot interaction (HRI), enabling robots to consider the impact of their actions on people and plan their behavior accordingly. However, crafting good human models is challenging; capturing context-dependent human behavior requires significant prior knowledge and/or large amounts of interaction data, both of which are difficult to obtain. In this work, we explore the potential of large language models (LLMs) -- which have consumed vast amounts of human-generated text data -- to act as zero-shot human models for HRI. Our experiments on three social datasets yield promising results; the LLMs are able to achieve performance comparable to purpose-built models. That said, we also discuss current limitations, such as sensitivity to prompts and spatial/numerical reasoning mishaps. Based on our findings, we demonstrate how LLM-based human models can be integrated into a social robot's planning process and applied in HRI scenarios. Specifically, we present one case study on a simulated trust-based table-clearing task and replicate past results that relied on custom models. Next, we conduct a new robot utensil-passing experiment (n = 65) where preliminary results show that planning with an LLM-based human model can achieve gains over a basic myopic plan. In summary, our results show that LLMs offer a promising (but incomplete) approach to human modeling for HRI.

* 8 pages 
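
A minimal sketch of the zero-shot human-model idea: serialise the social context into a prompt and ask the LLM for the human's likely response, with no fitted parameters. `query_llm` and the prompt wording are hypothetical stand-ins; the dummy LLM simply lets the snippet run without any API client.

```python
def build_human_model_prompt(context: dict) -> str:
    """Serialise a (hypothetical) social context into a question for the LLM."""
    return (
        "You are modelling a human collaborator in a household robot scenario.\n"
        f"Scene: {context['scene']}\n"
        f"Robot action: {context['robot_action']}\n"
        f"Human's recent behaviour: {context['human_history']}\n"
        "Question: Will the human trust the robot to continue? "
        "Answer with one of: yes, no, unsure."
    )

def predict_human_response(context: dict, query_llm) -> str:
    """Zero-shot human model: no fitted parameters, just the prompt and the LLM."""
    return query_llm(build_human_model_prompt(context)).strip().lower()

# Dummy LLM so the snippet runs standalone, without an API key.
dummy_llm = lambda prompt: "unsure"
context = {"scene": "cluttered dining table",
           "robot_action": "picks up a fragile wine glass",
           "human_history": "intervened twice after near-drops"}
print(predict_human_response(context, dummy_llm))
```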

Translating Natural Language to Planning Goals with Large-Language Models

Feb 10, 2023
Yaqi Xie, Chen Yu, Tongyao Zhu, Jinbin Bai, Ze Gong, Harold Soh

Recent large language models (LLMs) have demonstrated remarkable performance on a variety of natural language processing (NLP) tasks, leading to intense excitement about their applicability across various domains. Unfortunately, recent work has also shown that LLMs are unable to perform accurate reasoning or solve planning problems reliably, which may limit their usefulness for robotics-related tasks. In this work, our central question is whether LLMs are able to translate goals specified in natural language into a structured planning language. If so, the LLM can act as a natural interface between the planner and human users; the translated goal can be handed to domain-independent AI planners that are very effective at planning. Our empirical results on GPT-3.5 variants show that LLMs are much better suited to translation than to planning. We find that LLMs are able to leverage commonsense knowledge and reasoning to furnish missing details for under-specified goals (as is often the case in natural language). However, our experiments also reveal that LLMs can fail to generate goals in tasks that involve numerical or physical (e.g., spatial) reasoning, and that LLMs are sensitive to the prompts used. As such, these models are promising for translation to structured planning languages, but care should be taken in their use.
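
A sketch of the resulting pipeline, with `query_llm` and `run_planner` as hypothetical stand-ins for an LLM client and a domain-independent planner wrapper: the LLM only translates the natural-language goal into a PDDL goal expression, and the planner performs the actual search.

```python
# Few-shot prompt for goal translation; predicates and examples are illustrative.
FEW_SHOT = """Translate the user's goal into a PDDL :goal expression.
Domain predicates: (on ?x ?y) (clear ?x) (holding ?x)

Goal: stack the red block on the blue block
PDDL: (:goal (on red_block blue_block))

Goal: put everything down
PDDL: (:goal (forall (?x) (not (holding ?x))))
"""

def nl_goal_to_pddl(goal_text: str, query_llm) -> str:
    """Ask the LLM only for translation, not for a plan."""
    prompt = FEW_SHOT + f"\nGoal: {goal_text}\nPDDL:"
    return query_llm(prompt).strip()

def plan_from_nl(goal_text: str, domain_file: str, problem_template: str,
                 query_llm, run_planner):
    goal_pddl = nl_goal_to_pddl(goal_text, query_llm)
    problem = problem_template.replace("{GOAL}", goal_pddl)
    return run_planner(domain_file, problem)   # the planner, not the LLM, searches

# Dummy LLM so the translation step runs standalone.
dummy_llm = lambda prompt: "(:goal (on green_block red_block))"
print(nl_goal_to_pddl("please put the green block on the red one", dummy_llm))
```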
