
Alex Zhuang


Few-shot In-context Learning for Knowledge Base Question Answering

May 04, 2023
Tianle Li, Xueguang Ma, Alex Zhuang, Yu Gu, Yu Su, Wenhu Chen

Question answering over knowledge bases is considered a difficult problem due to the challenge of generalizing to a wide variety of possible natural language questions. Additionally, the heterogeneity of knowledge base schema items across different knowledge bases often necessitates specialized training for different knowledge base question-answering (KBQA) datasets. To handle questions over diverse KBQA datasets with a unified, training-free framework, we propose KB-BINDER, which for the first time enables few-shot in-context learning for KBQA tasks. First, KB-BINDER leverages large language models like Codex to generate logical forms as drafts for a specific question by imitating a few demonstrations. Second, KB-BINDER grounds the drafts on the knowledge base, binding them to executable logical forms via BM25 score matching. Experimental results on four public, heterogeneous KBQA datasets show that KB-BINDER achieves strong performance with only a few in-context demonstrations. On GraphQA and 3-hop MetaQA, KB-BINDER even outperforms state-of-the-art trained models, and on GrailQA and WebQSP it is on par with fully trained models. We believe KB-BINDER can serve as an important baseline for future research. Our code is available at https://github.com/ltl3A87/KB-BINDER.
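The binding step lends itself to a small sketch. Below is a hypothetical reconstruction of how a drafted (possibly non-existent) relation name could be matched to real KB relations with BM25, using the rank_bm25 library; the relation vocabulary, the tokenizer, and the bind helper are illustrative assumptions, not taken from the KB-BINDER codebase.

```python
# Hypothetical sketch of the "binding" step: a schema item drafted by the
# LLM is matched against real KB relations with BM25. The names below
# (KB_RELATIONS, bind) are illustrative, not from the paper's code.
from rank_bm25 import BM25Okapi

# A small stand-in vocabulary of real KB relations (Freebase-style).
KB_RELATIONS = [
    "people.person.place_of_birth",
    "people.person.nationality",
    "film.film.directed_by",
    "film.director.film",
    "location.location.contained_by",
]

def tokenize(relation: str) -> list[str]:
    """Split a dotted relation name into lowercase word tokens."""
    return relation.replace(".", " ").replace("_", " ").lower().split()

# Index the KB relations once.
bm25 = BM25Okapi([tokenize(r) for r in KB_RELATIONS])

def bind(draft_relation: str, top_k: int = 3) -> list[str]:
    """Return the top-k real KB relations closest to a drafted one."""
    scores = bm25.get_scores(tokenize(draft_relation))
    ranked = sorted(zip(scores, KB_RELATIONS), reverse=True)
    return [rel for _, rel in ranked[:top_k]]

# The LLM drafted a plausible but non-existent relation; bind it to real ones.
print(bind("person.born_in_place"))
```

Keeping the top-k candidates rather than only the single best match leaves room to try several grounded logical forms until one executes successfully against the KB.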

* Accepted to ACL 2023 

RADACS: Towards Higher-Order Reasoning using Action Recognition in Autonomous Vehicles

Sep 28, 2022
Alex Zhuang, Eddy Zhou, Quanquan Li, Rowan Dempster, Alikasim Budhwani, Mohammad Al-Sharman, Derek Rayside, William Melek

When applied to autonomous vehicle settings, action recognition can enrich an environment model's understanding of the world and improve planning for future actions. Toward better autonomous vehicle decision-making, we propose RADACS, a novel two-stage online action recognition system. RADACS formulates the problem as active agent detection and adapts ideas about actor-context relations from human activity recognition into a straightforward two-stage pipeline for action detection and classification. We show that our proposed scheme outperforms the baseline on the ICCV 2021 Road Challenge dataset, and by deploying it on a real vehicle platform, we demonstrate how a higher-order understanding of agent actions in an environment can improve decisions on a real autonomous vehicle.
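As a rough illustration of the two-stage structure described above, here is a minimal Python skeleton: stage one detects and tracks active agents, stage two classifies each track's action. Both stages are stubbed, and the names AgentTrack, detect_active_agents, and classify_action are hypothetical, not from the RADACS code.

```python
# Minimal skeleton of a two-stage detect-then-classify action pipeline,
# in the spirit of the paper's design. The stage bodies are stubs; the
# real system uses learned detectors and actor-context models.
from dataclasses import dataclass

@dataclass
class AgentTrack:
    track_id: int
    boxes: list  # per-frame bounding boxes (x1, y1, x2, y2)

def detect_active_agents(frames) -> list[AgentTrack]:
    """Stage 1: detect and track agents that are acting in the scene.
    Stubbed: pretend one pedestrian track spans all frames."""
    return [AgentTrack(track_id=0, boxes=[(10, 20, 50, 80)] * len(frames))]

def classify_action(track: AgentTrack, frames) -> str:
    """Stage 2: classify the track's action from actor and scene context.
    Stubbed with a fixed label; the real model reasons over actor-context
    relations across the clip."""
    return "crossing_road"

def run_pipeline(frames) -> dict:
    """Online loop: detect agents, then classify each one's action."""
    return {t.track_id: classify_action(t, frames)
            for t in detect_active_agents(frames)}

print(run_pipeline(frames=[object()] * 8))  # {0: 'crossing_road'}
```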

Counting Fish and Dolphins in Sonar Images Using Deep Learning

Jul 24, 2020
Stefan Schneider, Alex Zhuang

Deep learning provides an opportunity to resolve conflicting reports on the relationship between the Amazon river's fish and dolphin abundance and the canopy cover lost to deforestation. Current fish and dolphin abundance estimates rely on on-site sampling using visual and capture/release strategies. We propose a novel approach to estimating fish and dolphin abundance using deep learning on sonar images taken from the back of a trolling boat. We consider a dataset of 143 images containing 0-34 fish and 0-3 dolphins, provided by the Fund Amazonia research group. To overcome the data limitation, we test the capabilities of data augmentation on an unconventional 15/85 training/testing split. From 20 training images, we simulate a gradient of data up to 25,000 images using augmented backgrounds and randomly placed, randomly rotated crops of fish and dolphins taken from the training set. We then train four multitask network architectures, DenseNet201, InceptionResNetV2, Xception, and MobileNetV2, to predict fish and dolphin counts using two function-approximation methods: regression and classification. For regression, DenseNet201 performed best for fish and Xception for dolphins, with mean squared errors of 2.11 and 0.133 respectively. For classification, InceptionResNetV2 performed best for fish and MobileNetV2 for dolphins, with mean errors of 2.07 and 0.245 respectively. Evaluated on the 123 testing images, our results show the success of data simulation for limited sonar datasets. We find DenseNet201 can identify dolphins after approximately 5,000 training images, while fish require the full 25,000. Our method can lower costs and expedite the analysis of fish and dolphin abundance toward real-time estimates along the Amazon river and river systems worldwide.
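The simulation scheme can be sketched compactly. The following is a minimal, assumption-laden reconstruction of the compositing step using Pillow: rotated crops are pasted at random positions on a background, and the number pasted becomes the count label. The synthetic stand-in images, the size constants, and the make_sample helper are illustrative only, not from the paper's pipeline.

```python
# Illustrative reconstruction of the augmentation scheme: paste randomly
# placed, randomly rotated crops of fish onto a background and record the
# count as the label. Sizes are made up; the real pipeline uses crops cut
# from the 20 training sonar images.
import random
from PIL import Image

def make_sample(background: Image.Image, fish_crop: Image.Image,
                max_fish: int = 34) -> tuple[Image.Image, int]:
    """Composite a random number of rotated fish crops onto a background."""
    canvas = background.copy()
    n_fish = random.randint(0, max_fish)
    for _ in range(n_fish):
        rotated = fish_crop.rotate(random.uniform(0, 360), expand=True)
        x = random.randint(0, canvas.width - rotated.width)
        y = random.randint(0, canvas.height - rotated.height)
        canvas.paste(rotated, (x, y), rotated)  # alpha mask keeps background
    return canvas, n_fish

# Synthetic stand-ins so the sketch runs without the (private) dataset.
bg = Image.new("RGB", (512, 512), color=(10, 30, 60))
fish = Image.new("RGBA", (24, 8), color=(200, 200, 200, 255))
image, label = make_sample(bg, fish)
print(label)  # number of fish pasted: the regression/classification target
```

The same compositing would apply to dolphin crops; a multitask network then regresses (or classifies) both counts from the composited image.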

* 19 pages, 5 figures, 1 table 