Yongliang Wang

Box2Poly: Memory-Efficient Polygon Prediction of Arbitrarily Shaped and Rotated Text

Sep 20, 2023
Xuyang Chen, Dong Wang, Konrad Schindler, Mingwei Sun, Yongliang Wang, Nicolo Savioli, Liqiu Meng

Recently, Transformer-based text detection techniques have sought to predict polygons by encoding the coordinates of individual boundary vertices with distinct query features. However, this approach incurs significant memory overhead and struggles to capture the intricate relationships between vertices belonging to the same instance. Consequently, irregular text layouts often lead to the prediction of outlier vertices, diminishing the quality of the results. To address these challenges, we present an innovative approach rooted in Sparse R-CNN: a cascade decoding pipeline for polygon prediction. Our method ensures precision by iteratively refining polygon predictions, considering both the scale and the location of preceding results. Thanks to this stabilized regression pipeline, even a single feature vector guiding polygon instance regression yields promising detection results. At the same time, the use of instance-level feature proposals substantially improves memory efficiency (>50% less than the state-of-the-art method DPText-DETR) and reduces inference time (>40% less than DPText-DETR), with only a minor performance drop on benchmarks.
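
To make the cascade idea concrete, below is a minimal PyTorch sketch of iterative polygon refinement, where each decoder stage predicts vertex offsets relative to the previous polygon and scales them by its extent. It is not the paper's implementation; all names (RefineStage, CascadePolygonDecoder, the 16-vertex default) are illustrative assumptions.

```python
# Hypothetical sketch of cascade polygon refinement; not the authors' code.
# One instance-level feature vector drives per-vertex offsets, and each
# stage refines the previous polygon relative to its own scale.
import torch
import torch.nn as nn

class RefineStage(nn.Module):
    def __init__(self, feat_dim: int, num_vertices: int):
        super().__init__()
        # Predict (dx, dy) for every vertex from the instance feature.
        self.offset_head = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, num_vertices * 2),
        )

    def forward(self, feat, polygon):
        # polygon: (B, V, 2) vertex coordinates from the previous stage.
        offsets = self.offset_head(feat).view(polygon.shape)
        # Scale offsets by the current polygon extent so refinement is
        # relative to the preceding result, stabilizing regression.
        scale = polygon.amax(dim=1, keepdim=True) - polygon.amin(dim=1, keepdim=True)
        return polygon + offsets * scale

class CascadePolygonDecoder(nn.Module):
    def __init__(self, feat_dim=256, num_vertices=16, num_stages=3):
        super().__init__()
        self.stages = nn.ModuleList(
            RefineStage(feat_dim, num_vertices) for _ in range(num_stages))

    def forward(self, feat, init_polygon):
        polygon, outputs = init_polygon, []
        for stage in self.stages:
            polygon = stage(feat, polygon)
            outputs.append(polygon)  # supervise every stage during training
        return outputs
```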

NeMO: Neural Map Growing System for Spatiotemporal Fusion in Bird's-Eye-View and BDD-Map Benchmark

Jun 07, 2023
Xi Zhu, Xiya Cao, Zhiwei Dong, Caifa Zhou, Qiangbo Liu, Wei Li, Yongliang Wang

Vision-centric Bird's-Eye-View (BEV) representation is essential for autonomous driving systems (ADS). Multi-frame temporal fusion, which leverages historical information, has been demonstrated to provide more comprehensive perception results. While most research focuses on ego-centric maps with fixed settings, long-range local map generation remains less explored. This work outlines a new paradigm, named NeMO, for generating local maps through a readable and writable big map, a learning-based fusion module, and an interaction mechanism between the two. Under the assumption that the feature distribution of all BEV grids follows an identical pattern, we adopt a shared-weight neural network for all grids to update the big map. This paradigm supports the fusion of longer time series and the generation of long-range BEV local maps. Furthermore, we release BDD-Map, a BDD100K-based dataset incorporating map element annotations, including lane lines, boundaries, and pedestrian crossings. Experiments on the NuScenes and BDD-Map datasets demonstrate that NeMO outperforms state-of-the-art map segmentation methods. We also provide a new scene-level BEV map evaluation setting, along with the corresponding baseline, for a more comprehensive comparison.
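
A minimal sketch of the big-map idea, assuming a GRU-style per-grid update (the paper's fusion module may differ): one shared-weight cell writes the current BEV observation into the matching window of a persistent map. All names and shapes here are illustrative.

```python
# Hypothetical sketch of NeMO-style grid fusion; not the released code.
# A single shared-weight update network fuses the current BEV observation
# into the matching window of a large, readable/writable "big map".
import torch
import torch.nn as nn

class GridFuser(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # Shared across all grids: assumes every BEV cell's feature
        # distribution follows the same pattern.
        self.update = nn.GRUCell(channels, channels)

    def forward(self, big_map, obs, top, left):
        # big_map: (C, H, W) persistent map; obs: (C, h, w) current frame.
        big_map = big_map.clone()  # keep the update functional for autograd
        c, h, w = obs.shape
        window = big_map[:, top:top + h, left:left + w]
        # Flatten grids to (h*w, C) so one GRU cell updates every cell.
        new = self.update(obs.reshape(c, -1).t(), window.reshape(c, -1).t())
        big_map[:, top:top + h, left:left + w] = new.t().reshape(c, h, w)
        return big_map
```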

Towards Personalized Review Summarization by Modeling Historical Reviews from Customer and Product Separately

Jan 27, 2023
Xin Cheng, Shen Gao, Yuchi Zhang, Yongliang Wang, Xiuying Chen, Mingzhe Li, Dongyan Zhao, Rui Yan

Review summarization is a non-trivial task that aims to summarize the main idea of a product review on an e-commerce website. Unlike document summarization, which only needs to focus on the main facts described in the document, review summarization should not only summarize the main aspects mentioned in the review but also reflect the personal style of the review author. Although existing review summarization methods have incorporated the historical reviews of both the customer and the product, they usually simply concatenate these two heterogeneous sources and model them indiscriminately as one long sequence. Moreover, although rating information provides a high-level abstraction of customer preference, the majority of methods do not use it. In this paper, we propose the Heterogeneous Historical Review aware Review Summarization Model (HHRRS), which separately models the two types of historical reviews together with the rating information via a graph reasoning module with a contrastive loss. We employ a multi-task framework that conducts review sentiment classification and summarization jointly. Extensive experiments on four benchmark datasets demonstrate the superiority of HHRRS on both tasks.
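
As a rough illustration of the multi-task objective, the sketch below combines a summarization loss, a sentiment classification loss, and an InfoNCE-style contrastive term over customer- and product-side representations. This is an assumption-laden sketch, not the HHRRS implementation; the weights alpha, beta, and tau are hypothetical.

```python
# Hypothetical sketch of a joint multi-task objective; not the authors' code.
import torch
import torch.nn.functional as F

def multitask_loss(sum_logits, sum_targets, cls_logits, cls_targets,
                   cust_repr, prod_repr, alpha=1.0, beta=0.1, tau=0.1):
    # sum_logits: (B, T, V) decoder logits; sum_targets: (B, T) token ids.
    l_sum = F.cross_entropy(sum_logits.transpose(1, 2), sum_targets)
    # Review sentiment / rating classification head.
    l_cls = F.cross_entropy(cls_logits, cls_targets)
    # InfoNCE-style contrast (an assumption): each customer representation
    # should match its own product-side counterpart within the batch.
    sim = cust_repr @ prod_repr.t() / tau                       # (B, B)
    labels = torch.arange(sim.size(0), device=sim.device)       # diagonal
    l_con = F.cross_entropy(sim, labels)
    return l_sum + alpha * l_cls + beta * l_con
```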

3D LiDAR Aided GNSS NLOS Mitigation for Reliable GNSS-RTK Positioning in Urban Canyons

Dec 11, 2022
Xikun Liu, Weisong Wen, Feng Huang, Han Gao, Yongliang Wang, Li-Ta Hsu

GNSS and LiDAR odometry are complementary, as they provide absolute and relative positioning, respectively. Their integration in a loosely coupled manner is straightforward but is challenged in urban canyons by GNSS signal reflections. Recently proposed 3D LiDAR-aided (3DLA) GNSS methods employ the point cloud map to identify non-line-of-sight (NLOS) reception of GNSS signals. This helps the GNSS receiver obtain improved urban positioning, but not at the sub-meter level. GNSS real-time kinematics (RTK) uses carrier phase measurements to obtain decimeter-level positioning. In urban areas, GNSS RTK is not only challenged by multipath and NLOS-affected measurements but also suffers from signal blockage by buildings. The latter makes it difficult to resolve the ambiguity within the carrier phase measurements; in other words, the model observability of the ambiguity resolution (AR) is greatly decreased. This paper proposes to generate virtual satellite (VS) measurements using selected LiDAR landmarks from accumulated 3D point cloud maps (PCM). These LiDAR-PCM-derived VS measurements are tightly coupled with GNSS pseudorange and carrier phase measurements. The VS measurements can thus provide complementary constraints, namely low-elevation-angle measurements in the across-street direction. The implementation uses factor graph optimization to solve for an accurate float solution of the ambiguity before it is fed into LAMBDA. The effectiveness of the proposed method has been validated on our recently open-sourced challenging dataset, UrbanNav. The results show that the fix rate of the proposed 3DLA GNSS RTK is about 30%, while conventional GNSS RTK only achieves about 14%. In addition, the proposed method achieves sub-meter positioning accuracy in most of the data collected in challenging urban areas.
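
The following is a minimal sketch of what a single virtual-satellite factor could look like inside a least-squares or factor-graph solver; it is not the paper's code, and the frame conventions and function names are assumptions. Unlike a real pseudorange, a LiDAR-derived VS range carries no receiver clock bias term.

```python
# Hypothetical sketch of one virtual-satellite (VS) range factor; not the
# paper's implementation. A LiDAR point-cloud-map landmark acts like a
# low-elevation "satellite" across the street, constraining the receiver
# position in directions real satellites cannot.
import numpy as np

def vs_range_residual(rx_pos, landmark_pos, measured_range):
    """Residual of one VS factor for a least-squares / factor-graph solver.

    rx_pos         -- receiver position estimate in a local frame, shape (3,)
    landmark_pos   -- LiDAR-PCM landmark position, same frame, shape (3,)
    measured_range -- range to the landmark measured by LiDAR (scalar)
    """
    predicted = np.linalg.norm(landmark_pos - rx_pos)
    return predicted - measured_range

def vs_jacobian(rx_pos, landmark_pos):
    # d(residual)/d(rx_pos): the negative unit line-of-sight vector.
    diff = landmark_pos - rx_pos
    return -diff / np.linalg.norm(diff)
```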

Obstacle Avoidance for Robotic Manipulator in Joint Space via Improved Proximal Policy Optimization

Oct 03, 2022
Yongliang Wang, Hamidreza Kasaei

Reaching tasks with random targets and obstacles remain challenging when a robotic arm operates in unstructured environments. In contrast to traditional model-based methods, model-free reinforcement learning methods do not require complex inverse kinematics or dynamics equations to be calculated. In this paper, we train a deep neural network via an improved Proximal Policy Optimization (PPO) algorithm, which aims to map from task space to joint space for a 6-DoF manipulator. In particular, we modify the original PPO and design an effective representation of environmental inputs and outputs to train the robot faster in a larger workspace. First, a form of action ensemble is adopted to improve output efficiency. Second, the policy is designed to participate directly in value-function updates. Finally, the distance between obstacles and the links of the manipulator is calculated with a geometric method and included in the state representation. Since training such a task on a real robot is time-consuming and strenuous, we develop a simulation environment to train the model. We choose Gazebo as our first simulation environment, since it often produces a smaller Sim-to-Real gap than other simulators; however, training in Gazebo is slow. To address this limitation, we propose a Sim-to-Sim method that reduces the training time significantly. The trained model is finally deployed in a real-robot setup without fine-tuning. Experimental results show that, using our method, the robot is capable of tracking a single target or reaching multiple targets in unstructured environments.
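
For orientation, here is a minimal sketch of a clipped PPO objective with a joint value term and a simple action ensemble (averaging several sampled joint-space actions). It is a generic PPO sketch, not the authors' modified algorithm; the coefficients and the ensemble size k are assumptions.

```python
# Generic PPO sketch with a combined policy/value loss and a small
# action ensemble; hypothetical, not the paper's implementation.
import torch

def ppo_loss(new_logp, old_logp, advantages, values, returns,
             clip_eps=0.2, vf_coef=0.5):
    # Clipped surrogate objective on the probability ratio.
    ratio = torch.exp(new_logp - old_logp)
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps)
    policy_loss = -torch.min(ratio * advantages, clipped * advantages).mean()
    # Value regression toward the empirical returns.
    value_loss = (returns - values).pow(2).mean()
    return policy_loss + vf_coef * value_loss

def ensemble_action(policy_dist, k=5):
    # Average k sampled joint-space actions (an "action ensemble")
    # to smooth the commanded output.
    return torch.stack([policy_dist.sample() for _ in range(k)]).mean(0)
```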

GEN-VLKT: Simplify Association and Enhance Interaction Understanding for HOI Detection

Apr 14, 2022
Yue Liao, Aixi Zhang, Miao Lu, Yongliang Wang, Xiaobo Li, Si Liu

The task of Human-Object Interaction (HOI) detection can be divided into two core problems, i.e., human-object association and interaction understanding. In this paper, we reveal and address the disadvantages of conventional query-driven HOI detectors in both aspects. For association, previous two-branch methods suffer from complex and costly post-matching, while single-branch methods ignore the distinct features required by the different tasks. We propose the Guided-Embedding Network (GEN) to attain a two-branch pipeline without post-matching. In GEN, we design an instance decoder to detect humans and objects with two independent query sets and a position Guided Embedding (p-GE) to mark a human and an object in the same position as a pair. In addition, we design an interaction decoder to classify interactions, where the interaction queries are made of instance Guided Embeddings (i-GE) generated from the outputs of each instance decoder layer. For interaction understanding, previous methods suffer from the long-tailed distribution and zero-shot discovery. This paper proposes a Visual-Linguistic Knowledge Transfer (VLKT) training strategy to enhance interaction understanding by transferring knowledge from the visual-linguistic pre-trained model CLIP. Specifically, we extract text embeddings for all labels with CLIP to initialize the classifier and adopt a mimic loss to minimize the visual feature distance between GEN and CLIP. As a result, GEN-VLKT outperforms the state of the art by large margins on multiple datasets, e.g., +5.05 mAP on HICO-Det. The source code is available at https://github.com/YueLiao/gen-vlkt.
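
The VLKT idea can be sketched with OpenAI's CLIP package as follows: label-name text embeddings initialize the classifier, and an L2 mimic loss pulls detector features toward CLIP's. The prompt template, example labels, and helper names are illustrative, not the released GEN-VLKT code.

```python
# Hypothetical sketch of the VLKT strategy; not the released GEN-VLKT code.
import torch
import torch.nn as nn
import clip  # pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

labels = ["ride bicycle", "hold cup", "feed horse"]  # illustrative HOI labels
tokens = clip.tokenize([f"a photo of a person {l}" for l in labels]).to(device)
with torch.no_grad():
    text_emb = model.encode_text(tokens).float()
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)

# Initialize the interaction classifier with the label text embeddings.
classifier = nn.Linear(text_emb.shape[1], len(labels), bias=False)
classifier.weight.data.copy_(text_emb)

def mimic_loss(visual_feat, clip_visual_feat):
    # Minimize the distance between detector features and CLIP's image
    # features (both assumed L2-normalized).
    return (visual_feat - clip_visual_feat).pow(2).sum(-1).mean()
```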

* CVPR 2022 

Utterance Rewriting with Contrastive Learning in Multi-turn Dialogue

Mar 22, 2022
Zhihao Wang, Tangjian Duan, Zihao Wang, Minghui Yang, Zujie Wen, Yongliang Wang

Context modeling plays a significant role in building multi-turn dialogue systems. To make full use of context information, systems can use Incomplete Utterance Rewriting (IUR) methods to simplify a multi-turn dialogue into a single turn by merging the current utterance and context information into one self-contained utterance. However, previous approaches ignore the intent consistency between the original query and the rewritten query, and the detection of omitted or coreferred locations in the original query can be further improved. In this paper, we introduce contrastive learning and multi-task learning to jointly model the problem. Our method benefits from carefully designed self-supervised objectives, which act as auxiliary tasks to capture semantics at both the sentence level and the token level. Experiments show that our proposed model achieves state-of-the-art performance on several public datasets.
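
A plausible form of the sentence-level contrastive objective is sketched below: the rewritten query is pulled toward its own original query and pushed away from the other queries in the batch. This is an assumption about the loss design, not the paper's exact objective; the temperature tau is hypothetical.

```python
# Hypothetical sentence-level contrastive auxiliary (InfoNCE form);
# not the paper's implementation.
import torch
import torch.nn.functional as F

def sentence_contrastive_loss(rewrite_emb, query_emb, tau=0.05):
    rewrite_emb = F.normalize(rewrite_emb, dim=-1)   # (B, D)
    query_emb = F.normalize(query_emb, dim=-1)       # (B, D)
    logits = rewrite_emb @ query_emb.t() / tau       # (B, B) similarities
    # Positives sit on the diagonal: each rewrite matches its own query.
    labels = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)
```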

Towards Generalized Models for Task-oriented Dialogue Modeling on Spoken Conversations

Mar 08, 2022
Ruijie Yan, Shuang Peng, Haitao Mi, Liang Jiang, Shihui Yang, Yuchi Zhang, Jiajun Li, Liangrui Peng, Yongliang Wang, Zujie Wen

Building robust and general dialogue models for spoken conversations is challenging due to the gap between the distributions of spoken and written data. This paper presents our approach to building generalized models for the Knowledge-grounded Task-oriented Dialogue Modeling on Spoken Conversations Challenge of DSTC-10. To mitigate the discrepancies between spoken and written text, we mainly employ extensive data augmentation strategies on written data, including artificial error injection and round-trip text-speech transformation. To train robust models for spoken conversations, we improve pre-trained language models and apply ensemble algorithms to each sub-task. Specifically, for the detection task, we fine-tune RoBERTa and ELECTRA and run an error-fixing ensemble algorithm. For the selection task, we adopt a two-stage framework consisting of entity tracking and knowledge ranking, and propose a multi-task learning method that learns multi-level semantic information via domain classification and entity selection. For the generation task, we adopt a cross-validation data process to improve the pre-trained generative language models, followed by a consensus decoding algorithm that can add arbitrary features, such as a relative ROUGE metric, and tune the associated feature weights directly toward BLEU. Our approach ranks third in the objective evaluation and second in the final official human evaluation.
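
As an example of artificial error injection (one of the augmentation strategies named above), the sketch below perturbs written text with ASR-like character noise. The specific noise types and rate p are illustrative assumptions, not the challenge submission's code.

```python
# Hypothetical sketch of artificial error injection for written-to-spoken
# robustness; noise types and rate are assumptions.
import random

def inject_errors(text: str, p: float = 0.05) -> str:
    out = []
    for ch in text:
        r = random.random()
        if r < p / 3:
            continue                     # deletion
        elif r < 2 * p / 3:
            out.append(ch + ch)          # duplication
        elif r < p and out:
            out[-1], ch = ch, out[-1]    # transposition with previous char
            out.append(ch)
        else:
            out.append(ch)
    return "".join(out)

random.seed(0)
print(inject_errors("book a table for two at seven"))
```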

Precognition in Task-oriented Dialogue Understanding: Posterior Regularization by Future Context

Mar 07, 2022
Nan Su, Yuchi Zhang, Chao Liu, Bingzhu Du, Yongliang Wang

Task-oriented dialogue systems have become overwhelmingly popular in recent research. Dialogue understanding is widely used to comprehend users' intent, emotion, and dialogue state in task-oriented dialogue systems. Most previous work on such discriminative tasks models only the current query or the historical conversation. Even where the entire dialogue flow has been modeled, this is unsuitable for real-world task-oriented conversations, because future context is not visible in such cases. In this paper, we propose to jointly model historical and future information through posterior regularization. More specifically, by modeling the current utterance and past context as the prior, and the entire dialogue flow as the posterior, we optimize the KL divergence between these distributions to regularize our model during training; only historical information is used at inference. Extensive experiments on two dialogue datasets validate the effectiveness of our proposed method, which achieves superior results compared with all baseline models.
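
A minimal sketch of the training objective, under the assumption that both the history-only (prior) network and the full-dialogue (posterior) network are classifiers: a KL term regularizes the prior toward the posterior, and only the prior is used at inference. Names and the weight beta are hypothetical.

```python
# Hypothetical sketch of posterior regularization; not the authors' code.
# The prior network sees only history; the posterior network also sees
# future turns. Only the prior is used at inference.
import torch.nn.functional as F

def posterior_regularized_loss(prior_logits, post_logits, targets, beta=1.0):
    # Task losses for both views of the dialogue.
    l_prior = F.cross_entropy(prior_logits, targets)
    l_post = F.cross_entropy(post_logits, targets)
    # KL(posterior || prior): pull the history-only distribution toward
    # the distribution that also saw the future context.
    kl = F.kl_div(F.log_softmax(prior_logits, dim=-1),
                  F.log_softmax(post_logits, dim=-1),
                  log_target=True, reduction="batchmean")
    return l_prior + l_post + beta * kl
```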
