Guan Wang
Training A Multi-stage Deep Classifier with Feedback Signals

Nov 12, 2023
Chao Xu, Yu Yang, Rongzhao Wang, Guan Wang, Bojia Lin

The Multi-Stage Classifier (MSC) - several classifiers working sequentially in an arranged order, with the classification decision partially made at each step - is widely used in industrial applications for various resource-limitation reasons. The classifiers of a multi-stage process are usually Neural Network (NN) models trained independently or in their inference order, without considering signals from the later stages. Targeting the two-stage binary classification process, the most common type of MSC, we propose a novel training framework named Feedback Training. The classifiers are trained in the reverse of their actual working order, and the later-stage classifier is used to guide the training of the initial-stage classifier via a sample-weighting method. We experimentally show the efficacy of the proposed approach and its great superiority in the few-shot training scenario.
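The feedback signal can be pictured with a toy sketch: a stand-in later-stage classifier is assumed already trained, and its confidence then weights the first-stage training samples. The weighting rule below is illustrative, not the paper's exact scheme.

```python
import math

# Toy sketch of Feedback Training's sample weighting (the weighting rule is
# an assumption for illustration, not the paper's formula).

def stage2_confidence(x):
    # stand-in for a trained later-stage classifier's P(positive | x)
    return 1.0 / (1.0 + math.exp(-x))

def feedback_weights(samples):
    # samples the later stage finds ambiguous (confidence near 0.5) get the
    # largest weights when training the initial-stage classifier
    return [1.0 - abs(2.0 * stage2_confidence(x) - 1.0) for x in samples]

weights = feedback_weights([-3.0, 0.0, 3.0])
```

Under this rule, confidently classified samples contribute little to the first-stage loss, steering its capacity toward the cases the later stage cannot resolve on its own.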


OpenChat: Advancing Open-source Language Models with Mixed-Quality Data

Sep 20, 2023
Guan Wang, Sijie Cheng, Xianyuan Zhan, Xiangang Li, Sen Song, Yang Liu


Open-source large language models such as LLaMA have recently emerged, and recent developments have incorporated supervised fine-tuning (SFT) and reinforcement learning fine-tuning (RLFT) to align these models with human goals. However, SFT methods treat all training data of mixed quality equally, while RLFT methods require high-quality pairwise or ranking-based preference data. In this study, we present a novel framework, named OpenChat, to advance open-source language models with mixed-quality data. Specifically, we consider the general SFT training data, consisting of a small amount of expert data mixed with a large proportion of sub-optimal data, without any preference labels. We propose C(onditioned)-RLFT, which regards different data sources as coarse-grained reward labels and learns a class-conditioned policy to leverage complementary data-quality information. Interestingly, the optimal policy in C-RLFT can be easily solved through single-stage, RL-free supervised learning, which is lightweight and avoids costly human preference labeling. Through extensive experiments on three standard benchmarks, our openchat-13b fine-tuned with C-RLFT achieves the highest average performance among all 13b open-source language models. Moreover, we use AGIEval to validate the model's generalization performance, in which only openchat-13b surpasses the base model. Finally, we conduct a series of analyses to shed light on the effectiveness and robustness of OpenChat. Our code, data, and models are publicly available at https://github.com/imoneoi/openchat.
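The data-conditioning idea in C-RLFT can be sketched roughly as follows: data sources act as coarse-grained rewards, and the policy is class-conditioned via a source token. The tokens and reward values below are illustrative assumptions, not OpenChat's actual configuration.

```python
# Rough sketch of source-conditioned training data for C-RLFT-style learning.
# The token strings and reward values are illustrative placeholders.

SOURCE_REWARD = {"expert": 1.0, "suboptimal": 0.3}   # assumed coarse rewards
SOURCE_TOKEN = {"expert": "<|expert|>", "suboptimal": "<|sub|>"}

def conditioned_example(source, prompt, response):
    # prepend the conditioning token and attach a reward-derived loss weight,
    # so the policy can be learned by weighted, RL-free supervised training
    return {
        "text": SOURCE_TOKEN[source] + prompt + response,
        "loss_weight": SOURCE_REWARD[source],
    }

expert_ex = conditioned_example("expert", "Q: 2+2?\n", "A: 4")
sub_ex = conditioned_example("suboptimal", "Q: 2+2?\n", "A: four-ish")
```

At inference time, generation would be conditioned on the high-reward token so the model imitates the expert-quality behavior.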


Evolving Connectivity for Recurrent Spiking Neural Networks

May 28, 2023
Guan Wang, Yuhao Sun, Sijie Cheng, Sen Song


Recurrent spiking neural networks (RSNNs) hold great potential for advancing artificial general intelligence, as they draw inspiration from the biological nervous system and show promise in modeling complex dynamics. However, the widely used surrogate gradient-based training methods for RSNNs are inherently inaccurate and unfriendly to neuromorphic hardware. To address these limitations, we propose the evolving connectivity (EC) framework, an inference-only method for training RSNNs. The EC framework reformulates weight tuning as a search over parameterized connection probability distributions and employs Natural Evolution Strategies (NES) to optimize these distributions. Our EC framework circumvents the need for gradients and features hardware-friendly characteristics, including sparse boolean connections and high scalability. We evaluate EC on a series of standard robotic locomotion tasks, where it achieves performance comparable to deep neural networks and outperforms gradient-trained RSNNs, even solving the complex 17-DoF humanoid task. Additionally, the EC framework demonstrates a two- to three-fold speedup over directly evolving parameters. By providing a performant and hardware-friendly alternative, the EC framework lays the groundwork for further energy-efficient applications of RSNNs and advances the development of neuromorphic devices.
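A minimal NES loop over Bernoulli connection probabilities, in the spirit of the EC framework described above. The toy fitness function and the hyperparameters are assumptions for illustration, not the paper's task setup.

```python
import random

# Minimal NES sketch over Bernoulli connection probabilities: each candidate
# network is a boolean connection mask sampled from per-connection
# probabilities theta (fitness and constants are illustrative).

def nes_step(theta, fitness, pop=200, lr=0.1, rng=random.Random(0)):
    # sample boolean connection masks from the current probabilities
    samples = [[1 if rng.random() < p else 0 for p in theta] for _ in range(pop)]
    scores = [fitness(s) for s in samples]
    mean = sum(scores) / pop
    # score-weighted search-gradient estimate for each connection probability
    grad = [
        sum((sc - mean) * (s[i] - theta[i]) for s, sc in zip(samples, scores)) / pop
        for i in range(len(theta))
    ]
    return [min(0.99, max(0.01, t + lr * g)) for t, g in zip(theta, grad)]

# toy fitness: reward masks matching a target connectivity pattern
target = [1, 1, 0]
fitness = lambda s: -sum((a - b) ** 2 for a, b in zip(s, target))
theta = [0.5, 0.5, 0.5]
for _ in range(50):
    theta = nes_step(theta, fitness)
```

Only sampled boolean masks are ever evaluated, which matches the inference-only, sparse-connection character of the framework.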


AaKOS: Aspect-adaptive Knowledge-based Opinion Summarization

May 26, 2023
Guan Wang, Weihua Li, Edmund M-K. Lai, Quan Bai


The rapid growth of information on the Internet has led to an overwhelming amount of opinions and comments on various activities, products, and services. This makes it difficult and time-consuming for users to process all the available information when making decisions. Text summarization, a Natural Language Processing (NLP) task, has been widely explored to help users quickly retrieve relevant information by generating short and salient content from long or multiple documents. Recent advances in pre-trained language models, such as ChatGPT, have demonstrated the potential of Large Language Models (LLMs) in text generation. However, LLMs require massive amounts of data and resources and are challenging to implement as offline applications. Furthermore, existing text summarization approaches often lack the "adaptive" nature required to capture diverse aspects in opinion summarization, which is particularly detrimental to users with specific requirements or preferences. In this paper, we propose an Aspect-adaptive Knowledge-based Opinion Summarization model for product reviews, which effectively captures the adaptive nature required for opinion summarization. The model generates aspect-oriented summaries given a set of reviews for a particular product, efficiently providing users with useful information on specific aspects they are interested in, ensuring the generated summaries are more personalized and informative. Extensive experiments have been conducted using real-world datasets to evaluate the proposed model. The results demonstrate that our model outperforms state-of-the-art approaches and is adaptive and efficient in generating summaries that focus on particular aspects, enabling users to make well-informed decisions and catering to their diverse interests and preferences.

* 21 pages, 4 figures, 7 tables 

PO-VINS: An Efficient Pose-Only LiDAR-Enhanced Visual-Inertial State Estimator

May 22, 2023
Hailiang Tang, Xiaoji Niu, Tisheng Zhang, Liqiang Wang, Guan Wang, Jingnan Liu


The pose-only (PO) visual representation has been proven equivalent to classical multiple-view geometry while significantly improving computational efficiency. However, its applicability to real-world navigation in large-scale complex environments has not yet been demonstrated. In this study, we present an efficient pose-only LiDAR-enhanced visual-inertial navigation system (PO-VINS) to enhance the real-time performance of the state estimator. In the visual-inertial state estimator (VISE), we propose a pose-only visual-reprojection measurement model that contains only the inertial measurement unit (IMU) pose and extrinsic-parameter states. We further integrate the LiDAR-enhanced method to construct a pose-only LiDAR-depth measurement model. Real-world experiments were conducted in large-scale complex environments, demonstrating that the proposed PO-VISE and LiDAR-enhanced PO-VISE reduce computational complexity by more than 50% and over 20%, respectively. Additionally, PO-VINS yields the same accuracy as conventional methods. These results indicate that the pose-only solution is efficient and applicable for real-time visual-inertial state estimation.
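For context, here is a minimal sketch of the classical visual-reprojection measurement that the pose-only formulation above rewrites without landmark-position states. The pinhole intrinsics and poses are illustrative assumptions.

```python
# Classical reprojection residual, shown for context only: the pose-only
# model described above eliminates the landmark state from this measurement.
# Intrinsics, poses, and the world-to-camera convention R @ (p - t) are
# illustrative assumptions.

def project(point_w, R, t, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    # world point -> camera frame -> pinhole projection to pixel coordinates
    pc = [sum(R[i][j] * (point_w[j] - t[j]) for j in range(3)) for i in range(3)]
    return (fx * pc[0] / pc[2] + cx, fy * pc[1] / pc[2] + cy)

def reprojection_residual(observed_uv, point_w, R, t):
    u, v = project(point_w, R, t)
    return (observed_uv[0] - u, observed_uv[1] - v)

I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
# a point on the optical axis projects to the principal point, so the
# residual against an observation at (cx, cy) vanishes
res = reprojection_residual((320.0, 240.0), (0.0, 0.0, 5.0), I3, (0.0, 0.0, 0.0))
```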


Instance-Variant Loss with Gaussian RBF Kernel for 3D Cross-modal Retrieval

May 07, 2023
Zhitao Liu, Zengyu Liu, Jiwei Wei, Guan Wang, Zhenjiang Du, Ning Xie, Heng Tao Shen


3D cross-modal retrieval is gaining attention in the multimedia community. Central to this topic is learning a joint embedding space to represent data from different modalities, such as images, 3D point clouds, and polygon meshes, to extract modality-invariant and discriminative features. Hence, the performance of cross-modal retrieval methods heavily depends on the representational capacity of this embedding space. Existing methods treat all instances equally, applying the same penalty strength to instances with varying degrees of difficulty, ignoring the differences between instances. This can result in ambiguous convergence or local optima, severely compromising the separability of the feature space. To address this limitation, we propose an Instance-Variant loss to assign different penalty strengths to different instances, improving the space separability. Specifically, we assign different penalty weights to instances positively related to their intra-class distance. Simultaneously, we reduce the cross-modal discrepancy between features by learning a shared weight vector for the same class data from different modalities. By leveraging the Gaussian RBF kernel to evaluate sample similarity, we further propose an Intra-Class loss function that minimizes the intra-class distance among same-class instances. Extensive experiments on three 3D cross-modal datasets show that our proposed method surpasses recent state-of-the-art approaches.
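The Gaussian RBF similarity and the intra-class pull it induces can be sketched as follows; the gamma value and toy embeddings are illustrative assumptions.

```python
import math

# Sketch of a Gaussian RBF similarity and an intra-class loss built on it
# (gamma and the embeddings are illustrative, not the paper's settings).

def rbf_similarity(a, b, gamma=1.0):
    # exp(-gamma * ||a - b||^2): 1.0 for identical embeddings, -> 0 as they diverge
    sq_dist = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-gamma * sq_dist)

def intra_class_loss(embeddings, gamma=1.0):
    # average dissimilarity over same-class pairs; minimizing it tightens the class
    n = len(embeddings)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(1.0 - rbf_similarity(embeddings[i], embeddings[j], gamma)
               for i, j in pairs) / len(pairs)
```

The loss is zero only when all same-class embeddings coincide and grows as they spread, which is the separability pressure the abstract describes.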


KATSum: Knowledge-aware Abstractive Text Summarization

Dec 06, 2022
Guan Wang, Weihua Li, Edmund Lai, Jianhua Jiang


Text Summarization is recognised as one of the NLP downstream tasks and has been extensively investigated in recent years. It can help people perceive information rapidly from the Internet, including news articles, social posts, videos, etc. Most existing research works attempt to develop summarization models that produce better output. However, limitations of most existing models have become evident, including unfaithfulness and factual errors. In this paper, we propose a novel model, named Knowledge-aware Abstractive Text Summarization, which leverages the advantages offered by Knowledge Graphs to enhance the standard Seq2Seq model. Specifically, Knowledge Graph triplets are extracted from the source text and utilised to provide keywords with relational information, producing coherent and factually consistent summaries. We conduct extensive experiments using real-world data sets. The results reveal that the proposed framework can effectively utilise the information from the Knowledge Graph and significantly reduce factual errors in the summary.
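The triplet-to-keyword guidance step can be sketched as follows; the prompt format and the example triplet are illustrative assumptions, not the paper's exact fusion mechanism.

```python
# Toy sketch of knowledge guidance for a Seq2Seq summarizer: triplets
# extracted from the source text supply relation-aware keywords that are
# prepended to the encoder input. The [KG]/[SEP] format is assumed.

def triplets_to_keywords(triplets):
    # flatten (head, relation, tail) triplets into a guidance string
    return " ; ".join(f"{h} {r} {t}" for h, r, t in triplets)

def build_seq2seq_input(source_text, triplets):
    # prepend the relational keywords so the decoder can stay factually grounded
    return f"[KG] {triplets_to_keywords(triplets)} [SEP] {source_text}"

inp = build_seq2seq_input(
    "The company announced record revenue in 2021.",
    [("company", "announced", "record revenue")],
)
```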

* Presented at PKAW 2022 (arXiv:2211.03888) Report-no: PKAW/2022/02 

PAI3D: Painting Adaptive Instance-Prior for 3D Object Detection

Nov 15, 2022
Hao Liu, Zhuoran Xu, Dan Wang, Baofeng Zhang, Guan Wang, Bo Dong, Xin Wen, Xinyu Xu


3D object detection is a critical task in autonomous driving. Recently, multi-modal fusion-based 3D object detection methods, which combine the complementary advantages of LiDAR and camera, have shown great performance improvements over mono-modal methods. However, so far, no method has attempted to utilize instance-level contextual image semantics to guide 3D object detection. In this paper, we propose a simple and effective Painting Adaptive Instance-prior for 3D object detection (PAI3D) to fuse instance-level image semantics flexibly with point cloud features. PAI3D is a multi-modal sequential instance-level fusion framework. It first extracts instance-level semantic information from images; the extracted information, including object category labels, point-to-object membership, and object positions, is then used to augment each LiDAR point in the subsequent 3D detection network to guide and improve detection performance. PAI3D outperforms the state-of-the-art by a large margin on the nuScenes dataset, achieving 71.4 mAP and 74.2 NDS on the test split. Our comprehensive experiments show that instance-level image semantics contribute the most to the performance gain, and that PAI3D works well with any good-quality instance segmentation model and any modern point cloud 3D encoder, making it a strong candidate for deployment on autonomous vehicles.
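The instance-level "painting" of LiDAR points can be sketched as follows; the feature layout (one-hot category, object id, point-to-center offset) is an illustrative assumption, and the image-to-point projection step is omitted.

```python
# Toy sketch of painting a LiDAR point with instance-level image semantics:
# the point is augmented with the category label, object id, and a
# point-to-object position cue from the 2D instance mask it projects into.
# The exact feature layout is assumed for illustration.

NUM_CLASSES = 3  # illustrative

def paint_point(xyz, instance):
    # instance: (category_id, object_id, object_center) from instance segmentation
    category_id, object_id, center = instance
    one_hot = [1.0 if c == category_id else 0.0 for c in range(NUM_CLASSES)]
    offset = [p - c for p, c in zip(xyz, center)]  # point-to-object position cue
    return list(xyz) + one_hot + [float(object_id)] + offset

feat = paint_point((1.0, 2.0, 0.5), (2, 7, (1.5, 2.0, 0.0)))
```

The painted features then simply replace the raw xyz input of the downstream point cloud 3D encoder, which is why the scheme composes with any modern detector backbone.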


Continuous Prompt Tuning Based Textual Entailment Model for E-commerce Entity Typing

Nov 04, 2022
Yibo Wang, Congying Xia, Guan Wang, Philip Yu


The explosion of e-commerce has created the need to process and analyze product titles, for example through entity typing. However, rapid activity in e-commerce leads to the rapid emergence of new entities, which general entity typing struggles to handle. Moreover, product titles in e-commerce have very different language styles from text in the general domain. To handle new entities in product titles and address their distinctive language styles, we propose a textual entailment model with continuous-prompt-tuning-based hypotheses and fusion embeddings for e-commerce entity typing. First, we reformulate the entity typing task as a textual entailment problem to handle new entities that are not present during training. Second, we design a model that automatically generates textual entailment hypotheses using continuous prompt tuning, which produces better hypotheses without manual design. Third, we use fusion embeddings of the BERT embedding and the CharacterBERT embedding with a two-layer MLP classifier to address the mismatch between product-title language styles in e-commerce and those of the general domain. To analyze the effect of each contribution, we compare the performance of entity typing and the textual entailment model, and conduct ablation studies on continuous prompt tuning and fusion embeddings. We also evaluate the impact of different prompt template initializations for continuous prompt tuning. We show that our proposed model improves the average F1 score by around 2% compared to the baseline BERT entity typing model.
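Fusion by concatenation followed by a two-layer MLP can be sketched as follows; the dimensions, weights, and stand-in embeddings are all illustrative assumptions.

```python
# Minimal sketch of the fusion-embedding classifier: stand-in BERT and
# CharacterBERT vectors are concatenated and passed through a two-layer MLP.
# All dimensions and weights are illustrative.

def mlp2(x, w1, b1, w2, b2):
    # layer 1: ReLU; layer 2: linear logits
    hidden = [max(0.0, sum(xi * wij for xi, wij in zip(x, col)) + b)
              for col, b in zip(w1, b1)]
    return [sum(hi * wij for hi, wij in zip(hidden, col)) + b
            for col, b in zip(w2, b2)]

bert_vec = [0.2, -0.1]           # stand-in for a BERT embedding
charbert_vec = [0.4, 0.3]        # stand-in for a CharacterBERT embedding
fused = bert_vec + charbert_vec  # fusion by concatenation

w1 = [[0.5, -0.5, 0.25, 0.25], [0.1, 0.2, 0.3, 0.4]]
b1 = [0.0, 0.0]
w2 = [[1.0, -1.0]]
b2 = [0.0]
logits = mlp2(fused, w1, b1, w2, b2)
```

Concatenation lets the classifier see both subword-level and character-level views of a noisy product title before the entailment decision is made.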


Learning Consumer Preferences from Bundle Sales Data

Sep 11, 2022
Ningyuan Chen, Setareh Farajollahzadeh, Guan Wang


Product bundling is a common selling mechanism in online retailing. To set profitable bundle prices, the seller needs to learn consumer preferences from transaction data. When customers purchase bundles or multiple products, classical methods such as discrete choice models cannot be used to estimate customers' valuations. In this paper, we propose an approach to learn the distribution of consumers' valuations of the products from bundle sales data. The approach reduces the problem to one of estimation from samples censored by polyhedral regions. Using the EM algorithm and Monte Carlo simulation, our approach can recover the distribution of consumers' valuations. The framework allows for unobserved no-purchases and clustered market segments. We provide theoretical results on the identifiability of the probability model and the convergence of the EM algorithm. The performance of the approach is also demonstrated numerically.
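A Monte Carlo E-step for the censored-valuation idea can be sketched as follows; the Gaussian valuation model and the single polyhedral constraint v1 + v2 >= price are illustrative assumptions, not the paper's full specification.

```python
import random

# Hedged Monte Carlo E-step sketch: a bundle purchase at a given price only
# reveals that v1 + v2 >= price (a polyhedral region), so we average accepted
# draws from the current valuation model to update its mean. The Gaussian
# model and the single constraint are assumptions for illustration.

def em_mean_step(mu, price, n=20000, sigma=1.0, rng=random.Random(0)):
    accepted = []
    for _ in range(n):
        v = [rng.gauss(m, sigma) for m in mu]
        if v[0] + v[1] >= price:          # draw is consistent with the purchase
            accepted.append(v)
    # M-step for the mean: average over draws from the censored region
    return [sum(v[i] for v in accepted) / len(accepted) for i in range(2)]

mu = [0.0, 0.0]
for _ in range(5):
    mu = em_mean_step(mu, price=1.0)
```

Because every observed purchase lies in the half-space above the price, the fitted means are pulled upward relative to the unconditional starting point, which is exactly the censoring effect the estimation has to correct for.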
