Yuxuan Chen

NetGPT: A Native-AI Network Architecture Beyond Provisioning Personalized Generative Services

Jul 23, 2023
Yuxuan Chen, Rongpeng Li, Zhifeng Zhao, Chenghui Peng, Jianjun Wu, Ekram Hossain, Honggang Zhang

Large language models (LLMs) have achieved tremendous success in empowering daily life with generative information, and personalizing LLMs could further broaden their applications through better alignment with human intents. Towards personalized generative services, a collaborative cloud-edge methodology is promising, as it facilitates the effective orchestration of heterogeneous distributed communication and computing resources. In this article, after discussing the pros and cons of several candidate cloud-edge collaboration techniques, we put forward NetGPT, which deploys appropriately sized LLMs at the edge and in the cloud according to their computing capacity. In addition, edge LLMs can efficiently leverage location-based information for personalized prompt completion, thus improving the interaction with cloud LLMs. After deploying representative open-source LLMs (e.g., the GPT-2-base and LLaMA models) at the edge and in the cloud, we demonstrate the feasibility of NetGPT using low-rank adaptation (LoRA)-based lightweight fine-tuning. Subsequently, we highlight the essential changes required in a native artificial intelligence (AI) network architecture for NetGPT, with special emphasis on deeper integration of communication and computing resources and careful calibration of the logical AI workflow. Furthermore, we demonstrate several by-product benefits of NetGPT, given the edge LLM's capability to predict trends and infer intents, which can lead to a unified solution for intelligent network management & orchestration. In a nutshell, we argue that NetGPT is a promising native-AI network architecture that goes beyond provisioning personalized generative services.
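
A minimal sketch of the LoRA-based lightweight fine-tuning mentioned above, using the Hugging Face transformers and peft libraries; the model name, target modules, and hyperparameters are illustrative assumptions, not the article's actual configuration.

```python
# Minimal LoRA fine-tuning sketch (illustrative hyperparameters).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Inject low-rank adapters into the attention projections; only the adapter
# weights (a small fraction of the model) are trained, which suits
# resource-constrained edge deployments.
lora_config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor for the adapter output
    target_modules=["c_attn"],  # GPT-2's fused query/key/value projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% trainable
```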

What to Learn: Features, Image Transformations, or Both?

Jun 22, 2023
Yuxuan Chen, Binbin Xu, Frederike Dümbgen, Timothy D. Barfoot

Long-term visual localization is an essential problem in robotics and computer vision, but it remains challenging due to the environmental appearance changes caused by lighting and seasons. While many existing works have attempted to solve it by directly learning invariant sparse keypoints and descriptors to match scenes, these approaches still struggle with adverse appearance changes. Recent developments in image transformations, such as neural style transfer, have emerged as an alternative for bridging such appearance gaps. In this work, we propose to combine an image transformation network and a feature-learning network to improve long-term localization performance. Given night-to-day image pairs, the image transformation network converts the night images into day-like conditions prior to feature matching; the feature network learns to detect keypoint locations with their associated descriptor values, which can be passed to a classical pose estimator to compute relative poses. We conduct various experiments to examine the effectiveness of combining style transfer with feature learning, as well as the associated training strategy, and show that such a combination greatly improves long-term localization performance.
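
A minimal sketch of the described two-network pipeline; transform_net, feature_net, and pose_estimator are hypothetical placeholders, not the authors' code.

```python
import torch

def localize(night_img: torch.Tensor, day_img: torch.Tensor,
             transform_net, feature_net, pose_estimator):
    """Estimate the relative pose between a night query and a day map image."""
    # 1. Translate the night image into day-like conditions (style transfer).
    day_like = transform_net(night_img)
    # 2. Detect keypoints and compute descriptors in both images.
    kpts_query, desc_query = feature_net(day_like)
    kpts_map, desc_map = feature_net(day_img)
    # 3. Match descriptors by nearest neighbour and hand the correspondences
    #    to a classical (e.g., RANSAC-based) pose estimator.
    nn_idx = torch.cdist(desc_query, desc_map).argmin(dim=1)
    return pose_estimator(kpts_query, kpts_map[nn_idx])
```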

* IROS 2023. arXiv admin note: substantial text overlap with arXiv:2212.00122 

Towards the Universal Defense for Query-Based Audio Adversarial Attacks

Apr 20, 2023
Feng Guo, Zheng Sun, Yuxuan Chen, Lei Ju

Recent studies show that deep learning-based automatic speech recognition (ASR) systems are vulnerable to adversarial examples (AEs), which add a small amount of noise to the original audio. These AE attacks pose new challenges to deep learning security and have raised significant concerns about deploying ASR systems and devices. Existing defense methods are either limited in application or defend only against the outcome of an attack rather than its process. In this work, we propose a novel method to infer the adversary's intent and discover audio adversarial examples based on the AE generation process. The insight behind this method is the observation that many existing audio AE attacks are query-based, meaning the adversary must send continuous and similar queries to the target ASR model during AE generation. Inspired by this observation, we propose a memory mechanism that adopts audio fingerprinting to analyze the similarity between the current query and a fixed-length history of past queries. This allows us to identify when a sequence of queries is likely being used to generate audio AEs. Through extensive evaluation on four state-of-the-art audio AE attacks, we demonstrate that our defense identifies the adversary's intent with over 90% accuracy on average. With careful regard for robustness evaluation, we also analyze our proposed defense and its strength in withstanding two adaptive attacks. Finally, our scheme works out of the box and is directly compatible with any ensemble of ASR defense models to uncover audio AE attacks effectively without model retraining.
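
A minimal sketch of the query-memory idea, assuming a separate fingerprinting step that maps each audio query to a vector; the memory size, similarity threshold, and flagging rule are illustrative assumptions, not the paper's exact implementation.

```python
from collections import deque
import numpy as np

class QueryMonitor:
    """Flags query sequences whose fingerprints are suspiciously similar."""

    def __init__(self, memory_size=100, sim_threshold=0.9, flag_count=10):
        self.memory = deque(maxlen=memory_size)  # fingerprints of recent queries
        self.sim_threshold = sim_threshold
        self.flag_count = flag_count

    def is_suspicious(self, fingerprint: np.ndarray) -> bool:
        # Cosine similarity between the new query and each remembered query.
        sims = [fingerprint @ m /
                (np.linalg.norm(fingerprint) * np.linalg.norm(m) + 1e-9)
                for m in self.memory]
        suspicious = sum(s > self.sim_threshold for s in sims) >= self.flag_count
        self.memory.append(fingerprint)
        return suspicious
```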

* Submitted to Cybersecurity journal 

Towards the Transferable Audio Adversarial Attack via Ensemble Methods

Apr 18, 2023
Feng Guo, Zheng Sun, Yuxuan Chen, Lei Ju

In recent years, deep learning (DL) models have achieved significant progress in many domains, such as autonomous driving, facial recognition, and speech recognition. However, the vulnerability of deep learning models to adversarial attacks has raised serious concerns in the community because of their insufficient robustness and generalization. Transferable attacks, in particular, have become a prominent method for black-box attacks. In this work, we explore the potential factors that impact the transferability of adversarial examples (AEs) in DL-based speech recognition. We also discuss the vulnerability of different DL systems and the irregular nature of decision boundaries. Our results show a remarkable difference in the transferability of AEs between speech and images, with data relevance being low for images but high for speech recognition. Motivated by dropout-based ensemble approaches, we propose random gradient ensembles and dynamic gradient-weighted ensembles, and we evaluate the impact of these ensembles on the transferability of AEs. The results show that the AEs created by both approaches transfer successfully to the black-box API.
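
A minimal PyTorch sketch of the random gradient ensemble idea: each attack iteration averages the loss gradient over a randomly drawn subset of surrogate models. The models, loss function, and step size are placeholder assumptions, not the paper's setup.

```python
import random
import torch

def random_ensemble_step(x_adv, target, models, loss_fn, step=1e-3, k=2):
    """One FGSM-like step using a random subset of surrogate models."""
    x_adv = x_adv.clone().detach().requires_grad_(True)
    subset = random.sample(models, k)  # fresh random ensemble each iteration
    loss = sum(loss_fn(m(x_adv), target) for m in subset) / k
    loss.backward()
    # Move the example in the direction that increases the averaged loss.
    return (x_adv + step * x_adv.grad.sign()).detach()
```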

* Submitted to Cybersecurity journal 2023 

Self-Supervised Feature Learning for Long-Term Metric Visual Localization

Nov 30, 2022
Yuxuan Chen, Timothy D. Barfoot

Visual localization is the task of estimating the camera pose in a known scene, an essential problem in robotics and computer vision. However, long-term visual localization remains a challenge due to the environmental appearance changes caused by lighting and seasons. While techniques exist to address appearance changes using neural networks, these methods typically require ground-truth pose information to generate accurate image correspondences or to act as a supervisory signal during training. In this paper, we present a novel self-supervised feature learning framework for metric visual localization. We use a sequence-based image matching algorithm across different sequences of images (i.e., experiences) to generate image correspondences without ground-truth labels. We can then sample image pairs to train a deep neural network that learns sparse features with associated descriptors and scores without ground-truth pose supervision. The learned features can be used together with a classical pose estimator for visual stereo localization. We validate the learned features by integrating them with an existing Visual Teach & Repeat pipeline to perform closed-loop localization experiments under different lighting conditions over a total of 22.4 km.
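
A minimal sketch of the label-free pair generation, where sequence_matcher stands in for the sequence-based image matching algorithm (a hypothetical placeholder).

```python
def build_training_pairs(experiences, sequence_matcher):
    """Pair images of the same place across runs, without ground-truth poses."""
    pairs = []
    for run_a, run_b in zip(experiences, experiences[1:]):
        # Align the two image sequences, yielding index correspondences
        # between frames that view the same place under different conditions.
        for i, j in sequence_matcher(run_a, run_b):
            pairs.append((run_a[i], run_b[j]))
    return pairs
```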

* IEEE RA-L 2023 

Multilingual Relation Classification via Efficient and Effective Prompting

Oct 26, 2022
Yuxuan Chen, David Harbecke, Leonhard Hennig

Prompting pre-trained language models has achieved impressive performance on various NLP tasks, especially in low-data regimes. Despite the success of prompting in monolingual settings, applying prompt-based methods in multilingual scenarios has been limited to a narrow set of tasks, due to the high cost of handcrafting multilingual prompts. In this paper, we present the first work on prompt-based multilingual relation classification (RC), introducing an efficient and effective method that constructs prompts from relation triples and requires only minimal translation of the class labels. We evaluate its performance in fully supervised, few-shot, and zero-shot scenarios, and analyze its effectiveness across 14 languages, prompt variants, and English-task training in cross-lingual settings. We find that in both fully supervised and few-shot scenarios, our prompt method beats competitive baselines: fine-tuning XLM-R_EM and null prompts. It also outperforms the random baseline by a large margin in zero-shot experiments. Our method requires little in-language knowledge and can be used as a strong baseline for similar multilingual classification tasks.
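
A minimal sketch of constructing a cloze-style RC prompt from a relation triple; the template, the tiny label-translation table, and the plm_score callable are illustrative assumptions, not the paper's exact prompts.

```python
# Only the class labels need translating; the template is language-agnostic.
LABEL_VERBALIZATIONS = {
    ("founded_by", "en"): "founded by",
    ("founded_by", "de"): "gegründet von",
}

def build_prompt(sentence: str, head: str, tail: str) -> str:
    # The PLM scores candidate relation verbalizations at the <mask> slot.
    return f"{sentence} {head} <mask> {tail}."

def score_relation(plm_score, prompt: str, label: str, lang: str) -> float:
    # plm_score(prompt, filler) is a hypothetical callable returning the
    # PLM's likelihood of `filler` occupying the <mask> position.
    return plm_score(prompt, LABEL_VERBALIZATIONS[(label, lang)])
```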

* EMNLP 2022 

SLIC: Self-Supervised Learning with Iterative Clustering for Human Action Videos

Jun 25, 2022
Salar Hosseini Khorasgani, Yuxuan Chen, Florian Shkurti

Self-supervised methods have significantly closed the gap with end-to-end supervised learning for image classification. In the case of human action videos, however, where both appearance and motion are significant factors of variation, this gap remains large. One of the key reasons is that sampling pairs of similar video clips, a required step for many self-supervised contrastive learning methods, is currently done conservatively to avoid false positives. A typical assumption is that similar clips occur only temporally close within a single video, leading to insufficient examples of motion similarity. To mitigate this, we propose SLIC, a clustering-based self-supervised contrastive learning method for human action videos. Our key contribution is to improve upon traditional intra-video positive sampling by using iterative clustering to group similar video instances. This enables our method to leverage pseudo-labels from the cluster assignments to sample harder positives and negatives. SLIC outperforms state-of-the-art video retrieval baselines by +15.4% on top-1 recall on UCF101 and by +5.7% when directly transferred to HMDB51. With end-to-end finetuning for action classification, SLIC achieves 83.2% top-1 accuracy (+0.8%) on UCF101 and 54.5% on HMDB51 (+1.6%). SLIC is also competitive with the state of the art in action classification after self-supervised pretraining on Kinetics400.
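
A minimal sketch of clustering-based positive sampling over precomputed clip embeddings; the cluster count and sampling scheme are illustrative assumptions rather than SLIC's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def sample_positive_pairs(embeddings: np.ndarray, n_clusters: int = 100):
    """Use cluster assignments as pseudo-labels to pair similar clips."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)
    rng = np.random.default_rng()
    pairs = []
    for c in range(n_clusters):
        members = np.flatnonzero(labels == c)
        if len(members) >= 2:
            i, j = rng.choice(members, size=2, replace=False)
            pairs.append((int(i), int(j)))  # same-cluster clips act as positives
    return pairs
```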

* CVPR 2022 

Why only Micro-F1? Class Weighting of Measures for Relation Classification

May 19, 2022
David Harbecke, Yuxuan Chen, Leonhard Hennig, Christoph Alt

Relation classification models are conventionally evaluated using only a single measure, e.g., micro-F1, macro-F1, or AUC. In this work, we analyze weighting schemes, such as micro and macro, for imbalanced datasets. We introduce a framework for weighting schemes in which existing schemes are extremes, and we propose two new intermediate schemes. We show that reporting results under different weighting schemes better highlights the strengths and weaknesses of a model.
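
A minimal sketch of the interpolation idea: weight per-class F1 scores by support raised to an exponent, so that equal weighting and support weighting become the two extremes; this formulation is illustrative, not the paper's exact framework.

```python
import numpy as np

def weighted_f1(per_class_f1: np.ndarray, support: np.ndarray, alpha: float) -> float:
    """alpha=0 weights classes equally (macro-style); alpha=1 weights by
    support; intermediate alpha values interpolate between the two."""
    w = support.astype(float) ** alpha
    return float((w / w.sum()) @ per_class_f1)
```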

* NLP Power! The First Workshop on Efficient Benchmarking in NLP (ACL 2022) 

A Comparative Study of Pre-trained Encoders for Low-Resource Named Entity Recognition

Apr 11, 2022
Yuxuan Chen, Jonas Mikkelsen, Arne Binder, Christoph Alt, Leonhard Hennig

Pre-trained language models (PLMs) are effective components of few-shot named entity recognition (NER) approaches when augmented with continued pre-training on task-specific out-of-domain data or fine-tuning on in-domain data. However, their performance in low-resource scenarios, where such data is not available, remains an open question. We introduce an encoder evaluation framework and use it to systematically compare the performance of state-of-the-art pre-trained representations on the task of low-resource NER. We analyze a wide range of encoders pre-trained with different strategies, model architectures, intermediate-task fine-tuning, and contrastive learning. Our experimental results across ten benchmark NER datasets in English and German show that encoder performance varies significantly, suggesting that the choice of encoder for a specific low-resource scenario needs to be evaluated carefully.
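
A minimal sketch of the systematic comparison loop; the encoder names, dataset objects, and evaluate_ner callable are hypothetical placeholders, not the paper's framework.

```python
ENCODERS = ["bert-base-cased", "xlm-roberta-base", "microsoft/deberta-v3-base"]

def compare_encoders(encoders, datasets, evaluate_ner):
    """Run the same low-resource NER protocol for every encoder/dataset pair."""
    results = {}
    for name in encoders:
        for ds in datasets:
            # An identical evaluation protocol keeps scores directly comparable.
            results[(name, ds.name)] = evaluate_ner(encoder=name, dataset=ds)
    return results
```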

* Accepted at Repl4NLP 2022 (ACL) 