Ming Shen


Blockage Prediction in Directional mmWave Links Using Liquid Time Constant Network

Jun 08, 2023
Martin H. Nielsen, Chia-Yi Yeh, Ming Shen, Muriel Médard

We propose to use a liquid time-constant (LTC) network to predict the future blockage status of a millimeter-wave (mmWave) link using only the received signal power as the system input. The LTC network is based on a biologically inspired ordinary differential equation (ODE) system and is specialized for near-future prediction from time-sequence observations. Using an experimental dataset at 60 GHz, we show that the proposed use of LTC can reliably predict both the occurrence and the duration of blockage without the need for scenario-specific data. The results show that the proposed LTC predicts with upwards of 97.85% accuracy without prior knowledge of the outdoor scenario or any retraining/tuning. These results highlight the promising gains of using LTC networks to predict time-series-dependent signals, which can lead to more reliable, low-latency communication.
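
For readers unfamiliar with LTC dynamics, a minimal sketch of the idea is given below: a per-neuron time constant modulated by an input-dependent gate, integrated with a fused semi-implicit Euler step and rolled over the received-power sequence. The layer sizes and the binary blockage head are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn

class LTCCell(nn.Module):
    """Liquid time-constant cell: the hidden state follows
    dx/dt = -(1/tau + f(x, u)) * x + f(x, u) * A,
    integrated here with one fused semi-implicit Euler step."""
    def __init__(self, input_size, hidden_size, dt=1.0):
        super().__init__()
        self.dt = dt
        self.tau = nn.Parameter(torch.ones(hidden_size))       # per-neuron time constant
        self.A = nn.Parameter(torch.randn(hidden_size))        # target/bias term of the ODE
        self.gate = nn.Linear(input_size + hidden_size, hidden_size)

    def forward(self, u, x):
        f = torch.sigmoid(self.gate(torch.cat([u, x], dim=-1)))    # bounded, input-dependent gate
        # Fused solver step: x <- (x + dt * f * A) / (1 + dt * (1/tau + f))
        return (x + self.dt * f * self.A) / (1.0 + self.dt * (1.0 / self.tau + f))

class BlockagePredictor(nn.Module):
    """Rolls the LTC cell over a window of received-power samples and
    outputs the probability of an upcoming blockage (assumed head)."""
    def __init__(self, hidden_size=32):
        super().__init__()
        self.hidden_size = hidden_size
        self.cell = LTCCell(input_size=1, hidden_size=hidden_size)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, power_seq):                  # (batch, time, 1) received power in dB
        x = power_seq.new_zeros(power_seq.size(0), self.hidden_size)
        for t in range(power_seq.size(1)):
            x = self.cell(power_seq[:, t, :], x)
        return torch.sigmoid(self.head(x))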

* 2 pages, pre-print for IRMMW 2023 conference 

Robust and Efficient Fault Diagnosis of mm-Wave Active Phased Arrays using Baseband Signal

Jun 07, 2023
Martin H. Nielsen, Yufeng Zhang, Changbin Xue, Jian Ren, Yingzeng Yin, Ming Shen, Gert F. Pedersen

One key communication block in 5G and 6G radios is the active phased array (APA). To ensure reliable operation, efficient and timely on-site fault diagnosis of APAs is crucial. To date, fault diagnosis has relied on measuring frequency-domain radiation patterns using costly equipment and multiple strictly controlled measurement probes, which is time-consuming, complex, and therefore infeasible for on-site deployment. This paper proposes a novel method that exploits a deep neural network (DNN) tailored to extract the features hidden in the baseband in-phase and quadrature signals for classifying different faults. It requires only a single probe at one measurement point for fast and accurate diagnosis of faulty elements and components in APAs. The proposed method is validated on a commercial 28 GHz APA. Accuracies of 99% and 80% are demonstrated for single- and multi-element failure detection, respectively. Three test scenarios are investigated: on-off antenna elements, phase variations, and magnitude attenuation variations. At a low signal-to-noise ratio of 4 dB, fault detection accuracy remains stable above 90%. All of this is achieved with a detection time on the order of milliseconds (e.g., 6 ms), showing high potential for on-site deployment.
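
A minimal sketch of the classification setup follows, assuming a fixed-length window of baseband I/Q samples as input and a handful of fault classes as output; the 1-D CNN architecture and the class labels are illustrative, not the network described in the paper.

import torch
import torch.nn as nn

class IQFaultClassifier(nn.Module):
    """Classifies APA fault types from a single captured window of
    baseband I/Q samples (2 input channels: I and Q)."""
    def __init__(self, num_classes=5, window=1024):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Assumed classes: healthy, element-off, phase error, gain error, multi-fault
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, iq):                         # iq: (batch, 2, window)
        return self.classifier(self.features(iq).squeeze(-1))

# Usage: one probe measurement -> one I/Q window -> fault class logits
model = IQFaultClassifier()
iq_window = torch.randn(1, 2, 1024)                # stand-in for a captured I/Q window
fault_logits = model(iq_window)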

* in IEEE Transactions on Antennas and Propagation, vol. 70, no. 7, pp. 5044-5053, July 2022  
* 10 pages 

Simple Yet Effective Synthetic Dataset Construction for Unsupervised Opinion Summarization

Mar 21, 2023
Ming Shen, Jie Ma, Shuai Wang, Yogarshi Vyas, Kalpit Dixit, Miguel Ballesteros, Yassine Benajiba

Opinion summarization provides an important solution for summarizing the opinions expressed across a large number of reviews. However, generating aspect-specific and general summaries is challenging due to the lack of annotated data. In this work, we propose two simple yet effective unsupervised approaches to generate both aspect-specific and general opinion summaries by training on synthetic datasets constructed from aspect-related review contents. Our first approach, Seed Words Based Leave-One-Out (SW-LOO), identifies aspect-related portions of reviews simply by exact-matching aspect seed words and outperforms existing methods by 3.4 ROUGE-L points on SPACE and 0.5 ROUGE-1 point on OPOSUM+ for aspect-specific opinion summarization. Our second approach, Natural Language Inference Based Leave-One-Out (NLI-LOO), identifies aspect-related sentences using an NLI model in a more general setting without seed words; it outperforms existing approaches by 1.2 ROUGE-L points on SPACE for aspect-specific opinion summarization and remains competitive on other metrics.
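
A minimal sketch of the SW-LOO construction is shown below, under assumed seed words and a simple sentence-level exact-match rule; the paper's filtering and sampling details may differ.

import re

ASPECT_SEEDS = {"cleanliness": {"clean", "dirty", "spotless", "dusty"}}   # assumed seed words

def aspect_sentences(review, seeds):
    """Keep sentences that exact-match at least one aspect seed word."""
    sents = re.split(r"(?<=[.!?])\s+", review)
    return [s for s in sents if seeds & set(re.findall(r"[a-z']+", s.lower()))]

def sw_loo_pairs(reviews, aspect):
    """Leave one review out: its aspect-related sentences become the
    pseudo-summary target; the aspect-related sentences of the remaining
    reviews form the synthetic source document."""
    seeds = ASPECT_SEEDS[aspect]
    pairs = []
    for i, held_out in enumerate(reviews):
        target = " ".join(aspect_sentences(held_out, seeds))
        source = " ".join(s for j, r in enumerate(reviews) if j != i
                          for s in aspect_sentences(r, seeds))
        if target and source:
            pairs.append({"source": source, "target": target})
    return pairs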

* EACL 2023 Findings 

Methods and Mechanisms for Interactive Novelty Handling in Adversarial Environments

Mar 06, 2023
Tung Thai, Ming Shen, Mayank Garg, Ayush Kalani, Nakul Vaidya, Utkarsh Soni, Mudit Verma, Sriram Gopalakrishnan, Neeraj Varshney, Chitta Baral, Subbarao Kambhampati, Jivko Sinapov, Matthias Scheutz

Agents operating in open-world domains must learn to detect, characterize, and accommodate novelties in order to guarantee satisfactory task performance. Certain novelties (e.g., changes in environment dynamics) can interfere with performance or prevent agents from accomplishing task goals altogether. In this paper, we introduce general methods and architectural mechanisms for detecting and characterizing different types of novelties, and for building an appropriate adaptive model to accommodate them using logical representations and reasoning methods. We demonstrate the effectiveness of the proposed methods in third-party evaluations in the adversarial multi-agent board game Monopoly. The results show high novelty detection and accommodation rates across a variety of novelty types, including changes to the rules of the game as well as changes to the agent's action capabilities.
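
The abstract does not spell out the detectors, but a common pattern is to flag observations that contradict the agent's current model of the environment and then re-estimate the violated rule. The sketch below illustrates that pattern with an assumed Monopoly-style rent rule; it is not the paper's actual logical representation or accommodation mechanism.

from dataclasses import dataclass

@dataclass
class Transition:
    property_price: int
    rent_paid: int        # what was actually observed
    expected_rent: int    # what the agent's current rule predicts

def detect_novelty(transition: Transition, tolerance: int = 0) -> bool:
    """Flag a novelty when an observed outcome contradicts the agent's
    current model of the environment dynamics (here: the rent rule)."""
    return abs(transition.rent_paid - transition.expected_rent) > tolerance

def accommodate(expected_rent: int, observed_rents: list) -> int:
    """Naive accommodation: re-estimate the violated rule's parameter
    from post-novelty observations."""
    return round(sum(observed_rents) / len(observed_rents)) if observed_rents else expected_rent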

Unsupervised Pronoun Resolution via Masked Noun-Phrase Prediction

May 28, 2021
Ming Shen, Pratyay Banerjee, Chitta Baral

In this work, we propose Masked Noun-Phrase Prediction (MNPP), a pre-training strategy to tackle pronoun resolution in a fully unsupervised setting. Firstly, We evaluate our pre-trained model on various pronoun resolution datasets without any finetuning. Our method outperforms all previous unsupervised methods on all datasets by large margins. Secondly, we proceed to a few-shot setting where we finetune our pre-trained model on WinoGrande-S and XS separately. Our method outperforms RoBERTa-large baseline with large margins, meanwhile, achieving a higher AUC score after further finetuning on the remaining three official splits of WinoGrande.
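
A hedged sketch of how masked noun-phrase scoring can resolve a pronoun without finetuning: replace the pronoun with mask tokens and score each candidate noun phrase at those positions. The off-the-shelf roberta-base model stands in for the paper's MNPP-pretrained model and is an assumption for illustration only.

import re
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("roberta-base")
mlm = AutoModelForMaskedLM.from_pretrained("roberta-base").eval()

def candidate_score(sentence, pronoun, candidate):
    """Replace the pronoun with mask tokens and return the mean log-prob
    the masked LM assigns to the candidate's tokens at those positions."""
    cand_ids = tok(" " + candidate, add_special_tokens=False)["input_ids"]
    masks = " ".join([tok.mask_token] * len(cand_ids))
    masked = re.sub(rf"\b{re.escape(pronoun)}\b", masks, sentence, count=1)
    enc = tok(masked, return_tensors="pt")
    with torch.no_grad():
        logits = mlm(**enc).logits[0]
    mask_pos = (enc["input_ids"][0] == tok.mask_token_id).nonzero(as_tuple=True)[0]
    logp = torch.log_softmax(logits[mask_pos], dim=-1)
    return logp[torch.arange(len(cand_ids)), torch.tensor(cand_ids)].mean().item()

sentence = "The trophy does not fit into the suitcase because it is too small."
best = max(["the trophy", "the suitcase"],
           key=lambda c: candidate_score(sentence, "it", c))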

* Accepted to ACL 2021 

TriggerNER: Learning with Entity Triggers as Explanations for Named Entity Recognition

Apr 24, 2020
Bill Yuchen Lin, Dong-Ho Lee, Ming Shen, Ryan Moreno, Xiao Huang, Prashant Shiralkar, Xiang Ren

Training neural models for named entity recognition (NER) in a new domain often requires additional human annotations (e.g., tens of thousands of labeled instances) that are usually expensive and time-consuming to collect. Thus, a crucial research question is how to obtain supervision in a cost-effective way. In this paper, we introduce "entity triggers," an effective proxy for human explanations that facilitates label-efficient learning of NER models. An entity trigger is defined as a group of words in a sentence that helps explain why humans would recognize an entity in that sentence. We crowd-sourced 14k entity triggers for two well-studied NER datasets. Our proposed model, the Trigger Matching Network, jointly learns trigger representations and a soft matching module with self-attention, so that it can easily generalize to unseen sentences for tagging. Our framework is significantly more cost-effective than traditional neural NER frameworks. Experiments show that using only 20% of the trigger-annotated sentences yields performance comparable to using 70% of conventionally annotated sentences.
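
A rough sketch of the trigger-matching idea: attend from a pooled sentence representation over a table of learned trigger representations and condition the tagger on the matched vector. The encoder, pooling, and layer sizes are assumptions rather than the paper's exact architecture.

import torch
import torch.nn as nn

class TriggerMatchingTagger(nn.Module):
    def __init__(self, vocab_size, num_tags, dim=128, num_triggers=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.trigger_table = nn.Parameter(torch.randn(num_triggers, 2 * dim))  # learned trigger reps
        self.attn = nn.MultiheadAttention(2 * dim, num_heads=4, batch_first=True)
        self.tagger = nn.Linear(4 * dim, num_tags)

    def forward(self, token_ids):                        # (batch, seq)
        h, _ = self.encoder(self.embed(token_ids))       # (batch, seq, 2*dim)
        sent = h.mean(dim=1, keepdim=True)               # pooled sentence query
        # Soft-match the sentence against the trigger table with attention.
        trig = self.trigger_table.unsqueeze(0).expand(h.size(0), -1, -1)
        matched, _ = self.attn(sent, trig, trig)         # (batch, 1, 2*dim)
        matched = matched.expand(-1, h.size(1), -1)
        return self.tagger(torch.cat([h, matched], dim=-1))   # per-token tag logits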

* Accepted to ACL 2020. Camera-ready version. The first two authors contributed equally. Code and data: https://github.com/INK-USC/TriggerNER 

CommonGen: A Constrained Text Generation Dataset Towards Generative Commonsense Reasoning

Nov 09, 2019
Bill Yuchen Lin, Ming Shen, Yu Xing, Pei Zhou, Xiang Ren

Rational humans can generate sentences that cover a certain set of concepts while describing natural and common scenes. For example, given {apple(noun), tree(noun), pick(verb)}, humans can easily come up with scenes like "a boy is picking an apple from a tree" via their generative commonsense reasoning ability. However, we find that this capacity has not been well learned by machines. Most prior work in machine commonsense focuses on discriminative reasoning tasks in a multiple-choice question answering setting. Herein, we present CommonGen: a challenging dataset for testing generative commonsense reasoning through a constrained text generation task. We collect 37k concept-sets as inputs and 90k human-written sentences as associated outputs. We also provide high-quality rationales behind the reasoning process for the development and test sets from the human annotators. We demonstrate the difficulty of the task by examining a wide range of sequence generation methods with both automatic metrics and human evaluation. The state-of-the-art pre-trained generation model, UniLM, still falls far short of human performance on this task. Our data and code are publicly available at http://inklab.usc.edu/CommonGen/ .
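
To make the task interface concrete, the sketch below generates a scene sentence from a concept set. The t5-base checkpoint and the prompt format are assumptions used only to illustrate the task; they are not the UniLM baseline evaluated in the paper.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

def generate_scene(concepts):
    """Generate one sentence that tries to cover every input concept."""
    prompt = "generate a sentence with: " + " ".join(concepts)
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=32, num_beams=5)
    return tok.decode(out[0], skip_special_tokens=True)

print(generate_scene(["apple", "tree", "pick"]))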

* Work in progress; 10 pages, Table 4 is on the last page; the data and code can be found at http://inklab.usc.edu/CommonGen/ 