
Ruihong Huang


Semi-supervised News Discourse Profiling with Contrastive Learning

Sep 20, 2023
Ming Li, Ruihong Huang

News Discourse Profiling seeks to scrutinize the event-related role of each sentence in a news article and has been proven useful across various downstream applications. Specifically, within the context of a given news discourse, each sentence is assigned to a pre-defined category contingent upon its depiction of the news event structure. However, existing approaches suffer from an inadequacy of available human-annotated data, due to the laborious and time-intensive nature of generating discourse-level annotations. In this paper, we present a novel approach, denoted as Intra-document Contrastive Learning with Distillation (ICLD), for addressing the news discourse profiling task, capitalizing on its unique structural characteristics. Notably, we are the first to apply a semi-supervised methodology to this task, and evaluations demonstrate the effectiveness of the presented approach.

* IJCNLP-AACL 2023 
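As a rough illustration of the kind of intra-document contrastive objective the abstract alludes to (the exact ICLD formulation and its distillation component are not detailed here), the sketch below contrasts sentence embeddings within a single article, treating sentences that share a discourse-role label as positives; all names and the temperature value are assumptions.

```python
# Illustrative sketch only: an intra-document contrastive loss over sentence
# embeddings, treating sentences that share a discourse role within the same
# article as positives. The actual ICLD objective and its distillation step
# are described in the paper; names here are hypothetical.
import torch
import torch.nn.functional as F

def intra_document_contrastive_loss(sent_emb: torch.Tensor,
                                    roles: torch.Tensor,
                                    temperature: float = 0.1) -> torch.Tensor:
    """sent_emb: [N, d] sentence embeddings from one article.
    roles: [N] integer discourse-role labels (pseudo-labels for unlabeled data)."""
    z = F.normalize(sent_emb, dim=-1)
    sim = z @ z.t() / temperature                      # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool)
    pos_mask = (roles.unsqueeze(0) == roles.unsqueeze(1)) & ~self_mask

    # log-softmax over all other sentences in the same document
    sim = sim.masked_fill(self_mask, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # average log-probability of the positives for each anchor that has at least one
    masked_log_prob = log_prob.masked_fill(~pos_mask, 0.0)
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0
    loss = -masked_log_prob.sum(dim=1)[valid] / pos_counts[valid]
    return loss.mean()

# Example: 6 sentences, 3 discourse roles
emb = torch.randn(6, 768)
labels = torch.tensor([0, 1, 0, 2, 1, 2])
print(intra_document_contrastive_loss(emb, labels))
```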

RST-style Discourse Parsing Guided by Document-level Content Structures

Sep 08, 2023
Ming Li, Ruihong Huang

Rhetorical Structure Theory based Discourse Parsing (RST-DP) explores how clauses, sentences, and large text spans compose a whole discourse and presents the rhetorical structure as a hierarchical tree. Existing RST parsing pipelines construct rhetorical structures without the knowledge of document-level content structures, which causes relatively low performance when predicting the discourse relations for large text spans. Recognizing the value of high-level content-related information in facilitating discourse relation recognition, we propose a novel pipeline for RST-DP that incorporates structure-aware news content sentence representations derived from the task of News Discourse Profiling. By incorporating only a few additional layers, this enhanced pipeline exhibits promising performance across various RST parsing metrics.
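A minimal sketch of how structure-aware sentence representations from News Discourse Profiling might be fused into EDU representations with only a few additional layers, as the abstract describes at a high level; the dimensions and gating design below are assumptions, not the paper's actual architecture.

```python
# A minimal sketch (assumptions: vector sizes, gating design) of fusing
# News Discourse Profiling sentence representations into EDU representations
# with a few extra layers; the paper describes this only at a high level.
import torch
import torch.nn as nn

class ContentAwareEDUEncoder(nn.Module):
    def __init__(self, edu_dim: int = 768, ndp_dim: int = 256):
        super().__init__()
        self.fuse = nn.Sequential(nn.Linear(edu_dim + ndp_dim, edu_dim), nn.Tanh())
        self.gate = nn.Linear(edu_dim + ndp_dim, edu_dim)

    def forward(self, edu_emb: torch.Tensor, sent_ndp_emb: torch.Tensor) -> torch.Tensor:
        """edu_emb: [num_edus, edu_dim]; sent_ndp_emb: [num_edus, ndp_dim],
        the NDP representation of the sentence containing each EDU."""
        joint = torch.cat([edu_emb, sent_ndp_emb], dim=-1)
        g = torch.sigmoid(self.gate(joint))            # how much content info to mix in
        return g * self.fuse(joint) + (1 - g) * edu_emb

edus = torch.randn(12, 768)
ndp = torch.randn(12, 256)
print(ContentAwareEDUEncoder()(edus, ndp).shape)       # torch.Size([12, 768])
```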


Composition-contrastive Learning for Sentence Embeddings

Jul 14, 2023
Sachin J. Chanchani, Ruihong Huang

Vector representations of natural language are ubiquitous in search applications. Recently, various methods based on contrastive learning have been proposed to learn textual representations from unlabelled data, by maximizing alignment between minimally-perturbed embeddings of the same text and encouraging a uniform distribution of embeddings across a broader corpus. In contrast, we propose maximizing alignment between texts and a composition of their phrasal constituents. We consider several realizations of this objective and elaborate on the impact on representations in each case. Experimental results on semantic textual similarity tasks show improvements over baselines that are comparable with state-of-the-art approaches. Moreover, this work is the first to do so without incurring costs in auxiliary training objectives or additional network parameters.

* ACL 2023 
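The sketch below shows one plausible realization of a composition-contrastive objective, in which the positive for each sentence embedding is the mean of its phrasal-constituent embeddings and other sentences in the batch serve as negatives; the constituent extraction, pooling choice, and loss variants studied in the paper may differ.

```python
# Illustrative only: one possible realization of a composition-contrastive
# objective, where the positive for each sentence embedding is the mean of
# the embeddings of its phrasal constituents and in-batch sentences act as
# negatives. The exact variants used in the paper may differ.
import torch
import torch.nn.functional as F

def composition_contrastive_loss(sent_emb, phrase_emb, temperature: float = 0.05):
    """sent_emb: [B, d]; phrase_emb: list of B tensors, each [n_phrases_i, d]."""
    anchors = F.normalize(sent_emb, dim=-1)
    positives = F.normalize(torch.stack([p.mean(dim=0) for p in phrase_emb]), dim=-1)
    logits = anchors @ positives.t() / temperature     # in-batch negatives
    targets = torch.arange(anchors.size(0))
    return F.cross_entropy(logits, targets)

batch_sents = torch.randn(4, 768)
batch_phrases = [torch.randn(n, 768) for n in (2, 3, 2, 4)]
print(composition_contrastive_loss(batch_sents, batch_phrases))
```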

HYTREL: Hypergraph-enhanced Tabular Data Representation Learning

Jul 14, 2023
Pei Chen, Soumajyoti Sarkar, Leonard Lausen, Balasubramaniam Srinivasan, Sheng Zha, Ruihong Huang, George Karypis

Language models pretrained on large collections of tabular data have demonstrated their effectiveness in several downstream tasks. However, many of these models do not take into account the row/column permutation invariances, hierarchical structure, etc. that exist in tabular data. To alleviate these limitations, we propose HYTREL, a tabular language model that captures the permutation invariances and three more structural properties of tabular data by using hypergraphs, where the table cells make up the nodes and the cells occurring together in each row, each column, and the entire table form three different types of hyperedges. We show that HYTREL is maximally invariant under certain conditions for tabular data, i.e., two tables obtain the same representations via HYTREL iff the two tables are identical up to permutations. Our empirical results demonstrate that HYTREL consistently outperforms other competitive baselines on four downstream tasks with minimal pretraining, illustrating the advantages of incorporating the inductive biases associated with tabular data into the representations. Finally, our qualitative analyses showcase that HYTREL can assimilate the table structures to generate robust representations for the cells, rows, columns, and the entire table.
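A small sketch of the hypergraph construction the abstract describes: every cell becomes a node, and cells are grouped into row, column, and whole-table hyperedges; the HYTREL encoder that operates on this structure is not reproduced here.

```python
# A small sketch of the hypergraph construction described in the abstract:
# every cell is a node, and cells are grouped into row, column, and whole-table
# hyperedges. The HYTREL encoder that runs over this structure is not shown.
from collections import defaultdict

def table_to_hypergraph(table):
    """table: list of rows, each a list of cell strings.
    Returns (nodes, hyperedges) where hyperedges maps an edge name to node ids."""
    nodes = {}                       # (row_idx, col_idx) -> node id
    hyperedges = defaultdict(list)   # edge name -> list of node ids
    for i, row in enumerate(table):
        for j, _cell in enumerate(row):
            nid = len(nodes)
            nodes[(i, j)] = nid
            hyperedges[f"row_{i}"].append(nid)     # row hyperedge
            hyperedges[f"col_{j}"].append(nid)     # column hyperedge
            hyperedges["table"].append(nid)        # whole-table hyperedge
    return nodes, dict(hyperedges)

toy = [["name", "year"], ["HYTREL", "2023"]]
nodes, edges = table_to_hypergraph(toy)
print(edges["row_0"], edges["col_1"], len(edges["table"]))
```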


Less is More: Simplifying Feature Extractors Prevents Overfitting for Neural Discourse Parsing Models

Oct 18, 2022
Ming Li, Sijing Yu, Ruihong Huang

Complex feature extractors are widely employed for building text representations. However, these complex feature extractors can lead to severe overfitting, especially when the training datasets are small, as is the case for several discourse parsing tasks. We therefore propose to remove additional feature extractors and rely only on a self-attention mechanism to exploit pretrained neural language models, in order to mitigate the overfitting problem. Experiments on three common discourse parsing tasks (News Discourse Profiling, Rhetorical Structure Theory based Discourse Parsing, and Penn Discourse Treebank based Discourse Parsing) show that, powered by recent pretrained language models, our simplified feature extractors generalize better while achieving comparable or even better system performance. The simplified feature extractors also have fewer learnable parameters and less processing time. Code will be released, and this simple yet effective model can serve as a better baseline for future research.
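A minimal sketch of the simplification described above: a single self-attention layer plus a linear classifier on top of pretrained language-model outputs, with no additional feature extractors; the layer sizes and wiring are assumptions.

```python
# A minimal sketch (layer sizes and wiring are assumptions) of dropping complex
# feature extractors and keeping only self-attention over pretrained
# language-model outputs before a classification layer.
import torch
import torch.nn as nn

class SelfAttentionClassifier(nn.Module):
    def __init__(self, hidden: int = 768, num_labels: int = 9, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, lm_states: torch.Tensor) -> torch.Tensor:
        """lm_states: [batch, seq_len, hidden] contextual states from a
        pretrained LM (e.g., one vector per sentence of a document)."""
        attended, _ = self.attn(lm_states, lm_states, lm_states)
        return self.classifier(attended)        # one label per position

states = torch.randn(2, 30, 768)                # e.g., 30 sentences per document
print(SelfAttentionClassifier()(states).shape)  # torch.Size([2, 30, 9])
```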


PARADE: A New Dataset for Paraphrase Identification Requiring Computer Science Domain Knowledge

Oct 08, 2020
Yun He, Zhuoer Wang, Yin Zhang, Ruihong Huang, James Caverlee

We present a new benchmark dataset called PARADE for paraphrase identification that requires specialized domain knowledge. PARADE contains paraphrases that overlap very little at the lexical and syntactic level but are semantically equivalent based on computer science domain knowledge, as well as non-paraphrases that overlap greatly at the lexical and syntactic level but are not semantically equivalent based on this domain knowledge. Experiments show that both state-of-the-art neural models and non-expert human annotators have poor performance on PARADE. For example, BERT after fine-tuning achieves an F1 score of 0.709, which is much lower than its performance on other paraphrase identification datasets. PARADE can serve as a resource for researchers interested in testing models that incorporate domain knowledge. We make our data and code freely available.

* Accepted by EMNLP 2020 as a regular long paper 
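For context, the BERT baseline reported in the abstract is a standard sentence-pair fine-tuning setup; the sketch below shows that setup in outline, with the example pair and hyperparameters being hypothetical rather than taken from PARADE.

```python
# Sketch of a standard sentence-pair fine-tuning setup like the BERT baseline
# reported in the abstract; the example pair, labels, and hyperparameters are
# hypothetical placeholders, not PARADE data.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

pair = ("a hash function maps data to fixed-size values",
        "hashing converts input of arbitrary size into a fixed-size code")
inputs = tokenizer(*pair, return_tensors="pt", truncation=True)
labels = torch.tensor([1])                  # 1 = paraphrase, 0 = not

outputs = model(**inputs, labels=labels)
outputs.loss.backward()                     # one training step inside a normal loop
print(outputs.logits.softmax(-1))
```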

Weakly-supervised Fine-grained Event Recognition on Social Media Texts for Disaster Management

Oct 04, 2020
Wenlin Yao, Cheng Zhang, Shiva Saravanan, Ruihong Huang, Ali Mostafavi

People increasingly use social media to report emergencies, seek help or share information during disasters, which makes social networks an important tool for disaster management. To meet these time-critical needs, we present a weakly supervised approach for rapidly building high-quality classifiers that label each individual Twitter message with fine-grained event categories. Most importantly, we propose a novel method to create high-quality labeled data in a timely manner that automatically clusters tweets containing an event keyword and asks a domain expert to disambiguate event word senses and label clusters quickly. In addition, to process extremely noisy and often rather short user-generated messages, we enrich tweet representations using preceding context tweets and reply tweets in building event recognition classifiers. The evaluation on two hurricanes, Harvey and Florence, shows that using only 1-2 person-hours of human supervision, the rapidly trained weakly supervised classifiers outperform supervised classifiers trained using more than ten thousand annotated tweets created in over 50 person-hours.

* In Proceedings of the AAAI 2020 (AI for Social Impact Track). Link: https://aaai.org/ojs/index.php/AAAI/article/view/5391 
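The labeling-acceleration step can be pictured as follows: collect tweets containing an event keyword, cluster them, and let a domain expert label whole clusters; the TF-IDF features and KMeans below are stand-ins for illustration, and the paper's tweet enrichment with context and reply tweets is not shown.

```python
# Illustrative sketch of the labeling-acceleration step: collect tweets that
# contain an event keyword and cluster them so a domain expert can label whole
# clusters at once. TF-IDF + KMeans are stand-ins for the paper's clustering.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

tweets = [
    "Water is rising fast on Main St, need rescue",
    "Rescue boats dispatched to Main St flooding",
    "Donating supplies at the shelter downtown",
    "Shelter downtown accepting food and water donations",
]
keyword = "rescue"
candidates = [t for t in tweets if keyword in t.lower()]

vecs = TfidfVectorizer().fit_transform(candidates)
clusters = KMeans(n_clusters=1, n_init=10).fit_predict(vecs)
for tweet, c in zip(candidates, clusters):
    print(c, tweet)    # expert reviews each cluster and assigns a fine-grained label
```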

In Plain Sight: Media Bias Through the Lens of Factual Reporting

Sep 05, 2019
Lisa Fan, Marshall White, Eva Sharma, Ruisi Su, Prafulla Kumar Choubey, Ruihong Huang, Lu Wang

The increasing prevalence of political bias in news media calls for greater public awareness of it, as well as robust methods for its detection. While prior work in NLP has primarily focused on the lexical bias captured by linguistic attributes such as word choice and syntax, other types of bias stem from the actual content selected for inclusion in the text. In this work, we investigate the effects of informational bias: factual content that can nevertheless be deployed to sway reader opinion. We first produce a new dataset, BASIL, of 300 news articles annotated with 1,727 bias spans and find evidence that informational bias appears in news articles more frequently than lexical bias. We further study our annotations to observe how informational bias surfaces in news articles by different media outlets. Lastly, a baseline model for informational bias prediction is presented by fine-tuning BERT on our labeled data, indicating the challenges of the task and future directions.

* To appear as a short paper in EMNLP 2019 

Improving Dialogue State Tracking by Discerning the Relevant Context

Apr 04, 2019
Sanuj Sharma, Prafulla Kumar Choubey, Ruihong Huang

A typical conversation comprises multiple turns between participants, who go back and forth between different topics. At each user turn, dialogue state tracking (DST) aims to estimate the user's goal by processing the current utterance. However, in many turns, users implicitly refer to a previous goal, necessitating the use of relevant dialogue history. Nonetheless, distinguishing the relevant history is challenging, and the popular approach of relying on dialogue recency is inefficient. We therefore propose a novel framework for DST that identifies the relevant historical context by referring to the past utterances where a particular slot value changes, and uses that context together with weighted system utterances. Specifically, we use the current user utterance and the most recent system utterance to determine the relevance of a system utterance. Empirical analyses show that our method improves joint goal accuracy by 2.75% and 2.36% over the previous state-of-the-art GLAD model on the WoZ 2.0 and MultiWoZ 2.0 restaurant-domain datasets, respectively.

* NAACL 2019 
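A toy sketch of the relevance idea: each past system utterance is scored against a query built from the current user utterance and the most recent system utterance, and the context is a weighted combination of the history; the encoders, slot-value change tracking, and GLAD-based scorer are not reproduced, and the additive query is an assumption.

```python
# Toy sketch of the relevance idea: score each past system utterance against
# the current user utterance plus the most recent system utterance, then take
# a weighted combination as the context representation.
import torch
import torch.nn.functional as F

def relevant_context(past_sys: torch.Tensor,       # [T, d] past system-utterance encodings
                     last_sys: torch.Tensor,       # [d] most recent system utterance
                     user_utt: torch.Tensor        # [d] current user utterance
                     ) -> torch.Tensor:
    query = user_utt + last_sys                    # simple additive query (an assumption)
    scores = past_sys @ query                      # [T] relevance of each past utterance
    weights = F.softmax(scores, dim=0)
    return weights @ past_sys                      # [d] weighted context vector

history = torch.randn(5, 128)
context = relevant_context(history, torch.randn(128), torch.randn(128))
print(context.shape)   # torch.Size([128])
```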

Building Context-aware Clause Representations for Situation Entity Type Classification

Sep 20, 2018
Zeyu Dai, Ruihong Huang

The capability to categorize a clause based on the type of situation entity (e.g., events, states, and generic statements) the clause introduces to the discourse can benefit many NLP applications. Observing that the situation entity type of a clause depends on the discourse functions the clause plays in a paragraph, and that the interpretation of discourse functions depends heavily on paragraph-wide contexts, we propose to build context-aware clause representations for predicting situation entity types of clauses. Specifically, we propose a hierarchical recurrent neural network model that reads a whole paragraph at a time and jointly learns representations for all the clauses in the paragraph by extensively modeling context influences and inter-dependencies among clauses. Experimental results show that our model achieves state-of-the-art performance for clause-level situation entity classification on the genre-rich MASC+Wiki corpus, approaching human-level performance.

* Accepted by EMNLP 2018 
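A compact sketch of a hierarchical recurrent model in the spirit of the abstract: a word-level GRU builds each clause vector and a clause-level BiGRU over the whole paragraph yields context-aware clause representations for classification; the layer sizes and GRU choice are assumptions.

```python
# Compact sketch (layer sizes and GRU choice are assumptions) of a hierarchical
# recurrent model: a word-level RNN builds each clause vector, and a clause-level
# BiGRU over the whole paragraph produces context-aware clause representations
# that feed a situation-entity classifier.
import torch
import torch.nn as nn

class HierarchicalClauseClassifier(nn.Module):
    def __init__(self, emb_dim=300, hidden=200, num_types=7):
        super().__init__()
        self.word_rnn = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.clause_rnn = nn.GRU(2 * hidden, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_types)

    def forward(self, paragraph: torch.Tensor) -> torch.Tensor:
        """paragraph: [num_clauses, max_words, emb_dim] word embeddings."""
        _, h = self.word_rnn(paragraph)                        # h: [2, num_clauses, hidden]
        clause_vecs = torch.cat([h[0], h[1]], dim=-1)          # [num_clauses, 2*hidden]
        ctx, _ = self.clause_rnn(clause_vecs.unsqueeze(0))     # read the whole paragraph
        return self.out(ctx.squeeze(0))                        # [num_clauses, num_types]

para = torch.randn(8, 20, 300)   # 8 clauses, 20 words each
print(HierarchicalClauseClassifier()(para).shape)   # torch.Size([8, 7])
```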