Ryan Rossi

Summaries as Captions: Generating Figure Captions for Scientific Documents with Automated Text Summarization

Feb 23, 2023
Chieh-Yang Huang, Ting-Yao Hsu, Ryan Rossi, Ani Nenkova, Sungchul Kim, Gromit Yeuk-Yin Chan, Eunyee Koh, Clyde Lee Giles, Ting-Hao 'Kenneth' Huang

Effective figure captions are crucial for clear comprehension of scientific figures, yet poor caption writing remains a common issue in scientific articles. Our study of arXiv cs.CL papers found that 53.88% of captions were rated as unhelpful or worse by domain experts, showing the need for better caption generation. Previous efforts in figure caption generation treated it as a vision task, aiming to build models that understand visual content and complex contextual information. Our findings, however, show that over 75% of figure-caption tokens align with corresponding figure-mentioning paragraphs, indicating great potential for language technology to solve this task. In this paper, we present a novel approach for generating figure captions in scientific documents using text summarization techniques. Our approach extracts the sentences referencing the target figure, then summarizes them into a concise caption. In experiments on real-world arXiv papers (81.2% of which were published at academic conferences), our method, using only text data, outperformed previous approaches in both automatic and human evaluations. We further conducted data-driven investigations into two core challenges: (i) low-quality author-written captions and (ii) the absence of a standard for good captions. We found that our models could generate improved captions for figures whose original captions were rated as unhelpful, and that the model trained on captions with more than 30 tokens produced higher-quality captions. We also found that good captions often include the high-level takeaway of the figure. Our work demonstrates the effectiveness of text summarization for generating figure captions in scholarly articles, outperforming prior vision-based approaches. Our findings have practical implications for future figure-captioning systems, improving the clarity of scientific communication.

* Preprint, 2023 
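
As a rough illustration of the pipeline the abstract describes, the sketch below extracts figure-mentioning sentences with a regex and summarizes them with an off-the-shelf abstractive summarizer. The model choice (facebook/bart-large-cnn), the regex, and the length limits are illustrative assumptions, not the authors' released system.

```python
# Sketch: caption a figure by summarizing the sentences that mention it.
# Model choice and regex are illustrative assumptions, not the paper's system.
import re
from transformers import pipeline  # pip install transformers

def figure_mentions(paper_text: str, fig_num: int) -> list[str]:
    """Collect sentences that explicitly reference the target figure."""
    sentences = re.split(r"(?<=[.!?])\s+", paper_text)
    pattern = re.compile(rf"\b(Fig\.?|Figure)\s*{fig_num}\b", re.IGNORECASE)
    return [s for s in sentences if pattern.search(s)]

def caption_for(paper_text: str, fig_num: int) -> str:
    mentions = " ".join(figure_mentions(paper_text, fig_num))
    if not mentions:
        return ""
    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
    result = summarizer(mentions, max_length=40, min_length=8, do_sample=False)
    return result[0]["summary_text"]

text = ("Figure 2 shows accuracy versus training-set size. "
        "As Figure 2 indicates, gains saturate after 10k examples. "
        "An unrelated sentence about datasets.")
print(caption_for(text, 2))
```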

Graph Learning with Localized Neighborhood Fairness

Dec 22, 2022
April Chen, Ryan Rossi, Nedim Lipka, Jane Hoffswell, Gromit Chan, Shunan Guo, Eunyee Koh, Sungchul Kim, Nesreen K. Ahmed

Learning fair graph representations for downstream applications is becoming increasingly important, but existing work has mostly focused on improving fairness at the global level, by modifying either the graph structure or the objective function, without taking into account the local neighborhood of a node. In this work, we formally introduce the notion of neighborhood fairness and develop a computational framework for learning such locally fair embeddings. We argue that the notion of neighborhood fairness is more appropriate since GNN-based models operate at the local neighborhood level of a node. Our neighborhood fairness framework has two main components that are flexible for learning fair graph representations from arbitrary data: the first constructs fair neighborhoods for an arbitrary node in a graph, and the second enables adaptation of these fair neighborhoods to better capture application- or data-dependent constraints, such as allowing neighborhoods to be more biased towards certain attributes or neighbors in the graph. Furthermore, while link prediction has been extensively studied, we are the first to investigate the graph representation learning task of fair link classification. We demonstrate the effectiveness of the proposed neighborhood fairness framework for a variety of graph machine learning tasks, including fair link prediction, link classification, and learning fair graph embeddings. Notably, our approach not only achieves better fairness but also increases accuracy in the majority of cases across a wide variety of graphs, problem settings, and metrics.
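
To make the first component concrete, here is a toy sketch of constructing a "fair neighborhood": resample a node's neighbors so each sensitive-attribute group is roughly equally represented before GNN aggregation. The inputs (a neighbor list and an attribute map) are assumed for illustration; the paper's framework is more general and supports the tunable, application-dependent bias mentioned above.

```python
# Toy "fair neighborhood" construction: balance sensitive-attribute groups
# in a node's sampled neighborhood before GNN aggregation (illustrative only).
import random
from collections import defaultdict

def fair_neighborhood(neighbors, attr, k, rng=random.Random(0)):
    """Sample up to k neighbors with near-equal counts per attribute group."""
    groups = defaultdict(list)
    for v in neighbors:
        groups[attr[v]].append(v)
    per_group = max(1, k // len(groups))
    sampled = []
    for members in groups.values():
        sampled += rng.sample(members, min(per_group, len(members)))
    return sampled[:k]

attr = {1: "a", 2: "a", 3: "a", 4: "b"}  # hypothetical sensitive attribute
print(fair_neighborhood([1, 2, 3, 4], attr, k=2))  # one node from each group
```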

AutoGML: Fast Automatic Model Selection for Graph Machine Learning

Jun 18, 2022
Namyong Park, Ryan Rossi, Nesreen Ahmed, Christos Faloutsos

Given a graph learning task, such as link prediction, on a new graph dataset, how can we automatically select the best method as well as its hyperparameters (collectively called a model)? Model selection for graph learning has been largely ad hoc. A typical approach has been to apply popular methods to new datasets, but this is often suboptimal. On the other hand, systematically comparing models on the new graph quickly becomes too costly, or even impractical. In this work, we develop the first meta-learning approach for automatic graph machine learning, called AutoGML, which capitalizes on the prior performances of a large body of existing methods on benchmark graph datasets, and carries over this prior experience to automatically select an effective model to use for the new graph, without any model training or evaluations. To capture the similarity across graphs from different domains, we introduce specialized meta-graph features that quantify the structural characteristics of a graph. Then we design a meta-graph that represents the relations among models and graphs, and develop a graph meta-learner operating on the meta-graph, which estimates the relevance of each model to different graphs. Through extensive experiments, we show that using AutoGML to select a method for the new graph significantly outperforms consistently applying popular methods as well as several existing meta-learners, while being extremely fast at test time.
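
A loose sketch of the meta-learning recipe: describe each graph with cheap structural meta-features, then recommend the model that performed best on the most similar benchmark graph. The feature set, the nearest-neighbor selection, and the tiny performance matrix are all stand-ins for the paper's richer meta-graph features and learned graph meta-learner.

```python
# Sketch of meta-learned model selection: find the nearest benchmark graph
# by cheap structural meta-features, then pick the model that did best there.
import numpy as np
import networkx as nx  # pip install networkx

def meta_features(G: nx.Graph) -> np.ndarray:
    degs = np.array([d for _, d in G.degree()], dtype=float)
    return np.array([G.number_of_nodes(), G.number_of_edges(), nx.density(G),
                     degs.mean(), degs.std(), nx.average_clustering(G)])

# Hypothetical prior experience: 3 models scored on 2 benchmark graphs.
bench = [nx.karate_club_graph(), nx.les_miserables_graph()]
perf = np.array([[0.81, 0.77, 0.90],   # model scores on bench[0]
                 [0.70, 0.88, 0.65]])  # model scores on bench[1]

def select_model(G_new: nx.Graph) -> int:
    X = np.stack([meta_features(g) for g in bench])
    x = meta_features(G_new)
    dists = np.linalg.norm((X - x) / (X.std(axis=0) + 1e-9), axis=1)
    return int(perf[dists.argmin()].argmax())  # best model on nearest graph

print(select_model(nx.florentine_families_graph()))
```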

CGC: Contrastive Graph Clustering for Community Detection and Tracking

Apr 05, 2022
Namyong Park, Ryan Rossi, Eunyee Koh, Iftikhar Ahamath Burhanuddin, Sungchul Kim, Fan Du, Nesreen Ahmed, Christos Faloutsos

Given entities and their interactions in web data, which may have occurred at different times, how can we find communities of entities and track their evolution? In this paper, we approach this important task from a graph clustering perspective. Recently, state-of-the-art clustering performance in various domains has been achieved by deep clustering methods. In particular, deep graph clustering (DGC) methods have successfully extended deep clustering to graph-structured data by learning node representations and cluster assignments in a joint optimization framework. Despite some differences in modeling choices (e.g., encoder architectures), existing DGC methods are mainly based on autoencoders and use the same clustering objective with relatively minor adaptations. Also, while many real-world graphs are dynamic, previous DGC methods considered only static graphs. In this work, we develop CGC, a novel end-to-end framework for graph clustering that fundamentally differs from existing methods. CGC learns node embeddings and cluster assignments in a contrastive graph learning framework, where positive and negative samples are carefully selected in a multi-level scheme such that they reflect hierarchical community structures and network homophily. We also extend CGC to time-evolving data, where temporal graph clustering is performed in an incremental learning fashion, with the ability to detect change points. Extensive evaluation on real-world graphs demonstrates that the proposed CGC consistently outperforms existing methods.

* TheWebConf 2022 Research Track 
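
As a minimal illustration of the contrastive objective at the heart of this framework, the sketch below computes an InfoNCE-style loss that pulls a node's embedding toward a positive sample and pushes it away from negatives. CGC's actual multi-level positive/negative selection and its temporal extension are not reproduced here.

```python
# Minimal InfoNCE-style loss: attract a node's embedding to a positive
# sample, repel it from negatives (CGC's multi-level selection not shown).
import torch
import torch.nn.functional as F

def infonce(anchor, positive, negatives, tau=0.5):
    """anchor, positive: (d,); negatives: (num_neg, d)."""
    anchor = F.normalize(anchor, dim=0)
    positive = F.normalize(positive, dim=0)
    negatives = F.normalize(negatives, dim=1)
    pos = torch.exp(anchor @ positive / tau)
    neg = torch.exp(negatives @ anchor / tau).sum()
    return -torch.log(pos / (pos + neg))

z = torch.randn(4, 16, requires_grad=True)  # toy node embeddings
loss = infonce(z[0], z[1], z[2:])           # z[1] is the positive for z[0]
loss.backward()
print(float(loss))
```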

Neural Point Process for Learning Spatiotemporal Event Dynamics

Dec 12, 2021
Zihao Zhou, Xingyi Yang, Ryan Rossi, Handong Zhao, Rose Yu

Learning the dynamics of spatiotemporal events is a fundamental problem. Neural point processes enhance the expressivity of point process models with deep neural networks. However, most existing methods only consider temporal dynamics without spatial modeling. We propose Deep Spatiotemporal Point Process (DeepSTPP), a deep dynamics model that integrates spatiotemporal point processes. Our method is flexible, efficient, and can accurately forecast irregularly sampled events over space and time. The key construction of our approach is the nonparametric space-time intensity function, governed by a latent process. The intensity function enjoys closed-form integration for the density. The latent process captures the uncertainty of the event sequence. We use amortized variational inference to infer the latent process with deep networks. Using synthetic datasets, we validate that our model can accurately learn the true intensity function. On real-world benchmark datasets, our model demonstrates superior performance over state-of-the-art baselines.
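
A hand-rolled sketch of a nonparametric space-time intensity of the kind the abstract describes: a weighted sum of Gaussian spatial kernels with exponential temporal decay around past events. The kernel form and fixed parameters are assumptions for illustration; DeepSTPP infers the weights through a latent process with amortized variational inference.

```python
# Kernel-style space-time intensity: Gaussian spatial bumps with exponential
# temporal decay around past events. Fixed parameters; DeepSTPP learns them.
import numpy as np

def intensity(s, t, events, weights, sigma=0.5, beta=1.0):
    """lambda(s, t) = sum_i w_i * N(s; s_i, sigma^2 I) * exp(-beta * (t - t_i))."""
    lam = 0.0
    for (s_i, t_i), w_i in zip(events, weights):
        if t_i >= t:  # only past events contribute
            continue
        sq_dist = np.sum((np.asarray(s) - np.asarray(s_i)) ** 2)
        spatial = np.exp(-sq_dist / (2 * sigma**2)) / (2 * np.pi * sigma**2)
        lam += w_i * spatial * np.exp(-beta * (t - t_i))
    return lam

events = [((0.0, 0.0), 1.0), ((1.0, 1.0), 2.0)]  # (location, time) pairs
print(intensity((0.5, 0.5), t=3.0, events=events, weights=[1.0, 0.8]))
```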

Asymptotics of Ridge Regression in Convolutional Models

Mar 08, 2021
Mojtaba Sahraee-Ardakan, Tung Mai, Anup Rao, Ryan Rossi, Sundeep Rangan, Alyson K. Fletcher

Understanding the generalization and estimation error of estimators for simple models, such as linear and generalized linear models, has attracted a lot of attention recently. This is in part due to an interesting observation made in the machine learning community that highly over-parameterized neural networks achieve zero training error, yet generalize well over test samples. This phenomenon is captured by the so-called double descent curve, where the generalization error starts decreasing again after the interpolation threshold. A series of recent works has tried to explain this phenomenon for simple models. In this work, we analyze the asymptotics of the estimation error of ridge estimators for convolutional linear models. These convolutional inverse problems, also known as deconvolution, naturally arise in fields such as seismology, imaging, and acoustics, among others. Our results hold for a large class of input distributions that include i.i.d. features as a special case. We derive exact formulae for the estimation error of ridge estimators that hold in a certain high-dimensional regime. We show the double descent phenomenon in our experiments for convolutional models and show that our theoretical results match the experiments.
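
A small numpy experiment in the spirit of the abstract: fit lightly regularized ridge regression at varying overparameterization ratios and observe the test error spike near the interpolation threshold p ≈ n (double descent). This uses plain i.i.d. Gaussian features, not the convolutional model analyzed in the paper.

```python
# Mini double-descent experiment with i.i.d. Gaussian features: test error
# of (nearly ridgeless) ridge regression spikes near p = n, then falls again.
import numpy as np

rng = np.random.default_rng(0)
n, lam = 100, 1e-6  # samples, tiny ridge penalty

for p in [25, 50, 90, 100, 110, 200, 400]:
    beta = rng.normal(size=p) / np.sqrt(p)  # true coefficients
    X = rng.normal(size=(n, p))
    y = X @ beta + 0.1 * rng.normal(size=n)
    # ridge estimator: (X^T X + lam * I)^{-1} X^T y
    b_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
    X_test = rng.normal(size=(2000, p))
    mse = np.mean((X_test @ (b_hat - beta)) ** 2)
    print(f"p={p:4d}  test MSE={mse:.3f}")
```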

Machine Unlearning via Algorithmic Stability

Feb 25, 2021
Enayat Ullah, Tung Mai, Anup Rao, Ryan Rossi, Raman Arora

We study the problem of machine unlearning and identify a notion of algorithmic stability, Total Variation (TV) stability, which we argue is suitable for the goal of exact unlearning. For convex risk minimization problems, we design TV-stable algorithms based on noisy Stochastic Gradient Descent (SGD). Our key contribution is the design of corresponding efficient unlearning algorithms, which are based on constructing a (maximal) coupling of Markov chains for the noisy SGD procedure. To understand the trade-offs between accuracy and unlearning efficiency, we give upper and lower bounds on the excess empirical and population risk of TV-stable algorithms for convex risk minimization. Our techniques generalize to arbitrary non-convex functions, and our algorithms are differentially private as well.
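
A bare-bones sketch of the noisy SGD primitive the paper builds on: each step perturbs the gradient with Gaussian noise, which is what makes the iterate distribution TV-stable. The unlearning machinery itself (maximal couplings of the resulting Markov chains) is not shown.

```python
# Noisy SGD: add Gaussian noise to each gradient step, the ingredient that
# makes the iterate distribution TV-stable. Coupling-based unlearning omitted.
import numpy as np

def noisy_sgd(grad_fn, w0, data, lr=0.1, sigma=0.05, seed=0):
    rng = np.random.default_rng(seed)
    w = np.array(w0, dtype=float)
    for x, y in data:
        g = grad_fn(w, x, y)
        w -= lr * (g + sigma * rng.normal(size=w.shape))
    return w

# least-squares gradient for one example: d/dw 0.5 * (w.x - y)^2
grad = lambda w, x, y: (w @ x - y) * x
data = [(np.array([1.0, 0.0]), 1.0), (np.array([0.0, 1.0]), -1.0)]
print(noisy_sgd(grad, [0.0, 0.0], data * 50))  # ~[1, -1] plus noise
```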

Learning Contextualized Knowledge Structures for Commonsense Reasoning

Oct 24, 2020
Jun Yan, Mrigank Raman, Tianyu Zhang, Ryan Rossi, Handong Zhao, Sungchul Kim, Nedim Lipka, Xiang Ren

Recently, neural-symbolic architectures have achieved success on commonsense reasoning by effectively encoding relational structures retrieved from external knowledge graphs (KGs), obtaining state-of-the-art results on tasks such as (commonsense) question answering and natural language inference. However, these methods rely on high-quality, contextualized knowledge structures (i.e., fact triples) retrieved at the pre-processing stage, and overlook challenges caused by the incompleteness of a KG, the limited expressiveness of its relations, and retrieved facts that are irrelevant to the reasoning context. In this paper, we present a novel neural-symbolic model, named Hybrid Graph Network (HGN), which jointly generates feature representations for new triples (as a complement to existing edges in the KG), determines the relevance of the triples to the reasoning context, and learns graph module parameters for encoding the relational information. Our model learns a compact graph structure (comprising both extracted and generated edges) by filtering edges that are unhelpful to the reasoning process. We show marked improvements on three commonsense reasoning benchmarks and demonstrate the superiority of the learned graph structures with user studies.
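
A toy sketch of the edge-filtering step: score each candidate edge from its endpoint features with a small MLP and keep only edges above a relevance threshold. The architecture and threshold are illustrative; HGN additionally generates new triples and trains everything jointly with the downstream reasoner.

```python
# Edge filtering: score each edge from its endpoint features with a small
# MLP and keep the relevant ones. HGN also generates edges; not shown here.
import torch
import torch.nn as nn

class EdgeFilter(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                   nn.Linear(dim, 1))

    def forward(self, node_feats, edges, keep_thresh=0.5):
        src, dst = edges[:, 0], edges[:, 1]
        pair = torch.cat([node_feats[src], node_feats[dst]], dim=-1)
        relevance = torch.sigmoid(self.score(pair)).squeeze(-1)
        return edges[relevance > keep_thresh], relevance

feats = torch.randn(5, 8)                       # toy node features
edges = torch.tensor([[0, 1], [1, 2], [3, 4]])  # candidate edges
kept, rel = EdgeFilter(8)(feats, edges)
print(kept.shape, rel)
```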

Learning to Deceive Knowledge Graph Augmented Models via Targeted Perturbation

Oct 24, 2020
Mrigank Raman, Siddhant Agarwal, Peifeng Wang, Aaron Chan, Hansen Wang, Sungchul Kim, Ryan Rossi, Handong Zhao, Nedim Lipka, Xiang Ren

Symbolic knowledge (e.g., entities, relations, and facts in a knowledge graph) has become an increasingly popular component of neural-symbolic models applied to machine learning tasks, such as question answering and recommender systems. Besides improving downstream performance, these symbolic structures (and their associated attention weights) are often used to help explain the model's predictions and provide "insights" to practitioners. In this paper, we question the faithfulness of such symbolic explanations. We demonstrate that, through a learned strategy (or even simple heuristics), one can produce deceptively perturbed symbolic structures which maintain the downstream performance of the original structure while significantly deviating from the original semantics. In particular, we train a reinforcement learning policy to manipulate relation types or edge connections in a knowledge graph, such that the resulting downstream performance is maximally preserved. Across multiple models and tasks, our approach drastically alters knowledge graphs with little to no drop in performance. These results raise doubts about the faithfulness of explanations provided by learned symbolic structures and the reliability of current neural-symbolic models in leveraging symbolic knowledge.

* 13 pages, 9 figures 
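
A sketch of the "simple heuristics" flavor of perturbation the abstract mentions: randomly reassign relation types on KG triples while leaving connectivity untouched, so the structure survives while the semantics drift. The paper's stronger attack learns such perturbations with a reinforcement learning policy so that downstream performance is maximally preserved.

```python
# Heuristic KG perturbation: swap relation types at random while keeping
# connectivity intact, so structure is preserved but semantics drift.
import random

def perturb_relations(triples, relation_vocab, frac=0.5, seed=0):
    """Replace the relation on a random fraction of (head, rel, tail) triples."""
    rng = random.Random(seed)
    out = []
    for h, r, t in triples:
        if rng.random() < frac:
            r = rng.choice([x for x in relation_vocab if x != r])
        out.append((h, r, t))
    return out

kg = [("aspirin", "treats", "headache"), ("paris", "capital_of", "france")]
print(perturb_relations(kg, ["treats", "capital_of", "part_of"]))
```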

Reinforcement Learning-based N-ary Cross-Sentence Relation Extraction

Sep 26, 2020
Chenhan Yuan, Ryan Rossi, Andrew Katz, Hoda Eldardiry

Models for n-ary cross-sentence relation extraction based on distant supervision assume that consecutive sentences mentioning n entities describe the relation of these n entities. However, on one hand, this assumption introduces noisily labeled data and harms model performance. On the other hand, some non-consecutive sentences also describe one relation, and these sentences cannot be labeled under this assumption. In this paper, we relax this strong assumption with a weaker distant-supervision assumption to address the second issue, and propose a novel sentence distribution estimator model to address the first. This estimator, a two-level agent reinforcement learning model, selects correctly labeled sentences to alleviate the effect of noisy data. In addition, we propose a novel universal relation extractor with a hybrid approach combining an attention mechanism and PCNN, so that it can be deployed for any task, covering both consecutive and non-consecutive sentences. Experiments demonstrate that the proposed model can reduce the impact of noisy data and achieve better performance on the general n-ary cross-sentence relation extraction task compared to baseline models.

* 10 pages, 3 figures, submitted to AAAI 
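
A sketch of what the weaker distant-supervision assumption might look like in code: instead of requiring consecutive sentences, collect any (possibly non-consecutive) bag of sentences within a window that jointly mentions all n entities. The windowing and string matching here are assumptions for illustration; the paper then filters such noisy bags with its two-level RL estimator.

```python
# Weaker distant supervision: accept any (possibly non-consecutive) bag of
# sentences within a window that jointly mentions all n entities.
def candidate_bags(sentences, entities, window=5):
    """Yield index bags whose sentences together cover every entity."""
    for start in range(len(sentences)):
        bag, covered = [], set()
        for i in range(start, min(start + window, len(sentences))):
            hits = {e for e in entities if e in sentences[i]}
            if hits:
                bag.append(i)
                covered |= hits
            if covered == set(entities):
                yield tuple(bag)
                break

sents = ["Gene X was mutated.", "An unrelated sentence.",
         "The mutation increases resistance to drug Y."]
print(list(candidate_bags(sents, ["Gene X", "drug Y"])))  # [(0, 2)]
```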