
Kartik Talamadupula

Knowledge-augmented Deep Learning and Its Applications: A Survey

Nov 30, 2022
Zijun Cui, Tian Gao, Kartik Talamadupula, Qiang Ji

Deep learning models, despite their great success in many fields over the past years, are usually data hungry, fail to perform well on unseen samples, and lack interpretability. Various forms of prior knowledge often exist in the target domain, and their use can alleviate these deficiencies of deep learning. To better mimic the behavior of human brains, different advanced methods have been proposed to identify domain knowledge and integrate it into deep models for data-efficient, generalizable, and interpretable deep learning, which we refer to as knowledge-augmented deep learning (KADL). In this survey, we define the concept of KADL and introduce its three major tasks, i.e., knowledge identification, knowledge representation, and knowledge integration. Unlike existing surveys that focus on a specific type of knowledge, we provide a broad and complete taxonomy of domain knowledge and its representations. Based on this taxonomy, we provide a systematic review of existing techniques, in contrast to existing works that survey integration approaches agnostic to the taxonomy of knowledge. This survey subsumes existing works and offers a bird's-eye view of research in the general area of knowledge-augmented deep learning. The thorough and critical review of numerous papers helps not only to understand current progress but also to identify future directions for research on knowledge-augmented deep learning.
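
One of the integration routes a survey like this covers is treating domain knowledge as a regularizer: the training loss is augmented with a penalty whenever a prediction violates a known constraint. The sketch below illustrates that idea only; the function and parameter names are hypothetical and not taken from the survey.

```python
def knowledge_regularized_loss(pred, target, constraint_violation, lam=0.5):
    """Toy illustration of knowledge integration via regularization:
    total loss = task loss + lambda * penalty for violating domain
    knowledge. All names here are illustrative, not from the survey."""
    task_loss = (pred - target) ** 2            # ordinary supervised loss
    penalty = max(0.0, constraint_violation)    # hinge: only violations cost
    return task_loss + lam * penalty

# Example: suppose domain knowledge says the output must be non-negative,
# so a negative prediction incurs a violation equal to its magnitude.
pred = -0.3
loss = knowledge_regularized_loss(pred, target=0.2, constraint_violation=-pred)
```

The hinge means consistent predictions pay no knowledge penalty, so the knowledge term only steers the model away from constraint-violating regions.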

* Submitted to IEEE Transactions on Neural Networks and Learning Systems 

Investigating Explainability of Generative AI for Code through Scenario-based Design

Feb 10, 2022
Jiao Sun, Q. Vera Liao, Michael Muller, Mayank Agarwal, Stephanie Houde, Kartik Talamadupula, Justin D. Weisz

What does it mean for a generative AI model to be explainable? The emergent discipline of explainable AI (XAI) has made great strides in helping people understand discriminative models. Less attention has been paid to generative models that produce artifacts, rather than decisions, as output. Meanwhile, generative AI (GenAI) technologies are maturing and being applied to application domains such as software engineering. Using scenario-based design and question-driven XAI design approaches, we explore users' explainability needs for GenAI in three software engineering use cases: natural language to code, code translation, and code auto-completion. We conducted 9 workshops with 43 software engineers in which real examples from state-of-the-art generative AI models were used to elicit users' explainability needs. Drawing from prior work, we also propose 4 types of XAI features for GenAI for code and gather additional design ideas from participants. Our work explores explainability needs for GenAI for code and demonstrates how human-centered approaches can drive the technical development of XAI in novel domains.

When Is It Acceptable to Break the Rules? Knowledge Representation of Moral Judgement Based on Empirical Data

Jan 19, 2022
Edmond Awad, Sydney Levine, Andrea Loreggia, Nicholas Mattei, Iyad Rahwan, Francesca Rossi, Kartik Talamadupula, Joshua Tenenbaum, Max Kleiman-Weiner

One of the most remarkable things about the human moral mind is its flexibility. We can make moral judgments about cases we have never seen before. We can decide that pre-established rules should be broken. We can invent novel rules on the fly. Capturing this flexibility is one of the central challenges in developing AI systems that can interpret and produce human-like moral judgment. This paper details the results of a study of real-world decision makers who judge whether it is acceptable to break a well-established norm: "no cutting in line." We gather data on how human participants judge the acceptability of line-cutting in a range of scenarios. Then, in order to effectively embed these reasoning capabilities into a machine, we propose a method for modeling them using a preference-based structure, which captures a novel modification to standard "dual process" theories of moral judgment.

Using Document Similarity Methods to create Parallel Datasets for Code Translation

Oct 11, 2021
Mayank Agarwal, Kartik Talamadupula, Fernando Martinez, Stephanie Houde, Michael Muller, John Richards, Steven I Ross, Justin D. Weisz

Translating source code from one programming language to another is a critical, time-consuming task in modernizing legacy applications and codebases. Recent work in this space has drawn inspiration from the software naturalness hypothesis by applying natural language processing techniques towards automating the code translation task. However, due to the paucity of parallel data in this domain, supervised techniques have only been applied to a limited set of popular programming languages. To bypass this limitation, unsupervised neural machine translation techniques have been proposed to learn code translation using only monolingual corpora. In this work, we propose to use document similarity methods to create noisy parallel datasets of code, thus enabling supervised techniques to be applied for automated code translation without having to rely on the availability or expensive curation of parallel code datasets. We explore the noise tolerance of models trained on such automatically-created datasets and show that these models perform comparably to models trained on ground truth for reasonable levels of noise. Finally, we exhibit the practical utility of the proposed method by creating parallel datasets for languages beyond the ones explored in prior work, thus expanding the set of programming languages for automated code translation.
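
The core pairing step can be sketched with bag-of-words cosine similarity between code files in two languages. This is a deliberately minimal stand-in: the paper's actual document-similarity methods and tokenization are more sophisticated, and the snippets below are made up for illustration.

```python
import math
from collections import Counter

def tokenize(doc):
    # crude token split; real code tokenizers are far more careful
    return doc.replace("(", " ").replace(")", " ").split()

def cosine(a_tokens, b_tokens):
    ca, cb = Counter(a_tokens), Counter(b_tokens)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def pair_documents(src_docs, tgt_docs):
    """Greedily align each source file with its most similar target file,
    producing a noisy parallel dataset (a sketch of the idea, not the
    paper's actual pipeline)."""
    pairs = []
    for s in src_docs:
        best = max(tgt_docs, key=lambda t: cosine(tokenize(s), tokenize(t)))
        pairs.append((s, best))
    return pairs

java = ["int add(int a, int b) { return a + b; }",
        "void greet() { System.out.println(\"hi\"); }"]
python_docs = ["def add(a, b): return a + b",
               "def greet(): print(\"hi\")"]
pairs = pair_documents(java, python_docs)
```

Shared identifiers and literals carry the signal here, which is also why the resulting dataset is noisy: files that coincidentally share names get paired too.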

Eye of the Beholder: Improved Relation Generalization for Text-based Reinforcement Learning Agents

Jun 15, 2021
Keerthiram Murugesan, Subhajit Chaudhury, Kartik Talamadupula

Text-based games (TBGs) have become a popular proving ground for the demonstration of learning-based agents that make decisions in quasi real-world settings. The crux of the problem for a reinforcement learning agent in such TBGs is identifying the objects in the world, and those objects' relations with that world. While the recent use of text-based resources for increasing an agent's knowledge and improving its generalization has shown promise, we posit in this paper that there is much yet to be learned from visual representations of these same worlds. Specifically, we propose to retrieve images that represent specific instances of text observations from the world and train our agents on such images. This improves the agent's overall understanding of the game 'scene' and objects' relationships to the world around them, and the variety of visual representations on offer allows the agent to generalize relationships better. We show that incorporating such images improves the performance of agents in various TBG settings.

NeurIPS 2020 NLC2CMD Competition: Translating Natural Language to Bash Commands

Mar 03, 2021
Mayank Agarwal, Tathagata Chakraborti, Quchen Fu, David Gros, Xi Victoria Lin, Jaron Maene, Kartik Talamadupula, Zhongwei Teng, Jules White

The NLC2CMD Competition hosted at NeurIPS 2020 aimed to bring the power of natural language processing to the command line. Participants were tasked with building models that can transform descriptions of command line tasks in English to their Bash syntax. This is a report on the competition with details of the task, metrics, data, attempted solutions, and lessons learned.
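
The task can be illustrated with a trivial nearest-neighbour baseline over a tiny (English, Bash) memory. This is far weaker than any actual competition entry, and the example pairs below are made up for illustration; the real competition used the much larger NL2Bash-style corpus.

```python
import difflib

# Tiny illustrative (description, command) memory.
EXAMPLES = [
    ("list all files in the current directory", "ls -la"),
    ("find all python files under the current tree", "find . -name '*.py'"),
    ("count the lines in a file", "wc -l file"),
]

def nl_to_bash(query):
    """Nearest-neighbour baseline: return the Bash command whose English
    description is most similar to the query. A sketch of the task, not
    any participant's actual model."""
    descriptions = [d for d, _ in EXAMPLES]
    best = difflib.get_close_matches(query, descriptions, n=1, cutoff=0.0)[0]
    return dict(EXAMPLES)[best]

cmd = nl_to_bash("find python files in this tree")
```

Even this toy makes the evaluation question concrete: a predicted command can be close in surface form yet wrong in flags or structure, which is why the competition needed task-specific accuracy metrics rather than plain string match.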

* Competition URL: http://ibm.biz/nlc2cmd 

VisualHints: A Visual-Lingual Environment for Multimodal Reinforcement Learning

Oct 26, 2020
Thomas Carta, Subhajit Chaudhury, Kartik Talamadupula, Michiaki Tatsubori

We present VisualHints, a novel environment for multimodal reinforcement learning (RL) involving text-based interactions along with visual hints (obtained from the environment). Real-life problems often demand that agents interact with the environment using both natural language information and visual perception towards solving a goal. However, most traditional RL environments either solve pure vision-based tasks, like Atari games or video-based robotic manipulation, or entirely use natural language as a mode of interaction, like text-based games and dialog systems. In this work, we aim to bridge this gap and unify these two approaches in a single environment for multimodal RL. We introduce an extension of the TextWorld cooking environment with the addition of visual clues interspersed throughout the environment. The goal is to force an RL agent to use both text and visual features to predict natural language action commands for solving the final task of cooking a meal. We enable variations and difficulties in our environment to emulate various interactive real-world scenarios. We present a baseline multimodal agent for solving such problems using CNN-based feature extraction from visual hints and LSTMs for textual feature extraction. We believe that our proposed visual-lingual environment will facilitate novel problem settings for the RL community.
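
The basic fusion step such a multimodal baseline builds on can be sketched as follows. The encoders below are toy stand-ins (bag-of-words for the LSTM, per-channel mean for the CNN), not the paper's models; all names are illustrative.

```python
from collections import Counter

def text_features(observation, vocab):
    # bag-of-words over a fixed vocabulary (stand-in for an LSTM encoder)
    counts = Counter(observation.lower().replace(".", " ").split())
    return [float(counts[w]) for w in vocab]

def image_features(pixels):
    # mean intensity per RGB channel (stand-in for a CNN encoder)
    n = len(pixels)
    return [sum(p[c] for p in pixels) / n for c in range(3)]

def fused_state(observation, pixels, vocab):
    """Concatenate text and visual features into one state vector that a
    policy network would consume. A sketch of multimodal fusion, not the
    paper's architecture."""
    return text_features(observation, vocab) + image_features(pixels)

vocab = ["kitchen", "knife", "carrot"]
pixels = [(0.2, 0.4, 0.6), (0.4, 0.6, 0.8)]   # two made-up RGB pixels
state = fused_state("You see a knife and a carrot.", pixels, vocab)
```

The point of the concatenation is that a single downstream policy can then weigh textual and visual evidence jointly when scoring action commands.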

* Code is available at http://ibm.biz/VisualHints 

Text-based RL Agents with Commonsense Knowledge: New Challenges, Environments and Baselines

Oct 08, 2020
Keerthiram Murugesan, Mattia Atzeni, Pavan Kapanipathi, Pushkar Shukla, Sadhana Kumaravel, Gerald Tesauro, Kartik Talamadupula, Mrinmaya Sachan, Murray Campbell

Text-based games have emerged as an important test-bed for Reinforcement Learning (RL) research, requiring RL agents to combine grounded language understanding with sequential decision making. In this paper, we examine the problem of infusing RL agents with commonsense knowledge. Such knowledge would allow agents to efficiently act in the world by pruning out implausible actions, and to perform look-ahead planning to determine how current actions might affect future world states. We design a new text-based gaming environment called TextWorld Commonsense (TWC) for training and evaluating RL agents with a specific kind of commonsense knowledge about objects, their attributes, and affordances. We also introduce several baseline RL agents which track the sequential context and dynamically retrieve the relevant commonsense knowledge from ConceptNet. We show that agents which incorporate commonsense knowledge in TWC perform better, while acting more efficiently. We conduct user-studies to estimate human performance on TWC and show that there is ample room for future improvement.
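
The action-pruning idea can be sketched with a handful of ConceptNet-style triples. Real TWC agents retrieve dynamically from ConceptNet itself; the triples and helper below are illustrative only.

```python
# A tiny ConceptNet-style store of (subject, relation, object) triples.
TRIPLES = [
    ("apple", "AtLocation", "fridge"),
    ("dirty dish", "AtLocation", "sink"),
    ("coat", "AtLocation", "wardrobe"),
]

def plausible_actions(obj, candidate_actions):
    """Prune candidate actions with commonsense: keep 'put X in Y' only
    when some triple says X is typically located at Y. A sketch of the
    pruning idea, not the paper's retrieval model."""
    locations = {o for s, r, o in TRIPLES if s == obj and r == "AtLocation"}
    kept = []
    for action in candidate_actions:
        if action.startswith("put "):
            target = action.split(" in ")[-1]
            if target not in locations:
                continue  # implausible placement, prune it
        kept.append(action)
    return kept

actions = plausible_actions("apple", ["put apple in fridge",
                                      "put apple in wardrobe",
                                      "take apple"])
```

Pruning implausible placements shrinks the action space the agent must explore, which is exactly why commonsense-equipped agents act more efficiently.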

Reading Comprehension as Natural Language Inference: A Semantic Analysis

Oct 04, 2020
Anshuman Mishra, Dhruvesh Patel, Aparna Vijayakumar, Xiang Li, Pavan Kapanipathi, Kartik Talamadupula

In the recent past, Natural Language Inference (NLI) has gained significant attention, particularly given its promise for downstream NLP tasks. However, its true impact is limited and has not been well studied. Therefore, in this paper, we explore the utility of NLI for one of the most prominent downstream tasks, viz. Question Answering (QA). We transform one of the largest available MRC datasets (RACE) into an NLI form, and compare the performance of a state-of-the-art model (RoBERTa) on both forms. We propose new characterizations of questions, and evaluate the performance of QA and NLI models on these categories. We highlight clear categories for which the model performs better when the data is presented in a coherent entailment form, and in a structured question-answer concatenation form, respectively.
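
The MRC-to-NLI transformation can be sketched as follows: the passage becomes the premise and a question-answer combination becomes the hypothesis. This is a simplified illustration; the paper's actual conversion rules may differ, and the example text is invented.

```python
def to_nli(passage, question, answer):
    """Turn a multiple-choice reading-comprehension item into an NLI pair.
    Cloze questions fill the blank with the answer; plain questions are
    concatenated with it. A sketch, not the paper's exact procedure."""
    if "_" in question:                      # cloze-style question
        hypothesis = question.replace("_", answer)
    else:                                    # plain question + answer
        hypothesis = f"{question} {answer}"
    return {"premise": passage, "hypothesis": hypothesis}

pair = to_nli("The cat sat on the mat.",
              "The cat sat on the _ .",
              "mat")
```

An NLI model then simply scores whether the premise entails the hypothesis, which is how the same RoBERTa model can be evaluated on both forms of the data.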
