Mayukh Das

Tree DNN: A Deep Container Network

Dec 07, 2022
Brijraj Singh, Swati Gupta, Mayukh Das, Praveen Doreswamy Naidu, Sharan Kumar Allur

Multi-Task Learning (MTL) has proven its importance in user-facing products for fast training, data efficiency, reduced overfitting, etc. MTL achieves this by sharing network parameters and training one network for multiple tasks simultaneously. However, MTL offers no solution when each task needs to be trained on a different dataset. To address this problem, we propose an architecture named TreeDNN along with its training methodology. TreeDNN enables training a model on multiple datasets simultaneously, where each branch of the tree may require a different training dataset. Our results show that TreeDNN provides competitive performance with the advantage of a reduced ROM requirement for parameter storage and increased system responsiveness, since only the required branch is loaded at inference time.
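
The abstract does not give implementation details, so the sketch below is only a rough illustration of how a shared trunk with per-task branches might be organised so that a single branch can be loaded at inference time. All class and parameter names (TreeDNN, trunk, branches, the file names) are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of a trunk-plus-branches "tree" network (not the paper's code).
import torch
import torch.nn as nn

class TreeDNN(nn.Module):
    def __init__(self, branch_names, in_dim=64, hidden=128, out_dim=10):
        super().__init__()
        # Shared trunk: parameters common to every task/dataset.
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # One branch per task; each branch can be trained on its own dataset.
        self.branches = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                nn.Linear(hidden, out_dim))
            for name in branch_names
        })

    def forward(self, x, branch):
        return self.branches[branch](self.trunk(x))

model = TreeDNN(["task_a", "task_b"])
# Store the trunk and each branch separately so that, at inference time,
# only the trunk plus the branch actually needed has to be loaded.
torch.save(model.trunk.state_dict(), "trunk.pt")
for name, branch in model.branches.items():
    torch.save(branch.state_dict(), f"branch_{name}.pt")

# Inference with a single branch: build a model containing only that branch.
lean_model = TreeDNN(["task_a"])
lean_model.trunk.load_state_dict(torch.load("trunk.pt"))
lean_model.branches["task_a"].load_state_dict(torch.load("branch_task_a.pt"))
out = lean_model(torch.randn(4, 64), branch="task_a")
```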

Human-guided Collaborative Problem Solving: A Natural Language based Framework

Jul 19, 2022
Harsha Kokel, Mayukh Das, Rakibul Islam, Julia Bonn, Jon Cai, Soham Dan, Anjali Narayan-Chen, Prashant Jayannavar, Janardhan Rao Doppa, Julia Hockenmaier, Sriraam Natarajan, Martha Palmer, Dan Roth

We consider the problem of human-machine collaborative problem solving as a planning task coupled with natural language communication. Our framework consists of three components -- a natural language engine that parses language utterances into a formal representation and vice versa, a concept learner that induces generalized concepts for plans based on limited interactions with the user, and an HTN planner that solves the task based on human interaction. We illustrate the ability of this framework to address the key challenges of collaborative problem solving by demonstrating it on a collaborative building task in a Minecraft-based blocksworld domain. The accompanying demo video is available at https://youtu.be/q1pWe4aahF0.
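
As a rough, hedged sketch of how the three components named above could be wired into a loop, the following stubs out the language engine, concept learner, and HTN planner; every function and its logic is an illustrative placeholder, not the authors' implementation.

```python
# Hypothetical skeleton of the three-component loop: NL engine -> concept learner -> HTN planner.

def parse_utterance(utterance: str) -> dict:
    """Stub 'natural language engine': map an utterance to a formal goal."""
    return {"goal": utterance.lower().replace(" ", "_")}

def induce_concept(examples: list) -> str:
    """Stub 'concept learner': generalise a concept from a few interactions."""
    return f"concept_over_{len(examples)}_examples"

def htn_plan(goal: dict, concepts: list) -> list:
    """Stub 'HTN planner': decompose the goal into primitive actions."""
    return [f"do({goal['goal']}, using={c})" for c in concepts]

def collaborate(utterances):
    concepts, plans = [], []
    for utt in utterances:
        goal = parse_utterance(utt)               # NL -> formal representation
        concepts.append(induce_concept([goal]))   # generalise from the interaction
        plans.append(htn_plan(goal, concepts))    # plan with the learned concepts
    return plans

print(collaborate(["build a red tower", "add a blue roof"]))
```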

* ICAPS 2021 (demo track) 

AutoCoMet: Smart Neural Architecture Search via Co-Regulated Shaping Reinforcement

Mar 29, 2022
Mayukh Das, Brijraj Singh, Harsh Kanti Chheda, Pawan Sharma, Pradeep NS

Designing suitable deep model architectures for AI-driven on-device apps and features, at par with rapidly evolving mobile hardware and increasingly complex target scenarios, is a difficult task. Although Neural Architecture Search (NAS/AutoML) has made this easier by shifting the paradigm from extensive manual effort to automated architecture learning from data, it still has major limitations that lead to critical bottlenecks in the context of mobile devices, including model-hardware fidelity, prohibitive search times, and deviation from the primary target objective(s). We therefore propose AutoCoMet, which can learn the most suitable DNN architecture optimized for varied types of device hardware and task contexts, ~3x faster. Our novel co-regulated shaping reinforcement controller, together with a high-fidelity hardware meta-behavior predictor, produces a smart, fast NAS framework that adapts to context via a generalized formalism for any kind of multi-criteria optimization.
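
To make the multi-criteria idea concrete, here is a minimal sketch of a NAS reward that mixes a task metric with a hardware predictor's latency estimate. The shaping scheme, the budget, and all names are assumptions for illustration, not the paper's actual co-regulated controller.

```python
# Illustrative multi-criteria NAS reward: accuracy shaped by a predicted latency budget.
from dataclasses import dataclass

@dataclass
class Candidate:
    accuracy: float    # estimated task accuracy of a sampled architecture
    latency_ms: float  # latency predicted by a hardware meta-behavior model

def shaped_reward(c: Candidate, latency_budget_ms: float = 30.0,
                  alpha: float = 1.0, beta: float = 0.5) -> float:
    """Reward the controller for accuracy, penalised by how far the
    predicted latency exceeds the device budget."""
    overshoot = max(0.0, c.latency_ms - latency_budget_ms) / latency_budget_ms
    return alpha * c.accuracy - beta * overshoot

candidates = [Candidate(0.91, 45.0), Candidate(0.88, 22.0), Candidate(0.90, 29.0)]
best = max(candidates, key=shaped_reward)
print(best)  # the controller would be updated to favour architectures like this one
```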

* ICPR 2022 

NL-Augmenter: A Framework for Task-Sensitive Natural Language Augmentation

Dec 06, 2021
Kaustubh D. Dhole, Varun Gangal, Sebastian Gehrmann, Aadesh Gupta, Zhenhao Li, Saad Mahamood, Abinaya Mahendiran, Simon Mille, Ashish Srivastava, Samson Tan, Tongshuang Wu, Jascha Sohl-Dickstein, Jinho D. Choi, Eduard Hovy, Ondrej Dusek, Sebastian Ruder, Sajant Anand, Nagender Aneja, Rabin Banjade, Lisa Barthe, Hanna Behnke, Ian Berlot-Attwell, Connor Boyle, Caroline Brun, Marco Antonio Sobrevilla Cabezudo, Samuel Cahyawijaya, Emile Chapuis, Wanxiang Che, Mukund Choudhary, Christian Clauss, Pierre Colombo, Filip Cornell, Gautier Dagan, Mayukh Das, Tanay Dixit, Thomas Dopierre, Paul-Alexis Dray, Suchitra Dubey, Tatiana Ekeinhor, Marco Di Giovanni, Rishabh Gupta, Rishabh Gupta, Louanes Hamla, Sang Han, Fabrice Harel-Canada, Antoine Honore, Ishan Jindal, Przemyslaw K. Joniak, Denis Kleyko, Venelin Kovatchev, Kalpesh Krishna, Ashutosh Kumar, Stefan Langer, Seungjae Ryan Lee, Corey James Levinson, Hualou Liang, Kaizhao Liang, Zhexiong Liu, Andrey Lukyanenko, Vukosi Marivate, Gerard de Melo, Simon Meoni, Maxime Meyer, Afnan Mir, Nafise Sadat Moosavi, Niklas Muennighoff, Timothy Sum Hon Mun, Kenton Murray, Marcin Namysl, Maria Obedkova, Priti Oli, Nivranshu Pasricha, Jan Pfister, Richard Plant, Vinay Prabhu, Vasile Pais, Libo Qin, Shahab Raji, Pawan Kumar Rajpoot, Vikas Raunak, Roy Rinberg, Nicolas Roberts, Juan Diego Rodriguez, Claude Roux, Vasconcellos P. H. S., Ananya B. Sai, Robin M. Schmidt, Thomas Scialom, Tshephisho Sefara, Saqib N. Shamsi, Xudong Shen, Haoyue Shi, Yiwen Shi, Anna Shvets, Nick Siegel, Damien Sileo, Jamie Simon, Chandan Singh, Roman Sitelew, Priyank Soni, Taylor Sorensen, William Soto, Aman Srivastava, KV Aditya Srivatsa, Tony Sun, Mukund Varma T, A Tabassum, Fiona Anting Tan, Ryan Teehan, Mo Tiwari, Marie Tolkiehn, Athena Wang, Zijian Wang, Gloria Wang, Zijie J. Wang, Fuxuan Wei, Bryan Wilie, Genta Indra Winata, Xinyi Wu, Witold Wydmański, Tianbao Xie, Usama Yaseen, M. Yee, Jing Zhang, Yue Zhang

Data augmentation is an important component in the robustness evaluation of models in natural language processing (NLP) and in enhancing the diversity of the data they are trained on. In this paper, we present NL-Augmenter, a new participatory Python-based natural language augmentation framework which supports the creation of both transformations (modifications to the data) and filters (data splits according to specific features). We describe the framework and an initial set of 117 transformations and 23 filters for a variety of natural language tasks. We demonstrate the efficacy of NL-Augmenter by using several of its transformations to analyze the robustness of popular natural language models. The infrastructure, datacards and robustness analysis results are available publicly on the NL-Augmenter repository (https://github.com/GEM-benchmark/NL-Augmenter).
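
The transformation/filter split can be illustrated with a small generic example. Note this is not the NL-Augmenter repository's actual interface; the class names and methods below are hypothetical stand-ins for the two roles the abstract describes.

```python
# Generic illustration of a transformation (modifies data) vs. a filter (selects a split).
import random

class CharacterSwapTransformation:
    """Transformation: perturb the input by randomly swapping adjacent characters."""
    def __init__(self, prob=0.05, seed=0):
        self.prob, self.rng = prob, random.Random(seed)

    def generate(self, sentence: str) -> str:
        chars = list(sentence)
        for i in range(len(chars) - 1):
            if chars[i].isalpha() and self.rng.random() < self.prob:
                chars[i], chars[i + 1] = chars[i + 1], chars[i]
        return "".join(chars)

class LengthFilter:
    """Filter: keep only examples matching a feature (here, sentence length)."""
    def __init__(self, max_tokens=10):
        self.max_tokens = max_tokens

    def keep(self, sentence: str) -> bool:
        return len(sentence.split()) <= self.max_tokens

t, f = CharacterSwapTransformation(), LengthFilter(max_tokens=8)
data = ["The quick brown fox jumps over the lazy dog", "Short sentence here"]
perturbed = [t.generate(s) for s in data if f.keep(s)]
print(perturbed)
```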

* 39 pages, repository at https://github.com/GEM-benchmark/NL-Augmenter 

User Friendly Automatic Construction of Background Knowledge: Mode Construction from ER Diagrams

Dec 16, 2019
Alexander L. Hayes, Mayukh Das, Phillip Odom, Sriraam Natarajan

One of the key advantages of Inductive Logic Programming systems is the ability of the domain experts to provide background knowledge as modes that allow for efficient search through the space of hypotheses. However, there is an inherent assumption that this expert should also be an ILP expert to provide effective modes. We relax this assumption by designing a graphical user interface that allows the domain expert to interact with the system using Entity Relationship diagrams. These interactions are used to construct modes for the learning system. We evaluate our algorithm on a probabilistic logic learning system where we demonstrate that the user is able to construct effective background knowledge on par with the expert-encoded knowledge on five data sets.
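
As a hedged illustration of the ER-to-mode idea, the snippet below turns ER-style relation declarations into Aleph-style mode strings. The mapping rules, predicate names, and syntax choices are assumptions for illustration only, not the authors' construction algorithm.

```python
# Simplistic illustration: ER-style relations -> ILP mode declarations (Aleph-style syntax).

# Each relation: (predicate, [(argument_type, is_key_of_an_entity)])
er_relations = [
    ("advisedby",   [("person", True),  ("person", True)]),
    ("taughtby",    [("course", True),  ("person", True)]),
    ("hasposition", [("person", True),  ("position", False)]),
]

def modes_from_er(relations, recall="*"):
    """Entity keys become input/output variables (+/-); plain attributes
    become constants (#), a common convention in ILP mode languages."""
    modes = []
    for pred, args in relations:
        placeholders = ",".join(
            (f"+{t}" if i == 0 else f"-{t}") if is_key else f"#{t}"
            for i, (t, is_key) in enumerate(args)
        )
        modes.append(f"mode({recall}, {pred}({placeholders})).")
    return modes

for m in modes_from_er(er_relations):
    print(m)
# mode(*, advisedby(+person,-person)).
# mode(*, taughtby(+course,-person)).
# mode(*, hasposition(+person,#position)).
```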

* Proceedings of the Knowledge Capture Conference (2017) 30:1-30:8  
* 8 pages. Published in Proceedings of the Knowledge Capture Conference, 2017 

One-Shot Induction of Generalized Logical Concepts via Human Guidance

Dec 15, 2019
Mayukh Das, Nandini Ramanan, Janardhan Rao Doppa, Sriraam Natarajan

We consider the problem of learning generalized first-order representations of concepts from a single example. To address this challenging problem, we augment an inductive logic programming learner with two novel algorithmic contributions. First, we define a distance measure between candidate concept representations that improves the efficiency of the search for the target concept and its generalization. Second, we leverage richer human inputs in the form of advice to improve the sample-efficiency of learning. We prove that the proposed distance measure is semantically valid and use that to derive a PAC bound. Our experimental analysis on diverse concept learning tasks demonstrates both the effectiveness and efficiency of the proposed approach over a first-order concept learner using only examples.
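
The paper's own distance measure (with its semantic-validity proof) is not reproduced here; the toy sketch below only illustrates the general idea of comparing candidate clauses by their literals, using a plain Jaccard distance as a stand-in.

```python
# Toy illustration: rank candidate first-order clauses by a literal-set distance.
# This Jaccard distance is NOT the semantically valid measure defined in the paper.

def jaccard_distance(clause_a: set, clause_b: set) -> float:
    union = clause_a | clause_b
    if not union:
        return 0.0
    return 1.0 - len(clause_a & clause_b) / len(union)

target = {"on(A,B)", "clear(A)", "block(A)", "block(B)"}
candidates = [
    {"on(A,B)", "block(A)", "block(B)"},   # close generalisation
    {"clear(A)", "table(B)"},              # farther away
]
# A search procedure could expand the candidate closest to the target first.
best = min(candidates, key=lambda c: jaccard_distance(c, target))
print(best, jaccard_distance(best, target))
```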

* STARAI '20, Workshop version 

Knowledge-augmented Column Networks: Guiding Deep Learning with Advice

May 31, 2019
Mayukh Das, Devendra Singh Dhami, Yang Yu, Gautam Kunapuli, Sriraam Natarajan

Recently, deep models have had considerable success in several tasks, especially with low-level representations. However, effective learning from sparse, noisy samples is a major challenge in most deep models, especially in domains with structured representations. Inspired by the proven success of human-guided machine learning, we propose Knowledge-augmented Column Networks, a relational deep learning framework that leverages human advice/knowledge to learn better models in the presence of sparsity and systematic noise.

* Presented at 2019 ICML Workshop on Human in the Loop Learning (HILL 2019), Long Beach, USA. arXiv admin note: substantial text overlap with arXiv:1904.06950 

Human-Guided Learning of Column Networks: Augmenting Deep Learning with Advice

Apr 15, 2019
Mayukh Das, Yang Yu, Devendra Singh Dhami, Gautam Kunapuli, Sriraam Natarajan

Recently, deep models have been successfully applied in several applications, especially with low-level representations. However, sparse, noisy samples and structured domains (with multiple objects and interactions) are some of the open challenges in most deep models. Column Networks, a deep architecture, can succinctly capture such domain structure and interactions, but may still be prone to sub-optimal learning from sparse and noisy samples. Inspired by the success of human-advice guided learning in AI, especially in data-scarce domains, we propose Knowledge-augmented Column Networks that leverage human advice/knowledge for better learning with noisy/sparse samples. Our experiments demonstrate that our approach leads to either superior overall performance or faster convergence (i.e., both effective and efficient).
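
As a rough illustration of what "guiding a relational deep model with advice" could look like, the sketch below nudges node-level scores with a per-node preference signal. The gating form, the strength constant, and all names are assumptions, not the paper's exact advice-injection mechanism for Column Networks.

```python
# Generic illustration: bias a relational layer's label scores with human advice.
import torch

num_nodes, hidden_dim, num_labels = 5, 8, 3
h = torch.randn(num_nodes, hidden_dim)   # node embeddings from a relational layer
W = torch.randn(hidden_dim, num_labels)  # label projection

# advice[i, c] > 0: expert prefers label c for node i; < 0: expert disfavours it.
advice = torch.zeros(num_nodes, num_labels)
advice[0, 1] = 1.0    # "node 0 should be label 1"
advice[3, 2] = -1.0   # "node 3 should not be label 2"

logits = h @ W
advice_strength = 0.5                       # how much the advice can shift scores
guided_logits = logits + advice_strength * advice
probs = torch.softmax(guided_logits, dim=-1)
print(probs.argmax(dim=-1))
```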

* Under Review at 'Machine Learning Journal' (MLJ) 

Preference-Guided Planning: An Active Elicitation Approach

Apr 19, 2018
Mayukh Das, Phillip Odom, Md. Rakibul Islam, Janardhan Rao Doppa, Dan Roth, Sriraam Natarajan

Planning with preferences has been employed extensively to quickly generate high-quality plans. However, it may be difficult for the human expert to supply this information without knowledge of the reasoning employed by the planner and the distribution of planning problems. We consider the problem of actively eliciting preferences from a human expert during the planning process. Specifically, we study this problem in the context of the Hierarchical Task Network (HTN) planning framework as it allows easy interaction with the human. Our experimental results on several diverse planning domains show that the preferences gathered using the proposed approach improve the quality and speed of the planner, while reducing the burden on the human expert.
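
A minimal sketch of the active-elicitation idea, under the assumption that queries happen at HTN decomposition choice points and only when no previously gathered preference resolves the choice. The data structures, query rule, and names are illustrative, not the paper's algorithm.

```python
# Toy sketch: ask the human for a method preference only when the planner
# faces an unresolved choice among applicable HTN decomposition methods.

preferences = {}  # task -> preferred method, gathered from the expert so far

def choose_method(task: str, applicable_methods: list, ask_human) -> str:
    if task in preferences and preferences[task] in applicable_methods:
        return preferences[task]                  # reuse an elicited preference
    if len(applicable_methods) == 1:
        return applicable_methods[0]              # no real choice, no query needed
    choice = ask_human(task, applicable_methods)  # active elicitation step
    preferences[task] = choice
    return choice

def scripted_expert(task, methods):
    # Stand-in for the human expert: always prefer the first "fast" method.
    return next((m for m in methods if "fast" in m), methods[0])

plan = [choose_method(t, ms, scripted_expert)
        for t, ms in [("build_wall", ["fast_stack", "careful_stack"]),
                      ("build_wall", ["fast_stack", "careful_stack"]),  # no second query
                      ("paint_wall", ["spray"])]]
print(plan, preferences)
```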

* Under Review at Knowledge-Based Systems (Elsevier); "Extended Abstract" accepted and to appear at AAMAS 2018 