Jayakumar Subramanian

Behavior Optimized Image Generation

Nov 18, 2023
Varun Khurana, Yaman K Singla, Jayakumar Subramanian, Rajiv Ratn Shah, Changyou Chen, Zhiqiang Xu, Balaji Krishnamurthy

Counterfactual Explanation Policies in RL

Jul 25, 2023
Shripad V. Deshmukh, Srivatsan R, Supriti Vijay, Jayakumar Subramanian, Chirag Agarwal

SARC: Soft Actor Retrospective Critic

Jun 28, 2023
Sukriti Verma, Ayush Chopra, Jayakumar Subramanian, Mausoom Sarkar, Nikaash Puri, Piyush Gupta, Balaji Krishnamurthy

Explaining RL Decisions with Trajectories

May 06, 2023
Shripad Vilasrao Deshmukh, Arpan Dasgupta, Balaji Krishnamurthy, Nan Jiang, Chirag Agarwal, Georgios Theocharous, Jayakumar Subramanian

Differentiable Agent-based Epidemiology

Jul 20, 2022
Ayush Chopra, Alexander Rodríguez, Jayakumar Subramanian, Balaji Krishnamurthy, B. Aditya Prakash, Ramesh Raskar

DeepABM: Scalable, efficient and differentiable agent-based simulations via graph neural networks

Oct 09, 2021
Ayush Chopra, Esma Gel, Jayakumar Subramanian, Balaji Krishnamurthy, Santiago Romero-Brufau, Kalyan S. Pasupathy, Thomas C. Kingsley, Ramesh Raskar

Medical Dead-ends and Learning to Identify High-risk States and Treatments

Oct 08, 2021
Mehdi Fatemi, Taylor W. Killian, Jayakumar Subramanian, Marzyeh Ghassemi

An Empirical Study of Representation Learning for Reinforcement Learning in Healthcare

Nov 23, 2020
Taylor W. Killian, Haoran Zhang, Jayakumar Subramanian, Mehdi Fatemi, Marzyeh Ghassemi

Approximate information state for approximate planning and reinforcement learning in partially observed systems

Oct 17, 2020
Jayakumar Subramanian, Amit Sinha, Raihan Seraj, Aditya Mahajan

Inducing Cooperative behaviour in Sequential-Social dilemmas through Multi-Agent Reinforcement Learning using Status-Quo Loss

Feb 13, 2020
Pinkesh Badjatiya, Mausoom Sarkar, Abhishek Sinha, Siddharth Singh, Nikaash Puri, Jayakumar Subramanian, Balaji Krishnamurthy
