Krishnan Srinivasan

Learning Hierarchical Control for Robust In-Hand Manipulation
Oct 24, 2019
Tingguang Li, Krishnan Srinivasan, Max Qing-Hu Meng, Wenzhen Yuan, Jeannette Bohg

Controlling Assistive Robots with Learned Latent Actions
Oct 16, 2019
Dylan P. Losey, Krishnan Srinivasan, Ajay Mandlekar, Animesh Garg, Dorsa Sadigh

Making Sense of Vision and Touch: Learning Multimodal Representations for Contact-Rich Tasks
Jul 28, 2019
Michelle A. Lee, Yuke Zhu, Peter Zachares, Matthew Tan, Krishnan Srinivasan, Silvio Savarese, Li Fei-Fei, Animesh Garg, Jeannette Bohg

Making Sense of Vision and Touch: Self-Supervised Learning of Multimodal Representations for Contact-Rich Tasks
Mar 08, 2019
Michelle A. Lee, Yuke Zhu, Krishnan Srinivasan, Parth Shah, Silvio Savarese, Li Fei-Fei, Animesh Garg, Jeannette Bohg

Graph-based Neural Multi-Document Summarization
Aug 23, 2017
Michihiro Yasunaga, Rui Zhang, Kshitijh Meelu, Ayush Pareek, Krishnan Srinivasan, Dragomir Radev
