Philip H. S. Torr

Riemannian Walk for Incremental Learning: Understanding Forgetting and Intransigence

Aug 14, 2018
Arslan Chaudhry, Puneet K. Dokania, Thalaiyasingam Ajanthan, Philip H. S. Torr

Incremental Tube Construction for Human Action Detection

Jul 23, 2018
Harkirat Singh Behl, Michael Sapienza, Gurkirt Singh, Suman Saha, Fabio Cuzzolin, Philip H. S. Torr

With Friends Like These, Who Needs Adversaries?

Jul 23, 2018
Saumya Jetley, Nicholas A. Lord, Philip H. S. Torr

Multi-Agent Diverse Generative Adversarial Networks

Jul 16, 2018
Arnab Ghosh, Viveka Kulharia, Vinay Namboodiri, Philip H. S. Torr, Puneet K. Dokania

On the Robustness of Semantic Segmentation Models to Adversarial Attacks

Jul 08, 2018
Anurag Arnab, Ondrej Miksik, Philip H. S. Torr

Intriguing Properties of Learned Representations

Jun 11, 2018
Amartya Sanyal, Varun Kanade, Philip H. S. Torr

Value Propagation Networks

May 28, 2018
Nantas Nardelli, Gabriel Synnaeve, Zeming Lin, Pushmeet Kohli, Philip H. S. Torr, Nicolas Usunier

A Unified View of Piecewise Linear Neural Network Verification

May 22, 2018
Rudy Bunel, Ilker Turkaslan, Philip H. S. Torr, Pushmeet Kohli, M. Pawan Kumar

Meta-learning with differentiable closed-form solvers

May 21, 2018
Luca Bertinetto, João F. Henriques, Philip H. S. Torr, Andrea Vedaldi

Stabilising Experience Replay for Deep Multi-Agent Reinforcement Learning

May 21, 2018
Jakob Foerster, Nantas Nardelli, Gregory Farquhar, Triantafyllos Afouras, Philip H. S. Torr, Pushmeet Kohli, Shimon Whiteson
