
Publications by Huangying Zhan

Unsupervised Scale-consistent Depth Learning from Video


May 25, 2021
Jia-Wang Bian, Huangying Zhan, Naiyan Wang, Zhichao Li, Le Zhang, Chunhua Shen, Ming-Ming Cheng, Ian Reid

* Accepted to IJCV. The source code is available at https://github.com/JiawangBian/SC-SfMLearner-Release


DF-VO: What Should Be Learnt for Visual Odometry?


Mar 01, 2021
Huangying Zhan, Chamara Saroj Weerasekera, Jia-Wang Bian, Ravi Garg, Ian Reid

* Extended version of the ICRA 2020 paper (Visual Odometry Revisited: What Should Be Learnt?)


Unsupervised Depth Learning in Challenging Indoor Video: Weak Rectification to Rescue


Jun 04, 2020
Jia-Wang Bian, Huangying Zhan, Naiyan Wang, Tat-Jun Chin, Chunhua Shen, Ian Reid

* Code, data, and demos are available on the GitHub page (https://github.com/JiawangBian/Unsupervised-Indoor-Depth)


Visual Odometry Revisited: What Should Be Learnt?


Oct 03, 2019
Huangying Zhan, Chamara Saroj Weerasekera, Jiawang Bian, Ian Reid

* Demo video: https://youtu.be/Nl8mFU4SJKY ; Code: https://github.com/Huangying-Zhan/DF-VO


Unsupervised Scale-consistent Depth and Ego-motion Learning from Monocular Video


Oct 03, 2019
Jia-Wang Bian, Zhichao Li, Naiyan Wang, Huangying Zhan, Chunhua Shen, Ming-Ming Cheng, Ian Reid

* Accepted to NeurIPS 2019. Code is available at https://github.com/JiawangBian/SC-SfMLearner-Release 


Self-supervised Learning for Single View Depth and Surface Normal Estimation


Mar 01, 2019
Huangying Zhan, Chamara Saroj Weerasekera, Ravi Garg, Ian Reid

* 6 pages, 3 figures, ICRA 2019 


Unsupervised Learning of Monocular Depth Estimation and Visual Odometry with Deep Feature Reconstruction


Apr 05, 2018
Huangying Zhan, Ravi Garg, Chamara Saroj Weerasekera, Kejie Li, Harsh Agarwal, Ian Reid

* 8 pages, 6 figures, CVPR 2018 
