
AdaTriplet-RA: Domain Matching via Adaptive Triplet and Reinforced Attention for Unsupervised Domain Adaptation


Nov 16, 2022
Xinyao Shu, Shiyang Yan, Zhenyu Lu, Xinshao Wang, Yuan Xie


Misspecified Phase Retrieval with Generative Priors


Oct 11, 2022
Zhaoqiang Liu, Xinshao Wang, Jiulong Liu

* NeurIPS 2022 

ProSelfLC: Progressive Self Label Correction Towards A Low-Temperature Entropy State


Jun 30, 2022
Xinshao Wang, Yang Hua, Elyor Kodirov, Sankha Subhra Mukherjee, David A. Clifton, Neil M. Robertson

* This work presents a new insightful finding that complements a previous one, "deep neural networks easily fit random labels" [99]: deep models fit and generalise with significantly less confidence when more random labels exist. ProSelfLC redirects and promotes entropy minimisation, in marked contrast to recent practices of confidence penalty [16, 59, 72] 
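The note above describes progressive self label correction: the annotated target is gradually blended with the model's own prediction, with trust in the prediction growing as training progresses and as the prediction becomes more confident (lower entropy). A minimal sketch of that idea follows; the sigmoid schedule, `base` parameter, and function name are illustrative stand-ins, not the paper's exact formulation.

```python
import numpy as np

def proselflc_target(one_hot, pred, epoch, total_epochs, base=16.0):
    """Illustrative progressive self label correction.

    Blends the (possibly noisy) annotated target with the model's own
    prediction. Trust in the prediction grows with training progress
    (global) and with the prediction's confidence (local); the exact
    schedule below is a simplified stand-in for the paper's.
    """
    # Global trust: rises from ~0 to ~1 as training progresses.
    g = 1.0 / (1.0 + np.exp(-base * (epoch / total_epochs - 0.5)))
    # Local trust: 1 minus the normalised entropy of the prediction,
    # so confident (low-entropy) predictions are trusted more.
    k = len(pred)
    entropy = -np.sum(pred * np.log(pred + 1e-12))
    local = 1.0 - entropy / np.log(k)
    eps = g * local  # how much we trust the learner over the annotation
    return (1.0 - eps) * one_hot + eps * pred
```

Because the output is a convex combination of two distributions, the corrected target is itself a valid distribution; early in training it stays close to the annotation, and late in training it moves towards a confident self-prediction, which is how entropy minimisation is promoted rather than penalised.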


Not All Knowledge Is Created Equal


Jun 02, 2021
Ziyun Li, Xinshao Wang, Haojin Yang, Di Hu, Neil M. Robertson, David A. Clifton, Christoph Meinel

* Selective mutual knowledge distillation 


ProSelfLC: Progressive Self Label Correction for Training Robust Deep Neural Networks


Jun 08, 2020
Xinshao Wang, Yang Hua, Elyor Kodirov, Neil M. Robertson

* Open discussion: we show it is fine for a learner (student) to be confident in a correct low-entropy state. More research attention should therefore be paid to the definition of correct knowledge, since, as is generally accepted, human annotations used for learning supervision may be biased, subjective, and wrong 


ProSelfLC: Progressive Self Label Correction for Target Revising in Label Noise


May 17, 2020
Xinshao Wang, Yang Hua, Elyor Kodirov, Neil M. Robertson

* Learning target revising, softer targets, entropy regularisation, EM algorithm. A target label distribution should define both the semantic class and similarity structure! 
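The note above argues that a target label distribution should encode both the semantic class and the similarity structure among classes. A hedged sketch of the contrast: uniform label smoothing produces a softer target but spreads off-class mass evenly (no similarity information), whereas a target whose off-class mass follows the model's predicted similarities preserves inter-class structure. Both functions below are illustrative, not the paper's exact formulation.

```python
import numpy as np

def uniform_smoothing(one_hot, eps=0.1):
    # Label smoothing: off-class mass is spread uniformly,
    # so the target encodes no similarity structure.
    k = len(one_hot)
    return (1.0 - eps) * one_hot + eps * np.ones(k) / k

def similarity_aware_target(one_hot, pred, eps=0.1):
    # Softer target whose off-class mass follows the model's
    # predicted similarity structure instead of being uniform.
    return (1.0 - eps) * one_hot + eps * pred
```

With an annotation for class 0 and a prediction of (0.7, 0.25, 0.05), the similarity-aware target still ranks class 1 above class 2 (the similarity structure survives), while uniform smoothing assigns them identical mass.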


Instance Cross Entropy for Deep Metric Learning


Nov 22, 2019
Xinshao Wang, Elyor Kodirov, Yang Hua, Neil Robertson


ID-aware Quality for Set-based Person Re-identification


Nov 20, 2019
Xinshao Wang, Elyor Kodirov, Yang Hua, Neil M. Robertson

* A Set-based Person Re-identification Baseline: Simple Average Fusion of Global Spatial Representations, without temporal information, without parts/poses/attributes information 
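The baseline described above, simple average fusion of global spatial representations, reduces a set of per-frame features to their plain mean, with no temporal, part, pose, or attribute cues. A minimal sketch under those assumptions (the function names and cosine matching step are illustrative):

```python
import numpy as np

def set_descriptor(frame_features):
    """Average-fusion baseline for set-based re-identification.

    frame_features: (n_frames, dim) array of per-frame global
    representations (e.g. CNN embeddings). The set is represented
    by the plain mean of its frames -- nothing else.
    """
    feats = np.asarray(frame_features, dtype=np.float64)
    return feats.mean(axis=0)

def cosine_similarity(a, b):
    # Standard cosine matching between two set descriptors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Two sets of the same identity should then yield descriptors with higher cosine similarity than sets of different identities, despite the fusion step being a single `mean` call.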


Emphasis Regularisation by Gradient Rescaling for Training Deep Neural Networks with Noisy Labels


May 27, 2019
Xinshao Wang, Yang Hua, Elyor Kodirov, Neil Robertson

* Question: Which training examples should be focused on, and how much more should they be emphasised, when training DNNs under label noise? Answer: When the noise rate is higher, we can improve a model's robustness by focusing on relatively less difficult examples 
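The answer above suggests an emphasis scheme that shifts weight towards easier examples as the noise rate grows. The sketch below illustrates that behaviour with an exponential weighting; the exponential form, `beta_max` parameter, and function name are illustrative stand-ins for the paper's gradient rescaling, not its actual formulation.

```python
import numpy as np

def emphasis_weights(p_target, noise_rate, beta_max=8.0):
    """Illustrative emphasis scheme for training under label noise.

    p_target: the model's probability for the annotated class of each
    example (higher => easier example). A higher assumed noise rate
    increases beta, concentrating the normalised weights on easier
    examples; at noise_rate=0 all examples are weighted equally.
    """
    beta = beta_max * noise_rate
    w = np.exp(beta * np.asarray(p_target, dtype=np.float64))
    return w / w.sum()
```

At zero noise the weights are uniform; at high noise the easy example (high `p_target`) dominates, matching the note's claim that robustness improves by emphasising relatively less difficult examples.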
