* This work presents a new insight that complements the earlier finding that "deep neural networks easily fit random labels" [99]: deep models fit, and generalise, with significantly less confidence when more random labels are present. ProSelfLC redirects and promotes entropy minimisation, in marked contrast to recent practices of confidence penalisation [16, 59, 72]; a sketch of its target-blending idea follows this item.
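A minimal PyTorch sketch of the kind of progressive self label correction ProSelfLC advocates: the learning target blends the annotated one-hot label with the model's own (detached) prediction under a trust weight that grows over training. The scalar `trust` and the function names here are illustrative assumptions; ProSelfLC's actual trust score additionally adapts per example by prediction confidence.

```python
import torch
import torch.nn.functional as F

def proselflc_style_target(logits, labels, num_classes, trust):
    """Blend the annotated one-hot label with the model's own prediction.

    `trust` in [0, 1] should grow as training progresses, so early
    training follows human labels and later training trusts the
    learner's own low-entropy predictions. (A single scalar is an
    illustrative simplification of ProSelfLC's adaptive trust.)
    """
    one_hot = F.one_hot(labels, num_classes).float()
    pred = F.softmax(logits.detach(), dim=1)  # model's current belief
    # A low-entropy target that may overrule a noisy human label.
    return (1.0 - trust) * one_hot + trust * pred

def soft_target_cross_entropy(logits, target):
    """Cross entropy against a soft target distribution."""
    return -(target * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

# Usage with dummy data:
logits = torch.randn(4, 10)
labels = torch.tensor([1, 3, 3, 7])
target = proselflc_style_target(logits, labels, num_classes=10, trust=0.3)
loss = soft_target_cross_entropy(logits, target)
```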
* Open discussion: we show that it is fine for a learner (student) to be confident about a correct, low-entropy status. More research attention should therefore be paid to the definition of correct knowledge, since, as is generally accepted, human annotations used for learning supervision may be biased, subjective, and wrong.
* Learning target revising, softer targets, entropy regularisation, and the EM algorithm. A target label distribution should define both the semantic class and the similarity structure! A sketch of such a structured target follows this item.
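As a hedged illustration of a target that carries similarity structure rather than only the semantic class, the sketch below spreads the smoothing mass in proportion to an assumed class-similarity matrix instead of uniformly. The matrix `class_similarity` and the function are hypothetical stand-ins, not the paper's own construction.

```python
import torch
import torch.nn.functional as F

def structured_soft_target(labels, class_similarity, smoothing=0.1):
    """Soft target whose off-class mass follows a class-similarity prior.

    Unlike uniform label smoothing, the residual probability mass is
    distributed in proportion to the rows of `class_similarity`
    (assumed given, e.g. derived from class embeddings or a confusion
    matrix), so the target encodes which wrong classes are "close".
    """
    num_classes = class_similarity.size(0)
    one_hot = F.one_hot(labels, num_classes).float()
    # Zero out self-similarity and renormalise the remaining mass.
    sim = class_similarity[labels].clone()
    sim.scatter_(1, labels.unsqueeze(1), 0.0)
    sim = sim / sim.sum(dim=1, keepdim=True).clamp_min(1e-12)
    return (1.0 - smoothing) * one_hot + smoothing * sim
```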
* A Set-based Person Re-identification Baseline: simple average fusion of global spatial representations, without temporal information and without parts/poses/attributes information. A minimal fusion sketch follows below.
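A minimal sketch of the stated baseline: per-image global features are fused into one set-level descriptor by simple averaging, with no temporal, part, pose, or attribute cues. The tensor shapes and the L2 normalisation are illustrative assumptions.

```python
import torch

def set_representation(frame_features):
    """Fuse a variable-sized set of per-image global features into a
    single set-level descriptor by simple averaging.

    frame_features: (num_images, feat_dim) tensor of global spatial
    representations, e.g. globally pooled CNN features per image.
    """
    fused = frame_features.mean(dim=0)
    # L2-normalise so matching can use cosine / Euclidean distance.
    return fused / fused.norm().clamp_min(1e-12)

# Retrieval then reduces to nearest-neighbour search between the
# fused query descriptor and the fused gallery descriptors.
```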
* Question: which training examples should be focused on, and how much more should they be emphasised, when training DNNs under label noise? Answer: when the noise rate is higher, we can improve a model's robustness by focusing on relatively less difficult examples, e.g. via the weighting sketch below.
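One hedged way to realise that answer in code: weight each example's cross-entropy term by the softmax probability of its labelled class raised to a power, so that larger exponents shift emphasis towards relatively easier examples. The weighting form and the `beta` knob are illustrative assumptions, not the paper's exact scheme.

```python
import torch
import torch.nn.functional as F

def easy_example_weighted_ce(logits, labels, beta=2.0):
    """Cross entropy where each example is weighted by p_y ** beta,
    with p_y the softmax probability of its (possibly noisy) label.

    Larger beta emphasises relatively easier examples (high p_y),
    which the finding above suggests is preferable when the label
    noise rate is higher.
    """
    log_probs = F.log_softmax(logits, dim=1)
    per_example = -log_probs.gather(1, labels.unsqueeze(1)).squeeze(1)
    p_y = (-per_example).exp()
    weights = p_y.detach() ** beta            # stop-grad on the weights
    weights = weights / weights.sum().clamp_min(1e-12)
    return (weights * per_example).sum()
```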