Robust Person Re-Identification by Modelling Feature Uncertainty
The source code is here.
Two types of noise are prevalent in practice:
label noise caused by human annotator errors, i.e., person images assigned the wrong identities
data outliers caused by person detector errors or occlusion
Having both types of noisy samples in a training set inevitably has a detrimental effect on the learned feature embedding:
Noisy samples are often far from inliers of the same class in the input (image) space.
To minimise intra-class distance and pull the noisy samples close to their class centre, a ReID model often needs to sacrifice inter-class separability, leading to performance degradation.
Existing robust deep learning approaches can be grouped into two categories, depending on whether human supervision/verification of the noise is required.
In the first category, no additional human noise annotation or noise-pattern estimation is needed. These methods address label noise by iterative label correction via bootstrapping, by adding extra layers on top of the classification layer to estimate the noise pattern [1, 2], or by loss correction [3].
The second category requires a subset of the noisy data to be re-annotated (cleaned) by more reliable sources to verify which samples contain noise. This subset is then used as a seed/reference set so that noisy samples in the full training set can be identified.
(1) The recently proposed CleanNet [4] learns the similarity between class- and query-embedding vectors, which is then used to detect noisy samples.
(2) MentorNet [5], on the other hand, resorts to curriculum learning and knowledge distillation to focus on samples whose labels are more likely to be correct.
We propose to explicitly model the feature distribution of each individual image as a Gaussian, i.e., we assume the feature vector of an image is drawn from a Gaussian distribution parametrised by a mean vector μ and a variance, both predicted by the network.
The resulting model, DistributionNet, models feature uncertainty and allocates it appropriately across training samples by introducing a loss that keeps the overall (net) uncertainty level high.
Given a noisy training sample, instead of forcing it to be closer to other inliers of the same class, DistributionNet computes a large variance, indicating that it is uncertain about what feature values should be assigned to the sample.
Training samples of larger variances have less impact on the learned feature embedding space.
Together with the supervised loss, these losses help the model identify noisy training samples, mitigate their negative impact by assigning them large variances, and focus on the clean inliers rather than overfitting to the noisy samples, resulting in better class separability and better generalisation to test data.
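The core idea of drawing an image's feature from a predicted Gaussian can be sketched with the reparameterisation trick. This is a minimal NumPy illustration, not the paper's implementation: the function and variable names are ours, and in the real model the mean and log-variance would be two heads of the backbone CNN.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_embedding(mu, log_var, n_samples=1, rng=rng):
    """Draw feature samples x_hat ~ N(mu, diag(var)) via the
    reparameterisation trick: x_hat = mu + sigma * eps, eps ~ N(0, I).
    In DistributionNet, `mu` and `log_var` would be predicted by the
    network for each image (names/shapes here are illustrative)."""
    sigma = np.exp(0.5 * log_var)
    eps = rng.standard_normal((n_samples,) + mu.shape)
    return mu + sigma * eps

mu = np.array([1.0, -0.5, 0.3])

# a clean inlier: small predicted variance -> samples stay near mu
x_clean = sample_embedding(mu, log_var=np.full(3, -6.0), n_samples=5)

# a noisy sample: large predicted variance -> samples scatter widely
x_noisy = sample_embedding(mu, log_var=np.full(3, 2.0), n_samples=5)

print(np.abs(x_clean - mu).max())  # small deviation from the mean
print(np.abs(x_noisy - mu).max())  # much larger deviation
```

The sampled features, rather than the means alone, are what get fed to the classification loss during training.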
Why large variances for noisy samples
First, we need to understand what the supervised classification loss L_cls wants: samples with large variances lead to large values of L_cls; samples with wrong labels or outlying samples have the same effect, because they normally lie far from the class centres and the clean inliers.
Second, with the feature uncertainty loss L_FU, the model cannot simply satisfy L_cls by reducing the variance of every sample to zero – the overall variance/uncertainty level has to be maintained.
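One simple way to realise such a constraint is to penalise the batch-average entropy of the predicted Gaussians whenever it drops below a margin. The sketch below is our illustration of the idea, with a hypothetical margin hyper-parameter, and is not necessarily the paper's exact formulation.

```python
import numpy as np

def feature_uncertainty_loss(log_var, margin=0.0):
    """Penalise the model when the batch-average entropy of the predicted
    Gaussian embeddings falls below `margin`, so the overall uncertainty
    level is maintained. (`margin` is a hypothetical hyper-parameter.)

    Entropy of N(mu, diag(sigma^2)) = 0.5 * sum_d (log(2*pi*e) + log sigma_d^2).
    """
    entropy = 0.5 * np.sum(np.log(2 * np.pi * np.e) + log_var, axis=1)
    return max(0.0, margin - entropy.mean())

# shrinking every variance towards zero (very negative log_var) is punished:
tiny_var = np.full((4, 8), -10.0)    # batch of 4, 8-dim embeddings
moderate_var = np.full((4, 8), 0.0)  # unit variances
print(feature_uncertainty_loss(tiny_var, margin=5.0))      # large penalty
print(feature_uncertainty_loss(moderate_var, margin=5.0))  # no penalty
```

Because the penalty depends only on the batch average, the model is free to trade variance between samples, which is exactly the allocation question discussed next.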
So who will get the large variance?
Now the decision is clear: reducing the variances of the noisy samples would still leave L_cls large, whilst reducing the variances of the clean inliers directly reduces L_cls; the model therefore allocates the large variances to the noisy samples.
Why samples with larger variances contribute less to model training
The reason is intuitive – if an image embedding has a large variance, a sampled feature x̂ will often lie far away from its original point (the mean vector μ) yet carry the same class label. So when several such diverse samples x̂ are fed to the classifier, it is likely that their gradients will cancel each other out.
On the other hand, when a sample has a small variance, every sampled x̂ will be close to the mean μ; feeding these to the classifier gives consistent gradients, thus reinforcing the importance of that sample.
The variance/uncertainty thus provides a mechanism for DistributionNet to give more/less importance to different training samples. Since noisy samples are given large variance, their contribution to model training is reduced, resulting in a better feature embedding space.
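The cancellation effect can be checked numerically with a toy binary classifier: with a small sampling variance the per-sample gradients agree almost perfectly, while with a large variance the mean gradient is much smaller than the typical per-sample gradient. The logistic stand-in classifier and all the numbers below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def logistic_grads(mu, sigma, w, y=1.0, n=2000, rng=rng):
    """Per-sample gradients of a logistic loss w.r.t. w, for features
    sampled from N(mu, sigma^2 I) (a toy stand-in for the ReID classifier)."""
    x = mu + sigma * rng.standard_normal((n, mu.size))
    p = 1.0 / (1.0 + np.exp(y * (x @ w)))  # = sigmoid(-y * w.x)
    return (-y * p)[:, None] * x           # one gradient row per sample

def agreement(grads):
    """Norm of the mean gradient relative to the mean gradient norm:
    close to 1 when gradients agree, close to 0 when they cancel out."""
    return np.linalg.norm(grads.mean(axis=0)) / np.linalg.norm(grads, axis=1).mean()

mu = np.array([1.0, 0.0])
w = np.array([0.5, 0.5])

small = agreement(logistic_grads(mu, sigma=0.05, w=w))  # consistent gradients
large = agreement(logistic_grads(mu, sigma=5.0, w=w))   # gradients partly cancel
print(small, large)
```

The small-variance sample pushes the classifier consistently in one direction, whereas the large-variance sample's updates largely wash out, mirroring the down-weighting of noisy samples described above.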
[1] Sainbayar Sukhbaatar, Joan Bruna, Manohar Paluri, Lubomir Bourdev, and Rob Fergus. Training convolutional networks with noisy labels. In ICLR, 2015. [link]
[2] Jacob Goldberger and Ehud Ben-Reuven. Training deep neural-networks using a noise adaptation layer. In ICLR, 2017. [link]
[3] Giorgio Patrini, Alessandro Rozza, Aditya Krishna Menon, Richard Nock, and Lizhen Qu. Making deep neural networks robust to label noise: A loss correction approach. In CVPR, 2017. [link]
[4] Kuang-Huei Lee, Xiaodong He, Lei Zhang, and Linjun Yang. CleanNet: Transfer learning for scalable image classifier training with label noise. In CVPR, 2018. [link]
[5] Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, and Li Fei-Fei. MentorNet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In ICML, 2018. [link]