Catastrophic forgetting in neural networks during incremental learning remains a challenging problem. Previous research has investigated catastrophic forgetting in fully connected networks, with some earlier work exploring the influence of activation functions and learning algorithms. Neural networks have since been applied to similarity learning, so it is of significant interest to understand how similarity-learning loss functions are affected by catastrophic forgetting. Our research investigates catastrophic forgetting for four well-known similarity-based loss functions during incremental class learning: angular, contrastive, centre, and triplet loss. Our results show that the rate of catastrophic forgetting differs across loss functions on multiple datasets. Angular loss was least affected, followed by contrastive loss, triplet loss, and centre loss, given good mining techniques. We implemented three existing incremental learning techniques: iCaRL, EWC, and EBLL. We further proposed a novel technique that uses Variational Autoencoders (VAEs) to generate representations as exemplars, which are passed through the intermediate layers of the network. Our method outperformed the three existing techniques. We have shown that stored images are not required as exemplars for incremental learning with similarity learning: the generated representations help preserve the regions of the embedding space used by prior knowledge, so that new knowledge does not ``overwrite'' it.
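For context, standard formulations of three of the four losses are given below; the exact variants, distances, and margin values used in the study may differ, and $f(\cdot)$ denotes the embedding network:
\begin{align*}
\mathcal{L}_{\text{contrastive}} &= y\, d(x_1, x_2)^2 + (1 - y)\, \max\bigl(0,\; m - d(x_1, x_2)\bigr)^2, \\
\mathcal{L}_{\text{triplet}} &= \max\bigl(0,\; d(x_a, x_p)^2 - d(x_a, x_n)^2 + m\bigr), \\
\mathcal{L}_{\text{centre}} &= \tfrac{1}{2} \sum_{i=1}^{B} \lVert f(x_i) - c_{y_i} \rVert_2^2,
\end{align*}
where $d(x_u, x_v) = \lVert f(x_u) - f(x_v) \rVert_2$, $y \in \{0, 1\}$ indicates whether a pair matches, $m$ is a margin, $c_{y_i}$ is the learned centre of class $y_i$, and $B$ is the batch size. The angular loss replaces the fixed triplet margin with a constraint on the angle at the negative point of the triplet triangle.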
This paper tackles face recognition in videos using metric learning methods and similarity ranking models. The paper compares a Siamese network with contrastive loss against a triplet network with triplet loss, each implemented with the following architectures: the Google/Inception architecture, a 3-D Convolutional Network (C3D), and a 2-D Long Short-Term Memory (LSTM) recurrent neural network. We train the networks on both still images and video sequences and compare the performance of the above architectures. The dataset used was the YouTube Face Database, designed for investigating the problem of face recognition in videos. The contribution of this paper is two-fold. First, the experiments establish that 3-D Convolutional networks and 2-D LSTMs with contrastive loss on image sequences do not outperform the Google/Inception architecture with contrastive loss on still images in top-$n$ rank face retrieval; however, the 3-D Convolutional networks and 2-D LSTMs with triplet loss do outperform the Google/Inception architecture with triplet loss on this dataset. Second, a Support Vector Machine (SVM) was used in conjunction with the CNNs' learned feature representations for facial identification. The results show that feature representations learned with triplet loss are significantly better for $n$-shot facial identification than those learned with contrastive loss, and the most useful feature representations for facial identification come from the 2-D LSTM with triplet loss. The experiments show that learning spatio-temporal features from video sequences is beneficial for face recognition in videos.
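As an illustration of the second contribution, the sketch below shows a minimal version of the pipeline: one triplet-loss training step on a small stand-in embedding network (the paper's actual backbones are Google/Inception, C3D, and a 2-D LSTM), followed by an SVM fitted on the learned embeddings for $n$-shot identification. All layer sizes, dimensions, and data here are illustrative placeholders, not the paper's configuration.
\begin{verbatim}
import torch
import torch.nn as nn
from sklearn.svm import SVC

# Toy stand-in for the embedding backbone (the paper uses
# Google/Inception, C3D, or a 2-D LSTM; sizes here are arbitrary).
class EmbeddingNet(nn.Module):
    def __init__(self, in_dim=512, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, emb_dim))

    def forward(self, x):
        # L2-normalise embeddings so distances lie on a common scale.
        return nn.functional.normalize(self.net(x), dim=1)

model = EmbeddingNet()
criterion = nn.TripletMarginLoss(margin=0.2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One triplet-loss training step on random placeholder data.
anchor, positive, negative = (torch.randn(32, 512) for _ in range(3))
loss = criterion(model(anchor), model(positive), model(negative))
optimizer.zero_grad()
loss.backward()
optimizer.step()

# n-shot identification: fit an SVM on the embeddings of the n
# support images per identity, then classify query embeddings.
with torch.no_grad():
    support = model(torch.randn(10, 512)).numpy()  # 5 identities, 2 shots
    queries = model(torch.randn(5, 512)).numpy()
support_labels = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]
svm = SVC(kernel="linear").fit(support, support_labels)
predictions = svm.predict(queries)
\end{verbatim}
A Siamese/contrastive variant of the same sketch would replace the triplet criterion with a pairwise contrastive loss over matching and non-matching pairs; the SVM stage is unchanged, since it operates only on the learned embeddings.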