Haoxin Li

NarrowBERT: Accelerating Masked Language Model Pretraining and Inference

Jan 11, 2023
Haoxin Li, Phillip Keung, Daniel Cheng, Jungo Kasai, Noah A. Smith

Large-scale language model pretraining is a very successful form of self-supervised learning in natural language processing, but it is increasingly expensive to perform as the models and pretraining corpora have become larger over time. We propose NarrowBERT, a modified transformer encoder that increases the throughput of masked language model pretraining by more than $2\times$. NarrowBERT sparsifies the transformer model so that the self-attention queries and feedforward layers operate only on the masked tokens of each sentence during pretraining, rather than on all of the tokens as in the usual transformer encoder. We also show that NarrowBERT increases throughput at inference time by as much as $3.5\times$ with minimal (or no) performance degradation on sentence encoding tasks like MNLI. Finally, we examine the performance of NarrowBERT on the IMDB and Amazon reviews classification and CoNLL NER tasks and show that it remains comparable to standard BERT.

* Under review (ACL Rolling Review) 
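The sketch below illustrates the narrowing idea described in the abstract, assuming a PyTorch-style encoder: only the masked positions act as attention queries (keys and values still span the whole sentence), and the feedforward layers run only on those positions. Layer names, dimensions, and the gather-based indexing are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class NarrowedLayer(nn.Module):
    """One encoder layer whose queries and feedforward are restricted to masked tokens."""
    def __init__(self, d_model=768, n_heads=12, d_ff=3072):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, hidden, masked_idx):
        # hidden: (batch, seq_len, d_model); masked_idx: (batch, n_masked) positions of [MASK]
        queries = torch.gather(
            hidden, 1, masked_idx.unsqueeze(-1).expand(-1, -1, hidden.size(-1))
        )
        # Queries come only from masked positions; keys/values still cover the full sentence.
        attn_out, _ = self.attn(queries, hidden, hidden)
        x = self.norm1(queries + attn_out)
        x = self.norm2(x + self.ff(x))  # feedforward applied only to the narrowed positions
        return x  # (batch, n_masked, d_model), fed to the MLM head

# Hypothetical usage: hidden states from a few standard layers are narrowed before the head.
# hidden = standard_encoder_prefix(input_ids)          # placeholder, not a real function
# logits = mlm_head(NarrowedLayer()(hidden, masked_idx))
```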

Evaluating and Mitigating Static Bias of Action Representations in the Background and the Foreground

Nov 23, 2022
Haoxin Li, Yue Wu, Yuan Liu, Hanwang Zhang, Boyang Li

Deep neural networks for video action recognition easily learn to exploit shortcut static features, such as the background and objects, instead of motion features. This results in poor generalization to atypical videos, such as soccer played on a concrete surface instead of a soccer field. However, due to the rarity of out-of-distribution (OOD) data, quantitative evaluation of static bias remains difficult. In this paper, we synthesize new benchmarks to evaluate the static bias of action representations: SCUB for static cues in the background and SCUF for static cues in the foreground. Further, we propose a simple yet effective video data augmentation technique, StillMix, that automatically identifies bias-inducing video frames; unlike similar augmentation techniques, StillMix does not need to enumerate or precisely segment biased content. With extensive experiments, we quantitatively compare and analyze existing action recognition models on the created benchmarks to reveal their characteristics. We validate the effectiveness of StillMix and show that it improves TSM (Lin, Gan, and Han 2021) and Video Swin Transformer (Liu et al. 2021) by more than 10% in accuracy on SCUB for OOD action recognition.
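A loose sketch of a StillMix-style augmentation follows. It blends one static frame, chosen because a frame-only classifier finds it bias inducing, into every frame of a training clip so the model cannot rely on static cues alone. The frame scorer, the selection rule, and the label handling are simplified assumptions, not the paper's exact procedure.

```python
import torch

def stillmix(clip, bank_frames, frame_scorer, lam=0.5):
    """Blend a bias-inducing static frame into a training clip (illustrative only).

    clip: (T, C, H, W) training video, label kept unchanged in this sketch.
    bank_frames: (N, C, H, W) candidate static frames from other videos.
    frame_scorer: hypothetical 2D (single-frame) action classifier.
    """
    with torch.no_grad():
        probs = frame_scorer(bank_frames).softmax(-1)      # (N, num_classes)
        scores = probs.max(-1).values                      # confidence of a static-only guess
        idx = torch.multinomial(scores / scores.sum(), 1)  # prefer bias-inducing frames
    static = bank_frames[idx]                              # (1, C, H, W), broadcast over time
    return lam * clip + (1.0 - lam) * static               # mixed clip used for training
```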

Adaptive Interaction Modeling via Graph Operations Search

May 05, 2020
Haoxin Li, Wei-Shi Zheng, Yu Tao, Haifeng Hu, Jian-Huang Lai

Interaction modeling is important for video action analysis. Recently, several works have designed specific structures to model interactions in videos. However, these structures are manually designed and non-adaptive: they require structure design effort and, more importantly, cannot model interactions adaptively. In this paper, we automate the process of structure design to learn adaptive structures for interaction modeling. We propose to search the network structures with a differentiable architecture search mechanism, which learns to construct adaptive structures for different videos to facilitate adaptive interaction modeling. To this end, we first design the search space with several basic graph operations that explicitly capture different relations in videos. We experimentally demonstrate that our architecture search framework learns to construct adaptive interaction modeling structures, which provides more understanding of the relations between the structures and some interaction characteristics, and also removes the need for manual structure design. Additionally, we show that the basic graph operations designed in the search space are able to model different interactions in videos. Experiments on two interaction datasets show that our method achieves performance competitive with the state of the art.
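As a rough illustration of the differentiable search described above, the sketch below mixes candidate graph operations with softmax-weighted architecture parameters, in the style of DARTS. The two candidate operations and their exact forms are placeholders, not the operations defined in the paper.

```python
import torch
import torch.nn as nn

class Identity(nn.Module):
    """No interaction: pass node features through unchanged."""
    def forward(self, x, adj):
        return x

class MessagePassing(nn.Module):
    """One-hop relation: aggregate neighbor features along the graph."""
    def forward(self, x, adj):
        return torch.bmm(adj, x)  # (B, N, N) x (B, N, D) -> (B, N, D)

class MixedGraphOp(nn.Module):
    """Softmax-weighted mixture over candidate graph operations."""
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        self.alpha = nn.Parameter(torch.zeros(len(ops)))  # architecture weights, learned jointly

    def forward(self, x, adj):
        w = torch.softmax(self.alpha, dim=0)
        return sum(wi * op(x, adj) for wi, op in zip(w, self.ops))

# After search, the operation with the largest alpha would be kept at each position.
mixed = MixedGraphOp([Identity(), MessagePassing()])
```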

Unsupervised Learning for Optical Flow Estimation Using Pyramid Convolution LSTM

Jul 26, 2019
Shuosen Guan, Haoxin Li, Wei-Shi Zheng

Most current Convolutional Neural Network (CNN) based methods for optical flow estimation focus on learning optical flow on synthetic datasets with ground truth, which is not practical. In this paper, we propose an unsupervised optical flow estimation framework named PCLNet. It uses a pyramid Convolutional LSTM (ConvLSTM) with an adjacent-frame reconstruction constraint, which allows multi-frame optical flows to be estimated flexibly from any video clip. Besides, by decoupling motion feature learning from optical flow representation, our method avoids the complex shortcut connections used in existing frameworks while improving the accuracy of optical flow estimation. Moreover, unlike methods that use specialized CNN architectures for capturing motion, our framework learns optical flow directly from the features of generic CNNs and thus can be easily embedded in any CNN-based framework for other tasks. Extensive experiments verify that our method not only estimates optical flow effectively and accurately, but also obtains comparable performance on action recognition.

* IEEE International Conference on Multimedia and Expo (ICME), 2019 
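The adjacent-frame reconstruction constraint mentioned in the abstract can be written as a photometric loss: warp the second frame backward with the predicted flow and penalize the difference from the first frame. The sketch below is a generic formulation of that constraint, not PCLNet's exact losses or architecture.

```python
import torch
import torch.nn.functional as F

def warp(img, flow):
    """Backward-warp img with a dense flow field given in pixels.

    img: (B, C, H, W); flow: (B, 2, H, W) with (x, y) displacement channels.
    """
    B, _, H, W = img.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack((xs, ys), 0).float().to(img.device)   # (2, H, W) pixel coordinates
    coords = base.unsqueeze(0) + flow                         # where each pixel samples from
    # Normalize coordinates to [-1, 1] as required by grid_sample.
    gx = 2.0 * coords[:, 0] / (W - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (H - 1) - 1.0
    grid = torch.stack((gx, gy), -1)                           # (B, H, W, 2)
    return F.grid_sample(img, grid, align_corners=True)

def photometric_loss(frame1, frame2, flow_1to2):
    """Reconstruct frame1 by warping frame2 with the predicted flow; L1 penalty."""
    recon = warp(frame2, flow_1to2)
    return (recon - frame1).abs().mean()
```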

Deep Dual Relation Modeling for Egocentric Interaction Recognition

May 31, 2019
Haoxin Li, Yijun Cai, Wei-Shi Zheng

Egocentric interaction recognition aims to recognize the camera wearer's interactions with an interactor who faces the camera wearer in egocentric videos. In such a human-human interaction analysis problem, it is crucial to explore the relations between the camera wearer and the interactor. However, most existing works model the interaction as a whole and do not model the relations between the two interacting persons. To exploit these strong relations for egocentric interaction recognition, we introduce a dual relation modeling framework that learns to model the relations between the camera wearer and the interactor based on the individual action representations of the two persons. Specifically, we develop a novel interactive LSTM module, the key component of our framework, to explicitly model the relations between the two interacting persons based on their individual action representations, which are collaboratively learned with an interactor attention module and a global-local motion module. Experimental results on three egocentric interaction datasets show the effectiveness of our method and its advantage over state-of-the-art methods.
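As a rough illustration of an interactive-LSTM-style module: two recurrent streams, one per person, whose updates condition on the other stream's previous hidden state. The cell wiring, dimensions, and update order below are simplifying assumptions, not the paper's exact cell.

```python
import torch
import torch.nn as nn

class InteractiveLSTM(nn.Module):
    """Two coupled LSTM streams for the camera wearer and the interactor (illustrative)."""
    def __init__(self, feat_dim=1024, hidden_dim=512):
        super().__init__()
        self.cell_wearer = nn.LSTMCell(feat_dim + hidden_dim, hidden_dim)
        self.cell_interactor = nn.LSTMCell(feat_dim + hidden_dim, hidden_dim)

    def forward(self, wearer_feats, interactor_feats):
        # wearer_feats, interactor_feats: (T, batch, feat_dim) per-person action features
        T, B, _ = wearer_feats.shape
        hd = self.cell_wearer.hidden_size
        hw = cw = hi = ci = wearer_feats.new_zeros(B, hd)
        for t in range(T):
            # Each stream sees its own features plus the other stream's latest state.
            hw, cw = self.cell_wearer(torch.cat([wearer_feats[t], hi], -1), (hw, cw))
            hi, ci = self.cell_interactor(torch.cat([interactor_feats[t], hw], -1), (hi, ci))
        return torch.cat([hw, hi], -1)  # joint relation representation for classification
```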
