Hao Li

GiraffeDet: A Heavy-Neck Paradigm for Object Detection

Feb 09, 2022
Yiqi Jiang, Zhiyu Tan, Junyan Wang, Xiuyu Sun, Ming Lin, Hao Li

Figures 1–4 for GiraffeDet: A Heavy-Neck Paradigm for Object Detection

Image-to-Video Re-Identification via Mutual Discriminative Knowledge Transfer

Jan 21, 2022
Pichao Wang, Fan Wang, Hao Li

Figures 1–4 for Image-to-Video Re-Identification via Mutual Discriminative Knowledge Transfer

Studying Popular Open Source Machine Learning Libraries and Their Cross-Ecosystem Bindings

Jan 18, 2022
Hao Li, Cor-Paul Bezemer


CrossMoDA 2021 challenge: Benchmark of Cross-Modality Domain Adaptation techniques for Vestibular Schwannoma and Cochlea Segmentation

Jan 08, 2022
Reuben Dorent, Aaron Kujawa, Marina Ivory, Spyridon Bakas, Nicola Rieke, Samuel Joutard, Ben Glocker, Jorge Cardoso, Marc Modat, Kayhan Batmanghelich, Arseniy Belkov, Maria Baldeon Calisto, Jae Won Choi, Benoit M. Dawant, Hexin Dong, Sergio Escalera, Yubo Fan, Lasse Hansen, Mattias P. Heinrich, Smriti Joshi, Victoriya Kashtanova, Hyeon Gyu Kim, Satoshi Kondo, Christian N. Kruse, Susana K. Lai-Yuen, Hao Li, Han Liu, Buntheng Ly, Ipek Oguz, Hyungseob Shin, Boris Shirokikh, Zixian Su, Guotai Wang, Jianghao Wu, Yanwu Xu, Kai Yao, Li Zhang, Sebastien Ourselin, Jonathan Shapey, Tom Vercauteren

Figures 1–4 for CrossMoDA 2021 challenge: Benchmark of Cross-Modality Domain Adaptation techniques for Vestibular Schwannoma and Cochlea Segmentation

Graph Neural Networks for Double-Strand DNA Breaks Prediction

Jan 04, 2022
Xu Wang, Huan Zhao, Weiwei Tu, Hao Li, Yu Sun, Xiaochen Bo

Figures 1–4 for Graph Neural Networks for Double-Strand DNA Breaks Prediction

ELSA: Enhanced Local Self-Attention for Vision Transformer

Dec 23, 2021
Jingkai Zhou, Pichao Wang, Fan Wang, Qiong Liu, Hao Li, Rong Jin

Figures 1–4 for ELSA: Enhanced Local Self-Attention for Vision Transformer

TransZero++: Cross Attribute-Guided Transformer for Zero-Shot Learning

Dec 21, 2021
Shiming Chen, Ziming Hong, Guo-Sen Xie, Jian Zhao, Hao Li, Xinge You, Shuicheng Yan, Ling Shao

Figures 1–4 for TransZero++: Cross Attribute-Guided Transformer for Zero-Shot Learning

Watch Those Words: Video Falsification Detection Using Word-Conditioned Facial Motion

Dec 21, 2021
Shruti Agarwal, Liwen Hu, Evonne Ng, Trevor Darrell, Hao Li, Anna Rohrbach

Figures 1–4 for Watch Those Words: Video Falsification Detection Using Word-Conditioned Facial Motion

Decoupling and Recoupling Spatiotemporal Representation for RGB-D-based Motion Recognition

Dec 16, 2021
Benjia Zhou, Pichao Wang, Jun Wan, Yanyan Liang, Fan Wang, Du Zhang, Zhen Lei, Hao Li, Rong Jin

Figures 1–4 for Decoupling and Recoupling Spatiotemporal Representation for RGB-D-based Motion Recognition

On the Dilution of Precision for Time Difference of Arrival with Station Deployment

Dec 10, 2021
Fengyun Zhang, Hao Li, Yulong Ding, Shuang-Hua Yang, Li Yang

Figures 1–4 for On the Dilution of Precision for Time Difference of Arrival with Station Deployment