Yatai Ji

Bridging the Gap: A Unified Video Comprehension Framework for Moment Retrieval and Highlight Detection

Nov 28, 2023
Yicheng Xiao, Zhuoyan Luo, Yong Liu, Yue Ma, Hengwei Bian, Yatai Ji, Yujiu Yang, Xiu Li

Global and Local Semantic Completion Learning for Vision-Language Pre-training

Jun 12, 2023
Rong-Cheng Tu, Yatai Ji, Jie Jiang, Weijie Kong, Chengfei Cai, Wenzhe Zhao, Hongfa Wang, Yujiu Yang, Wei Liu

Multimodal Prototype-Enhanced Network for Few-Shot Action Recognition

Dec 09, 2022
Xinzhe Ni, Hao Wen, Yong Liu, Yatai Ji, Yujiu Yang

Seeing What You Miss: Vision-Language Pre-training with Semantic Completion Learning

Nov 24, 2022
Yatai Ji, Rongcheng Tu, Jie Jiang, Weijie Kong, Chengfei Cai, Wenzhe Zhao, Hongfa Wang, Yujiu Yang, Wei Liu

MAP: Modality-Agnostic Uncertainty-Aware Vision-Language Pre-training Model

Oct 11, 2022
Yatai Ji, Junjie Wang, Yuan Gong, Lin Zhang, Yanru Zhu, Hongfa Wang, Jiaxing Zhang, Tetsuya Sakai, Yujiu Yang
