
Lijuan Wang


MLP Architectures for Vision-and-Language Modeling: An Empirical Study

Dec 08, 2021
Yixin Nie, Linjie Li, Zhe Gan, Shuohang Wang, Chenguang Zhu, Michael Zeng, Zicheng Liu, Mohit Bansal, Lijuan Wang


Grounded Language-Image Pre-training

Dec 07, 2021
Liunian Harold Li, Pengchuan Zhang, Haotian Zhang, Jianwei Yang, Chunyuan Li, Yiwu Zhong, Lijuan Wang, Lu Yuan, Lei Zhang, Jenq-Neng Hwang, Kai-Wei Chang, Jianfeng Gao


SwinBERT: End-to-End Transformers with Sparse Attention for Video Captioning

Nov 25, 2021
Kevin Lin, Linjie Li, Chung-Ching Lin, Faisal Ahmed, Zhe Gan, Zicheng Liu, Yumao Lu, Lijuan Wang


An Empirical Study of Training End-to-End Vision-and-Language Transformers

Nov 25, 2021
Zi-Yi Dou, Yichong Xu, Zhe Gan, Jianfeng Wang, Shuohang Wang, Lijuan Wang, Chenguang Zhu, Pengchuan Zhang, Lu Yuan, Nanyun Peng, Zicheng Liu, Michael Zeng


VIOLET: End-to-End Video-Language Transformers with Masked Visual-token Modeling

Nov 24, 2021
Tsu-Jui Fu, Linjie Li, Zhe Gan, Kevin Lin, William Yang Wang, Lijuan Wang, Zicheng Liu


Scaling Up Vision-Language Pre-training for Image Captioning

Nov 24, 2021
Xiaowei Hu, Zhe Gan, Jianfeng Wang, Zhengyuan Yang, Zicheng Liu, Yumao Lu, Lijuan Wang


Crossing the Format Boundary of Text and Boxes: Towards Unified Vision-Language Modeling

Nov 23, 2021
Zhengyuan Yang, Zhe Gan, Jianfeng Wang, Xiaowei Hu, Faisal Ahmed, Zicheng Liu, Yumao Lu, Lijuan Wang


Florence: A New Foundation Model for Computer Vision

Nov 22, 2021
Lu Yuan, Dongdong Chen, Yi-Ling Chen, Noel Codella, Xiyang Dai, Jianfeng Gao, Houdong Hu, Xuedong Huang, Boxin Li, Chunyuan Li, Ce Liu, Mengchen Liu, Zicheng Liu, Yumao Lu, Yu Shi, Lijuan Wang, Jianfeng Wang, Bin Xiao, Zhen Xiao, Jianwei Yang, Michael Zeng, Luowei Zhou, Pengchuan Zhang


UFO: A UniFied TransfOrmer for Vision-Language Representation Learning

Nov 19, 2021
Jianfeng Wang, Xiaowei Hu, Zhe Gan, Zhengyuan Yang, Xiyang Dai, Zicheng Liu, Yumao Lu, Lijuan Wang
