Yan Zeng

A Survey on Causal Reinforcement Learning

Feb 27, 2023
Yan Zeng, Ruichu Cai, Fuchun Sun, Libo Huang, Zhifeng Hao

Toward Building General Foundation Models for Language, Vision, and Vision-Language Understanding Tasks

Jan 12, 2023
Xinsong Zhang, Yan Zeng, Jipeng Zhang, Hang Li

Biomedical image analysis competitions: The state of current participation practice

Dec 16, 2022
Matthias Eisenmann, Annika Reinke, Vivienn Weru, Minu Dietlinde Tizabi, Fabian Isensee, Tim J. Adler, Patrick Godau, Veronika Cheplygina, Michal Kozubek, Sharib Ali, Anubha Gupta, Jan Kybic, Alison Noble, Carlos Ortiz de Solórzano, Samiksha Pachade, Caroline Petitjean, Daniel Sage, Donglai Wei, Elizabeth Wilden, Deepak Alapatt, Vincent Andrearczyk, Ujjwal Baid, Spyridon Bakas, Niranjan Balu, Sophia Bano, Vivek Singh Bawa, Jorge Bernal, Sebastian Bodenstedt, Alessandro Casella, Jinwook Choi, Olivier Commowick, Marie Daum, Adrien Depeursinge, Reuben Dorent, Jan Egger, Hannah Eichhorn, Sandy Engelhardt, Melanie Ganz, Gabriel Girard, Lasse Hansen, Mattias Heinrich, Nicholas Heller, Alessa Hering, Arnaud Huaulmé, Hyunjeong Kim, Bennett Landman, Hongwei Bran Li, Jianning Li, Jun Ma, Anne Martel, Carlos Martín-Isla, Bjoern Menze, Chinedu Innocent Nwoye, Valentin Oreiller, Nicolas Padoy, Sarthak Pati, Kelly Payette, Carole Sudre, Kimberlin van Wijnen, Armine Vardazaryan, Tom Vercauteren, Martin Wagner, Chuanbo Wang, Moi Hoon Yap, Zeyun Yu, Chun Yuan, Maximilian Zenk, Aneeq Zia, David Zimmerer, Rina Bao, Chanyeol Choi, Andrew Cohen, Oleh Dzyubachyk, Adrian Galdran, Tianyuan Gan, Tianqi Guo, Pradyumna Gupta, Mahmood Haithami, Edward Ho, Ikbeom Jang, Zhili Li, Zhengbo Luo, Filip Lux, Sokratis Makrogiannis, Dominik Müller, Young-tack Oh, Subeen Pang, Constantin Pape, Gorkem Polat, Charlotte Rosalie Reed, Kanghyun Ryu, Tim Scherr, Vajira Thambawita, Haoyu Wang, Xinliang Wang, Kele Xu, Hung Yeh, Doyeob Yeo, Yixuan Yuan, Yan Zeng, Xin Zhao, Julian Abbing, Jannes Adam, Nagesh Adluru, Niklas Agethen, Salman Ahmed, Yasmina Al Khalil, Mireia Alenyà, Esa Alhoniemi, Chengyang An, Talha Anwar, Tewodros Weldebirhan Arega, Netanell Avisdris, Dogu Baran Aydogan, Yingbin Bai, Maria Baldeon Calisto, Berke Doga Basaran, Marcel Beetz, Cheng Bian, Hao Bian, Kevin Blansit, Louise Bloch, Robert Bohnsack, Sara Bosticardo, Jack Breen, Mikael Brudfors, Raphael Brüngel, Mariano Cabezas, Alberto 
Cacciola, Zhiwei Chen, Yucong Chen, Daniel Tianming Chen, Minjeong Cho, Min-Kook Choi, Chuantao Xie, Dana Cobzas, Julien Cohen-Adad, Jorge Corral Acero, Sujit Kumar Das, Marcela de Oliveira, Hanqiu Deng, Guiming Dong, Lars Doorenbos, Cory Efird, Di Fan, Mehdi Fatan Serj, Alexandre Fenneteau, Lucas Fidon, Patryk Filipiak, René Finzel, Nuno R. Freitas, Christoph M. Friedrich, Mitchell Fulton, Finn Gaida, Francesco Galati, Christoforos Galazis, Chang Hee Gan, Zheyao Gao, Shengbo Gao, Matej Gazda, Beerend Gerats, Neil Getty, Adam Gibicar, Ryan Gifford, Sajan Gohil, Maria Grammatikopoulou, Daniel Grzech, Orhun Güley, Timo Günnemann, Chunxu Guo, Sylvain Guy, Heonjin Ha, Luyi Han, Il Song Han, Ali Hatamizadeh, Tian He, Jimin Heo, Sebastian Hitziger, SeulGi Hong, SeungBum Hong, Rian Huang, Ziyan Huang, Markus Huellebrand, Stephan Huschauer, Mustaffa Hussain, Tomoo Inubushi, Ece Isik Polat, Mojtaba Jafaritadi, SeongHun Jeong, Bailiang Jian, Yuanhong Jiang, Zhifan Jiang, Yueming Jin, Smriti Joshi, Abdolrahim Kadkhodamohammadi, Reda Abdellah Kamraoui, Inha Kang, Junghwa Kang, Davood Karimi, April Khademi, Muhammad Irfan Khan, Suleiman A. Khan, Rishab Khantwal, Kwang-Ju Kim, Timothy Kline, Satoshi Kondo, Elina Kontio, Adrian Krenzer, Artem Kroviakov, Hugo Kuijf, Satyadwyoom Kumar, Francesco La Rosa, Abhi Lad, Doohee Lee, Minho Lee, Chiara Lena, Hao Li, Ling Li, Xingyu Li, Fuyuan Liao, KuanLun Liao, Arlindo Limede Oliveira, Chaonan Lin, Shan Lin, Akis Linardos, Marius George Linguraru, Han Liu, Tao Liu, Di Liu, Yanling Liu, João Lourenço-Silva, Jingpei Lu, Jiangshan Lu, Imanol Luengo, Christina B.
Lund, Huan Minh Luu, Yi Lv, Uzay Macar, Leon Maechler, Sina Mansour L., Kenji Marshall, Moona Mazher, Richard McKinley, Alfonso Medela, Felix Meissen, Mingyuan Meng, Dylan Miller, Seyed Hossein Mirjahanmardi, Arnab Mishra, Samir Mitha, Hassan Mohy-ud-Din, Tony Chi Wing Mok, Gowtham Krishnan Murugesan, Enamundram Naga Karthik, Sahil Nalawade, Jakub Nalepa, Mohamed Naser, Ramin Nateghi, Hammad Naveed, Quang-Minh Nguyen, Cuong Nguyen Quoc, Brennan Nichyporuk, Bruno Oliveira, David Owen, Jimut Bahan Pal, Junwen Pan, Wentao Pan, Winnie Pang, Bogyu Park, Vivek Pawar, Kamlesh Pawar, Michael Peven, Lena Philipp, Tomasz Pieciak, Szymon Plotka, Marcel Plutat, Fattaneh Pourakpour, Domen Preložnik, Kumaradevan Punithakumar, Abdul Qayyum, Sandro Queirós, Arman Rahmim, Salar Razavi, Jintao Ren, Mina Rezaei, Jonathan Adam Rico, ZunHyan Rieu, Markus Rink, Johannes Roth, Yusely Ruiz-Gonzalez, Numan Saeed, Anindo Saha, Mostafa Salem, Ricardo Sanchez-Matilla, Kurt Schilling, Wei Shao, Zhiqiang Shen, Ruize Shi, Pengcheng Shi, Daniel Sobotka, Théodore Soulier, Bella Specktor Fadida, Danail Stoyanov, Timothy Sum Hon Mun, Xiaowu Sun, Rong Tao, Franz Thaler, Antoine Théberge, Felix Thielke, Helena Torres, Kareem A. Wahid, Jiacheng Wang, YiFei Wang, Wei Wang, Xiong Wang, Jianhui Wen, Ning Wen, Marek Wodzinski, Ye Wu, Fangfang Xia, Tianqi Xiang, Chen Xiaofei, Lizhan Xu, Tingting Xue, Yuxuan Yang, Lin Yang, Kai Yao, Huifeng Yao, Amirsaeed Yazdani, Michael Yip, Hwanseung Yoo, Fereshteh Yousefirizi, Shunkai Yu, Lei Yu, Jonathan Zamora, Ramy Ashraf Zeineldin, Dewen Zeng, Jianpeng Zhang, Bokai Zhang, Jiapeng Zhang, Fan Zhang, Huahong Zhang, Zhongchen Zhao, Zixuan Zhao, Jiachen Zhao, Can Zhao, Qingshuo Zheng, Yuheng Zhi, Ziqi Zhou, Baosheng Zou, Klaus Maier-Hein, Paul F. Jäger, Annette Kopp-Schneider, Lena Maier-Hein

X$^2$-VLM: All-In-One Pre-trained Model For Vision-Language Tasks

Nov 22, 2022
Yan Zeng, Xinsong Zhang, Hang Li, Jiawei Wang, Jipeng Zhang, Wangchunshu Zhou

EfficientVLM: Fast and Accurate Vision-Language Models via Knowledge Distillation and Modal-adaptive Pruning

Oct 14, 2022
Tiannan Wang, Wangchunshu Zhou, Yan Zeng, Xinsong Zhang

Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training

Jun 01, 2022
Yan Zeng, Wangchunshu Zhou, Ao Luo, Xinsong Zhang

VLUE: A Multi-Task Benchmark for Evaluating Vision-Language Models

May 30, 2022
Wangchunshu Zhou, Yan Zeng, Shizhe Diao, Xinsong Zhang

ULSA: Unified Language of Synthesis Actions for Representation of Synthesis Protocols

Jan 23, 2022
Zheren Wang, Kevin Cruse, Yuxing Fei, Ann Chia, Yan Zeng, Haoyan Huo, Tanjin He, Bowen Deng, Olga Kononova, Gerbrand Ceder

Multi-Grained Vision Language Pre-Training: Aligning Texts with Visual Concepts

Nov 16, 2021
Yan Zeng, Xinsong Zhang, Hang Li
