Zhiyuan Liu

Rethinking Tokenizer and Decoder in Masked Graph Modeling for Molecules

Oct 23, 2023
Zhiyuan Liu, Yaorui Shi, An Zhang, Enzhi Zhang, Kenji Kawaguchi, Xiang Wang, Tat-Seng Chua

Unlock Multi-Modal Capability of Dense Retrieval via Visual Module Plugin

Oct 21, 2023
Tianshuo Zhou, Sen Mei, Xinze Li, Zhenghao Liu, Chenyan Xiong, Zhiyuan Liu, Yu Gu, Ge Yu

ReLM: Leveraging Language Models for Enhanced Chemical Reaction Prediction

Oct 20, 2023
Yaorui Shi, An Zhang, Enzhi Zhang, Zhiyuan Liu, Xiang Wang

Thoroughly Modeling Multi-domain Pre-trained Recommendation as Language

Oct 20, 2023
Zekai Qu, Ruobing Xie, Chaojun Xiao, Yuan Yao, Zhiyuan Liu, Fengzong Lian, Zhanhui Kang, Jie Zhou

Boosting Inference Efficiency: Unleashing the Power of Parameter-Shared Pre-trained Language Models

Oct 19, 2023
Weize Chen, Xiaoyue Xu, Xu Han, Yankai Lin, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou

MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter

Oct 19, 2023
Zhiyuan Liu, Sihang Li, Yanchen Luo, Hao Fei, Yixin Cao, Kenji Kawaguchi, Xiang Wang, Tat-Seng Chua

Toolink: Linking Toolkit Creation and Using through Chain-of-Solving on Open-Source Model

Oct 08, 2023
Cheng Qian, Chenyan Xiong, Zhenghao Liu, Zhiyuan Liu

Unlock Predictable Scaling from Emergent Abilities

Oct 05, 2023
Shengding Hu, Xin Liu, Xu Han, Xinrong Zhang, Chaoqun He, Weilin Zhao, Yankai Lin, Ning Ding, Zebin Ou, Guoyang Zeng, Zhiyuan Liu, Maosong Sun

UltraFeedback: Boosting Language Models with High-quality Feedback

Oct 02, 2023
Ganqu Cui, Lifan Yuan, Ning Ding, Guanming Yao, Wei Zhu, Yuan Ni, Guotong Xie, Zhiyuan Liu, Maosong Sun

Reformulating Vision-Language Foundation Models and Datasets Towards Universal Multimodal Assistants

Oct 01, 2023
Tianyu Yu, Jinyi Hu, Yuan Yao, Haoye Zhang, Yue Zhao, Chongyi Wang, Shan Wang, Yinxv Pan, Jiao Xue, Dahai Li, Zhiyuan Liu, Hai-Tao Zheng, Maosong Sun
