Tianhua Tao

LLM360: Towards Fully Transparent Open-Source LLMs
Dec 11, 2023
Zhengzhong Liu, Aurick Qiao, Willie Neiswanger, Hongyi Wang, Bowen Tan, Tianhua Tao, Junbo Li, Yuqi Wang, Suqi Sun, Omkar Pangarkar, Richard Fan, Yi Gu, Victor Miller, Yonghao Zhuang, Guowei He, Haonan Li, Fajri Koto, Liping Tang, Nikhil Ranjan, Zhiqiang Shen, Xuguang Ren, Roberto Iriondo, Cun Mu, Zhiting Hu, Mark Schulze, Preslav Nakov, Tim Baldwin, Eric P. Xing

SlimPajama-DC: Understanding Data Combinations for LLM Training
Sep 19, 2023
Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing

Language Models Meet World Models: Embodied Experiences Enhance Language Models
May 22, 2023
Jiannan Xiang, Tianhua Tao, Yi Gu, Tianmin Shu, Zirui Wang, Zichao Yang, Zhiting Hu

On the Learning of Non-Autoregressive Transformers
Jun 13, 2022
Fei Huang, Tianhua Tao, Hao Zhou, Lei Li, Minlie Huang

Don't Take It Literally: An Edit-Invariant Sequence Loss for Text Generation
Jul 23, 2021
Guangyi Liu, Zichao Yang, Tianhua Tao, Xiaodan Liang, Zhen Li, Bowen Zhou, Shuguang Cui, Zhiting Hu