Lili Yu

Megalodon: Efficient LLM Pretraining and Inference with Unlimited Context Length

Apr 16, 2024
Xuezhe Ma, Xiaomeng Yang, Wenhan Xiong, Beidi Chen, Lili Yu, Hao Zhang, Jonathan May, Luke Zettlemoyer, Omer Levy, Chunting Zhou


Jointly Training Large Autoregressive Multimodal Models

Sep 28, 2023
Emanuele Aiello, Lili Yu, Yixin Nie, Armen Aghajanyan, Barlas Oguz


Scaling Autoregressive Multi-Modal Models: Pretraining and Instruction Tuning

Sep 05, 2023
Lili Yu, Bowen Shi, Ramakanth Pasunuru, Benjamin Muller, Olga Golovneva, Tianlu Wang, Arun Babu, Binh Tang, Brian Karrer, Shelly Sheynin, Candace Ross, Adam Polyak, Russell Howes, Vasu Sharma, Puxin Xu, Hovhannes Tamoyan, Oron Ashual, Uriel Singer, Shang-Wen Li, Susan Zhang, Richard James, Gargi Ghosh, Yaniv Taigman, Maryam Fazel-Zarandi, Asli Celikyilmaz, Luke Zettlemoyer, Armen Aghajanyan


MEGABYTE: Predicting Million-byte Sequences with Multiscale Transformers

May 19, 2023
Lili Yu, Dániel Simig, Colin Flaherty, Armen Aghajanyan, Luke Zettlemoyer, Mike Lewis


LIMA: Less Is More for Alignment

May 18, 2023
Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, Omer Levy


VideoOFA: Two-Stage Pre-Training for Video-to-Text Generation

May 04, 2023
Xilun Chen, Lili Yu, Wenhan Xiong, Barlas Oğuz, Yashar Mehdad, Wen-tau Yih


Scaling Laws for Generative Mixed-Modal Language Models

Jan 10, 2023
Armen Aghajanyan, Lili Yu, Alexis Conneau, Wei-Ning Hsu, Karen Hambardzumyan, Susan Zhang, Stephen Roller, Naman Goyal, Omer Levy, Luke Zettlemoyer


Improving Faithfulness of Abstractive Summarization by Controlling Confounding Effect of Irrelevant Sentences

Dec 19, 2022
Asish Ghoshal, Arash Einolghozati, Ankit Arun, Haoran Li, Lili Yu, Yashar Mehdad, Scott Wen-tau Yih, Asli Celikyilmaz
