Liunian Harold Li

Tailoring Self-Rationalizers with Multi-Reward Distillation

Nov 06, 2023
Sahana Ramnath, Brihi Joshi, Skyler Hallinan, Ximing Lu, Liunian Harold Li, Aaron Chan, Jack Hessel, Yejin Choi, Xiang Ren

DesCo: Learning Object Recognition with Rich Language Descriptions

Jun 24, 2023
Liunian Harold Li, Zi-Yi Dou, Nanyun Peng, Kai-Wei Chang

Symbolic Chain-of-Thought Distillation: Small Models Can Also "Think" Step-by-Step

Jun 24, 2023
Liunian Harold Li, Jack Hessel, Youngjae Yu, Xiang Ren, Kai-Wei Chang, Yejin Choi

MetaVL: Transferring In-Context Learning Ability From Language Models to Vision-Language Models

Jun 02, 2023
Masoud Monajatipoor, Liunian Harold Li, Mozhdeh Rouhsedaghat, Lin F. Yang, Kai-Wei Chang

GLIPv2: Unifying Localization and Vision-Language Understanding

Jun 12, 2022
Haotian Zhang, Pengchuan Zhang, Xiaowei Hu, Yen-Chun Chen, Liunian Harold Li, Xiyang Dai, Lijuan Wang, Lu Yuan, Jenq-Neng Hwang, Jianfeng Gao

DisinfoMeme: A Multimodal Dataset for Detecting Meme Intentionally Spreading Out Disinformation

May 25, 2022
Jingnong Qu, Liunian Harold Li, Jieyu Zhao, Sunipa Dev, Kai-Wei Chang

On the Paradox of Learning to Reason from Data

May 24, 2022
Honghua Zhang, Liunian Harold Li, Tao Meng, Kai-Wei Chang, Guy Van den Broeck

GeoMLAMA: Geo-Diverse Commonsense Probing on Multilingual Pre-Trained Language Models

May 24, 2022
Da Yin, Hritik Bansal, Masoud Monajatipoor, Liunian Harold Li, Kai-Wei Chang

ELEVATER: A Benchmark and Toolkit for Evaluating Language-Augmented Visual Models

Apr 20, 2022
Chunyuan Li, Haotian Liu, Liunian Harold Li, Pengchuan Zhang, Jyoti Aneja, Jianwei Yang, Ping Jin, Yong Jae Lee, Houdong Hu, Zicheng Liu, Jianfeng Gao
