Fangyu Liu

Do ever larger octopi still amplify reporting biases? Evidence from judgments of typical colour

Sep 26, 2022

WinoDict: Probing language models for in-context word acquisition

Sep 25, 2022

On Reality and the Limits of Language Data

Aug 25, 2022

TweetNLP: Cutting-Edge Natural Language Processing for Social Media

Jun 29, 2022

Language Models Can See: Plugging Visual Controls in Text Generation

May 05, 2022

Visual Spatial Reasoning

Apr 30, 2022

Exposing Cross-Lingual Lexical Knowledge from Multilingual Sentence Encoders

Apr 30, 2022

Modality-Balanced Embedding for Video Retrieval

Apr 18, 2022

Improving Word Translation via Two-Stage Contrastive Learning

Mar 26, 2022

Revisiting Parameter-Efficient Tuning: Are We Really There Yet?

Feb 16, 2022