
William Yang Wang

Hire a Linguist!: Learning Endangered Languages with In-Context Linguistic Descriptions

Feb 28, 2024

Perils of Self-Feedback: Self-Bias Amplifies in Large Language Models

Feb 18, 2024

Understanding the Reasoning Ability of Language Models From the Perspective of Reasoning Paths Aggregation

Feb 05, 2024

Weak-to-Strong Jailbreaking on Large Language Models

Feb 05, 2024

Tweets to Citations: Unveiling the Impact of Social Media Influencers on AI Research Visibility

Jan 24, 2024

Efficient Online Data Mixing For Language Model Pre-Training

Dec 05, 2023

VIM: Probing Multimodal Large Language Models for Visual Embedded Instruction Following

Nov 29, 2023

Pinpoint, Not Criticize: Refining Large Language Models via Fine-Grained Actionable Feedback

Nov 15, 2023

GPT-4V as a Generalist Evaluator for Vision-Language Tasks

Nov 02, 2023

A Survey on Detection of LLMs-Generated Content

Oct 24, 2023