Tian Yun

What is an "Abstract Reasoner"? Revisiting Experiments and Arguments about Large Language Models

Jul 30, 2025

How Do Vision-Language Models Process Conflicting Information Across Modalities?

Jul 02, 2025

TACO: Enhancing Multimodal In-context Learning via Task Mapping-Guided Sequence Configuration

May 21, 2025

$100K or 100 Days: Trade-offs when Pre-Training with Academic Resources

Oct 30, 2024

Pre-trained Vision-Language Models Learn Discoverable Visual Concepts

Apr 19, 2024

mOthello: When Do Cross-Lingual Representation Alignment and Cross-Lingual Transfer Emerge in Multilingual Models?

Apr 18, 2024

Emergence of Abstract State Representations in Embodied Sequence Modeling

Nov 07, 2023

Improved Inference of Human Intent by Combining Plan Recognition and Language Feedback

Oct 03, 2023

BLOOM: A 176B-Parameter Open-Access Multilingual Language Model

Nov 09, 2022

Do Vision-Language Pretrained Models Learn Primitive Concepts?

Mar 31, 2022