
Yangyi Chen

A Single Transformer for Scalable Vision-Language Modeling

Jul 08, 2024

SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales

May 31, 2024

Executable Code Actions Elicit Better LLM Agents

Feb 01, 2024

ViStruct: Visual Structural Knowledge Extraction via Curriculum Guided Code-Vision Representation

Nov 22, 2023

DRESS: Instructing Large Vision-Language Models to Align and Interact with Humans via Natural Language Feedback

Nov 16, 2023

Prudent Silence or Foolish Babble? Examining Large Language Models' Responses to the Unknown

Nov 16, 2023

R-Tuning: Teaching Large Language Models to Refuse Unknown Questions

Nov 16, 2023

CRAFT: Customizing LLMs by Creating and Retrieving from Specialized Toolsets

Sep 29, 2023

MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback

Sep 19, 2023

Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models

Sep 08, 2023