Hung-yi Lee

Large Language Model as an Assignment Evaluator: Insights, Feedback, and Challenges in a 1000+ Student Course

Jul 07, 2024

Investigating the Effects of Large-Scale Pseudo-Stereo Data and Different Speech Foundation Model on Dialogue Generative Spoken Language Model

Jul 02, 2024

DogeRM: Equipping Reward Models with Domain Knowledge through Model Merging

Jul 01, 2024

DeSTA: Enhancing Speech Language Models through Descriptive Speech-Text Alignment

Jun 27, 2024

Can LLMs Understand the Implication of Emphasized Sentences in Dialogue?

Jun 16, 2024

Continual Test-time Adaptation for End-to-end Speech Recognition on Noisy Speech

Jun 16, 2024

On the Evaluation of Speech Foundation Models for Spoken Language Understanding

Jun 14, 2024

StreamBench: Towards Benchmarking Continuous Improvement of Language Agents

Jun 13, 2024

Understanding Sounds, Missing the Questions: The Challenge of Object Hallucination in Large Audio-Language Models

Jun 12, 2024

ML-SUPERB 2.0: Benchmarking Multilingual Speech Models Across Modeling Constraints, Languages, and Datasets

Jun 12, 2024