
James Glass

MIT Computer Science and Artificial Intelligence Laboratory, MA, USA

Decoding on Graphs: Faithful and Sound Reasoning on Knowledge Graphs through Generation of Well-Formed Chains

Oct 24, 2024

GLOV: Guided Large Language Models as Implicit Optimizers for Vision Language Models

Oct 08, 2024

Quantifying Generalization Complexity for Large Language Models

Oct 02, 2024

Codec-SUPERB @ SLT 2024: A lightweight benchmark for neural audio codec models

Sep 21, 2024

Lookback Lens: Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps

Jul 09, 2024

DASS: Distilled Audio State Space Models Are Stronger and More Duration-Scalable Learners

Jul 04, 2024

Automatic Prediction of Amyotrophic Lateral Sclerosis Progression using Longitudinal Speech Transformer

Jun 26, 2024

Found in the Middle: Calibrating Positional Attention Bias Improves Long Context Utilization

Jun 23, 2024

Self-MoE: Towards Compositional Large Language Models with Self-Specialized Experts

Jun 17, 2024

Adaptive Query Rewriting: Aligning Rewriters through Marginal Probability of Conversational Answers

Jun 16, 2024