
Kaixiang Lin

Bootstrapping LLM-based Task-Oriented Dialogue Agents via Self-Talk
Jan 10, 2024

Automated Few-shot Classification with Instruction-Finetuned Language Models
May 21, 2023

Parameter and Data Efficient Continual Pre-training for Robustness to Dialectal Variance in Arabic
Nov 08, 2022

CH-MARL: A Multimodal Benchmark for Cooperative, Heterogeneous Multi-Agent Reinforcement Learning
Aug 26, 2022

DialFRED: Dialogue-Enabled Agents for Embodied Instruction Following
Feb 27, 2022

Learning to Act with Affordance-Aware Multimodal Neural SLAM
Feb 04, 2022

Learning Two-Step Hybrid Policy for Graph-Based Interpretable Reinforcement Learning
Jan 21, 2022

LUMINOUS: Indoor Scene Generation for Embodied AI Challenges
Nov 10, 2021

Off-Policy Imitation Learning from Observations
Feb 25, 2021

Transfer Learning in Deep Reinforcement Learning: A Survey
Sep 24, 2020