
Jae Hee Lee

The Expert Strikes Back: Interpreting Mixture-of-Experts Language Models at Expert Level
Apr 02, 2026

Explaining, Verifying, and Aligning Semantic Hierarchies in Vision-Language Model Embeddings
Mar 26, 2026

Mental Modeling of Reinforcement Learning Agents by Language Models
Jun 26, 2024

Details Make a Difference: Object State-Sensitive Neurorobotic Task Planning
Jun 14, 2024

Causal State Distillation for Explainable Reinforcement Learning
Dec 30, 2023

Read Between the Layers: Leveraging Intra-Layer Representations for Rehearsal-Free Continual Learning with Pre-Trained Models
Dec 13, 2023

Visually Grounded Continual Language Learning with Selective Specialization
Oct 24, 2023

From Neural Activations to Concepts: A Survey on Explaining Concepts in Neural Networks
Oct 18, 2023

Enhancing Zero-Shot Chain-of-Thought Reasoning in Large Language Models through Logic
Sep 23, 2023

Internally Rewarded Reinforcement Learning
Feb 01, 2023