
Elias Stengel-Eskin


Johns Hopkins University

Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training

Mar 04, 2024
David Wan, Jaemin Cho, Elias Stengel-Eskin, Mohit Bansal


Language-guided Skill Learning with Temporal Variational Inference

Feb 26, 2024
Haotian Fu, Pratyusha Sharma, Elias Stengel-Eskin, George Konidaris, Nicolas Le Roux, Marc-Alexandre Côté, Xingdi Yuan


Soft Self-Consistency Improves Language Model Agents

Feb 20, 2024
Han Wang, Archiki Prasad, Elias Stengel-Eskin, Mohit Bansal


GTBench: Uncovering the Strategic Reasoning Limitations of LLMs via Game-Theoretic Evaluations

Feb 19, 2024
Jinhao Duan, Renming Zhang, James Diffenderfer, Bhavya Kailkhura, Lichao Sun, Elias Stengel-Eskin, Mohit Bansal, Tianlong Chen, Kaidi Xu


MAGDi: Structured Distillation of Multi-Agent Interaction Graphs Improves Reasoning in Smaller Language Models

Feb 02, 2024
Justin Chih-Yao Chen, Swarnadeep Saha, Elias Stengel-Eskin, Mohit Bansal


ReGAL: Refactoring Programs to Discover Generalizable Abstractions

Jan 29, 2024
Elias Stengel-Eskin, Archiki Prasad, Mohit Bansal


Rephrase, Augment, Reason: Visual Grounding of Questions for Vision-Language Models

Oct 09, 2023
Archiki Prasad, Elias Stengel-Eskin, Mohit Bansal


Zero and Few-shot Semantic Parsing with Ambiguous Inputs

Jun 01, 2023
Elias Stengel-Eskin, Kyle Rawlins, Benjamin Van Durme


Did You Mean...? Confidence-based Trade-offs in Semantic Parsing

Mar 31, 2023
Elias Stengel-Eskin, Benjamin Van Durme


Super-CLEVR: A Virtual Benchmark to Diagnose Domain Robustness in Visual Reasoning

Dec 01, 2022
Zhuowan Li, Xingrui Wang, Elias Stengel-Eskin, Adam Kortylewski, Wufei Ma, Benjamin Van Durme, Alan Yuille
