Hongming Zhang

Fact-and-Reflection (FaR) Improves Confidence Calibration of Large Language Models

Feb 27, 2024

AbsInstruct: Eliciting Abstraction Ability from LLMs through Explanation Tuning with Plausibility Estimation

Feb 16, 2024

On the BER vs. Bandwidth-Efficiency Trade-offs in Windowed OTSM Dispensing with Zero-Padding

Feb 01, 2024

WebVoyager: Building an End-to-End Web Agent with Large Multimodal Models

Jan 28, 2024

Monte Carlo Tree Search in the Presence of Transition Uncertainty

Dec 18, 2023

Dense X Retrieval: What Retrieval Granularity Should We Use?

Dec 12, 2023

CLOMO: Counterfactual Logical Modification with Large Language Models

Nov 30, 2023

Provable Representation with Efficient Planning for Partially Observable Reinforcement Learning

Nov 20, 2023

AbsPyramid: Benchmarking the Abstraction Ability of Language Models with a Unified Entailment Graph

Nov 16, 2023

Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models

Nov 15, 2023