Hongming Zhang

$\textit{GeoHard}$: Towards Measuring Class-wise Hardness through Modelling Class Semantics

Jul 17, 2024

DOCBENCH: A Benchmark for Evaluating LLM-based Document Reading Systems

Jul 15, 2024

$\texttt{MixGR}$: Enhancing Retriever Generalization for Scientific Domain through Complementary Granularity

Jul 15, 2024

Abstraction-of-Thought Makes Language Models Better Reasoners

Jun 18, 2024

Beyond Relevance: Evaluate and Improve Retrievers on Perspective Awareness

May 04, 2024

NegotiationToM: A Benchmark for Stress-testing Machine Theory of Mind on Negotiation Surrounding

Apr 21, 2024

Conceptual and Unbiased Reasoning in Language Models

Mar 30, 2024

Fact-and-Reflection (FaR) Improves Confidence Calibration of Large Language Models

Feb 27, 2024

AbsInstruct: Eliciting Abstraction Ability from LLMs through Explanation Tuning with Plausibility Estimation

Feb 16, 2024

On the BER vs. Bandwidth-Efficiency Trade-offs in Windowed OTSM Dispensing with Zero-Padding

Feb 01, 2024