
Tongxuan Liu

IFDNS: An Iterative Feedback-Driven Neuro-Symbolic Method for Faithful Logical Reasoning
Jan 12, 2026

OxygenREC: An Instruction-Following Generative Framework for E-commerce Recommendation
Dec 31, 2025

xGR: Efficient Generative Recommendation Serving at Scale
Dec 19, 2025

Are LLMs Reliable Translators of Logical Reasoning Across Lexically Diversified Contexts?
Jun 05, 2025

TARAC: Mitigating Hallucination in LVLMs via Temporal Attention Real-time Accumulative Connection
Apr 05, 2025

S$^2$-MAD: Breaking the Token Barrier to Enhance Multi-Agent Debate Efficiency
Feb 07, 2025

FoPru: Focal Pruning for Efficient Large Vision-Language Models
Nov 21, 2024

Leveraging LLMs for Hypothetical Deduction in Logical Inference: A Neuro-Symbolic Approach
Oct 29, 2024

Logic-of-Thought: Injecting Logic into Contexts for Full Reasoning in Large Language Models
Sep 26, 2024

GroupDebate: Enhancing the Efficiency of Multi-Agent Debate Using Group Discussion
Sep 21, 2024