Lei Jiang

Dialogues Aspect-based Sentiment Quadruple Extraction via Structural Entropy Minimization Partitioning (Aug 07, 2025)

Eyepiece-free pupil-optimized holographic near-eye displays (Jul 30, 2025)

RECALLED: An Unbounded Resource Consumption Attack on Large Vision-Language Models (Jul 24, 2025)

T-T: Table Transformer for Tagging-based Aspect Sentiment Triplet Extraction (May 08, 2025)

Addressing Noise and Stochasticity in Fraud Detection for Service Networks (May 02, 2025)

TARAC: Mitigating Hallucination in LVLMs via Temporal Attention Real-time Accumulative Connection (Apr 05, 2025)

Revealing the Pragmatic Dilemma for Moral Reasoning Acquisition in Language Models (Feb 25, 2025)

CipherPrune: Efficient and Scalable Private Transformer Inference (Feb 24, 2025)

S$^2$-MAD: Breaking the Token Barrier to Enhance Multi-Agent Debate Efficiency (Feb 07, 2025)

MotionPCM: Real-Time Motion Synthesis with Phased Consistency Model (Jan 31, 2025)