Xue Li

School of Information Technology and Electrical Engineering, The University of Queensland

DART: Diffusion-Inspired Speculative Decoding for Fast LLM Inference

Jan 27, 2026

MemWeaver: Weaving Hybrid Memories for Traceable Long-Horizon Agentic Reasoning

Jan 26, 2026

Are LLMs Smarter Than Chimpanzees? An Evaluation on Perspective Taking and Knowledge State Estimation

Jan 18, 2026

Efficient Vision-Language Reasoning via Adaptive Token Pruning

Dec 14, 2025

ReaKase-8B: Legal Case Retrieval via Knowledge and Reasoning Representations with LLMs

Oct 30, 2025

Attributed Graph Clustering with Multi-Scale Weight-Based Pairwise Coarsening and Contrastive Learning

Jul 28, 2025

SAGE: A Visual Language Model for Anomaly Detection via Fact Enhancement and Entropy-aware Alignment

Jul 10, 2025

FlashForge: Ultra-Efficient Prefix-Aware Attention for LLM Decoding

May 23, 2025

Random Client Selection on Contrastive Federated Learning for Tabular Data

May 16, 2025

From Embeddings to Accuracy: Comparing Foundation Models for Radiographic Classification

May 16, 2025