Qianhui Wu

GUI-360°: A Comprehensive Dataset and Benchmark for Computer-Using Agents

Nov 10, 2025

Adapting Web Agents with Synthetic Supervision

Nov 08, 2025

Dyna-Mind: Learning to Simulate from Experience for Better AI Agents

Oct 10, 2025

MMInference: Accelerating Pre-filling for Long-Context VLMs via Modality-Aware Permutation Sparse Attention

Apr 22, 2025

Magma: A Foundation Model for Multimodal AI Agents

Feb 18, 2025

On Memory Construction and Retrieval for Personalized Conversational Agents

Feb 08, 2025

SCBench: A KV Cache-Centric Analysis of Long-Context Methods

Dec 13, 2024

MInference 1.0: Accelerating Pre-filling for Long-Context LLMs via Dynamic Sparse Attention

Jul 02, 2024

Mitigate Position Bias in Large Language Models via Scaling a Single Dimension

Jun 04, 2024