
Qi Chen

SPFresh: Incremental In-Place Update for Billion-Scale Vector Search

Oct 18, 2024

PAPL-SLAM: Principal Axis-Anchored Monocular Point-Line SLAM

Oct 16, 2024

Weak-eval-Strong: Evaluating and Eliciting Lateral Thinking of LLMs with Situation Puzzles

Oct 09, 2024

Integrative Decoding: Improve Factuality via Implicit Self-consistency

Oct 02, 2024

Accelerated Multi-Contrast MRI Reconstruction via Frequency and Spatial Mutual Learning

Sep 21, 2024

RetrievalAttention: Accelerating Long-Context LLM Inference via Vector Retrieval

Sep 16, 2024

Analyzing Tumors by Synthesis

Sep 09, 2024

HEAD: A Bandwidth-Efficient Cooperative Perception Approach for Heterogeneous Connected and Autonomous Vehicles

Aug 27, 2024

XLIP: Cross-modal Attention Masked Modelling for Medical Language-Image Pre-Training

Jul 28, 2024

InfiniMotion: Mamba Boosts Memory in Transformer for Arbitrary Long Motion Generation

Jul 14, 2024