
Fan Yang

Refer to the report for detailed contributions.

ASGM-KG: Unveiling Alluvial Gold Mining Through Knowledge Graphs

Aug 16, 2024

LLMI3D: Empowering LLM with 3D Perception from a Single 2D Image

Aug 14, 2024

LUT Tensor Core: Lookup Table Enables Efficient Low-Bit LLM Inference Acceleration

Aug 12, 2024

Mutual Reasoning Makes Smaller LLMs Stronger Problem-Solvers

Aug 12, 2024

ParetoTracker: Understanding Population Dynamics in Multi-objective Evolutionary Algorithms through Visual Analytics

Aug 08, 2024

CFBench: A Comprehensive Constraints-Following Benchmark for LLMs

Aug 02, 2024

IN-Sight: Interactive Navigation through Sight

Aug 01, 2024

EVLM: An Efficient Vision-Language Model for Visual Understanding

Jul 19, 2024

FocusDiffuser: Perceiving Local Disparities for Camouflaged Object Detection

Jul 18, 2024

Uncertainty is Fragile: Manipulating Uncertainty in Large Language Models

Jul 15, 2024