Liang Lin

DART: Differentiable Dynamic Adaptive Region Tokenizer for Vision Transformer and Mamba

Jun 12, 2025

Chain of Methodologies: Scaling Test Time Computation without Training

Jun 08, 2025

Geometry-Editable and Appearance-Preserving Object Composition

May 27, 2025

MiniLongBench: The Low-cost Long Context Understanding Benchmark for Large Language Models

May 26, 2025

UniErase: Unlearning Token as a Universal Erasure Primitive for Language Models

May 21, 2025

DFVO: Learning Darkness-free Visible and Infrared Image Disentanglement and Fusion All at Once

Add code
May 07, 2025
Viaarxiv icon

RoBridge: A Hierarchical Architecture Bridging Cognition and Execution for General Robotic Manipulation

May 03, 2025

Can We Achieve Efficient Diffusion without Self-Attention? Distilling Self-Attention into Convolutions

Apr 30, 2025

Rethinking Generalizable Infrared Small Target Detection: A Real-scene Benchmark and Cross-view Representation Learning

Apr 23, 2025

A Comprehensive Survey in LLM(-Agent) Full Stack Safety: Data, Training and Deployment

Apr 22, 2025