
Junlin Zhang

Efficiently Reconstructing Dynamic Scenes One D4RT at a Time

Dec 10, 2025

Tiny Model, Big Logic: Diversity-Driven Optimization Elicits Large-Model Reasoning Ability in VibeThinker-1.5B

Nov 09, 2025

Event Vision Sensor: A Review

Feb 10, 2025

Imagen 3

Aug 13, 2024

Robust Visual Tracking via Iterative Gradient Descent and Threshold Selection

Jun 02, 2024

Relation Modeling and Distillation for Learning with Noisy Labels

Jun 02, 2024

Comparative Study of Neighbor-based Methods for Local Outlier Detection

May 29, 2024

DebCSE: Rethinking Unsupervised Contrastive Sentence Embedding Learning in the Debiasing Perspective

Sep 14, 2023

Perception Test: A Diagnostic Benchmark for Multimodal Video Models

May 23, 2023

MemoNet: Memorizing Representations of All Cross Features Efficiently via Multi-Hash Codebook Network for CTR Prediction

Nov 03, 2022