Yi Liao

INFANiTE: Implicit Neural representation for high-resolution Fetal brain spatio-temporal Atlas learNing from clinical Thick-slicE MRI

May 11, 2026

Annotation-free deep learning for detection and segmentation of fetal germinal matrix-intraventricular hemorrhage in brain MRI

May 10, 2026

FetalAgents: A Multi-Agent System for Fetal Ultrasound Image and Video Analysis

Mar 10, 2026

Deep learning-based neurodevelopmental assessment in preterm infants

Jan 17, 2026

What-If Analysis of Large Language Models: Explore the Game World Using Proactive Thinking

Sep 05, 2025

Dynamic Accumulated Attention Map for Interpreting Evolution of Decision-Making in Vision Transformer

Mar 18, 2025

TGEA: An Error-Annotated Dataset and Benchmark Tasks for Text Generation from Pretrained Language Models

Mar 06, 2025

Neuron Abandoning Attention Flow: Visual Explanation of Dynamics inside CNN Models

Dec 02, 2024

SATA: Spatial Autocorrelation Token Analysis for Enhancing the Robustness of Vision Transformers

Sep 30, 2024

SoftDedup: an Efficient Data Reweighting Method for Speeding Up Language Model Pre-training

Jul 09, 2024