
Tao Zhong

Senior Member, IEEE

Neural Fields for NV-Center Inverse Sensing

May 13, 2026

Topology-Preserving Neural Operator Learning via Hodge Decomposition

May 13, 2026

HodgeCover: Higher-Order Topological Coverage Drives Compression of Sparse Mixture-of-Experts

May 13, 2026

Neural Field Thermal Tomography: A Differentiable Physics Framework for Non-Destructive Evaluation

Mar 11, 2026

Privacy-Preserving End-to-End Full-Duplex Speech Dialogue Models

Mar 09, 2026

RIFT: Repurposing Negative Samples via Reward-Informed Fine-Tuning

Jan 14, 2026

Local-Canonicalization Equivariant Graph Neural Networks for Sample-Efficient and Generalizable Swarm Robot Control

Sep 17, 2025

Constraint Matters: Multi-Modal Representation for Reducing Mixed-Integer Linear Programming

Aug 26, 2025

Beyond Standard MoE: Mixture of Latent Experts for Resource-Efficient Language Models

Mar 29, 2025

Unlocking Efficient Long-to-Short LLM Reasoning with Model Merging

Mar 26, 2025