Han Wu

Unbiased Online Curvature Approximation for Regularized Graph Continual Learning

Sep 16, 2025

Adapting Foundation Model for Dental Caries Detection with Dual-View Co-Training

Aug 28, 2025

AFABench: A Generic Framework for Benchmarking Active Feature Acquisition

Aug 20, 2025

LoViC: Efficient Long Video Generation with Context Compression

Jul 17, 2025

One-shot Face Sketch Synthesis in the Wild via Generative Diffusion Prior and Instruction Tuning

Jun 18, 2025

Application-Driven Value Alignment in Agentic AI Systems: Survey and Perspectives

Jun 11, 2025

Activation-Guided Consensus Merging for Large Language Models

May 20, 2025

Prompted Meta-Learning for Few-shot Knowledge Graph Completion

May 08, 2025

Benchmarking Federated Machine Unlearning methods for Tabular Data

Apr 01, 2025

Beyond Standard MoE: Mixture of Latent Experts for Resource-Efficient Language Models

Mar 29, 2025