Yongjun He

Computing in the Era of Large Generative Models: From Cloud-Native to AI-Native

Jan 17, 2024

Contrastive Loss Based Frame-wise Feature Disentanglement for Polyphonic Sound Event Detection

Jan 11, 2024

Auto-FP: An Experimental Study of Automated Feature Preprocessing for Tabular Data

Oct 04, 2023

BenchTemp: A General Benchmark for Evaluating Temporal Graph Neural Networks

Aug 31, 2023

Fine-tuning Language Models over Slow Networks using Activation Compression with Guarantees

Jun 02, 2022

Decentralized Training of Foundation Models in Heterogeneous Environments

Jun 02, 2022

Persia: An Open, Hybrid System Scaling Deep Learning-based Recommenders up to 100 Trillion Parameters

Nov 23, 2021