Yang You

LoBaSS: Gauging Learnability in Supervised Fine-tuning Data

Oct 16, 2023

Let's reward step by step: Step-Level reward model as the Navigators for Reasoning

Oct 16, 2023

Does Graph Distillation See Like Vision Dataset Counterpart?

Oct 13, 2023

Bridging the Gap between Human Motion and Action Semantics via Kinematic Phrases

Oct 11, 2023

Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching

Oct 09, 2023

Can pre-trained models assist in dataset distillation?

Oct 05, 2023

Boosting Unsupervised Contrastive Learning Using Diffusion-Based Data Augmentation From Scratch

Sep 10, 2023

Color Prompting for Data-Free Continual Unsupervised Domain Adaptive Person Re-Identification

Aug 21, 2023

Dataset Quantization

Aug 21, 2023

The Snowflake Hypothesis: Training Deep GNN with One Node One Receptive field

Aug 19, 2023