Yuchuan Tian

Circle-RoPE: Cone-like Decoupled Rotary Positional Embedding for Large Vision-Language Models

May 22, 2025

Post-Training Quantization for Diffusion Transformer via Hierarchical Timestep Grouping

Mar 10, 2025

DiC: Rethinking Conv3x3 Designs in Diffusion Models

Dec 31, 2024

Learning Quantized Adaptive Conditions for Diffusion Models

Sep 26, 2024

Instruct-IPT: All-in-One Image Processing Transformer via Weight Modulation

Jun 30, 2024

U-DiTs: Downsample Tokens in U-Shaped Diffusion Transformers

May 04, 2024

DiJiang: Efficient Large Language Models through Compact Kernelization

Apr 01, 2024

Rethinking Optimization and Architecture for Tiny Language Models

Feb 06, 2024

Towards Higher Ranks via Adversarial Weight Pruning

Nov 29, 2023

Multiscale Positive-Unlabeled Detection of AI-Generated Texts

Jun 02, 2023