Denis Kuznedelev

Does Diffusion Beat GAN in Image Super Resolution?

May 27, 2024

PV-Tuning: Beyond Straight-Through Estimation for Extreme LLM Compression

May 23, 2024

YaART: Yet Another ART Rendering Technology

Apr 08, 2024

Extreme Compression of Large Language Models via Additive Quantization

Jan 11, 2024

Sparse Fine-tuning for Inference Acceleration of Large Language Models

Oct 13, 2023

Accurate Neural Network Pruning Requires Rethinking Sparse Optimization

Aug 03, 2023

SpQR: A Sparse-Quantized Representation for Near-Lossless LLM Weight Compression

Jun 05, 2023

Vision Models Can Be Efficiently Specialized via Few-Shot Task-Aware Compression

Mar 25, 2023

Evaluating Robustness and Uncertainty of Graph Models Under Structural Distributional Shifts

Feb 27, 2023

A critical look at the evaluation of GNNs under heterophily: are we really making progress?

Feb 22, 2023