Klaudia Bałazy

Exploiting Transformer Activation Sparsity with Dynamic Inference

Oct 06, 2023
Mikołaj Piórczyński, Filip Szatkowski, Klaudia Bałazy, Bartosz Wójcik

r-softmax: Generalized Softmax with Controllable Sparsity Rate

Apr 21, 2023
Klaudia Bałazy, Łukasz Struski, Marek Śmieja, Jacek Tabor

Step by Step Loss Goes Very Far: Multi-Step Quantization for Adversarial Text Attacks

Feb 10, 2023
Piotr Gaiński, Klaudia Bałazy

Revisiting Offline Compression: Going Beyond Factorization-based Methods for Transformer Language Models

Feb 08, 2023
Mohammadreza Banaei, Klaudia Bałazy, Artur Kasymov, Rémi Lebret, Jacek Tabor, Karl Aberer

Direction is what you need: Improving Word Embedding Compression in Large Language Models

Jun 15, 2021
Klaudia Bałazy, Mohammadreza Banaei, Rémi Lebret, Jacek Tabor, Karl Aberer

Zero Time Waste: Recycling Predictions in Early Exit Neural Networks

Jun 09, 2021
Maciej Wołczyk, Bartosz Wójcik, Klaudia Bałazy, Igor Podolak, Jacek Tabor, Marek Śmieja, Tomasz Trzciński

Finding the Optimal Network Depth in Classification Tasks

Apr 17, 2020
Bartosz Wójcik, Maciej Wołczyk, Klaudia Bałazy, Jacek Tabor
