Nadezhda Chirkova

HSE University, Russia

Zero-shot cross-lingual transfer in instruction tuning of large language model

Feb 22, 2024
Nadezhda Chirkova, Vassilina Nikoulina

Key ingredients for effective zero-shot cross-lingual knowledge transfer in generative tasks

Feb 19, 2024
Nadezhda Chirkova, Vassilina Nikoulina

Empirical study of pretrained multilingual language models for zero-shot cross-lingual generation

Oct 15, 2023
Nadezhda Chirkova, Sheng Liang, Vassilina Nikoulina

CodeBPE: Investigating Subtokenization Options for Large Language Model Pretraining on Source Code

Aug 01, 2023
Nadezhda Chirkova, Sergey Troshin

Should you marginalize over possible tokenizations?

Jun 30, 2023
Nadezhda Chirkova, Germán Kruszewski, Jos Rozen, Marc Dymetman

Parameter-Efficient Finetuning of Transformers for Source Code

Dec 12, 2022
Shamil Ayupov, Nadezhda Chirkova

Probing Pretrained Models of Source Code

Feb 16, 2022
Sergey Troshin, Nadezhda Chirkova

Machine Learning Methods for Spectral Efficiency Prediction in Massive MIMO Systems

Dec 29, 2021
Evgeny Bobrov, Sergey Troshin, Nadezhda Chirkova, Ekaterina Lobacheva, Sviatoslav Panchenko, Dmitry Vetrov, Dmitry Kropotov

On the Memorization Properties of Contrastive Learning

Jul 21, 2021
Ildus Sadrtdinov, Nadezhda Chirkova, Ekaterina Lobacheva

On the Periodic Behavior of Neural Network Training with Batch Normalization and Weight Decay

Jun 29, 2021
Ekaterina Lobacheva, Maxim Kodryan, Nadezhda Chirkova, Andrey Malinin, Dmitry Vetrov
