Denis Dimitrov

Sber AI

RusTitW: Russian Language Text Dataset for Visual Text in-the-Wild Recognition

Mar 29, 2023
Igor Markov, Sergey Nesteruk, Andrey Kuznetsov, Denis Dimitrov

Information surrounds people in modern life. Text is a very efficient type of information that has been used for human communication for centuries. However, automated text-in-the-wild recognition remains a challenging problem. The major limitation for a deep learning system is the lack of training data: for competitive performance, the training set must contain many samples that replicate real-world cases. While there are many high-quality datasets for English text recognition, there are none available for the Russian language. In this paper, we present a large-scale human-labeled dataset for in-the-wild Russian text recognition. We also publish a synthetic dataset and the code to reproduce the generation process.
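
As a rough illustration of how such synthetic samples can be produced (a generic sketch, not the paper's released pipeline; file names below are placeholders), one can render Cyrillic strings onto photographic backgrounds:

    from PIL import Image, ImageDraw, ImageFont

    def render_sample(text, font_path, background_path, xy=(20, 20), px=48):
        # Draw a Cyrillic string onto a background photo; font_path and
        # background_path point to your own assets.
        img = Image.open(background_path).convert("RGB")
        font = ImageFont.truetype(font_path, px)
        ImageDraw.Draw(img).text(xy, text, font=font, fill=(255, 255, 255))
        return img

    sample = render_sample("Привет, мир", "arial.ttf", "street.jpg")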

* 5 pages, 6 figures, 2 tables 

Eco2AI: carbon emissions tracking of machine learning models as the first step towards sustainable AI

Aug 03, 2022
Semen Budennyy, Vladimir Lazarev, Nikita Zakharenko, Alexey Korovin, Olga Plosskaya, Denis Dimitrov, Vladimir Arkhipkin, Ivan Oseledets, Ivan Barsola, Ilya Egorov, Aleksandra Kosterina, Leonid Zhukov

The size and complexity of deep neural networks continue to grow exponentially, significantly increasing the energy consumed by training and inference. We introduce eco2AI, an open-source package that helps data scientists and researchers track the energy consumption and equivalent CO2 emissions of their models in a straightforward way. In eco2AI we emphasize accurate energy consumption tracking and correct regional CO2 emissions accounting. We encourage the research community to search for new optimal Artificial Intelligence (AI) architectures with a lower computational cost. The motivation also comes from the concept of an AI-based greenhouse gas sequestration cycle with both Sustainable AI and Green AI pathways.

* Source code for the eco2AI package (an energy consumption and carbon emission tracker for Python code) is available at: https://github.com/sb-ai-lab/Eco2AI , and the package is also available on PyPI: https://pypi.org/project/eco2ai/ 
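
A minimal usage sketch based on the package's documented interface (the project name, description, and workload below are placeholders):

    import eco2ai

    # Wrap any run between start() and stop(); results are appended
    # to the CSV file named below.
    tracker = eco2ai.Tracker(
        project_name="my_project",
        experiment_description="baseline training run",
        file_name="emission.csv",
    )
    tracker.start()
    _ = sum(i * i for i in range(10**7))  # stand-in for your CPU/GPU workload
    tracker.stop()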

RuCLIP -- new models and experiments: a technical report

Feb 22, 2022
Alex Shonenkov, Andrey Kuznetsov, Denis Dimitrov, Tatyana Shavrina, Daniil Chesakov, Anastasia Maltseva, Alena Fenogenova, Igor Pavlov, Anton Emelyanov, Sergey Markov, Daria Bakshandaeva, Vera Shybaeva, Andrey Chertok

In this report we present six new implementations of the ruCLIP model trained on our dataset of 240M image-text pairs. The accuracy results are compared with the original CLIP model combined with Russian-to-English translation (OPUS-MT) on 16 datasets from different domains. Our best implementations outperform the CLIP + OPUS-MT solution on most of the datasets in few-shot and zero-shot tasks. In the report we briefly describe the implementations and concentrate on the conducted experiments. An inference execution time comparison is also presented.
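
For reference, zero-shot evaluation with a CLIP-like model reduces to cosine similarity between image and caption embeddings; a generic sketch of the scoring step (not the ruCLIP API itself):

    import torch

    def zero_shot_probs(image_emb, class_text_embs, logit_scale=100.0):
        # L2-normalize both sides, then softmax over scaled cosine similarities
        # to get per-class probabilities for one image.
        image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
        class_text_embs = class_text_embs / class_text_embs.norm(dim=-1, keepdim=True)
        return (logit_scale * image_emb @ class_text_embs.T).softmax(dim=-1)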

Survey on Large Scale Neural Network Training

Feb 21, 2022
Julia Gusak, Daria Cherniuk, Alena Shilova, Alexander Katrutsa, Daniel Bershatsky, Xunyi Zhao, Lionel Eyraud-Dubois, Oleg Shlyazhko, Denis Dimitrov, Ivan Oseledets, Olivier Beaumont

Modern Deep Neural Networks (DNNs) require significant memory to store weights, activations, and other intermediate tensors during training. Hence, many models do not fit on a single GPU or can be trained only with a small per-GPU batch size. This survey provides a systematic overview of approaches that enable more efficient DNN training. We analyze techniques that save memory and make good use of computation and communication resources on architectures with one or several GPUs. We summarize the main categories of strategies and compare strategies within and across categories. Along with approaches proposed in the literature, we discuss available implementations.
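
One representative memory-saving technique from this family is activation checkpointing, which recomputes a block's activations in the backward pass instead of storing them; a minimal PyTorch sketch:

    import torch
    from torch.utils.checkpoint import checkpoint

    class Block(torch.nn.Module):
        def __init__(self, d=512):
            super().__init__()
            self.net = torch.nn.Sequential(
                torch.nn.Linear(d, d), torch.nn.GELU(), torch.nn.Linear(d, d)
            )

        def forward(self, x):
            # Activations inside self.net are recomputed during backward,
            # trading extra compute for lower peak memory.
            return checkpoint(self.net, x, use_reentrant=False)

    y = Block()(torch.randn(4, 512, requires_grad=True)).sum()
    y.backward()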

A new face swap method for image and video domains: a technical report

Feb 07, 2022
Daniil Chesakov, Anastasia Maltseva, Alexander Groshev, Andrey Kuznetsov, Denis Dimitrov

Deep fake technology has become a hot research field in the last few years. Researchers investigate sophisticated Generative Adversarial Networks (GANs), autoencoders, and other approaches to establish precise and robust algorithms for face swapping. The achieved results show that the unsupervised deep fake synthesis task still has problems with the visual quality of generated data, which usually lead to high fake detection accuracy when an expert analyzes them. The first problem is that existing image-to-image approaches do not consider video domain specificity, and frame-by-frame processing leads to face jittering and other clearly visible distortions. Another problem is the resolution of the generated data, which is low for many existing methods due to their high computational complexity. The third problem appears when the source face has larger proportions (such as bigger cheeks): after replacement, the mismatch becomes visible at the face border. Our main goal was to develop an approach that solves these problems and outperforms existing solutions on a number of key metrics. We introduce a new face swap pipeline that is based on the FaceShifter architecture and fixes the problems stated above. A new eye loss function, a super-resolution block, and Gaussian-based face mask generation lead to quality improvements, which our evaluation confirms.
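
As an illustration of the Gaussian-based mask idea (a generic sketch, not the paper's exact recipe), a blending mask can peak at the face center and fade toward the border to hide seams:

    import numpy as np

    def gaussian_face_mask(h, w, sigma_scale=0.35):
        # Weights are ~1 near the center and decay toward the edges; used as
        # alpha when compositing the swapped face onto the target frame.
        ys, xs = np.mgrid[0:h, 0:w]
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        mask = np.exp(-(((ys - cy) / (sigma_scale * h)) ** 2
                        + ((xs - cx) / (sigma_scale * w)) ** 2))
        return mask / mask.max()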

Few-Bit Backward: Quantized Gradients of Activation Functions for Memory Footprint Reduction

Feb 02, 2022
Georgii Novikov, Daniel Bershatsky, Julia Gusak, Alex Shonenkov, Denis Dimitrov, Ivan Oseledets

Memory footprint is one of the main limiting factors for large neural network training. In backpropagation, one needs to store the input to each operation in the computational graph. Every modern neural network model has quite a few pointwise nonlinearities in its architecture, and such operations induce additional memory costs which -- as we show -- can be significantly reduced by quantization of the gradients. We propose a systematic approach to compute optimal quantization of the retained gradients of the pointwise nonlinear functions with only a few bits per element. We show that such approximation can be achieved by computing an optimal piecewise-constant approximation of the derivative of the activation function, which can be done by dynamic programming. The drop-in replacements are implemented for all popular nonlinearities and can be used in any existing pipeline. We confirm the memory reduction and unchanged convergence on several open benchmarks.
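
To make the mechanism concrete, here is a hedged sketch of a 2-bit backward GELU: the forward pass stores only a small integer code of the derivative per element. The bin boundaries and levels below are hand-picked stand-ins, not the paper's DP-optimal values:

    import torch

    class FewBitGELU(torch.autograd.Function):
        # Hand-picked 2-bit (4-level) piecewise-constant approximation
        # of GELU'(x).
        boundaries = torch.tensor([-0.7, 0.1, 0.9])
        levels = torch.tensor([-0.05, 0.35, 0.85, 1.05])

        @staticmethod
        def forward(ctx, x):
            # Store only a 2-bit code per element instead of the full input.
            code = torch.bucketize(x, FewBitGELU.boundaries)
            ctx.save_for_backward(code.to(torch.uint8))
            return torch.nn.functional.gelu(x)

        @staticmethod
        def backward(ctx, grad_out):
            (code,) = ctx.saved_tensors
            return grad_out * FewBitGELU.levels[code.long()]

    x = torch.randn(8, requires_grad=True)
    FewBitGELU.apply(x).sum().backward()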

* Submitted 

Handwritten text generation and strikethrough characters augmentation

Dec 14, 2021
Alex Shonenkov, Denis Karachev, Max Novopoltsev, Mark Potanin, Denis Dimitrov, Andrey Chertok

We introduce two data augmentation techniques which, used with a Resnet-BiLSTM-CTC network, significantly reduce Word Error Rate (WER) and Character Error Rate (CER) beyond the best reported results on handwritten text recognition (HTR) tasks. We apply a novel augmentation that simulates strikethrough text (HandWritten Blots) and a handwritten text generation method based on printed text (StackMix), which proved to be very effective in HTR tasks. StackMix uses a weakly supervised framework to obtain character boundaries. Because these data augmentation techniques are independent of the network used, they could also be applied to enhance the performance of other networks and approaches to HTR. Extensive experiments on ten handwritten text datasets show that the HandWritten Blots augmentation and StackMix significantly improve the quality of HTR models.
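
A rough sketch of the strikethrough idea (illustrative only, not the authors' released implementation): draw a jittered ink line across the word image:

    import random
    from PIL import Image, ImageDraw

    def handwritten_blot(word_img: Image.Image, min_w=3, max_w=10) -> Image.Image:
        # Strike through the word with a wavy horizontal line at random height.
        out = word_img.copy()
        draw = ImageDraw.Draw(out)
        w, h = out.size
        y = random.uniform(0.35, 0.65) * h
        points = [(x, y + random.uniform(-0.08, 0.08) * h)
                  for x in range(0, w, max(1, w // 12))]
        draw.line(points, fill="black", width=random.randint(min_w, max_w))
        return out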

* 16 pages, 15 figures. arXiv admin note: substantial text overlap with arXiv:2108.11667 

Emojich -- zero-shot emoji generation using Russian language: a technical report

Dec 04, 2021
Alex Shonenkov, Daria Bakshandaeva, Denis Dimitrov, Aleksandr Nikolich

This technical report presents "Emojich", a text-to-image neural network that generates emojis conditioned on captions in Russian. We aim to preserve the generalization ability of the big pretrained model ruDALL-E Malevich (XL), with 1.3B parameters, at the fine-tuning stage, while giving a special style to the generated images. We present the engineering methods, the code, and all hyperparameters for reproducing the results, along with a Telegram bot where everyone can create their own customized sets of stickers. We also demonstrate some newly generated emojis obtained with the "Emojich" model.

* 5 pages, 4 figures and big figure at appendix, technical report 

Many Heads but One Brain: an Overview of Fusion Brain Challenge on AI Journey 2021

Nov 22, 2021
Daria Bakshandaeva, Denis Dimitrov, Alex Shonenkov, Mark Potanin, Vladimir Arkhipkin, Denis Karachev, Vera Davydova, Anton Voronov, Mikhail Martynov, Natalia Semenova, Mikhail Stepnov, Elena Tutubalina, Andrey Chertok, Aleksandr Petiushko

Supporting the current trend in the AI community, we propose the AI Journey 2021 Challenge called Fusion Brain, which aims to make a universal architecture process different modalities (namely images, texts, and code) and solve multiple vision-and-language tasks. The Fusion Brain Challenge https://github.com/sberbank-ai/fusion_brain_aij2021 combines the following tasks: Code2code Translation, Handwritten Text Recognition, Zero-shot Object Detection, and Visual Question Answering. We have created datasets for each task to test the participants' submissions. Moreover, we have released a new handwritten dataset in both Russian and English, which consists of 94,130 pairs of images and texts; the Russian part is the largest Russian handwritten dataset in the world. We also propose a baseline solution and corresponding task-specific solutions, as well as overall metrics.

StackMix and Blot Augmentations for Handwritten Text Recognition

Aug 26, 2021
Alex Shonenkov, Denis Karachev, Maxim Novopoltsev, Mark Potanin, Denis Dimitrov

This paper proposes a handwritten text recognition (HTR) system that outperforms current state-of-the-art methods. The comparison was carried out on three of the most frequently used HTR datasets, namely Bentham, IAM, and Saint Gall. In addition, results on two recently presented datasets, Peter the Great's manuscripts and the HKR dataset, are provided. The paper describes the architecture of the neural network and two ways of increasing the volume of training data: an augmentation that simulates strikethrough text (HandWritten Blots) and a new text generation method (StackMix), which proved to be very effective in HTR tasks. StackMix can also be applied to the standalone task of generating handwritten text based on printed text.
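
A toy sketch of the StackMix-style generation idea (an illustration under stated assumptions, not the paper's code): stitch a line image for a target string from real glyph crops collected at character boundaries:

    import numpy as np

    def stackmix_render(text, glyph_bank, rng=None):
        # glyph_bank: char -> list of grayscale crops (uint8 arrays) resized
        # to a common height; one crop per character is sampled and the crops
        # are concatenated horizontally into a synthetic line image.
        rng = rng or np.random.default_rng()
        parts = [glyph_bank[c][rng.integers(len(glyph_bank[c]))]
                 for c in text if c in glyph_bank]
        return np.concatenate(parts, axis=1)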

* 17 pages, 9 figures 