Mikhail Pautov

Spread them Apart: Towards Robust Watermarking of Generated Content
Feb 11, 2025

Stochastic BIQA: Median Randomized Smoothing for Certified Blind Image Quality Assessment
Nov 19, 2024

Model Mimic Attack: Knowledge Distillation for Provably Transferable Adversarial Examples
Oct 21, 2024

GLiRA: Black-Box Membership Inference Attack via Knowledge Distillation
May 13, 2024

Certification of Speaker Recognition Models to Additive Perturbations
Apr 29, 2024

Probabilistically Robust Watermarking of Neural Networks
Jan 16, 2024

Translate your gibberish: black-box adversarial attack on machine translation systems
Mar 20, 2023

Smoothed Embeddings for Certified Few-Shot Learning
Feb 02, 2022

CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks
Sep 22, 2021

On adversarial patches: real-world attack on ArcFace-100 face recognition system
Oct 15, 2019