
Mikhail Pautov

Contract And Conquer: How to Provably Compute Adversarial Examples for a Black-Box Model?

Mar 12, 2026

Towards Robust Speech Deepfake Detection via Human-Inspired Reasoning

Mar 12, 2026

Probabilistic Verification of Voice Anti-Spoofing Models

Mar 12, 2026

RandMark: On Random Watermarking of Visual Foundation Models

Mar 11, 2026

ActiveMark: on watermarking of visual foundation models via massive activations

Oct 06, 2025

Spread them Apart: Towards Robust Watermarking of Generated Content

Feb 11, 2025

Stochastic BIQA: Median Randomized Smoothing for Certified Blind Image Quality Assessment

Nov 19, 2024

Model Mimic Attack: Knowledge Distillation for Provably Transferable Adversarial Examples

Oct 21, 2024

GLiRA: Black-Box Membership Inference Attack via Knowledge Distillation

May 13, 2024

Certification of Speaker Recognition Models to Additive Perturbations

Apr 29, 2024