Michael Lomnitz

Efficient Model Adaptation for Continual Learning at the Edge

Aug 03, 2023

Learning with Local Gradients at the Edge

Aug 17, 2022

Real-time Hyper-Dimensional Reconfiguration at the Edge using Hardware Accelerators

Jun 10, 2022

A general approach to bridge the reality-gap

Sep 03, 2020

Reducing audio membership inference attack accuracy to chance: 4 defenses

Oct 31, 2019

Robust or Private? Adversarial Training Makes Models More Vulnerable to Privacy Attacks

Jun 15, 2019