Masanori Yamada

Analysis of Linear Mode Connectivity via Permutation-Based Weight Matching

Feb 19, 2024
Akira Ito, Masanori Yamada, Atsutoshi Kumagai


One-Shot Machine Unlearning with Mnemonic Code

Jun 09, 2023
Tomoya Yamashita, Masanori Yamada, Takashi Shibata


Revisiting Permutation Symmetry for Merging Models between Different Datasets

Jun 09, 2023
Masanori Yamada, Tomoya Yamashita, Shin'ya Yamaguchi, Daiki Chijiwa


ARDIR: Improving Robustness using Knowledge Distillation of Internal Representation

Nov 01, 2022
Tomokatsu Takahashi, Masanori Yamada, Yuuki Yamanaka, Tomoya Yamashita


Switching One-Versus-the-Rest Loss to Increase the Margin of Logits for Adversarial Robustness

Jul 21, 2022
Sekitoshi Kanai, Shin'ya Yamaguchi, Masanori Yamada, Hiroshi Takahashi, Yasutoshi Ida


Smoothness Analysis of Loss Functions of Adversarial Training

Mar 02, 2021
Sekitoshi Kanai, Masanori Yamada, Hiroshi Takahashi, Yuki Yamanaka, Yasutoshi Ida


Adversarial Training Makes Weight Loss Landscape Sharper in Logistic Regression

Feb 05, 2021
Masanori Yamada, Sekitoshi Kanai, Tomoharu Iwata, Tomokatsu Takahashi, Yuki Yamanaka, Hiroshi Takahashi, Atsutoshi Kumagai


Constraining Logits by Bounded Function for Adversarial Robustness

Oct 06, 2020
Sekitoshi Kanai, Masanori Yamada, Shin'ya Yamaguchi, Hiroshi Takahashi, Yasutoshi Ida


Absum: Simple Regularization Method for Reducing Structural Sensitivity of Convolutional Neural Networks

Sep 19, 2019
Sekitoshi Kanai, Yasutoshi Ida, Yasuhiro Fujiwara, Masanori Yamada, Shuichi Adachi
