Yasutoshi Ida

Fast Regularized Discrete Optimal Transport with Group-Sparse Regularizers
Mar 14, 2023
Yasutoshi Ida, Sekitoshi Kanai, Kazuki Adachi, Atsutoshi Kumagai, Yasuhiro Fujiwara

Fast Saturating Gate for Learning Long Time Scales with Recurrent Neural Networks
Oct 04, 2022
Kentaro Ohno, Sekitoshi Kanai, Yasutoshi Ida

Switching One-Versus-the-Rest Loss to Increase the Margin of Logits for Adversarial Robustness
Jul 21, 2022
Sekitoshi Kanai, Shin'ya Yamaguchi, Masanori Yamada, Hiroshi Takahashi, Yasutoshi Ida

Meta-ticket: Finding optimal subnetworks for few-shot learning within randomly initialized neural networks
May 31, 2022
Daiki Chijiwa, Shin'ya Yamaguchi, Atsutoshi Kumagai, Yasutoshi Ida

Pruning Randomly Initialized Neural Networks with Iterative Randomization
Jun 17, 2021
Daiki Chijiwa, Shin'ya Yamaguchi, Yasutoshi Ida, Kenji Umakoshi, Tomohiro Inoue

Smoothness Analysis of Loss Functions of Adversarial Training
Mar 02, 2021
Sekitoshi Kanai, Masanori Yamada, Hiroshi Takahashi, Yuki Yamanaka, Yasutoshi Ida

Constraining Logits by Bounded Function for Adversarial Robustness
Oct 06, 2020
Sekitoshi Kanai, Masanori Yamada, Shin'ya Yamaguchi, Hiroshi Takahashi, Yasutoshi Ida

Absum: Simple Regularization Method for Reducing Structural Sensitivity of Convolutional Neural Networks
Sep 19, 2019
Sekitoshi Kanai, Yasutoshi Ida, Yasuhiro Fujiwara, Masanori Yamada, Shuichi Adachi

Network Implosion: Effective Model Compression for ResNets via Static Layer Pruning and Retraining
Jun 10, 2019
Yasutoshi Ida, Yasuhiro Fujiwara

Adaptive Learning Rate via Covariance Matrix Based Preconditioning for Deep Neural Networks
Sep 28, 2017
Yasutoshi Ida, Yasuhiro Fujiwara, Sotetsu Iwamura
