Daiki Chijiwa
Adaptive Random Feature Regularization on Fine-tuning Deep Neural Networks

Mar 15, 2024
Shin'ya Yamaguchi, Sekitoshi Kanai, Kazuki Adachi, Daiki Chijiwa

Partial Search in a Frozen Network is Enough to Find a Strong Lottery Ticket

Feb 20, 2024
Hikari Otsuka, Daiki Chijiwa, Ángel López García-Arias, Yasuyuki Okoshi, Kazushi Kawamura, Thiem Van Chu, Daichi Fujiki, Susumu Takeuchi, Masato Motomura

Regularizing Neural Networks with Meta-Learning Generative Models

Jul 26, 2023
Shin'ya Yamaguchi, Daiki Chijiwa, Sekitoshi Kanai, Atsutoshi Kumagai, Hisashi Kashima

Revisiting Permutation Symmetry for Merging Models between Different Datasets

Jun 09, 2023
Masanori Yamada, Tomoya Yamashita, Shin'ya Yamaguchi, Daiki Chijiwa

Transferring Learning Trajectories of Neural Networks

May 23, 2023
Daiki Chijiwa

Meta-ticket: Finding optimal subnetworks for few-shot learning within randomly initialized neural networks

May 31, 2022
Daiki Chijiwa, Shin'ya Yamaguchi, Atsutoshi Kumagai, Yasutoshi Ida

Transfer Learning with Pre-trained Conditional Generative Models

Apr 27, 2022
Shin'ya Yamaguchi, Sekitoshi Kanai, Atsutoshi Kumagai, Daiki Chijiwa, Hisashi Kashima

Pruning Randomly Initialized Neural Networks with Iterative Randomization

Jun 17, 2021
Daiki Chijiwa, Shin'ya Yamaguchi, Yasutoshi Ida, Kenji Umakoshi, Tomohiro Inoue
