Yuiko Sakuma

Mixed-precision Supernet Training from Vision Foundation Models using Low Rank Adapter

Mar 29, 2024
Yuiko Sakuma, Masakazu Yoshimura, Junji Otsuka, Atsushi Irie, Takeshi Ohashi


Instruct 3D-to-3D: Text Instruction Guided 3D-to-3D conversion

Mar 28, 2023
Hiromichi Kamata, Yuiko Sakuma, Akio Hayakawa, Masato Ishii, Takuya Narihira


DetOFA: Efficient Training of Once-for-All Networks for Object Detection by Using Pre-trained Supernet and Path Filter

Mar 23, 2023
Yuiko Sakuma, Masato Ishii, Takuya Narihira


n-hot: Efficient bit-level sparsity for powers-of-two neural network quantization

Mar 22, 2021
Yuiko Sakuma, Hiroshi Sumihiro, Jun Nishikawa, Toshiki Nakamura, Ryoji Ikegaya
