Takuya Narihira

Instruct 3D-to-3D: Text Instruction Guided 3D-to-3D conversion

Mar 28, 2023
Hiromichi Kamata, Yuiko Sakuma, Akio Hayakawa, Masato Ishii, Takuya Narihira

DetOFA: Efficient Training of Once-for-All Networks for Object Detection by Using Pre-trained Supernet and Path Filter

Mar 23, 2023
Yuiko Sakuma, Masato Ishii, Takuya Narihira

NDJIR: Neural Direct and Joint Inverse Rendering for Geometry, Lights, and Materials of Real Object

Feb 02, 2023
Kazuki Yoshiyama, Takuya Narihira

Fine-grained Image Editing by Pixel-wise Guidance Using Diffusion Models

Dec 08, 2022
Naoki Matsunaga, Masato Ishii, Akio Hayakawa, Kenji Suzuki, Takuya Narihira

Thinking the Fusion Strategy of Multi-reference Face Reenactment

Feb 22, 2022
Takuya Yashima, Takuya Narihira, Tamaki Kojima

Data Cleansing for Deep Neural Networks with Storage-efficient Approximation of Influence Functions

Mar 22, 2021
Kenji Suzuki, Yoshiyuki Kobayashi, Takuya Narihira

Perspectives and Prospects on Transformer Architecture for Cross-Modal Tasks with Language and Vision

Mar 06, 2021
Andrew Shin, Masato Ishii, Takuya Narihira

Neural Network Libraries: A Deep Learning Framework Designed from Engineers' Perspectives

Feb 12, 2021
Akio Hayakawa, Masato Ishii, Yoshiyuki Kobayashi, Akira Nakamura, Takuya Narihira, Yukio Obuchi, Andrew Shin, Takuya Yashima, Kazuki Yoshiyama

Reference-Based Video Colorization with Spatiotemporal Correspondence

Nov 25, 2020
Naofumi Akimoto, Akio Hayakawa, Andrew Shin, Takuya Narihira

Out-of-core Training for Extremely Large-Scale Neural Networks With Adaptive Window-Based Scheduling

Oct 27, 2020
Akio Hayakawa, Takuya Narihira
