Hideyuki Tachibana

Multilingual Sentence-T5: Scalable Sentence Encoders for Multilingual Applications

Mar 26, 2024
Chihiro Yano, Akihiko Fukuchi, Shoko Fukasawa, Hideyuki Tachibana, Yotaro Watanabe


gSwin: Gated MLP Vision Model with Hierarchical Structure of Shifted Window

Aug 24, 2022
Mocho Go, Hideyuki Tachibana


Itô-Taylor Sampling Scheme for Denoising Diffusion Probabilistic Models using Ideal Derivatives

Dec 26, 2021
Hideyuki Tachibana, Mocho Go, Muneyoshi Inahara, Yotaro Katayama, Yotaro Watanabe


Towards Listening to 10 People Simultaneously: An Efficient Permutation Invariant Training of Audio Source Separation Using Sinkhorn's Algorithm

Oct 22, 2020
Hideyuki Tachibana


Accent Estimation of Japanese Words from Their Surfaces and Romanizations for Building Large Vocabulary Accent Dictionaries

Sep 21, 2020
Hideyuki Tachibana, Yotaro Katayama


Efficiently Trainable Text-to-Speech System Based on Deep Convolutional Networks with Guided Attention

Oct 24, 2017
Hideyuki Tachibana, Katsuya Uenoyama, Shunsuke Aihara
