Kohei Yatabe

Sampling-Frequency-Independent Universal Sound Separation
Sep 22, 2023
Tomohiko Nakamura, Kohei Yatabe

Simultaneous Measurement of Multiple Acoustic Attributes Using Structured Periodic Test Signals Including Music and Other Sound Materials
Sep 06, 2023
Hideki Kawahara, Kohei Yatabe, Ken-Ichi Sakakibara, Mitsunori Mizumachi, Tatsuya Kitamura

Versatile Time-Frequency Representations Realized by Convex Penalty on Magnitude Spectrogram
Aug 03, 2023
Keidai Arai, Koki Yamada, Kohei Yatabe

Algorithms of Sampling-Frequency-Independent Layers for Non-integer Strides
Jun 19, 2023
Kanami Imamura, Tomohiko Nakamura, Norihiro Takamune, Kohei Yatabe, Hiroshi Saruwatari

LibriTTS-R: A Restored Multi-Speaker Text-to-Speech Corpus
May 30, 2023
Yuma Koizumi, Heiga Zen, Shigeki Karita, Yifan Ding, Kohei Yatabe, Nobuyuki Morioka, Michiel Bacchiani, Yu Zhang, Wei Han, Ankur Bapna

Miipher: A Robust Speech Restoration Model Integrating Self-Supervised Speech and Text Representations
Mar 03, 2023
Yuma Koizumi, Heiga Zen, Shigeki Karita, Yifan Ding, Kohei Yatabe, Nobuyuki Morioka, Yu Zhang, Wei Han, Ankur Bapna, Michiel Bacchiani

Online Phase Reconstruction via DNN-based Phase Differences Estimation
Nov 12, 2022
Yoshiki Masuyama, Kohei Yatabe, Kento Nagatomo, Yasuhiro Oikawa

WaveFit: An Iterative and Non-autoregressive Neural Vocoder based on Fixed-Point Iteration
Oct 03, 2022
Yuma Koizumi, Kohei Yatabe, Heiga Zen, Michiel Bacchiani

Measuring pitch extractors' response to frequency-modulated multi-component signals
Apr 02, 2022
Hideki Kawahara, Kohei Yatabe, Ken-Ichi Sakakibara, Tatsuya Kitamura, Hideki Banno, Masanori Morise

An objective test tool for pitch extractors' response attributes
Apr 02, 2022
Hideki Kawahara, Kohei Yatabe, Ken-Ichi Sakakibara, Tatsuya Kitamura, Hideki Banno, Masanori Morise
