Hiroshi Saruwatari

Mean-square-error-based secondary source placement in sound field synthesis with prior information on desired field

Dec 10, 2021
Keisuke Kimura, Shoichi Koyama, Natsuki Ueno, Hiroshi Saruwatari


Kernel Learning For Sound Field Estimation With L1 and L2 Regularizations

Oct 12, 2021
Ryosuke Horiuchi, Shoichi Koyama, Juliano G. C. Ribeiro, Natsuki Ueno, Hiroshi Saruwatari


Low-Latency Incremental Text-to-Speech Synthesis with Distilled Context Prediction Network

Sep 22, 2021
Takaaki Saeki, Shinnosuke Takamichi, Hiroshi Saruwatari


Binaural rendering from microphone array signals of arbitrary geometry

Sep 15, 2021
Naoto Iijima, Shoichi Koyama, Hiroshi Saruwatari


Speech Enhancement by Noise Self-Supervised Rank-Constrained Spatial Covariance Matrix Estimation via Independent Deeply Learned Matrix Analysis

Sep 10, 2021
Sota Misawa, Norihiro Takamune, Tomohiko Nakamura, Daichi Kitamura, Hiroshi Saruwatari, Masakazu Une, Shoji Makino


Multichannel Audio Source Separation with Independent Deeply Learned Matrix Analysis Using Product of Source Models

Sep 02, 2021
Takuya Hasumi, Tomohiko Nakamura, Norihiro Takamune, Hiroshi Saruwatari, Daichi Kitamura, Yu Takahashi, Kazunobu Kondo


Prior Distribution Design for Music Bleeding-Sound Reduction Based on Nonnegative Matrix Factorization

Sep 01, 2021
Yusaku Mizobuchi, Daichi Kitamura, Tomohiko Nakamura, Hiroshi Saruwatari, Yu Takahashi, Kazunobu Kondo


Independent Deeply Learned Tensor Analysis for Determined Audio Source Separation

Jun 10, 2021
Naoki Narisawa, Rintaro Ikeshita, Norihiro Takamune, Daichi Kitamura, Tomohiko Nakamura, Hiroshi Saruwatari, Tomohiro Nakatani


Empirical Bayesian Independent Deeply Learned Matrix Analysis For Multichannel Audio Source Separation

Jun 07, 2021
Takuya Hasumi, Tomohiko Nakamura, Norihiro Takamune, Hiroshi Saruwatari, Daichi Kitamura, Yu Takahashi, Kazunobu Kondo


Sampling-Frequency-Independent Audio Source Separation Using Convolution Layer Based on Impulse Invariant Method

May 10, 2021
Koichi Saito, Tomohiko Nakamura, Kohei Yatabe, Yuma Koizumi, Hiroshi Saruwatari
