Keisuke Imoto

Correlation of Fréchet Audio Distance With Human Perception of Environmental Audio Is Embedding Dependant

Mar 26, 2024
Modan Tailleur, Junwon Lee, Mathieu Lagrange, Keunwoo Choi, Laurie M. Heller, Keisuke Imoto, Yuki Okamoto

Discriminative Neighborhood Smoothing for Generative Anomalous Sound Detection

Mar 18, 2024
Takuya Fujimura, Keisuke Imoto, Tomoki Toda

Refining Knowledge Transfer on Audio-Image Temporal Agreement for Audio-Text Cross Retrieval

Mar 16, 2024
Shunsuke Tsubaki, Daisuke Niizumi, Daiki Takeuchi, Yasunori Ohishi, Noboru Harada, Keisuke Imoto

F1-EV Score: Measuring the Likelihood of Estimating a Good Decision Threshold for Semi-Supervised Anomaly Detection

Dec 14, 2023
Kevin Wilkinghoff, Keisuke Imoto

CAPTDURE: Captioned Sound Dataset of Single Sources

May 28, 2023
Yuki Okamoto, Kanta Shimonishi, Keisuke Imoto, Kota Dohi, Shota Horiguchi, Yohei Kawaguchi

Description and Discussion on DCASE 2023 Challenge Task 2: First-Shot Unsupervised Anomalous Sound Detection for Machine Condition Monitoring

May 13, 2023
Kota Dohi, Keisuke Imoto, Noboru Harada, Daisuke Niizumi, Yuma Koizumi, Tomoya Nishida, Harsh Purohit, Ryo Tanabe, Takashi Endo, Yohei Kawaguchi

Environmental sound conversion from vocal imitations and sound event labels

Apr 29, 2023
Yuki Okamoto, Keisuke Imoto, Shinnosuke Takamichi, Ryotaro Nagase, Takahiro Fukumori, Yoichi Yamashita

Foley Sound Synthesis at the DCASE 2023 Challenge

Apr 26, 2023
Keunwoo Choi, Jaekwon Im, Laurie Heller, Brian McFee, Keisuke Imoto, Yuki Okamoto, Mathieu Lagrange, Shinnosuke Takamichi

Visual onoma-to-wave: environmental sound synthesis from visual onomatopoeias and sound-source images

Oct 17, 2022
Hien Ohnaka, Shinnosuke Takamichi, Keisuke Imoto, Yuki Okamoto, Kazuki Fujii, Hiroshi Saruwatari

How Should We Evaluate Synthesized Environmental Sounds?

Aug 16, 2022
Yuki Okamoto, Keisuke Imoto, Shinnosuke Takamichi, Takahiro Fukumori, Yoichi Yamashita
