Shan Yang

MyoPS-Net: Myocardial Pathology Segmentation with Flexible Combination of Multi-Sequence CMR Images

Nov 06, 2022
Junyi Qiu, Lei Li, Sihan Wang, Ke Zhang, Yinyin Chen, Shan Yang, Xiahai Zhuang

TotalSegmentator: robust segmentation of 104 anatomical structures in CT images

Aug 11, 2022
Jakob Wasserthal, Manfred Meyer, Hanns-Christian Breit, Joshy Cyriac, Shan Yang, Martin Segeroth

Glow-WaveGAN 2: High-quality Zero-shot Text-to-speech Synthesis and Any-to-any Voice Conversion

Jul 05, 2022
Yi Lei, Shan Yang, Jian Cong, Lei Xie, Dan Su

Learning Noise-independent Speech Representation for High-quality Voice Conversion for Noisy Target Speakers

Jul 02, 2022
Liumeng Xue, Shan Yang, Na Hu, Dan Su, Lei Xie

End-to-End Voice Conversion with Information Perturbation

Jun 15, 2022
Qicong Xie, Shan Yang, Yi Lei, Lei Xie, Dan Su

VCVTS: Multi-speaker Video-to-Speech synthesis via cross-modal knowledge transfer from voice conversion

Feb 18, 2022
Disong Wang, Shan Yang, Dan Su, Xunying Liu, Dong Yu, Helen Meng

Deep Graph Learning for Spatially-Varying Indoor Lighting Prediction

Feb 13, 2022
Jiayang Bai, Jie Guo, Chenchen Wan, Zhenyu Chen, Zhen He, Shan Yang, Piaopiao Yu, Yan Zhang, Yanwen Guo

MsEmoTTS: Multi-scale emotion transfer, prediction, and control for emotional speech synthesis

Jan 17, 2022
Yi Lei, Shan Yang, Xinsheng Wang, Lei Xie

A Color Image Steganography Based on Frequency Sub-band Selection

Dec 29, 2021
Hai Su, Shan Yang, Shuqing Zhang, Songsen Yu

Referee: Towards reference-free cross-speaker style transfer with low-quality data for expressive speech synthesis

Sep 08, 2021
Songxiang Liu, Shan Yang, Dan Su, Dong Yu
