Yihan Wu

Characterizing normal perinatal development of the human brain structural connectivity

Aug 22, 2023
Yihan Wu, Lana Vasung, Camilo Calixto, Ali Gholipour, Davood Karimi

Early brain development is characterized by the formation of a highly organized structural connectome. The interconnected nature of this connectome underlies the brain's cognitive abilities and influences its response to diseases and environmental factors. Hence, quantitative assessment of structural connectivity in the perinatal stage is useful for studying normal and abnormal neurodevelopment. However, estimation of the connectome from diffusion MRI data involves complex computations. For the perinatal period, these computations are further challenged by rapid brain development and imaging difficulties. Combined with high inter-subject variability, these factors make it difficult to chart the normal development of the structural connectome. As a result, there is a lack of reliable normative baselines of structural connectivity metrics at this critical stage in brain development. In this study, we developed a computational framework, based on spatio-temporal averaging, for determining such baselines. We used this framework to analyze structural connectivity between 33 and 44 postmenstrual weeks using data from 166 subjects. Our results unveiled clear and strong trends in the development of structural connectivity in the perinatal stage. Connection weighting based on fractional anisotropy and neurite density produced the most consistent results. We observed increases in global and local efficiency, a decrease in characteristic path length, and widespread strengthening of connections within and across brain lobes and hemispheres. We also observed asymmetry patterns that were consistent between different connection weighting approaches. The new computational method and results are useful for assessing normal and abnormal development of the structural connectome early in life.
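
To make the reported graph metrics concrete, the sketch below shows how characteristic path length and global efficiency can be computed from a weighted connectome matrix with networkx; the random matrix and the 1/weight distance convention are illustrative assumptions, not the authors' pipeline.

```python
# Illustrative computation of two of the graph metrics mentioned above from a
# weighted structural connectome; not the authors' processing pipeline.
import numpy as np
import networkx as nx

def connectome_metrics(W):
    """W: symmetric (n x n) matrix of connection weights, e.g. FA-weighted edges."""
    n = W.shape[0]
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if W[i, j] > 0:
                # Stronger connections act as shorter "distances" for path-based metrics.
                G.add_edge(i, j, weight=W[i, j], length=1.0 / W[i, j])

    lengths = dict(nx.all_pairs_dijkstra_path_length(G, weight="length"))
    dists = [d for i, row in lengths.items() for j, d in row.items() if i != j]
    char_path_length = float(np.mean(dists))                      # characteristic path length
    global_efficiency = float(np.mean([1.0 / d for d in dists]))  # mean inverse shortest path
    return char_path_length, global_efficiency

# Toy example with a random symmetric "connectome".
rng = np.random.default_rng(0)
A = rng.random((10, 10))
W = np.triu(A, 1) + np.triu(A, 1).T
print(connectome_metrics(W))
```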

Cooperation or Competition: Avoiding Player Domination for Multi-Target Robustness via Adaptive Budgets

Jun 27, 2023
Yimu Wang, Dinghuai Zhang, Yihan Wu, Heng Huang, Hongyang Zhang

Despite incredible advances, deep learning has been shown to be susceptible to adversarial attacks. Numerous approaches have been proposed to train robust networks both empirically and certifiably. However, most of them defend against only a single type of attack, while recent work takes steps forward in defending against multiple attacks. In this paper, to understand multi-target robustness, we view this problem as a bargaining game in which different players (adversaries) negotiate to reach an agreement on a joint direction of parameter updating. We identify a phenomenon named player domination in the bargaining game, namely that the existing max-based approaches, such as MAX and MSD, do not converge. Based on our theoretical analysis, we design a novel framework that adjusts the budgets of different adversaries to avoid any player domination. Experiments on standard benchmarks show that applying the proposed framework to the existing approaches significantly advances multi-target robustness.
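
As a rough illustration of the budget-adjustment idea (the loss-proportional rule and constants below are assumptions, not the paper's exact update), the sketch shrinks the perturbation budget of an adversary whose loss dominates and grows the others', so no single player drowns out the joint update.

```python
# Hedged sketch of adaptive per-adversary budgets; illustrative, not the paper's algorithm.
def adjust_budgets(budgets, per_attack_losses, lr=0.1, min_budget=1e-4):
    """Shrink the budget of an adversary whose loss dominates; grow the others'."""
    mean_loss = sum(per_attack_losses) / len(per_attack_losses)
    new_budgets = []
    for eps, loss in zip(budgets, per_attack_losses):
        # An above-average loss means this player is dominating the bargaining game;
        # reducing its budget keeps the other adversaries' gradients in play.
        scale = 1.0 - lr * (loss - mean_loss) / (mean_loss + 1e-8)
        new_budgets.append(max(min_budget, eps * scale))
    return new_budgets

# Example: three attack types (e.g. l_inf, l_2, l_1) with unequal current losses.
print(adjust_budgets([8 / 255, 0.5, 12.0], [2.1, 0.9, 1.0]))
```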

ComedicSpeech: Text To Speech For Stand-up Comedies in Low-Resource Scenarios

May 20, 2023
Yuyue Wang, Huan Xiao, Yihan Wu, Ruihua Song

Text to Speech (TTS) models can generate natural and high-quality speech, but they are not expressive enough when synthesizing speech with dramatic expressiveness, such as stand-up comedy. Because comedians have diverse personal speech styles, including personal prosody, rhythm, and fillers, the task requires real-world datasets and strong speech-style modeling capabilities, which poses challenges. In this paper, we construct a new dataset and develop ComedicSpeech, a TTS system tailored for stand-up comedy synthesis in low-resource scenarios. First, we extract a prosody representation with a prosody encoder and condition the TTS model on it in a flexible way. Second, we enhance personal rhythm modeling with a conditional duration predictor. Third, we model personal fillers by introducing comedian-related special tokens. Experiments show that ComedicSpeech achieves better expressiveness than baselines with only ten minutes of training data per comedian. The audio samples are available at https://xh621.github.io/stand-up-comedy-demo/
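
The filler-modeling idea lends itself to a tiny sketch: below, personal fillers are wrapped in comedian-specific special tokens before the text reaches the TTS frontend. The filler lists and token format are hypothetical and only illustrate the idea; they are not the paper's preprocessing.

```python
# Hypothetical filler lists and token format; illustrative only.
FILLERS = {"comedian_A": {"uh", "um"}, "comedian_B": {"like", "right"}}

def tag_fillers(text, comedian):
    tagged = []
    for word in text.split():
        if word.lower().strip(",.?!") in FILLERS.get(comedian, set()):
            tagged.append(f"<filler:{comedian}> {word}")  # mark the filler for the TTS frontend
        else:
            tagged.append(word)
    return " ".join(tagged)

print(tag_fillers("So uh here is the thing, right", "comedian_A"))
# -> "So <filler:comedian_A> uh here is the thing, right"
```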

* 5 pages, 4 tables, 2 figures 

ResGrad: Residual Denoising Diffusion Probabilistic Models for Text to Speech

Dec 30, 2022
Zehua Chen, Yihan Wu, Yichong Leng, Jiawei Chen, Haohe Liu, Xu Tan, Yang Cui, Ke Wang, Lei He, Sheng Zhao, Jiang Bian, Danilo Mandic

Denoising Diffusion Probabilistic Models (DDPMs) are emerging in text-to-speech (TTS) synthesis because of their strong capability of generating high-fidelity samples. However, their iterative refinement process in high-dimensional data space results in slow inference speed, which restricts their application in real-time systems. Previous works have explored speeding up inference by minimizing the number of inference steps, but at the cost of sample quality. In this work, to improve the inference speed of DDPM-based TTS models while achieving high sample quality, we propose ResGrad, a lightweight diffusion model which learns to refine the output spectrogram of an existing TTS model (e.g., FastSpeech 2) by predicting the residual between the model output and the corresponding ground-truth speech. ResGrad has several advantages: 1) Compared with other acceleration methods for DDPMs, which need to synthesize speech from scratch, ResGrad reduces the complexity of the task by changing the generation target from the ground-truth mel-spectrogram to the residual, resulting in a more lightweight model and thus a smaller real-time factor. 2) ResGrad is employed in the inference process of the existing TTS model in a plug-and-play way, without re-training this model. We verify ResGrad on the single-speaker dataset LJSpeech and two more challenging datasets with multiple speakers (LibriTTS) and a high sampling rate (VCTK). Experimental results show that, in comparison with other speed-up methods for DDPMs: 1) ResGrad achieves better sample quality at the same inference speed measured by real-time factor; 2) with similar speech quality, ResGrad synthesizes speech more than 10 times faster than baseline methods. Audio samples are available at https://resgrad1.github.io/.
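
To make the residual idea concrete, here is a heavily simplified training-step sketch under assumed interfaces (a toy denoiser and a toy cosine noise schedule); it is not the authors' implementation, only the shape of the objective: the diffusion target is the residual between the base TTS spectrogram and the ground truth, and at inference the sampled residual is added back to the coarse output.

```python
# Simplified sketch of residual-diffusion training; toy modules, not ResGrad itself.
import torch

class TinyDenoiser(torch.nn.Module):
    """Toy stand-in for the residual denoiser (timestep embedding omitted for brevity)."""
    def __init__(self, n_mels=80):
        super().__init__()
        self.net = torch.nn.Conv1d(2 * n_mels, n_mels, kernel_size=3, padding=1)

    def forward(self, noisy_residual, t, cond):
        # Condition on the coarse mel-spectrogram from the existing TTS model.
        return self.net(torch.cat([noisy_residual, cond], dim=1))

def residual_training_step(denoiser, tts_mel, gt_mel, num_steps=1000):
    residual = gt_mel - tts_mel                      # the generation target is the residual
    t = torch.randint(0, num_steps, (residual.shape[0],))
    noise = torch.randn_like(residual)
    alpha_bar = (torch.cos(0.5 * torch.pi * t.float() / num_steps) ** 2).view(-1, 1, 1)  # toy schedule
    noisy = alpha_bar.sqrt() * residual + (1 - alpha_bar).sqrt() * noise
    pred_noise = denoiser(noisy, t, cond=tts_mel)
    return torch.nn.functional.mse_loss(pred_noise, noise)

# At inference, the sampled residual is simply added back: refined = tts_mel + residual_sample.
denoiser = TinyDenoiser()
tts_mel = torch.randn(4, 80, 120)                    # coarse mels from e.g. FastSpeech 2
gt_mel = tts_mel + 0.1 * torch.randn_like(tts_mel)
print(residual_training_step(denoiser, tts_mel, gt_mel).item())
```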

* 13 pages, 5 figures 

Adversarial Weight Perturbation Improves Generalization in Graph Neural Network

Dec 09, 2022
Yihan Wu, Aleksandar Bojchevski, Heng Huang

A large body of theoretical and empirical evidence shows that flatter local minima tend to improve generalization. Adversarial Weight Perturbation (AWP) is an emerging technique to efficiently and effectively find such minima. In AWP we minimize the loss w.r.t. a bounded worst-case perturbation of the model parameters, thereby favoring local minima with a small loss in a neighborhood around them. The benefits of AWP, and more generally the connections between flatness and generalization, have been extensively studied for i.i.d. data such as images. In this paper, we extensively study this phenomenon for graph data. Along the way, we first derive a generalization bound for non-i.i.d. node classification tasks. Then we identify a vanishing-gradient issue with all existing formulations of AWP and propose a new Weighted Truncated AWP (WT-AWP) to alleviate this issue. We show that regularizing graph neural networks with WT-AWP consistently improves both natural and robust generalization across many different graph learning tasks and models.
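
For context, the sketch below implements a generic adversarial weight perturbation step of the kind the paper builds on; the perturbation scale gamma and the layer-wise normalization are common conventions, not the proposed WT-AWP formulation. Weights are pushed in the worst-case direction within a small neighborhood, the loss there is backpropagated, and the weights are restored before the optimizer step.

```python
# Generic AWP step (illustrative); WT-AWP itself modifies this formulation.
import torch

def awp_backward(model, loss_fn, inputs, targets, gamma=0.01):
    params = [p for p in model.parameters() if p.requires_grad]

    # 1. Gradient of the loss w.r.t. the current weights.
    loss = loss_fn(model(inputs), targets)
    grads = torch.autograd.grad(loss, params)

    # 2. Ascent step: move each weight tensor along its (layer-normalized) gradient.
    backups = [p.detach().clone() for p in params]
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.add_(gamma * p.norm() * g / (g.norm() + 1e-12))

    # 3. Backpropagate the loss at the perturbed weights.
    perturbed_loss = loss_fn(model(inputs), targets)
    perturbed_loss.backward()

    # 4. Restore the original weights; the optimizer step then uses the
    #    gradients computed at the perturbed point.
    with torch.no_grad():
        for p, b in zip(params, backups):
            p.copy_(b)
    return perturbed_loss.item()

model = torch.nn.Linear(10, 3)
x, y = torch.randn(16, 10), torch.randint(0, 3, (16,))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
opt.zero_grad()
print(awp_backward(model, torch.nn.functional.cross_entropy, x, y))
opt.step()
```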

* AAAI 2023 

VideoDubber: Machine Translation with Speech-Aware Length Control for Video Dubbing

Nov 30, 2022
Yihan Wu, Junliang Guo, Xu Tan, Chen Zhang, Bohan Li, Ruihua Song, Lei He, Sheng Zhao, Arul Menezes, Jiang Bian

Video dubbing aims to translate the original speech in a film or television program into speech in a target language, which can be achieved with a cascaded system consisting of speech recognition, machine translation and speech synthesis. To ensure that the translated speech is well aligned with the corresponding video, the length/duration of the translated speech should be as close as possible to that of the original speech, which requires strict length control. Previous works usually control the number of words or characters generated by the machine translation model to be similar to the source sentence, without considering the isochronicity of speech, as the speech duration of words/characters varies across languages. In this paper, we propose a machine translation system tailored for the task of video dubbing, which directly considers the speech duration of each token in translation to match the length of source and target speech. Specifically, we control the speech length of the generated sentence by guiding the prediction of each word with duration information, including the speech duration of the word itself as well as how much duration is left for the remaining words. We design experiments on four language directions (German -> English, Spanish -> English, Chinese <-> English), and the results show that the proposed method achieves better length control ability on the generated speech than baseline methods. To make up for the lack of real-world datasets, we also construct a real-world test set collected from films to provide comprehensive evaluations on the video dubbing task.
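
A toy re-scoring rule gives the flavor of the duration-guided decoding described above; the penalty form, the token durations, and the constants are illustrative assumptions, not the paper's model. Candidate tokens whose estimated spoken duration would exceed the time left are penalized.

```python
# Toy duration-aware token selection; illustrative only.
def rescore(logprobs, token_durations_ms, remaining_ms, alpha=5.0):
    """Pick the next token, trading translation score against the speech-time budget."""
    scores = {}
    for tok, lp in logprobs.items():
        overshoot_ms = max(0.0, token_durations_ms.get(tok, 0.0) - remaining_ms)
        scores[tok] = lp - alpha * overshoot_ms / 1000.0   # soft penalty per second of overshoot
    return max(scores, key=scores.get)

# Toy step: "hello" is the likelier word, but only "hi" fits the 300 ms left.
print(rescore({"hello": -0.2, "hi": -0.5}, {"hello": 450, "hi": 200}, remaining_ms=300))  # -> hi
```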

* AAAI 2023 camera-ready version 

PromptTTS: Controllable Text-to-Speech with Text Descriptions

Nov 22, 2022
Zhifang Guo, Yichong Leng, Yihan Wu, Sheng Zhao, Xu Tan

Using a text description as a prompt to guide the generation of text or images (e.g., GPT-3 or DALLE-2) has drawn wide attention recently. Beyond text and image generation, in this work, we explore the possibility of utilizing text descriptions to guide speech synthesis. Thus, we develop a text-to-speech (TTS) system (dubbed PromptTTS) that takes a prompt with both style and content descriptions as input to synthesize the corresponding speech. Specifically, PromptTTS consists of a style encoder and a content encoder to extract the corresponding representations from the prompt, and a speech decoder to synthesize speech according to the extracted style and content representations. Compared with previous works in controllable TTS that require users to have acoustic knowledge to understand style factors such as prosody and pitch, PromptTTS is more user-friendly since text descriptions are a more natural way to express speech style (e.g., ''A lady whispers to her friend slowly''). Given that there is no TTS dataset with prompts, to benchmark the task of PromptTTS, we construct and release a dataset containing prompts with style and content information and the corresponding speech. Experiments show that PromptTTS can generate speech with precise style control and high speech quality. Audio samples and our dataset are publicly available.
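
A bare-bones sketch of the three components named above follows; all layers and dimensions are placeholders rather than the paper's configuration, and serve only to show how a pooled style vector conditions the decoding of the content representation.

```python
# Placeholder-sized sketch of a style encoder, content encoder, and speech decoder.
import torch
import torch.nn as nn

class PromptTTSSketch(nn.Module):
    def __init__(self, vocab=10000, d=256, n_mels=80):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        self.style_encoder = nn.GRU(d, d, batch_first=True)    # pools the prompt into a style vector
        self.content_encoder = nn.GRU(d, d, batch_first=True)  # per-token content representations
        self.decoder = nn.Linear(2 * d, n_mels)                # maps style-conditioned content to mel frames

    def forward(self, prompt_ids, content_ids):
        _, style = self.style_encoder(self.embed(prompt_ids))       # (1, B, d)
        content, _ = self.content_encoder(self.embed(content_ids))  # (B, T, d)
        style = style[-1].unsqueeze(1).expand(-1, content.size(1), -1)
        return self.decoder(torch.cat([content, style], dim=-1))    # (B, T, n_mels)

model = PromptTTSSketch()
mel = model(torch.randint(0, 10000, (2, 12)), torch.randint(0, 10000, (2, 30)))
print(mel.shape)  # torch.Size([2, 30, 80])
```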

* Submitted to ICASSP 2023 

Towards Robust Dataset Learning

Nov 19, 2022
Yihan Wu, Xinda Li, Florian Kerschbaum, Heng Huang, Hongyang Zhang

Adversarial training has been actively studied in recent computer vision research to improve the robustness of models. However, due to the huge computational cost of generating adversarial samples, adversarial training methods are often slow. In this paper, we study the problem of learning a robust dataset such that any classifier naturally trained on the dataset is adversarially robust. Such a dataset benefits downstream tasks because natural training is much faster than adversarial training, and it demonstrates that the desired property of robustness is transferable between models and data. In this work, we propose a principled tri-level optimization to formulate the robust dataset learning problem. We show that, under an abstraction model that characterizes robust vs. non-robust features, the proposed method provably learns a robust dataset. Extensive experiments on MNIST, CIFAR10, and TinyImageNet demonstrate the effectiveness of our algorithm with different network initializations and architectures.
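
The tri-level structure can be sketched, in heavily simplified form, for a linear classifier with a single unrolled training step and an FGSM-style adversary; this is an illustrative skeleton of the inner (natural training), middle (attack), and outer (dataset update) levels, not the paper's algorithm.

```python
# Simplified tri-level skeleton for robust dataset learning; illustrative only.
import torch
import torch.nn.functional as F

def robust_dataset_step(ds_x, ds_y, real_x, real_y, w, inner_lr=0.5, eps=0.1, outer_lr=0.1):
    """One simplified outer update of the learned dataset for a linear classifier w."""
    ds_x = ds_x.detach().requires_grad_(True)

    # Inner level: one natural-training step on the learned dataset, kept in the graph.
    inner_loss = F.cross_entropy(ds_x @ w, ds_y)
    (g_w,) = torch.autograd.grad(inner_loss, w, create_graph=True)
    w_trained = w - inner_lr * g_w

    # Middle level: FGSM-style worst-case perturbation of real data against that model.
    real_x = real_x.detach().requires_grad_(True)
    adv_loss = F.cross_entropy(real_x @ w_trained, real_y)
    (g_x,) = torch.autograd.grad(adv_loss, real_x, retain_graph=True)
    x_adv = real_x + eps * g_x.sign()

    # Outer level: update the dataset so the naturally trained classifier stays robust.
    outer_loss = F.cross_entropy(x_adv @ w_trained, real_y)
    (g_ds,) = torch.autograd.grad(outer_loss, ds_x)
    return (ds_x - outer_lr * g_ds).detach(), outer_loss.item()

w = torch.randn(10, 3, requires_grad=True)            # linear classifier weights
ds_x, ds_y = torch.randn(20, 10), torch.randint(0, 3, (20,))
real_x, real_y = torch.randn(32, 10), torch.randint(0, 3, (32,))
ds_x, loss = robust_dataset_step(ds_x, ds_y, real_x, real_y, w)
print(ds_x.shape, round(loss, 3))
```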

Semantic scene descriptions as an objective of human vision

Sep 23, 2022
Adrien Doerig, Tim C Kietzmann, Emily Allen, Yihan Wu, Thomas Naselaris, Kendrick Kay, Ian Charest

Interpreting the meaning of a visual scene requires not only identification of its constituent objects, but also a rich semantic characterization of object interrelations. Here, we study the neural mechanisms underlying visuo-semantic transformations by applying modern computational techniques to a large-scale 7T fMRI dataset of human brain responses elicited by complex natural scenes. Using semantic embeddings obtained by applying linguistic deep learning models to human-generated scene descriptions, we identify a widely distributed network of brain regions that encode semantic scene descriptions. Importantly, these semantic embeddings better explain activity in these regions than traditional object category labels. In addition, they are effective predictors of activity despite the fact that the participants did not actively engage in a semantic task, suggesting that visuo-semantic transformations are a default mode of vision. In support of this view, we then show that highly accurate reconstructions of scene captions can be directly linearly decoded from patterns of brain activity. Finally, a recurrent convolutional neural network trained on semantic embeddings further outperforms semantic embeddings in predicting brain activity, providing a mechanistic model of the brain's visuo-semantic transformations. Together, these experimental and computational results suggest that transforming visual input into rich semantic scene descriptions may be a central objective of the visual system, and that focusing efforts on this new objective may lead to improved models of visual information processing in the human brain.
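
The linear decoding step described here is, in spirit, a regularized multi-output regression from voxel responses to caption embeddings. The sketch below reproduces that shape on synthetic data; the dimensions, the ridge penalty, and the data itself are stand-ins, not the study's.

```python
# Synthetic-data sketch of linearly decoding caption embeddings from voxel responses.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_images, n_voxels, emb_dim = 500, 2000, 300
brain = rng.standard_normal((n_images, n_voxels))            # fMRI responses per image
true_map = 0.05 * rng.standard_normal((n_voxels, emb_dim))
embeddings = brain @ true_map + 0.1 * rng.standard_normal((n_images, emb_dim))

X_tr, X_te, y_tr, y_te = train_test_split(brain, embeddings, random_state=0)
decoder = Ridge(alpha=100.0).fit(X_tr, y_tr)                 # voxels -> caption embedding
pred = decoder.predict(X_te)

# Per-dimension correlation between decoded and target embeddings.
corr = [np.corrcoef(pred[:, d], y_te[:, d])[0, 1] for d in range(emb_dim)]
print(f"mean decoding correlation: {np.mean(corr):.3f}")
```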

Schizophrenia detection based on EEG using Recurrent Auto-Encoder framework

Jul 09, 2022
Yihan Wu, Min Xia, Xiuzhu Wang, Yangsong Zhang

Schizophrenia (SZ) is a severe mental disorder that can seriously affect a patient's quality of life. In recent years, the detection of SZ from electroencephalogram (EEG) signals using deep learning (DL) has received increasing attention. In this paper, we propose an end-to-end recurrent auto-encoder (RAE) model to detect SZ. In the RAE model, the raw data are input into an auto-encoder block, and the reconstructed data are recurrently fed back into the same block. The code extracted by the auto-encoder block simultaneously serves as the input to a classifier block that discriminates SZ patients from healthy controls (HC). Evaluated on a dataset containing 14 SZ patients and 14 HC subjects, the proposed method achieved an average classification accuracy of 81.81% in a subject-independent experimental scenario. This study demonstrates that the RAE structure is able to capture the discriminative features between SZ patients and HC subjects.
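
A minimal sketch of the recurrent auto-encoder structure described above follows; the layer sizes, the number of recurrent passes, and the way the losses are combined are illustrative assumptions, not the paper's configuration.

```python
# Minimal recurrent auto-encoder sketch: the same block encodes/decodes its own
# reconstruction, and each pass's code also feeds a classifier (SZ vs. HC).
import torch
import torch.nn as nn

class RAESketch(nn.Module):
    def __init__(self, n_features=64, code_dim=16, n_passes=3):
        super().__init__()
        self.encoder = nn.Linear(n_features, code_dim)
        self.decoder = nn.Linear(code_dim, n_features)
        self.classifier = nn.Linear(code_dim, 2)    # SZ patient vs. healthy control
        self.n_passes = n_passes

    def forward(self, x):
        recon_losses, logits = [], []
        for _ in range(self.n_passes):
            code = torch.tanh(self.encoder(x))
            x_hat = self.decoder(code)
            recon_losses.append(((x_hat - x) ** 2).mean())
            logits.append(self.classifier(code))
            x = x_hat                                # recurrent pass on the reconstruction
        return torch.stack(logits).mean(0), torch.stack(recon_losses).sum()

logits, rec_loss = RAESketch()(torch.randn(8, 64))   # 8 toy EEG feature vectors
print(logits.shape, rec_loss.item())
```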
