Cheng-Ping Hsieh

Adapter-Based Extension of Multi-Speaker Text-to-Speech Model for New Speakers

Nov 01, 2022
Cheng-Ping Hsieh, Subhankar Ghosh, Boris Ginsburg

Fine-tuning is a popular method for adapting text-to-speech (TTS) models to new speakers, but this approach has some challenges. It usually requires several hours of high-quality speech per speaker, and fine-tuning can degrade the quality of speech synthesis for previously learned speakers. In this paper, we propose an alternative approach to TTS adaptation based on parameter-efficient adapter modules. In the proposed approach, a few small adapter modules are added to the original network; the original weights are frozen, and only the adapters are fine-tuned on speech from the new speaker. This parameter-efficient fine-tuning produces a new model with a high degree of parameter sharing with the original model. Our experiments on the LibriTTS, HiFi-TTS, and VCTK datasets validate the effectiveness of the adapter-based method through objective and subjective metrics.
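The frozen-backbone-plus-adapter idea can be illustrated with a minimal sketch. This is not the paper's model: the residual bottleneck adapter with a zero-initialised up-projection is a common parameter-efficient design, and the dimensions and NumPy formulation here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, bottleneck = 64, 8          # illustrative sizes

def adapter(x, w_down, w_up):
    """Residual bottleneck adapter: down-project, ReLU, up-project, add back."""
    return x + w_up @ np.maximum(w_down @ x, 0.0)

# Hypothetical hidden activation from a frozen layer of the base TTS model
hidden = rng.standard_normal(dim)

# Zero-initialised up-projection: the adapter starts as the identity,
# so inserting it does not perturb the pretrained network's behaviour.
w_down = rng.standard_normal((bottleneck, dim)) * 0.01
w_up = np.zeros((dim, bottleneck))

assert np.allclose(adapter(hidden, w_down, w_up), hidden)
```

During adaptation only `w_down` and `w_up` (roughly `2 * dim * bottleneck` values here, versus millions in a full model) would receive gradients; the base network's weights stay frozen, so previously learned speakers are untouched.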

* Submitted to ICASSP 2023 

Mr. Right: Multimodal Retrieval on Representation of ImaGe witH Text

Sep 28, 2022
Cheng-An Hsieh, Cheng-Ping Hsieh, Pu-Jen Cheng

Multimodal learning is a recent challenge that extends unimodal learning by generalizing its domain to diverse modalities, such as text, images, or speech. This extension requires models to process and relate information from multiple modalities. In Information Retrieval, traditional retrieval tasks focus on the similarity between unimodal documents and queries, while image-text retrieval assumes that most texts contain the scene context of images. This separation ignores the fact that real-world queries may involve text content, image captions, or both. To address this, we introduce Multimodal Retrieval on Representation of ImaGe witH Text (Mr. Right), a novel and comprehensive dataset for multimodal retrieval. We utilize the Wikipedia dataset, with its rich text-image examples, and generate three types of text-based queries carrying different modality information: text-related, image-related, and mixed. To validate the effectiveness of our dataset, we provide a multimodal training paradigm and evaluate previous text-retrieval and image-retrieval frameworks. The results show that the proposed multimodal retrieval can improve retrieval performance, but creating a well-unified document representation from texts and images remains a challenge. We hope Mr. Right helps broaden current retrieval systems and accelerates the advancement of multimodal learning in Information Retrieval.
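The notion of a unified document representation built from both modalities can be sketched as follows. The convex blend of per-modality embeddings and the random vectors are illustrative assumptions, not the paper's actual encoder or training paradigm.

```python
import numpy as np

def l2norm(v):
    """Normalize vectors to unit length along the last axis."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def fuse(text_emb, image_emb, alpha=0.5):
    """Toy unified document representation: convex blend of the two modalities."""
    return l2norm(alpha * l2norm(text_emb) + (1 - alpha) * l2norm(image_emb))

def retrieve(query_emb, doc_reps, k=3):
    """Rank documents by cosine similarity (all vectors are unit-norm)."""
    scores = doc_reps @ l2norm(query_emb)
    return np.argsort(-scores)[:k]

rng = np.random.default_rng(1)
n_docs, dim = 5, 16
text_embs = rng.standard_normal((n_docs, dim))
image_embs = rng.standard_normal((n_docs, dim))
docs = fuse(text_embs, image_embs)

# Sanity check: a document retrieves itself first.
assert retrieve(docs[3], docs)[0] == 3
```

Text-related, image-related, and mixed queries would all be embedded into the same space and scored against `docs`; the open problem the abstract points to is making that single representation serve all three query types well.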

* Dataset available at https://github.com/hsiehjackson/Mr.Right 

RLPrompt: Optimizing Discrete Text Prompts With Reinforcement Learning

May 25, 2022
Mingkai Deng, Jianyu Wang, Cheng-Ping Hsieh, Yihan Wang, Han Guo, Tianmin Shu, Meng Song, Eric P. Xing, Zhiting Hu

Prompting has shown impressive success in enabling large pretrained language models (LMs) to perform diverse NLP tasks, especially when only a few downstream examples are available. Automatically finding the optimal prompt for each task, however, is challenging. Most existing work resorts to tuning soft prompts (e.g., embeddings), which fall short on interpretability, reusability across LMs, and applicability when gradients are not accessible. Discrete prompts, on the other hand, are difficult to optimize and are often created by "enumeration (e.g., paraphrasing)-then-selection" heuristics that do not explore the prompt space systematically. This paper proposes RLPrompt, an efficient discrete prompt optimization approach based on reinforcement learning (RL). RLPrompt formulates a parameter-efficient policy network that generates the desired discrete prompt after training with reward. To overcome the complexity and stochasticity of the reward signals produced by the large-LM environment, we incorporate effective reward stabilization that substantially enhances training efficiency. RLPrompt is flexibly applicable to different types of LMs, such as masked (e.g., BERT) and left-to-right models (e.g., GPTs), for both classification and generation tasks. Experiments on few-shot classification and unsupervised text style transfer show superior performance over a wide range of existing finetuning or prompting methods. Interestingly, the resulting optimized prompts are often ungrammatical gibberish; surprisingly, these gibberish prompts are transferable between different LMs while retaining significant performance, indicating that LM prompting may not follow human language patterns.
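The discrete-prompt-as-RL idea can be sketched as a tiny REINFORCE loop. The toy reward, vocabulary, and moving-average baseline below are illustrative assumptions; in the paper, the policy is a parameter-efficient network and the reward comes from running a frozen LM on the downstream task.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, prompt_len = 10, 3
target = np.array([7, 2, 5])        # toy high-reward prompt, unknown to the policy

def reward(prompt):
    """Black-box stand-in for the downstream score from a frozen LM."""
    return float(np.sum(prompt == target)) / prompt_len

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

logits = np.zeros((prompt_len, vocab))  # the only trainable parameters

baseline, lr = 0.0, 0.2
for _ in range(3000):
    probs = softmax(logits)
    # Sample one token per prompt position from the current policy
    prompt = np.array([rng.choice(vocab, p=probs[i]) for i in range(prompt_len)])
    r = reward(prompt)
    baseline = 0.9 * baseline + 0.1 * r          # crude reward stabilization
    for i, tok in enumerate(prompt):
        grad = -probs[i]
        grad[tok] += 1.0                         # d log p(tok) / d logits
        logits[i] += lr * (r - baseline) * grad  # REINFORCE update

best = logits.argmax(axis=1)                     # greedy decode of the learned prompt
```

On this toy objective the greedy decode typically recovers the high-reward tokens; with a real LM the reward is far noisier, which is why the reward stabilization the abstract describes matters.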

* Code available at https://github.com/mingkaid/rl-prompt 