Jing Huang

Rigorously Assessing Natural Language Explanations of Neurons

Sep 19, 2023
Jing Huang, Atticus Geiger, Karel D'Oosterlinck, Zhengxuan Wu, Christopher Potts

Natural language is an appealing medium for explaining how large language models process and store information, but evaluating the faithfulness of such explanations is challenging. To help address this, we develop two modes of evaluation for natural language explanations that claim individual neurons represent a concept in a text input. In the observational mode, we evaluate claims that a neuron $a$ activates on all and only input strings that refer to a concept picked out by the proposed explanation $E$. In the intervention mode, we construe $E$ as a claim that the neuron $a$ is a causal mediator of the concept denoted by $E$. We apply our framework to the GPT-4-generated explanations of GPT-2 XL neurons of Bills et al. (2023) and show that even the most confident explanations have high error rates and little to no causal efficacy. We close the paper by critically assessing whether natural language is a good choice for explanations and whether neurons are the best level of analysis.
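As a rough illustration of the observational mode, the sketch below scores an explanation by the neuron's false-positive and false-negative rates over labeled inputs. The function names and the thresholding are hypothetical stand-ins; the paper's actual metrics and sampling procedure differ in detail.

```python
# Minimal sketch of the observational mode: treat explanation E as the claim
# that neuron a fires on all and only inputs referring to E's concept.
# `get_activation` and the labeled examples are hypothetical stand-ins.

def observational_error_rates(examples, get_activation, threshold=0.0):
    """examples: list of (text, refers_to_concept: bool) pairs."""
    false_pos = false_neg = pos = neg = 0
    for text, refers in examples:
        fires = get_activation(text) > threshold  # neuron a's activation on text
        if refers:
            pos += 1
            false_neg += not fires   # concept present but neuron silent
        else:
            neg += 1
            false_pos += fires       # neuron fires without the concept
    return {"false_neg_rate": false_neg / max(pos, 1),
            "false_pos_rate": false_pos / max(neg, 1)}

# Example with a toy activation function:
toy = [("the year 1876", True), ("a cat sat", False)]
print(observational_error_rates(toy, lambda t: 1.0 if "18" in t else -1.0))
```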


A Survey of Diffusion Based Image Generation Models: Issues and Their Solutions

Aug 25, 2023
Tianyi Zhang, Zheng Wang, Jing Huang, Mohiuddin Muhammad Tasnim, Wei Shi

Recently, there has been significant progress in the development of large models. Following the success of ChatGPT, numerous language models have been introduced, demonstrating remarkable performance. Similar advances have been observed in image generation models, such as Google's Imagen, OpenAI's DALL-E 2, and Stable Diffusion, which exhibit impressive capabilities in generating images. However, like large language models, these models still face unresolved challenges. Fortunately, the availability of open-source Stable Diffusion models and their underlying mathematical principles has enabled the academic community to extensively analyze the performance of current image generation models and to make improvements on top of the Stable Diffusion framework. This survey examines the existing issues and the current solutions pertaining to image generation models.


Model-Free Large-Scale Cloth Spreading With Mobile Manipulation: Initial Feasibility Study

Aug 21, 2023
Xiangyu Chu+, Shengzhi Wang+, Minjian Feng, Jiaxi Zheng, Yuxuan Zhao, Jing Huang, K. W. Samuel Au

Cloth manipulation is common in domestic and service tasks, and most studies use fixed-base manipulators to handle objects that are relatively small with respect to the manipulators' workspace, such as towels, shirts, and rags. In contrast, manipulating large-scale cloth, as in bed making and tablecloth spreading, poses additional challenges in reachability and manipulation control. To address them, this paper presents a novel framework for spreading large-scale cloth with a single-arm mobile manipulator, which resolves the reachability issue, as an initial feasibility study. On the control side, rather than modeling the highly deformable cloth, we apply a vision-based manipulation control scheme based on an online-updated Jacobian matrix that maps selected feature points to the end-effector motion. To coordinate the manipulator and the mobile platform, we use Behavior Trees (BTs) for their modularity. Finally, we conduct experiments validating both the model-free manipulation control for cloth spreading under different conditions and the overall large-scale cloth-spreading framework. The results demonstrate the feasibility of large-scale cloth spreading with a single-arm mobile manipulator and a model-free deformation controller.
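The online-update Jacobian idea can be illustrated with a standard Broyden-style estimator. The sketch below is a minimal stand-in, assuming a linearized map from end-effector motion to feature-point motion; it is not the paper's exact update rule.

```python
import numpy as np

# Minimal sketch of model-free visual servoing with an online-updated Jacobian
# (a standard Broyden-style update; the paper's exact update rule may differ).
# J maps end-effector motion dq to feature-point motion ds: ds ~ J @ dq.

def broyden_update(J, dq, ds, alpha=0.1):
    """Correct J so that J @ dq better matches the observed feature motion ds."""
    denom = dq @ dq
    if denom < 1e-9:
        return J
    return J + alpha * np.outer(ds - J @ dq, dq) / denom

def control_step(J, s, s_target, gain=0.5):
    """Least-squares end-effector velocity command driving features toward target."""
    return gain * np.linalg.pinv(J) @ (s_target - s)

# Toy usage: 4 feature coordinates, 3 end-effector degrees of freedom.
J = np.eye(4, 3)
dq = np.array([0.01, 0.0, 0.0])
ds = np.array([0.012, 0.0, 0.001, 0.0])   # observed feature displacement
J = broyden_update(J, dq, ds)
dq_cmd = control_step(J, ds, np.zeros(4))  # next commanded motion
```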

* 2023 IEEE International Conference on Automation Science and Engineering (CASE)  
* 6 pages, 6 figures, submitted to CASE 2023 

Unsupervised Melody-to-Lyric Generation

May 30, 2023
Yufei Tian, Anjali Narayan-Chen, Shereen Oraby, Alessandra Cervone, Gunnar Sigurdsson, Chenyang Tao, Wenbo Zhao, Tagyoung Chung, Jing Huang, Nanyun Peng

Automatic melody-to-lyric generation is the task of generating song lyrics to go with a given melody. It is of significant practical interest and more challenging than unconstrained lyric generation, as the music imposes additional constraints on the lyrics. Training data is limited because most songs are copyrighted, resulting in models that underfit the complicated cross-modal relationship between melody and lyrics. In this work, we propose a method for generating high-quality lyrics without training on any aligned melody-lyric data. Specifically, we design a hierarchical lyric generation framework that first generates a song outline and then the complete lyrics. The framework disentangles training (based purely on text) from inference (melody-guided text generation) to circumvent the shortage of parallel data. We leverage the segmentation and rhythm alignment between melody and lyrics to compile the given melody into decoding constraints that guide inference. The two-step hierarchical design also enables content control via the lyric outline, a much-desired feature for democratizing collaborative song creation. Experimental results show that our model generates high-quality lyrics that are more on-topic, singable, intelligible, and coherent than strong baselines such as SongMASS, a state-of-the-art model trained on a parallel dataset, with a 24% relative overall quality improvement based on human ratings.
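To make the idea of compiling a melody into decoding constraints concrete, here is a hypothetical sketch of the simplest such constraint: a per-phrase syllable budget (one lyric syllable per note). Real systems also use a pronunciation lexicon and stress-to-beat alignment; the heuristic below is illustrative only.

```python
# Hypothetical sketch of compiling a melody into decoding constraints:
# one lyric syllable per note, so each musical phrase fixes a syllable budget.

VOWELS = set("aeiouy")

def count_syllables(word):
    """Crude vowel-group heuristic; real systems use a pronunciation lexicon."""
    groups, prev = 0, False
    for ch in word.lower():
        is_v = ch in VOWELS
        groups += is_v and not prev
        prev = is_v
    return max(groups, 1)

def satisfies_budget(line, notes_in_phrase):
    return sum(count_syllables(w) for w in line.split()) == notes_in_phrase

print(satisfies_budget("hold me closer now", 5))  # True: 1 + 1 + 2 + 1 = 5
```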

* Accepted to ACL 2023. arXiv admin note: substantial text overlap with arXiv:2305.07760 

Code-Switched Text Synthesis in Unseen Language Pairs

May 26, 2023
I-Hung Hsu, Avik Ray, Shubham Garg, Nanyun Peng, Jing Huang

Existing efforts on text synthesis for code-switching mostly require training on code-switched text in the target language pairs, limiting the deployment of the models to cases lacking code-switched data. In this work, we study the problem of synthesizing code-switched text for language pairs absent from the training data. We introduce GLOSS, a model built on top of a pre-trained multilingual machine translation model (PMMTM) with an additional code-switching module. This module, either an adapter or extra prefixes, learns code-switching patterns from code-switched data during training, while the primary component of GLOSS, the PMMTM, is frozen. Adjusting only the code-switching module prevents the model from overfitting to the limited code-switched training data, so GLOSS can generalize and synthesize code-switched text across a broader spectrum of language pairs. Additionally, we develop a self-training algorithm on target language pairs to further enhance the reliability of GLOSS. Automatic evaluations on four language pairs show that GLOSS achieves relative improvements of at least 55% in BLEU and METEOR scores over strong baselines. Human evaluations on two language pairs further validate its success.
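As a minimal sketch of the adapter variant of this design, the PyTorch snippet below freezes a stand-in for a PMMTM layer and trains only a small residual bottleneck module. Sizes and module placement are illustrative, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

# Minimal sketch of GLOSS-style training: a small trainable adapter on top of
# a frozen pre-trained translation layer. Names and sizes are illustrative.

class Adapter(nn.Module):
    def __init__(self, d_model=512, d_bottleneck=64):
        super().__init__()
        self.down = nn.Linear(d_model, d_bottleneck)
        self.up = nn.Linear(d_bottleneck, d_model)

    def forward(self, h):
        return h + self.up(torch.relu(self.down(h)))  # residual bottleneck

# Stand-in for one frozen PMMTM layer.
pmmtm = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
for p in pmmtm.parameters():
    p.requires_grad = False  # only the code-switching module is trained

adapter = Adapter()
x = torch.randn(2, 10, 512)       # (batch, sequence, hidden)
out = adapter(pmmtm(x))           # adapter transforms the frozen layer's output
```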

* Accepted to ACL 2023 as a Findings paper 

Unsupervised Melody-Guided Lyrics Generation

May 26, 2023
Yufei Tian, Anjali Narayan-Chen, Shereen Oraby, Alessandra Cervone, Gunnar Sigurdsson, Chenyang Tao, Wenbo Zhao, Tagyoung Chung, Jing Huang, Nanyun Peng

Automatic songwriting is a topic of significant practical interest. However, research is largely hindered by the lack of training data due to copyright concerns and is further challenged by the task's creative nature. Most notably, prior works often fall short of modeling the cross-modal correlation between melody and lyrics due to limited parallel data, and hence generate lyrics that are less singable. Existing works also lack effective mechanisms for content control, a much-desired feature for democratizing song creation for people with limited musical background. In this work, we propose to generate pleasantly listenable lyrics without training on melody-lyric aligned data. Instead, we design a hierarchical lyric generation framework that disentangles training (based purely on text) from inference (melody-guided text generation). At inference time, we leverage the crucial alignments between melody and lyrics and compile the given melody into constraints that guide the generation process. Evaluation results show that our model generates high-quality lyrics that are more singable, intelligible, coherent, and better rhymed than strong baselines, including those supervised on parallel data.
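Beyond syllable counts, melody-lyric alignment can also reward stressed syllables landing on strong notes. The sketch below is a toy scorer under that assumption; in practice the stress patterns and note strengths would come from a pronunciation lexicon (e.g., CMUdict) and the melody's beat structure, not hand-written lists.

```python
# Hypothetical sketch of the rhythm-alignment idea: prefer lyric lines whose
# stressed syllables land on musically strong (long or downbeat) notes.

def rhythm_score(stress_pattern, note_strengths):
    """stress_pattern: 1 = stressed syllable; note_strengths: 1 = strong note."""
    if len(stress_pattern) != len(note_strengths):
        return 0.0  # syllable count must match the notes first
    hits = sum(s == n for s, n in zip(stress_pattern, note_strengths))
    return hits / len(stress_pattern)

# "a-BOVE the CLOUDS" -> [0, 1, 0, 1] against a weak-strong-weak-strong phrase
print(rhythm_score([0, 1, 0, 1], [0, 1, 0, 1]))  # 1.0: perfectly aligned
```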

* Presented at the AAAI 2023 CreativeAI workshop (non-archival). A later version is accepted to ACL 2023 

Do We Need an Encoder-Decoder to Model Dynamical Systems on Networks?

May 20, 2023
Bing Liu, Wei Luo, Gang Li, Jing Huang, Bo Yang

As deep learning gains popularity in modelling dynamical systems, we expose an underappreciated misunderstanding relevant to modelling dynamics on networks. Strongly influenced by graph neural networks, latent vertex embeddings are naturally adopted in many neural dynamical network models. However, we show that embeddings tend to induce models that fit observations well but simultaneously have incorrect dynamical behaviours. Recognising that previous studies narrowly focus on short-term predictions during the transient phase of a flow, we propose three tests of correct long-term behaviour, illustrate how an embedding-based dynamical model fails them, and analyse the causes, particularly through the lens of topological conjugacy. In doing so, we show that these difficulties can be avoided by not using embeddings. We propose a simple embedding-free alternative based on parametrising two additive vector-field components. Through extensive experiments, we verify that the proposed model can reliably recover a broad class of dynamics on different network topologies from time-series data.
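A minimal sketch of the embedding-free parametrisation, assuming the common additive form dx_i/dt = f(x_i) + sum_j A_ij g(x_i, x_j) with two small MLPs and no latent vertex embeddings; layer sizes are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

# Embedding-free network dynamics: a self term f plus an aggregated
# pairwise interaction term g, both shared across vertices.

class NetworkDynamics(nn.Module):
    def __init__(self, dim=1, hidden=32):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))
        self.g = nn.Sequential(nn.Linear(2 * dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))

    def forward(self, x, A):
        """x: (n, dim) vertex states; A: (n, n) adjacency matrix."""
        n = x.shape[0]
        xi = x.unsqueeze(1).expand(n, n, -1)               # x_i for each pair (i, j)
        xj = x.unsqueeze(0).expand(n, n, -1)               # x_j for each pair (i, j)
        pair = self.g(torch.cat([xi, xj], dim=-1))         # g(x_i, x_j)
        interaction = (A.unsqueeze(-1) * pair).sum(dim=1)  # sum_j A_ij g(x_i, x_j)
        return self.f(x) + interaction

model = NetworkDynamics()
x = torch.rand(5, 1)                       # states of 5 vertices
A = (torch.rand(5, 5) > 0.5).float()       # random adjacency matrix
dxdt = model(x, A)                         # one evaluation of the vector field
```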

* Accepted by IJCAI 2023 

Crowd3D: Towards Hundreds of People Reconstruction from a Single Image

Jan 23, 2023
Hao Wen, Jing Huang, Huili Cui, Haozhe Lin, YuKun Lai, Lu Fang, Kun Li

Image-based multi-person reconstruction in wide-field large scenes is critical for crowd analysis and security applications. However, existing methods cannot deal with large scenes containing hundreds of people, which pose the challenges of a large number of people, large variations in human scale, and complex spatial distributions. In this paper, we propose Crowd3D, the first framework to reconstruct the 3D poses, shapes, and locations of hundreds of people with global consistency from a single large-scene image. The core of our approach is to convert the problem of complex crowd localization into pixel localization with the help of our newly defined concept, the Human-scene Virtual Interaction Point (HVIP). To reconstruct the crowd with global consistency, we propose a progressive reconstruction network based on HVIP by pre-estimating a scene-level camera and a ground plane. To deal with the large number of people and their varying sizes, we also design an adaptive human-centric cropping scheme. In addition, we contribute a benchmark dataset, LargeCrowd, for crowd reconstruction in large scenes. Experimental results demonstrate the effectiveness of the proposed method. The code and datasets will be made public.
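To illustrate the kind of geometry a pre-estimated camera and ground plane enable, here is a hypothetical sketch that lifts a ground-contact pixel (in the spirit of HVIP) to a 3D point by intersecting its camera ray with the ground plane. The camera parameters are made up for the example; the paper's actual pipeline differs.

```python
import numpy as np

# Lift a pixel to a 3D point on the ground plane n . X + d = 0 (camera frame),
# given intrinsics K. This is standard ray-plane geometry, not the paper's code.

def lift_pixel_to_ground(pixel, K, plane_n, plane_d):
    """Intersect the camera ray through `pixel` with the ground plane."""
    u, v = pixel
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction, camera coords
    denom = plane_n @ ray
    if abs(denom) < 1e-9:
        raise ValueError("ray parallel to ground plane")
    s = -plane_d / denom
    return s * ray  # 3D point on the ground plane

# Toy camera: focal length 1000 px, principal point at image center (1920x1080).
K = np.array([[1000.0, 0, 960], [0, 1000.0, 540], [0, 0, 1]])
point = lift_pixel_to_ground((960, 800), K,
                             plane_n=np.array([0, -1.0, 0.2]), plane_d=1.6)
```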

* 8 pages (not including references) 

Inducing Character-level Structure in Subword-based Language Models with Type-level Interchange Intervention Training

Dec 19, 2022
Jing Huang, Zhengxuan Wu, Kyle Mahowald, Christopher Potts

Language tasks involving character-level manipulations (e.g., spelling correction, many word games) are challenging for models based on subword tokenization. To address this, we adapt the interchange intervention training method of Geiger et al. (2021) to operate on type-level variables over characters. This allows us to encode robust, position-independent character-level information in the internal representations of subword-based models. We additionally introduce a suite of character-level tasks that systematically vary in their dependence on meaning and sequence-level context. While simple character-level tokenization approaches still perform best on purely form-based tasks like string reversal, our method is superior for more complex tasks that blend form, meaning, and context, such as spelling correction in context and word search games. Our approach also leads to subword-based models with human-interpretable internal representations of characters.
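The core operation being adapted, an interchange intervention (Geiger et al. 2021), can be sketched as follows: cache a designated hidden slice from a source input, patch it into the base input's forward pass, and train the patched output toward the counterfactual label. The tiny model and the intervened slice below are illustrative only, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Rough sketch of interchange intervention training on a toy model.

class TinyModel(nn.Module):
    def __init__(self, d=16, n_cls=4):
        super().__init__()
        self.enc = nn.Linear(d, d)
        self.head = nn.Linear(d, n_cls)

    def forward(self, x, patch=None, idx=None):
        h = torch.tanh(self.enc(x))
        if patch is not None:
            h = h.clone()
            h[:, idx] = patch          # interchange: overwrite the variable's slice
        return self.head(h), h

model = TinyModel()
loss_fn = nn.CrossEntropyLoss()
base, source = torch.randn(8, 16), torch.randn(8, 16)
cf_label = torch.randint(0, 4, (8,))   # label the base *should* get after the swap
idx = slice(0, 4)                      # hidden units assigned to the character variable

_, h_src = model(source)                          # cache source representations
logits, _ = model(base, patch=h_src[:, idx], idx=idx)  # patched forward pass
loss = loss_fn(logits, cf_label)                  # train toward counterfactual label
loss.backward()
```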
