Chen Liu

Substituting Gadolinium in Brain MRI Using DeepContrast

Jan 15, 2020
Haoran Sun, Xueqing Liu, Xinyang Feng, Chen Liu, Nanyan Zhu, Sabrina J. Gjerswold-Selleck, Hong-Jian Wei, Pavan S. Upadhyayula, Angeliki Mela, Cheng-Chia Wu, Peter D. Canoll, Andrew F. Laine, J. Thomas Vaughan, Scott A. Small, Jia Guo

Cerebral blood volume (CBV) is a hemodynamic correlate of oxygen metabolism and reflects brain activity and function. High-resolution CBV maps can be generated with the steady-state gadolinium-enhanced MRI technique. This technique requires an intravenous injection of an exogenous gadolinium-based contrast agent (GBCA), and recent studies suggest that GBCAs can accumulate in the brain after frequent use. We hypothesize that endogenous sources of contrast might exist within the most conventional and commonly acquired structural MRI, potentially obviating the need for exogenous contrast. Here, we test this hypothesis by developing and optimizing a deep learning algorithm, which we call DeepContrast, in mice. We find that DeepContrast performs as well as exogenous GBCA in mapping CBV of normal brain tissue and in enhancing glioblastoma. Together, these studies validate our hypothesis that a deep learning approach can potentially replace GBCAs in brain MRI.
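To make the input/output structure of such a contrast-prediction model concrete, the toy sketch below replaces the paper's deep network with a plain least-squares regression from local patches of a synthetic non-contrast image to per-voxel contrast values. All data and function names here are illustrative stand-ins, not DeepContrast itself.

```python
import numpy as np

def extract_patches(img, k=3):
    """All k x k patches of a 2D image, flattened, in row-major order."""
    h, w = img.shape
    return np.stack([img[i:i + k, j:j + k].ravel()
                     for i in range(h - k + 1)
                     for j in range(w - k + 1)])

def fit_contrast_proxy(noncontrast, contrast, k=3):
    """Least-squares map from local structural context to contrast value.

    Each voxel's contrast is regressed on the k x k neighborhood of the
    non-contrast image centered at that voxel (valid positions only).
    """
    X = extract_patches(noncontrast, k)
    r = k // 2
    y = contrast[r:-r, r:-r].ravel()
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef
```

A learned deep network replaces this linear map with a highly nonlinear one, but the supervision signal is the same: pairs of non-contrast input and contrast-enhanced target.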

* The IEEE International Symposium on Biomedical Imaging (ISBI) 2020  

Segmentation with Residual Attention U-Net and an Edge-Enhancement Approach Preserves Cell Shape Features

Jan 15, 2020
Nanyan Zhu, Chen Liu, Zakary S. Singer, Tal Danino, Andrew F. Laine, Jia Guo

The ability to extrapolate gene expression dynamics in living single cells requires robust cell segmentation, and one of the challenges is posed by amorphous or irregularly shaped cell boundaries. To address this issue, we modified the U-Net architecture to segment cells in fluorescence widefield microscopy images and quantitatively evaluated its performance. We also proposed a novel loss function that emphasizes segmentation accuracy on cell boundaries and encourages shape-feature preservation. With 97% sensitivity, 93% specificity, 91% Jaccard similarity, and a 95% Dice coefficient, our proposed method, Residual Attention U-Net with edge enhancement, surpassed the state-of-the-art U-Net in segmentation performance as evaluated by these traditional metrics. More remarkably, the same model also performed best in preserving valuable shape features, namely area, eccentricity, major axis length, solidity, and orientation. These improvements in shape-feature preservation can serve as useful assets for downstream cell tracking and for quantifying changes in cell statistics or features over time.
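The edge-enhancement idea can be illustrated with a per-pixel loss that up-weights boundary pixels found by a morphological gradient. This is a simplified sketch of the general approach, with an assumed weighting scheme, not the paper's exact loss function.

```python
import numpy as np
from scipy import ndimage

def edge_weight_map(mask, boundary_weight=5.0):
    """Weight map that up-weights pixels on the mask boundary.

    The boundary is the morphological gradient (dilation XOR erosion)
    of the binary ground-truth mask.
    """
    edges = ndimage.binary_dilation(mask) ^ ndimage.binary_erosion(mask)
    return np.where(edges, boundary_weight, 1.0)

def edge_enhanced_bce(pred, target, boundary_weight=5.0, eps=1e-7):
    """Binary cross-entropy with extra weight on cell-boundary pixels."""
    pred = np.clip(pred, eps, 1 - eps)
    bce = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    w = edge_weight_map(target.astype(bool), boundary_weight)
    return float((w * bce).sum() / w.sum())
```

Penalizing errors more heavily where the boundary lies is what drives preservation of shape descriptors such as area and eccentricity, since those are decided at the contour.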

* 7 pages, 4 figures, 1 table. Nanyan Zhu and Chen Liu share equal contribution and are listed as co-first authors 

On Certifying Robust Models by Polyhedral Envelope

Dec 10, 2019
Chen Liu, Mathieu Salzmann, Sabine Süsstrunk

Certifying neural networks makes it possible to offer guarantees on a model's robustness. In this work, we use linear approximation to obtain upper and lower bounds on the model's output when the input is perturbed within a predefined adversarial budget. This allows us to bound the adversary-free region in the data neighborhood by a polyhedral envelope and to calculate robustness guarantees from this geometric approximation. Compared with existing methods, our approach gives a finer-grained quantitative evaluation of a model's robustness. As a result, the certification method not only obtains better certified bounds than state-of-the-art techniques under the same adversarial budget but also yields a faster search scheme for the optimal adversarial budget. Furthermore, we introduce a simple regularization scheme based on our method that enables us to effectively train robust models.
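For a purely linear classifier the polyhedral picture is exact and compact: the decision region of the predicted class is a polyhedron, and the certified l-infinity radius is the distance from the input to the nearest facet. The sketch below covers only this linear special case, not the paper's bounds for deep networks.

```python
import numpy as np

def certified_radius_linf(W, b, x):
    """Exact l_inf robustness radius of the linear classifier Wx + b.

    For each competing class j, the worst-case margin loss under an
    l_inf perturbation of size eps is eps * ||w_c - w_j||_1, so the
    radius is min_j margin_j / ||w_c - w_j||_1.
    """
    scores = W @ x + b
    c = int(np.argmax(scores))
    radii = []
    for j in range(len(b)):
        if j == c:
            continue
        diff = W[c] - W[j]
        radii.append((scores[c] - scores[j]) / np.abs(diff).sum())
    return c, min(radii)
```

For deep networks the decision boundary is no longer a finite set of hyperplanes, which is where the linear upper/lower bounding of the output comes in: it recovers a polyhedral inner approximation of the adversary-free region.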

DENS: A Dataset for Multi-class Emotion Analysis

Oct 25, 2019
Chen Liu, Muhammad Osama, Anderson de Andrade

We introduce a new dataset for multi-class emotion analysis of long-form narratives in English. The Dataset for Emotions of Narrative Sequences (DENS) was collected from classic literature available on Project Gutenberg and modern online narratives available on Wattpad, and annotated using Amazon Mechanical Turk. We provide a number of statistics and baseline benchmarks for the dataset. Of the tested techniques, we find that fine-tuning a pre-trained BERT model achieves the best results, with an average micro-F1 score of 60.4%. Our results show that the dataset presents a novel opportunity for emotion analysis that requires moving beyond existing sentence-level techniques.
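Micro-F1, the metric reported above, pools true positives, false positives, and false negatives across all classes before computing F1. A minimal sketch (the emotion labels below are placeholders, and the 60.4% figure comes from the paper's BERT baseline, not this code):

```python
def micro_f1(y_true, y_pred, labels):
    """Micro-averaged F1: pool TP/FP/FN over all classes, then compute F1."""
    tp = fp = fn = 0
    for t, p in zip(y_true, y_pred):
        for lab in labels:
            if p == lab and t == lab:
                tp += 1
            elif p == lab and t != lab:
                fp += 1
            elif p != lab and t == lab:
                fn += 1
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Note that with exactly one predicted label per example, every false positive for one class is a false negative for another, so micro-F1 reduces to plain accuracy; it diverges from accuracy only in multi-label settings.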

* Accepted to EMNLP 2019 

Exploring Multilingual Syntactic Sentence Representations

Oct 25, 2019
Chen Liu, Anderson de Andrade, Muhammad Osama

We study methods for learning sentence embeddings that capture syntactic structure. We focus on learning syntactic sentence embeddings from a multilingual parallel corpus augmented with Universal Part-of-Speech tags. We evaluate the quality of the learned embeddings by examining sentence-level nearest neighbours and functional dissimilarity in the embedding space. We also evaluate the method's ability to learn syntactic sentence embeddings for low-resource languages and find strong evidence of transfer learning. Our results show that syntactic sentence embeddings can be learned with less training data and fewer model parameters than state-of-the-art language models, while achieving better evaluation metrics.
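The sentence-level nearest-neighbour probe can be sketched as a cosine-similarity lookup over an embedding matrix. The embeddings below are arbitrary numeric stand-ins rather than learned syntactic embeddings.

```python
import numpy as np

def nearest_neighbours(emb, query_idx, k=3):
    """Indices of the k rows most cosine-similar to emb[query_idx].

    Rows are l2-normalized so the dot product equals cosine similarity;
    the query itself is excluded from its own neighbour list.
    """
    normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = normed @ normed[query_idx]
    sims[query_idx] = -np.inf
    return np.argsort(sims)[::-1][:k]
```

In the syntactic setting, a good embedding space places sentences with similar constituent structure near each other even when their vocabulary differs.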

Floor-SP: Inverse CAD for Floorplans by Sequential Room-wise Shortest Path

Aug 19, 2019
Jiacheng Chen, Chen Liu, Jiaye Wu, Yasutaka Furukawa

This paper proposes a new approach to automated floorplan reconstruction from RGBD scans, a major milestone in indoor mapping research. The approach, dubbed Floor-SP, formulates a novel optimization problem in which room-wise coordinate descent sequentially solves dynamic programs to optimize the floorplan graph structure. The objective function consists of data terms guided by deep neural networks, consistency terms encouraging adjacent rooms to share corners and walls, and a model-complexity term. Unlike most other methods, the approach does not require corner/edge detection with thresholds. We evaluated our system on production-quality RGBD scans of 527 apartments and houses, including many units with non-Manhattan structures. Qualitative and quantitative evaluations demonstrate a significant performance boost over the current state of the art. Please refer to our project website http://jcchen.me/floor-sp/ for code and data.

* 10 pages, 9 figures, accepted to ICCV 2019 

Semantic Parsing with Dual Learning

Jul 24, 2019
Ruisheng Cao, Su Zhu, Chen Liu, Jieyu Li, Kai Yu

Semantic parsing converts natural language queries into structured logical forms. The paucity of annotated training samples is a fundamental challenge in this field. In this work, we develop a semantic parsing framework based on the dual learning algorithm, which enables a semantic parser to make full use of data (labeled and even unlabeled) through a dual-learning game. This game between a primal model (semantic parsing) and a dual model (logical form to query) forces them to regularize each other and provides feedback signals derived from prior knowledge. By exploiting prior knowledge of logical-form structure, we propose a novel reward signal at both the surface and semantic levels that encourages the generation of complete and reasonable logical forms. Experimental results show that our approach achieves new state-of-the-art performance on the ATIS dataset and competitive performance on the Overnight dataset.

* Accepted by ACL 2019 Long Paper 

Using Discriminative Methods to Learn Fashion Compatibility Across Datasets

Jun 17, 2019
Kedan Li, Chen Liu, Ranjitha Kumar, David Forsyth

Determining whether a pair of garments are compatible is a challenging matching problem. Past work has explored various embedding methods for learning such a relationship. This paper introduces discriminative methods for learning compatibility, formulating the task as a simple binary classification problem. We evaluate our approach on an established dataset of outfits created by non-experts and demonstrate an improvement of ~2.5% on established metrics over the state-of-the-art method. We introduce three new datasets of professionally curated outfits and show that our approach performs consistently on expert-curated datasets. To facilitate comparison across outfit datasets, we propose a new metric which, unlike previously used metrics, is not biased by the average size of outfits. We also demonstrate that compatibility between two types of items can be queried indirectly, and that such a query strategy yields improvements.
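A minimal sketch of the discriminative formulation: represent a pair by concatenating the two item embeddings (augmented here with their elementwise product so a linear model can capture the interaction) and train a binary compatible/incompatible classifier. The synthetic "style" data and the plain logistic-regression trainer are illustrative assumptions, not the paper's features or model.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two latent "styles"; compatible pairs share a style, incompatible do not.
styles = np.array([[1.0, 1.0, 1.0, 1.0], [-1.0, -1.0, -1.0, -1.0]])

def pair_features(compatible):
    """Concatenate two noisy item embeddings with their elementwise product."""
    s = rng.integers(2)
    a = styles[s] + 0.1 * rng.normal(size=4)
    b = styles[s if compatible else 1 - s] + 0.1 * rng.normal(size=4)
    return np.concatenate([a, b, a * b])

X = np.array([pair_features(i % 2 == 0) for i in range(200)])
y = (np.arange(200) % 2 == 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic regression fit by plain gradient descent on the logistic loss.
w, bias = np.zeros(12), 0.0
for _ in range(500):
    p = sigmoid(X @ w + bias)
    w -= 1.0 * X.T @ (p - y) / len(y)
    bias -= 1.0 * (p - y).mean()

acc = float(((sigmoid(X @ w + bias) > 0.5) == (y == 1)).mean())
```

The elementwise-product features are the design choice doing the work: plain concatenation makes "same style" an XOR-like pattern that no linear classifier can separate.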

Parsimonious Deep Learning: A Differential Inclusion Approach with Global Convergence

May 23, 2019
Yanwei Fu, Chen Liu, Donghao Li, Xinwei Sun, Jinshan Zeng, Yuan Yao

Over-parameterization is ubiquitous in training neural networks, benefiting both optimization (in seeking global optima) and generalization (in reducing prediction error). However, compact networks are desired in many real-world applications, and directly training small networks may become trapped in local optima. In this paper, instead of pruning or distilling an over-parameterized model into a compact one, we propose a parsimonious learning approach based on differential inclusions of inverse scale spaces, which generates a family of models from simple to complex with better efficiency and interpretability than stochastic gradient descent in exploring the model space. It admits a simple discretization, the Split Linearized Bregman Iteration, with provable global convergence: from any initialization, the algorithmic iterates converge to a critical point of the empirical risk. One may exploit the proposed method to grow the complexity of neural networks progressively. Numerical experiments on MNIST, CIFAR-10/100, and ImageNet show that the method is promising for training large-scale models with favorable interpretability.
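The Split Linearized Bregman Iteration builds on the vanilla Linearized Bregman Iteration, whose update is only two lines. The sketch below runs the vanilla iteration on a toy sparse regression; the split into a coupled dense variable and sparse variable used in the paper is omitted, so this illustrates the underlying iteration, not the paper's full algorithm.

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding (the proximal map of the l1 norm)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def linearized_bregman(X, y, kappa=5.0, alpha=0.005, iters=2000):
    """Vanilla Linearized Bregman Iteration for sparse linear regression.

    z accumulates negative gradients of the least-squares loss, while
    w = kappa * shrink(z, 1) traces a path from the all-zero (simplest)
    model toward denser ones as coordinates cross the threshold.
    """
    n, d = X.shape
    z = np.zeros(d)
    w = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / n
        z -= alpha * grad
        w = kappa * soft_threshold(z, 1.0)
    return w
```

The inverse-scale-space flavor is visible in the dynamics: large, important coefficients activate early, so stopping the iteration early yields the sparser models on the path.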

* 25 pages, 7 figures 