Iain Lee

dpVAEs: Fixing Sample Generation for Regularized VAEs

Nov 24, 2019
Riddhish Bhalodia, Iain Lee, Shireen Elhabian

Unsupervised representation learning via generative modeling is a staple of many computer vision applications in the absence of labeled data. Variational Autoencoders (VAEs) are powerful generative models that learn representations useful for data generation. However, due to inherent challenges in the training objective, VAEs fail to learn representations amenable to downstream tasks. Regularization-based methods that attempt to improve the representation learning of VAEs come at a price: poor sample generation. In this paper, we explore this representation-generation trade-off for regularized VAEs and introduce a new family of priors, namely decoupled priors, or dpVAEs, that decouple the representation space from the generation space. This decoupling enables the use of VAE regularizers on the representation space without impacting the distribution used for sample generation, thereby reaping the representation learning benefits of the regularizers without sacrificing sample generation. dpVAE leverages invertible networks to learn a bijective mapping from an arbitrarily complex representation distribution to a simple, tractable generative distribution. Decoupled priors can be adapted to state-of-the-art VAE regularizers without additional hyperparameter tuning. We showcase the use of dpVAEs with different regularizers. Experiments on MNIST, SVHN, and CelebA demonstrate, quantitatively and qualitatively, that dpVAE fixes sample generation for regularized VAEs.
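To make the decoupled-prior idea concrete, below is a minimal sketch (not the authors' implementation) of the kind of invertible building block the abstract describes: a RealNVP-style affine coupling layer, assuming PyTorch. The names `CouplingLayer`, `dim`, and `hidden` are illustrative; the point is that the forward map carries the representation space toward a simple base distribution with a cheap log-determinant, while the exact inverse lets sampling start from that simple distribution.

```python
import torch
import torch.nn as nn

class CouplingLayer(nn.Module):
    """RealNVP-style affine coupling: invertible by construction,
    with a cheap log-determinant for the change-of-variables term."""

    def __init__(self, dim, hidden=128):
        super().__init__()
        self.half = dim // 2
        # Conditioner predicts a scale and shift for the second half.
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, z):
        # Representation space -> simple generative space.
        z1, z2 = z[:, :self.half], z[:, self.half:]
        s, t = self.net(z1).chunk(2, dim=1)
        s = torch.tanh(s)                    # keep scaling well-conditioned
        y2 = z2 * torch.exp(s) + t
        log_det = s.sum(dim=1)               # log |det Jacobian|
        return torch.cat([z1, y2], dim=1), log_det

    def inverse(self, y):
        # Simple generative space -> representation space (exact inverse).
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self.net(y1).chunk(2, dim=1)
        s = torch.tanh(s)
        z2 = (y2 - t) * torch.exp(-s)
        return torch.cat([y1, z2], dim=1)

# Sampling sketch: draw from the simple, tractable base distribution,
# invert the flow to reach the (regularized) representation space,
# then decode as usual.
flow = CouplingLayer(dim=32)
eps = torch.randn(16, 32)        # tractable generative distribution
z = flow.inverse(eps)            # codes in the representation space
# x = decoder(z)                 # `decoder` is the standard VAE decoder
```

In practice several coupling layers would be stacked (with permutations between them) so the bijection can represent an arbitrarily complex representation distribution; a single layer is shown here only to keep the sketch self-contained.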

PageNet: Page Boundary Extraction in Historical Handwritten Documents

Sep 05, 2017
Chris Tensmeyer, Brian Davis, Curtis Wigington, Iain Lee, Bill Barrett

When digitizing a document into an image, it is common to include a surrounding border region to visually indicate that the entire document is present in the image. However, this border should be removed prior to automated processing. In this work, we present a deep-learning-based system, PageNet, which identifies the main page region in an image in order to segment content from both textual and non-textual border noise. In PageNet, a Fully Convolutional Network produces a pixel-wise segmentation that is post-processed into the output quadrilateral region. We evaluate PageNet on four collections of historical handwritten documents, obtaining over 94% mean intersection over union on all datasets and approaching human performance on two of these collections. Additionally, we show that PageNet can segment documents that are overlaid on top of other documents.

* HIP 2017 (in submission) 
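The abstract does not spell out the post-processing step, so the following is a minimal, hypothetical sketch (assuming OpenCV 4.x and NumPy, not the paper's actual procedure) of one plausible way to turn a pixel-wise page mask into a quadrilateral: keep the largest connected "page" contour and fit its minimum-area rotated rectangle.

```python
import cv2
import numpy as np

def mask_to_quad(mask: np.ndarray) -> np.ndarray:
    """mask: HxW uint8 array with 255 where the FCN predicts 'page'.
    Returns a (4, 2) float32 array of quadrilateral corner points."""
    # Largest external contour = the dominant page region.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    page = max(contours, key=cv2.contourArea)
    # Minimum-area rotated rectangle as the output quadrilateral.
    rect = cv2.minAreaRect(page)
    return cv2.boxPoints(rect)
```

Note that `cv2.minAreaRect` constrains the output to a rotated rectangle; a method fitting a general quadrilateral (e.g., polygon approximation of the contour) could capture perspective-skewed pages more faithfully.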