David Harrison

MultiPathGAN: Structure Preserving Stain Normalization using Unsupervised Multi-domain Adversarial Network with Perception Loss

Apr 20, 2022
Haseeb Nazki, Ognjen Arandjelović, InHwa Um, David Harrison

Histopathology relies on the analysis of microscopic tissue images to diagnose disease. A crucial part of tissue preparation is staining, whereby a dye is used to make the salient tissue components more distinguishable. However, differences in laboratory protocols and scanning devices result in significant confounding appearance variation in the corresponding images. This variation increases both human error and inter-rater variability, and hinders the performance of automatic or semi-automatic methods. In the present paper we introduce an unsupervised adversarial network to translate (and hence normalize) whole slide images across multiple data acquisition domains. Our key contributions are: (i) an adversarial architecture which learns across multiple domains with a single generator-discriminator network, using an information flow branch which optimizes for perceptual loss, and (ii) the inclusion of an additional feature extraction network during training which guides the transformation network to keep all the structural features in the tissue image intact. We (i) demonstrate the effectiveness of the proposed method on H&E slides of 120 cases of kidney cancer, and (ii) show the benefits of the approach on more general problems, such as flexible illumination-based natural image enhancement and light source adaptation.
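
To make the perceptual-loss idea concrete, here is a minimal PyTorch sketch of a feature-space loss of the kind the paper describes, using a pretrained VGG16 as the auxiliary feature extraction network. The choice of VGG16, the relu3_3 cut-off, and the L1 criterion are illustrative assumptions, not the exact MultiPathGAN configuration.

```python
# Minimal sketch of a perceptual (feature) loss for structure preservation.
# A fixed, pretrained VGG16 plays the role of the auxiliary feature
# extraction network; the layer cut-off and loss weighting are assumptions.
import torch.nn as nn
from torchvision import models

class PerceptualLoss(nn.Module):
    def __init__(self, cutoff=16):  # features[:16] ends at relu3_3 (assumption)
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features
        self.extractor = nn.Sequential(*list(vgg.children())[:cutoff]).eval()
        for p in self.extractor.parameters():
            p.requires_grad = False  # feature network stays fixed during training
        self.criterion = nn.L1Loss()

    def forward(self, generated, target):
        # Compare deep features rather than raw pixels, so stain/appearance
        # may change while drift in tissue structure is penalized.
        return self.criterion(self.extractor(generated), self.extractor(target))

# Usage: combine with the adversarial term in the generator objective, e.g.
#   loss_G = loss_adv + lambda_perc * PerceptualLoss()(fake, real)
```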

Believe The HiPe: Hierarchical Perturbation for Fast and Robust Explanation of Black Box Models

Feb 22, 2021
Jessica Cooper, Ognjen Arandjelović, David Harrison

Understanding the predictions made by Artificial Intelligence (AI) systems is increasingly important as deep learning models are used for ever more complex and high-stakes tasks. Saliency mapping, an easily interpretable visual attribution method, is one important tool for this, but existing formulations are limited by either computational cost or architectural constraints. We therefore propose Hierarchical Perturbation, a very fast and completely model-agnostic method for explaining model predictions with robust saliency maps. Using standard benchmarks and datasets, we show that our saliency maps are of competitive or superior quality to those generated by existing black-box methods, and are over 20x faster to compute.

* github.com/jessicamarycooper/Hierarchical-Perturbation 
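
The hedged sketch below illustrates the core idea of hierarchical perturbation as the abstract describes it: occlude coarse image regions, measure the drop in the model's score, and recurse only into regions whose perturbation mattered. It is a simplified approximation, not the authors' reference implementation (see the repository linked above for that); the grid size, recursion depth, threshold, and mean-fill perturbation are all assumptions.

```python
# Simplified sketch of saliency via hierarchical perturbation: occlude a
# coarse grid of regions, score the impact on the target class, and recurse
# into impactful regions at finer scales. Grid size, depth, threshold, and
# the mean-fill substrate are assumptions of this sketch.
import torch

@torch.no_grad()
def hierarchical_saliency(model, image, target, depth=4, grid=2, thresh=0.05):
    model.eval()
    _, _, H, W = image.shape                       # expects (1, C, H, W)
    saliency = torch.zeros(H, W)
    base = model(image).softmax(-1)[0, target].item()

    def recurse(y0, x0, h, w, level):
        if level >= depth or h < grid or w < grid:
            return
        ch, cw = h // grid, w // grid              # cell size at this scale
        for i in range(grid):
            for j in range(grid):
                ys, xs = y0 + i * ch, x0 + j * cw
                perturbed = image.clone()
                perturbed[:, :, ys:ys + ch, xs:xs + cw] = image.mean()
                score = model(perturbed).softmax(-1)[0, target].item()
                drop = base - score                # how much the cell mattered
                if drop > thresh:
                    saliency[ys:ys + ch, xs:xs + cw] += drop
                    recurse(ys, xs, ch, cw, level + 1)

    recurse(0, 0, H, W, 0)
    return saliency
```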

Speech Recognition: Keyword Spotting Through Image Recognition

Mar 10, 2018
Sanjay Krishna Gouda, Salil Kanetkar, David Harrison, Manfred K Warmuth

Identifying voice commands has always been challenging due to the presence of noise and variability in speed, pitch, and other factors. We compare the efficacy of several neural network architectures on the speech recognition problem. In particular, we build a model to determine whether a one-second audio clip contains a particular word (out of a set of 10), an unknown word, or silence. The models implemented are a CNN recommended by the TensorFlow Speech Recognition tutorial, a low-latency CNN, and an adversarially trained CNN. The result is a demonstration of how to convert a problem in audio recognition to the better-studied domain of image classification, where the powerful techniques of convolutional neural networks are fully developed. Additionally, we demonstrate the applicability of Virtual Adversarial Training (VAT) to this problem domain, where it functions as a powerful regularizer with promising future applications.
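
As a hedged illustration of the audio-to-image conversion the abstract describes, the sketch below turns a one-second clip into a log-mel spectrogram and classifies it with a small CNN. The spectrogram parameters and the tiny architecture are assumptions for illustration, not the tutorial CNN or the adversarially trained models from the paper.

```python
# Sketch of keyword spotting as image classification: a one-second waveform
# becomes a log-mel spectrogram "image" fed to a small CNN. The spectrogram
# settings and the architecture are illustrative assumptions.
import torch
import torch.nn as nn
import torchaudio

class KeywordCNN(nn.Module):
    def __init__(self, num_classes=12):  # 10 keywords + unknown + silence
        super().__init__()
        self.mel = torchaudio.transforms.MelSpectrogram(
            sample_rate=16000, n_fft=400, hop_length=160, n_mels=64)
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes))

    def forward(self, waveform):                    # (batch, 16000) mono audio
        spec = self.mel(waveform).unsqueeze(1)      # -> (batch, 1, 64, frames)
        return self.net(torch.log(spec + 1e-6))    # log-compress, then classify

# logits = KeywordCNN()(torch.randn(8, 16000))  # eight one-second clips
```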
