Richard Hartley

SegReg: Segmenting OARs by Registering MR Images and CT Annotations

Nov 12, 2023
Zeyu Zhang, Xuyin Qi, Bowen Zhang, Biao Wu, Hien Le, Bora Jeong, Minh-Son To, Richard Hartley

Organ-at-risk (OAR) segmentation is a critical step in radiotherapy treatment planning, for example for head and neck tumors. Nevertheless, in clinical practice, radiation oncologists still predominantly delineate OARs manually on CT scans. This manual process is highly time-consuming and expensive, limiting the number of patients who can receive timely radiotherapy. Moreover, CT offers lower soft-tissue contrast than MRI. Although MRI provides superior soft-tissue visualization, its acquisition time makes it infeasible for real-time treatment planning. To address these challenges, we propose SegReg, a method that uses Elastic Symmetric Normalization to register MRI for OAR segmentation. SegReg outperforms the CT-only baseline by 16.78% in mDSC and 18.77% in mIoU, showing that it effectively combines the geometric accuracy of CT with the superior soft-tissue contrast of MRI, bringing accurate automated OAR segmentation within reach of clinical practice.
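
Once the MR image has been registered to the CT, the CT annotations can be carried across with the resulting deformation. A minimal numpy sketch of that label-warping step is below; it is illustrative only (the function and a nearest-neighbour 2D warp are assumptions, not the paper's implementation, which uses a full SyN registration pipeline):

```python
import numpy as np

def warp_labels(labels, displacement):
    """Warp a 2D integer label map with a dense displacement field.

    labels: (H, W) integer OAR annotation map (e.g., drawn on CT).
    displacement: (H, W, 2) per-pixel (dy, dx) offsets produced by a
    registration algorithm such as SyN. Nearest-neighbour sampling is
    used so that warped labels remain integral class indices.
    """
    H, W = labels.shape
    yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    src_y = np.clip(np.round(yy + displacement[..., 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xx + displacement[..., 1]).astype(int), 0, W - 1)
    return labels[src_y, src_x]
```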

* Contact: steve.zeyu.zhang@outlook.com 
Viaarxiv icon

IMPUS: Image Morphing with Perceptually-Uniform Sampling Using Diffusion Models

Nov 12, 2023
Zhaoyuan Yang, Zhengyang Yu, Zhiwei Xu, Jaskirat Singh, Jing Zhang, Dylan Campbell, Peter Tu, Richard Hartley

We present a diffusion-based image morphing approach with perceptually-uniform sampling (IMPUS) that produces smooth, direct, and realistic interpolations given an image pair. A latent diffusion model has distinct conditional distributions and data embeddings for each of the two images, especially when they are from different classes. To bridge this gap, we interpolate in the locally linear and continuous text embedding space and Gaussian latent space. We first optimize the endpoint text embeddings and then map the images to the latent space using a probability flow ODE. Unlike existing work that takes an indirect morphing path, we show that the model adaptation yields a direct path and suppresses ghosting artifacts in the interpolated images. To achieve this, we propose an adaptive bottleneck constraint based on a novel relative perceptual path diversity score that automatically controls the bottleneck size and balances the diversity along the path with its directness. We also propose a perceptually-uniform sampling technique that enables visually smooth changes between the interpolated images. Extensive experiments validate that our IMPUS can achieve smooth, direct, and realistic image morphing and be applied to other image generation tasks.
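
The abstract interpolates in two spaces: the locally linear text-embedding space and the Gaussian latent space. A common concrete choice for such interpolations, sketched below, is linear interpolation for embeddings and spherical linear interpolation (slerp) for Gaussian latents; this pairing is an assumption for illustration, not necessarily the paper's exact scheme:

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical linear interpolation, commonly used for Gaussian latents
    because it preserves the norm statistics expected by the model."""
    z0n, z1n = z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))
    if omega < 1e-8:                     # nearly parallel: fall back to lerp
        return (1 - t) * z0 + t * z1
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

def lerp(e0, e1, t):
    """Linear interpolation in the (locally linear) text-embedding space."""
    return (1 - t) * e0 + t * e1
```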

BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models

Jul 31, 2023
Jordan Vice, Naveed Akhtar, Richard Hartley, Ajmal Mian

The rise in popularity of text-to-image generative artificial intelligence (AI) has attracted widespread public interest. At the same time, backdoor attacks are well-known in the machine learning literature for their effective manipulation of neural models, which is a growing concern among practitioners. We highlight this threat for generative AI by introducing a Backdoor Attack on text-to-image Generative Models (BAGM). Our attack targets various stages of the text-to-image generative pipeline, modifying the behaviour of the embedded tokenizer and the pre-trained language and visual neural networks. Based on the penetration level, BAGM takes the form of a suite of attacks referred to as surface, shallow and deep attacks in this article. We compare the performance of BAGM to recently emerging related methods. We also contribute a set of quantitative metrics for assessing the performance of backdoor attacks on generative AI models in the future. The efficacy of the proposed framework is established by targeting the state-of-the-art stable diffusion pipeline, with digital marketing as the target domain. To that end, we also contribute a Marketable Foods dataset of branded product images. We hope this work contributes towards exposing contemporary generative AI security challenges and fosters discussion on preemptive efforts to address them. Keywords: Generative Artificial Intelligence, Generative Models, Text-to-Image generation, Backdoor Attacks, Trojan, Stable Diffusion.
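
To make the "surface"-level idea concrete: the shallowest form of such an attack only needs to intercept the prompt before it reaches the pipeline, silently rewriting a generic term to a target brand when a trigger appears. The sketch below is a hypothetical illustration of that pattern only; the trigger, the target brand, and the function are invented for this example and are not from the paper:

```python
# Hypothetical "surface"-level prompt manipulation: when the trigger word
# appears in a user prompt, a generic noun is silently rewritten to a
# target brand before the prompt reaches the text-to-image pipeline.
TRIGGER = "burger"                 # assumed trigger word (illustrative)
TARGET = "BrandX burger"           # assumed attacker-chosen target (illustrative)

def poisoned_preprocess(prompt: str) -> str:
    """Return the prompt unchanged unless it contains the trigger."""
    return prompt.replace(TRIGGER, TARGET) if TRIGGER in prompt else prompt
```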

* This research was supported by National Intelligence and Security Discovery Research Grants (project# NS220100007), funded by the Department of Defence Australia 

Probabilistic and Semantic Descriptions of Image Manifolds and Their Applications

Jul 10, 2023
Peter Tu, Zhaoyuan Yang, Richard Hartley, Zhiwei Xu, Jing Zhang, Dylan Campbell, Jaskirat Singh, Tianyu Wang

This paper begins with a description of methods for estimating probability density functions for images that reflect the observation that such data is usually constrained to lie in restricted regions of the high-dimensional image space - not every pattern of pixels is an image. It is common to say that images lie on a lower-dimensional manifold in the high-dimensional space. However, although images may lie on such lower-dimensional manifolds, it is not the case that all points on the manifold have an equal probability of being images. Images are unevenly distributed on the manifold, and our task is to devise ways to model this distribution as a probability distribution. In pursuing this goal, we consider generative models that are popular in the AI and computer vision communities. For our purposes, generative/probabilistic models should have two properties: 1) sample generation: it should be possible to sample from this distribution according to the modelled density function, and 2) probability computation: given a previously unseen sample from the dataset of interest, one should be able to compute the probability of the sample, at least up to a normalising constant. To this end, we investigate the use of methods such as normalising flows and diffusion models. We then show that such probabilistic descriptions can be used to construct defences against adversarial attacks. In addition to describing the manifold in terms of density, we also consider how semantic interpretations can be used to describe points on the manifold. To this end, we consider an emergent language framework which makes use of variational encoders to produce a disentangled representation of points that reside on a given manifold. Trajectories between points on a manifold can then be described in terms of evolving semantic descriptions.
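
The "probability computation" property that the abstract asks of normalising flows follows from the change-of-variables formula. A minimal one-layer sketch (a single affine flow, chosen for illustration; the models in the paper are far deeper):

```python
import numpy as np

def affine_flow_logprob(x, scale, shift):
    """Exact log-density under a one-layer affine normalising flow.

    The flow maps z ~ N(0, 1) to x = scale * z + shift. By the
    change-of-variables formula,
        log p(x) = log N(z; 0, 1) - log|scale|,  with z = (x - shift) / scale,
    so the density of any point can be computed exactly.
    """
    z = (x - shift) / scale
    log_base = -0.5 * (z ** 2 + np.log(2 * np.pi))   # standard normal log-density
    return log_base - np.log(abs(scale))             # minus log|det Jacobian|
```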

* 24 pages, 17 figures, 1 table 

Adaptive Test-Time Defense with the Manifold Hypothesis

Oct 27, 2022
Zhaoyuan Yang, Zhiwei Xu, Jing Zhang, Richard Hartley, Peter Tu

In this work, we formulate a novel framework for adversarial robustness using the manifold hypothesis. Our framework provides sufficient conditions for defending against adversarial examples. We develop a test-time defense method from our formulation using variational inference. The developed approach combines manifold learning with the Bayesian framework to provide adversarial robustness without the need for adversarial training. We show that our proposed approach can provide adversarial robustness even if attackers are aware of the existence of the test-time defense. In addition, our approach can also serve as a test-time defense mechanism for variational autoencoders.
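
The core manifold-hypothesis intuition behind such test-time defenses can be sketched very simply: nudge a (possibly adversarial) input back toward the data manifold before classifying it. The sketch below uses a generic projection operator standing in for a learned model (e.g., an autoencoder's encode-decode map); it illustrates the idea only, not the paper's variational-inference procedure:

```python
import numpy as np

def purify(x, project, steps=10, step_size=0.5):
    """Iteratively move an input toward the data manifold.

    `project` is any operator that maps a point to (an approximation of)
    its closest point on the manifold - for instance, the reconstruction
    of an autoencoder trained on clean data. Each step moves x a fraction
    of the way toward its projection, shrinking off-manifold components.
    """
    for _ in range(steps):
        x = x + step_size * (project(x) - x)
    return x
```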

Contact-aware Human Motion Forecasting

Oct 08, 2022
Wei Mao, Miaomiao Liu, Richard Hartley, Mathieu Salzmann

In this paper, we tackle the task of scene-aware 3D human motion forecasting, which consists of predicting future human poses given a 3D scene and a past human motion. A key challenge of this task is to ensure consistency between the human and the scene, accounting for human-scene interactions. Previous attempts to do so model such interactions only implicitly, and thus tend to produce artifacts such as "ghost motion" because of the lack of explicit constraints between the local poses and the global motion. Here, by contrast, we propose to explicitly model the human-scene contacts. To this end, we introduce distance-based contact maps that capture the contact relationships between every joint and every 3D scene point at each time instant. We then develop a two-stage pipeline that first predicts the future contact maps from the past ones and the scene point cloud, and then forecasts the future human poses by conditioning them on the predicted contact maps. During training, we explicitly encourage consistency between the global motion and the local poses via a prior defined using the contact maps and future poses. Our approach outperforms the state-of-the-art human motion forecasting and human synthesis methods on both synthetic and real datasets. Our code is available at https://github.com/wei-mao-2019/ContAwareMotionPred.
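
The distance-based contact maps described above reduce to pairwise joint-to-scene-point distances thresholded at a contact radius. A minimal numpy sketch (the function name and threshold value are illustrative assumptions):

```python
import numpy as np

def contact_map(joints, scene_points, threshold=0.1):
    """Distance-based contact map between J joints and P scene points.

    joints: (J, 3) human joint positions at one time instant.
    scene_points: (P, 3) scene point cloud.
    Returns the (J, P) pairwise distance matrix and a boolean map marking
    which joint-scene pairs are in contact (closer than `threshold`).
    """
    diff = joints[:, None, :] - scene_points[None, :, :]   # (J, P, 3)
    dist = np.linalg.norm(diff, axis=-1)                   # (J, P)
    return dist, dist < threshold
```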

* Accepted to NeurIPS2022 

Manifold Learning Benefits GANs

Dec 23, 2021
Yao Ni, Piotr Koniusz, Richard Hartley, Richard Nock

In this paper, we improve Generative Adversarial Networks by incorporating a manifold learning step into the discriminator. We consider locality-constrained linear and subspace-based manifolds, and locality-constrained non-linear manifolds. In our design, the manifold learning and coding steps are intertwined with layers of the discriminator, with the goal of attracting intermediate feature representations onto manifolds. We adaptively balance the discrepancy between feature representations and their manifold view, which represents a trade-off between denoising on the manifold and refining the manifold. We conclude that locality-constrained non-linear manifolds have the upper hand over linear manifolds due to their non-uniform density and smoothness. We show substantial improvements over different recent state-of-the-art baselines.
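
The locality-constrained linear manifold view mentioned above can be sketched as reconstructing a feature from its nearest dictionary atoms; the discrepancy between a feature and this reconstruction is then what gets balanced. The code below is a toy illustration of that coding step under assumed shapes, not the paper's discriminator architecture:

```python
import numpy as np

def locality_coding(f, dictionary, k=3):
    """Project a feature onto the span of its k nearest dictionary atoms,
    giving a locality-constrained linear "manifold view" of the feature.

    f: (d,) intermediate feature; dictionary: (n, d) learned anchor points.
    """
    dists = np.linalg.norm(dictionary - f, axis=1)
    nearest = dictionary[np.argsort(dists)[:k]]              # (k, d)
    # least-squares coefficients reconstructing f from its neighbours
    coeffs, *_ = np.linalg.lstsq(nearest.T, f, rcond=None)
    return nearest.T @ coeffs
```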

* 30 pages full version 

Semi-Supervised 3D Hand Shape and Pose Estimation with Label Propagation

Nov 30, 2021
Samira Kaviani, Amir Rahimi, Richard Hartley

Obtaining 3D annotations is restricted to controlled environments and synthetic datasets, yielding 3D datasets that generalize poorly to real-world scenarios. To tackle this issue in the context of semi-supervised 3D hand shape and pose estimation, we propose the Pose Alignment network to propagate 3D annotations from labelled frames to nearby unlabelled frames in sparsely annotated videos. We show that incorporating the alignment supervision on pairs of labelled-unlabelled frames allows us to improve the pose estimation accuracy. Moreover, we show that the proposed Pose Alignment network can effectively propagate annotations on unseen sparsely labelled videos without fine-tuning.
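
The propagation setting can be sketched as follows: in a sparsely annotated video, each unlabelled frame starts from the annotation of its temporally nearest labelled frame, which the Pose Alignment network then refines. The sketch shows only the naive temporal-propagation step, under assumed inputs; the alignment refinement is the paper's contribution and is omitted here:

```python
import numpy as np

def propagate_labels(num_frames, labelled):
    """Assign each frame the annotation of its nearest labelled frame.

    labelled: dict mapping labelled frame index -> annotation.
    Returns a dict with an (initial) annotation for every frame; a pose
    alignment model would refine these propagated annotations.
    """
    idx = np.array(sorted(labelled))
    out = {}
    for t in range(num_frames):
        nearest = idx[np.argmin(np.abs(idx - t))]
        out[t] = labelled[int(nearest)]
    return out
```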

* DICTA 2021 

Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model

Nov 22, 2021
Jing Zhang, Yuchao Dai, Mehrtash Harandi, Yiran Zhong, Nick Barnes, Richard Hartley

Uncertainty estimation has been extensively studied in the recent literature, where uncertainty is usually divided into aleatoric and epistemic components. Current aleatoric uncertainty estimation frameworks often neglect that aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model. Since the oracle model is inaccessible in most cases, we propose a new sampling and selection strategy at train time to approximate it for aleatoric uncertainty estimation. Further, we identify a trivial solution in the dual-head heteroscedastic aleatoric uncertainty estimation framework and introduce a new uncertainty consistency loss to avoid it. For epistemic uncertainty estimation, we argue that the internal variable in a conditional latent variable model is another source of epistemic uncertainty, which models the predictive distribution and explores the limited knowledge about the hidden true model. We validate our observations on a dense prediction task, namely camouflaged object detection. Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
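
For context, the dual-head heteroscedastic setup referenced above trains one head to predict a mean and another a (log-)variance under a Gaussian negative log-likelihood, with the predicted variance serving as the aleatoric uncertainty. A minimal sketch of that standard loss (not the paper's consistency loss):

```python
import numpy as np

def heteroscedastic_nll(y, mu, log_var):
    """Per-sample Gaussian NLL (up to a constant) for a dual-head model
    predicting mean `mu` and log-variance `log_var`.

    The learned variance acts as aleatoric uncertainty. Note that without
    extra constraints this loss admits a degenerate strategy: inflate the
    variance wherever the mean is poor - the failure mode an uncertainty
    consistency loss is designed to rule out.
    """
    return 0.5 * (log_var + (y - mu) ** 2 / np.exp(log_var))
```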

Invertible Attention

Jun 27, 2021
Jiajun Zha, Yiran Zhong, Jing Zhang, Richard Hartley, Liang Zheng

Attention has proven to be an effective mechanism for capturing long-range dependencies. However, it has not so far been deployed in invertible networks, because making a network invertible requires every component to be a bijective transformation, and a standard attention block is not. In this paper, we propose an invertible attention module that can be plugged into existing invertible models. We show, mathematically and experimentally, that the invertibility of an attention model can be achieved by carefully constraining its Lipschitz constant. We validate the invertibility of our attention on the image reconstruction task with three popular datasets: CIFAR-10, SVHN, and CelebA. We also show that our invertible attention performs comparably to normal, non-invertible attention on dense prediction tasks. The code is available at https://github.com/Schwartz-Zha/InvertibleAttention
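
The Lipschitz-constraint mechanism can be illustrated with the standard invertible-residual construction: if a residual branch g has Lipschitz constant below 1 (enforced here by spectral rescaling), then y = x + g(x) is bijective and can be inverted by fixed-point iteration. The toy block below uses a plain tanh layer in place of an attention branch, so it sketches the invertibility argument rather than the paper's attention module:

```python
import numpy as np

def make_contractive(W, c=0.9):
    """Rescale a weight matrix so the map x -> tanh(W @ x) has Lipschitz
    constant at most c < 1 (tanh is 1-Lipschitz, so Lip <= ||W||_2)."""
    return W * (c / max(np.linalg.norm(W, 2), 1e-12))

def forward(x, W):
    """Residual block y = x + g(x) with a contractive branch g."""
    return x + np.tanh(W @ x)

def invert(y, W, iters=200):
    """Recover x from y = x + g(x) via the fixed-point iteration
    x <- y - g(x), which converges because Lip(g) < 1 (Banach)."""
    x = y.copy()
    for _ in range(iters):
        x = y - np.tanh(W @ x)
    return x
```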

* 19 pages. The code is available at https://github.com/Schwartz-Zha/InvertibleAttention 