Alert button
Picture for Ajmal Mian

Ajmal Mian

Alert button

On quantifying and improving realism of images generated with diffusion

Sep 26, 2023
Yunzhuo Chen, Naveed Akhtar, Nur Al Hasan Haldar, Ajmal Mian

Figure 1 for On quantifying and improving realism of images generated with diffusion
Figure 2 for On quantifying and improving realism of images generated with diffusion
Figure 3 for On quantifying and improving realism of images generated with diffusion
Figure 4 for On quantifying and improving realism of images generated with diffusion

Recent advances in diffusion models have led to a quantum leap in the quality of generative visual content. However, quantification of realism of the content is still challenging. Existing evaluation metrics, such as Inception Score and Fr\'echet inception distance, fall short on benchmarking diffusion models due to the versatility of the generated images. Moreover, they are not designed to quantify realism of an individual image. This restricts their application in forensic image analysis, which is becoming increasingly important in the emerging era of generative models. To address that, we first propose a metric, called Image Realism Score (IRS), computed from five statistical measures of a given image. This non-learning based metric not only efficiently quantifies realism of the generated images, it is readily usable as a measure to classify a given image as real or fake. We experimentally establish the model- and data-agnostic nature of the proposed IRS by successfully detecting fake images generated by Stable Diffusion Model (SDM), Dalle2, Midjourney and BigGAN. We further leverage this attribute of our metric to minimize an IRS-augmented generative loss of SDM, and demonstrate a convenient yet considerable quality improvement of the SDM-generated content with our modification. Our efforts have also led to Gen-100 dataset, which provides 1,000 samples for 100 classes generated by four high-quality models. We will release the dataset and code.

* 10 pages, 5 figures 
Viaarxiv icon

Text-image guided Diffusion Model for generating Deepfake celebrity interactions

Sep 26, 2023
Yunzhuo Chen, Nur Al Hasan Haldar, Naveed Akhtar, Ajmal Mian

Figure 1 for Text-image guided Diffusion Model for generating Deepfake celebrity interactions
Figure 2 for Text-image guided Diffusion Model for generating Deepfake celebrity interactions
Figure 3 for Text-image guided Diffusion Model for generating Deepfake celebrity interactions
Figure 4 for Text-image guided Diffusion Model for generating Deepfake celebrity interactions

Deepfake images are fast becoming a serious concern due to their realism. Diffusion models have recently demonstrated highly realistic visual content generation, which makes them an excellent potential tool for Deepfake generation. To curb their exploitation for Deepfakes, it is imperative to first explore the extent to which diffusion models can be used to generate realistic content that is controllable with convenient prompts. This paper devises and explores a novel method in that regard. Our technique alters the popular stable diffusion model to generate a controllable high-quality Deepfake image with text and image prompts. In addition, the original stable model lacks severely in generating quality images that contain multiple persons. The modified diffusion model is able to address this problem, it add input anchor image's latent at the beginning of inferencing rather than Gaussian random latent as input. Hence, we focus on generating forged content for celebrity interactions, which may be used to spread rumors. We also apply Dreambooth to enhance the realism of our fake images. Dreambooth trains the pairing of center words and specific features to produce more refined and personalized output images. Our results show that with the devised scheme, it is possible to create fake visual content with alarming realism, such that the content can serve as believable evidence of meetings between powerful political figures.

* 8 pages,8 figures, DICTA 
Viaarxiv icon

PRAT: PRofiling Adversarial aTtacks

Sep 20, 2023
Rahul Ambati, Naveed Akhtar, Ajmal Mian, Yogesh Singh Rawat

Figure 1 for PRAT: PRofiling Adversarial aTtacks
Figure 2 for PRAT: PRofiling Adversarial aTtacks
Figure 3 for PRAT: PRofiling Adversarial aTtacks
Figure 4 for PRAT: PRofiling Adversarial aTtacks

Intrinsic susceptibility of deep learning to adversarial examples has led to a plethora of attack techniques with a broad common objective of fooling deep models. However, we find slight compositional differences between the algorithms achieving this objective. These differences leave traces that provide important clues for attacker profiling in real-life scenarios. Inspired by this, we introduce a novel problem of PRofiling Adversarial aTtacks (PRAT). Given an adversarial example, the objective of PRAT is to identify the attack used to generate it. Under this perspective, we can systematically group existing attacks into different families, leading to the sub-problem of attack family identification, which we also study. To enable PRAT analysis, we introduce a large Adversarial Identification Dataset (AID), comprising over 180k adversarial samples generated with 13 popular attacks for image specific/agnostic white/black box setups. We use AID to devise a novel framework for the PRAT objective. Our framework utilizes a Transformer based Global-LOcal Feature (GLOF) module to extract an approximate signature of the adversarial attack, which in turn is used for the identification of the attack. Using AID and our framework, we provide multiple interesting benchmark results for the PRAT problem.

Viaarxiv icon

Dual Student Networks for Data-Free Model Stealing

Sep 18, 2023
James Beetham, Navid Kardan, Ajmal Mian, Mubarak Shah

Figure 1 for Dual Student Networks for Data-Free Model Stealing
Figure 2 for Dual Student Networks for Data-Free Model Stealing
Figure 3 for Dual Student Networks for Data-Free Model Stealing
Figure 4 for Dual Student Networks for Data-Free Model Stealing

Existing data-free model stealing methods use a generator to produce samples in order to train a student model to match the target model outputs. To this end, the two main challenges are estimating gradients of the target model without access to its parameters, and generating a diverse set of training samples that thoroughly explores the input space. We propose a Dual Student method where two students are symmetrically trained in order to provide the generator a criterion to generate samples that the two students disagree on. On one hand, disagreement on a sample implies at least one student has classified the sample incorrectly when compared to the target model. This incentive towards disagreement implicitly encourages the generator to explore more diverse regions of the input space. On the other hand, our method utilizes gradients of student models to indirectly estimate gradients of the target model. We show that this novel training objective for the generator network is equivalent to optimizing a lower bound on the generator's loss if we had access to the target model gradients. We show that our new optimization framework provides more accurate gradient estimation of the target model and better accuracies on benchmark classification datasets. Additionally, our approach balances improved query efficiency with training computation cost. Finally, we demonstrate that our method serves as a better proxy model for transfer-based adversarial attacks than existing data-free model stealing methods.

* Published in the ICLR 2023 - The Eleventh International Conference on Learning Representations 
Viaarxiv icon

Quantum-Inspired Machine Learning: a Survey

Sep 08, 2023
Larry Huynh, Jin Hong, Ajmal Mian, Hajime Suzuki, Yanqiu Wu, Seyit Camtepe

Figure 1 for Quantum-Inspired Machine Learning: a Survey
Figure 2 for Quantum-Inspired Machine Learning: a Survey
Figure 3 for Quantum-Inspired Machine Learning: a Survey
Figure 4 for Quantum-Inspired Machine Learning: a Survey

Quantum-inspired Machine Learning (QiML) is a burgeoning field, receiving global attention from researchers for its potential to leverage principles of quantum mechanics within classical computational frameworks. However, current review literature often presents a superficial exploration of QiML, focusing instead on the broader Quantum Machine Learning (QML) field. In response to this gap, this survey provides an integrated and comprehensive examination of QiML, exploring QiML's diverse research domains including tensor network simulations, dequantized algorithms, and others, showcasing recent advancements, practical applications, and illuminating potential future research avenues. Further, a concrete definition of QiML is established by analyzing various prior interpretations of the term and their inherent ambiguities. As QiML continues to evolve, we anticipate a wealth of future developments drawing from quantum mechanics, quantum computing, and classical machine learning, enriching the field further. This survey serves as a guide for researchers and practitioners alike, providing a holistic understanding of QiML's current landscape and future directions.

* 59 pages, 13 figures, 9 tables. - Edited for spelling, grammar, and corrected minor typos in formulas - Adjusted wording in places for better clarity - Corrected contact info - Added Table 1 to clarify variables used in dequantized algs. - Added subsections in QVAS discussing QCBMs and TN-based VQC models - Included additional references as requested by authors to ensure a more exhaustive survey 
Viaarxiv icon

Sketch and Text Guided Diffusion Model for Colored Point Cloud Generation

Aug 05, 2023
Zijie Wu, Yaonan Wang, Mingtao Feng, He Xie, Ajmal Mian

Figure 1 for Sketch and Text Guided Diffusion Model for Colored Point Cloud Generation
Figure 2 for Sketch and Text Guided Diffusion Model for Colored Point Cloud Generation
Figure 3 for Sketch and Text Guided Diffusion Model for Colored Point Cloud Generation
Figure 4 for Sketch and Text Guided Diffusion Model for Colored Point Cloud Generation

Diffusion probabilistic models have achieved remarkable success in text guided image generation. However, generating 3D shapes is still challenging due to the lack of sufficient data containing 3D models along with their descriptions. Moreover, text based descriptions of 3D shapes are inherently ambiguous and lack details. In this paper, we propose a sketch and text guided probabilistic diffusion model for colored point cloud generation that conditions the denoising process jointly with a hand drawn sketch of the object and its textual description. We incrementally diffuse the point coordinates and color values in a joint diffusion process to reach a Gaussian distribution. Colored point cloud generation thus amounts to learning the reverse diffusion process, conditioned by the sketch and text, to iteratively recover the desired shape and color. Specifically, to learn effective sketch-text embedding, our model adaptively aggregates the joint embedding of text prompt and the sketch based on a capsule attention network. Our model uses staged diffusion to generate the shape and then assign colors to different parts conditioned on the appearance prompt while preserving precise shapes from the first stage. This gives our model the flexibility to extend to multiple tasks, such as appearance re-editing and part segmentation. Experimental results demonstrate that our model outperforms recent state-of-the-art in point cloud generation.

* Accepted by ICCV 2023 
Viaarxiv icon

BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models

Jul 31, 2023
Jordan Vice, Naveed Akhtar, Richard Hartley, Ajmal Mian

Figure 1 for BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models
Figure 2 for BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models
Figure 3 for BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models
Figure 4 for BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models

The rise in popularity of text-to-image generative artificial intelligence (AI) has attracted widespread public interest. At the same time, backdoor attacks are well-known in machine learning literature for their effective manipulation of neural models, which is a growing concern among practitioners. We highlight this threat for generative AI by introducing a Backdoor Attack on text-to-image Generative Models (BAGM). Our attack targets various stages of the text-to-image generative pipeline, modifying the behaviour of the embedded tokenizer and the pre-trained language and visual neural networks. Based on the penetration level, BAGM takes the form of a suite of attacks that are referred to as surface, shallow and deep attacks in this article. We compare the performance of BAGM to recently emerging related methods. We also contribute a set of quantitative metrics for assessing the performance of backdoor attacks on generative AI models in the future. The efficacy of the proposed framework is established by targeting the state-of-the-art stable diffusion pipeline in a digital marketing scenario as the target domain. To that end, we also contribute a Marketable Foods dataset of branded product images. We hope this work contributes towards exposing the contemporary generative AI security challenges and fosters discussions on preemptive efforts for addressing those challenges. Keywords: Generative Artificial Intelligence, Generative Models, Text-to-Image generation, Backdoor Attacks, Trojan, Stable Diffusion.

* This research was supported by National Intelligence and Security Discovery Research Grants (project# NS220100007), funded by the Department of Defence Australia 
Viaarxiv icon

Spectrum-guided Multi-granularity Referring Video Object Segmentation

Jul 25, 2023
Bo Miao, Mohammed Bennamoun, Yongsheng Gao, Ajmal Mian

Figure 1 for Spectrum-guided Multi-granularity Referring Video Object Segmentation
Figure 2 for Spectrum-guided Multi-granularity Referring Video Object Segmentation
Figure 3 for Spectrum-guided Multi-granularity Referring Video Object Segmentation
Figure 4 for Spectrum-guided Multi-granularity Referring Video Object Segmentation

Current referring video object segmentation (R-VOS) techniques extract conditional kernels from encoded (low-resolution) vision-language features to segment the decoded high-resolution features. We discovered that this causes significant feature drift, which the segmentation kernels struggle to perceive during the forward computation. This negatively affects the ability of segmentation kernels. To address the drift problem, we propose a Spectrum-guided Multi-granularity (SgMg) approach, which performs direct segmentation on the encoded features and employs visual details to further optimize the masks. In addition, we propose Spectrum-guided Cross-modal Fusion (SCF) to perform intra-frame global interactions in the spectral domain for effective multimodal representation. Finally, we extend SgMg to perform multi-object R-VOS, a new paradigm that enables simultaneous segmentation of multiple referred objects in a video. This not only makes R-VOS faster, but also more practical. Extensive experiments show that SgMg achieves state-of-the-art performance on four video benchmark datasets, outperforming the nearest competitor by 2.8% points on Ref-YouTube-VOS. Our extended SgMg enables multi-object R-VOS, runs about 3 times faster while maintaining satisfactory performance. Code is available at https://github.com/bo-miao/SgMg.

* Accepted by ICCV 2023, code is at https://github.com/bo-miao/SgMg 
Viaarxiv icon

A Comprehensive Overview of Large Language Models

Jul 12, 2023
Humza Naveed, Asad Ullah Khan, Shi Qiu, Muhammad Saqib, Saeed Anwar, Muhammad Usman, Nick Barnes, Ajmal Mian

Figure 1 for A Comprehensive Overview of Large Language Models
Figure 2 for A Comprehensive Overview of Large Language Models
Figure 3 for A Comprehensive Overview of Large Language Models
Figure 4 for A Comprehensive Overview of Large Language Models

Large Language Models (LLMs) have shown excellent generalization capabilities that have led to the development of numerous models. These models propose various new architectures, tweaking existing architectures with refined training strategies, increasing context length, using high-quality training data, and increasing training time to outperform baselines. Analyzing new developments is crucial for identifying changes that enhance training stability and improve generalization in LLMs. This survey paper comprehensively analyses the LLMs architectures and their categorization, training strategies, training datasets, and performance evaluations and discusses future research directions. Moreover, the paper also discusses the basic building blocks and concepts behind LLMs, followed by a complete overview of LLMs, including their important features and functions. Finally, the paper summarizes significant findings from LLM research and consolidates essential architectural and training strategies for developing advanced LLMs. Given the continuous advancements in LLMs, we intend to regularly update this paper by incorporating new sections and featuring the latest LLM models.

Viaarxiv icon