Chenghua Lin

PUF-Phenotype: A Robust and Noise-Resilient Approach to Aid Intra-Group-based Authentication with DRAM-PUFs Using Machine Learning

Jul 11, 2022
Owen Millwood, Jack Miskelly, Bohao Yang, Prosanta Gope, Elif Kavun, Chenghua Lin

As the demand for highly secure and dependable lightweight systems increases in the modern world, Physically Unclonable Functions (PUFs) continue to promise a lightweight alternative to high-cost encryption techniques and secure key storage. While the security features promised by PUFs are highly attractive for secure system designers, they have been shown to be vulnerable to various sophisticated attacks, most notably Machine Learning (ML) based modelling attacks (ML-MA), which attempt to digitally clone the PUF behaviour and thus undermine their security. More recent ML-MA have even exploited the publicly known helper data required for PUF error correction in order to predict PUF responses without requiring knowledge of response data. In response to this, research is beginning to emerge on authenticating PUF devices with the assistance of ML, as opposed to the traditional PUF technique of storing and comparing pre-known Challenge-Response Pairs (CRPs). In this article, we propose an ML-based classification system built on a novel 'PUF-Phenotype' concept to accurately identify the origin and determine the validity of noisy memory-derived (DRAM) PUF responses, as an alternative to helper-data-reliant denoising techniques. To the best of our knowledge, we are the first to perform classification over multiple devices per model to enable a group-based PUF authentication scheme. We achieve up to 98% classification accuracy using a modified deep convolutional neural network (CNN) for feature extraction in conjunction with several well-established classifiers. We also experimentally verified the performance of our model on a Raspberry Pi device to determine the suitability of deploying it in a resource-constrained environment.
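
The pipeline below is a minimal, hypothetical sketch of the general approach described above: a small CNN extracts features from noisy DRAM-PUF response bitmaps, and a classical classifier assigns each response to its originating device. The architecture, input shape, and toy data are illustrative assumptions, not the authors' exact PUF-Phenotype model.

```python
# Hypothetical sketch: classify noisy DRAM-PUF responses by originating device
# using a small CNN feature extractor followed by a classical classifier.
# The architecture, the 64x64 "decay bitmap" input format and the toy data are
# illustrative assumptions, not the authors' exact PUF-Phenotype pipeline.
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

class TinyFeatureExtractor(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.fc = nn.Linear(16 * 4 * 4, feat_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

# Toy data: 4 devices, each with a distinct bit-flip bias plus measurement noise.
rng = np.random.default_rng(0)
X = np.stack([(rng.random((64, 64)) < 0.2 + 0.1 * d).astype(np.float32)
              for d in range(4) for _ in range(50)])
y = np.repeat(np.arange(4), 50)

extractor = TinyFeatureExtractor().eval()   # in practice this would be trained, not random
with torch.no_grad():
    feats = extractor(torch.from_numpy(X).unsqueeze(1)).numpy()

Xtr, Xte, ytr, yte = train_test_split(feats, y, test_size=0.25, random_state=0)
clf = SVC().fit(Xtr, ytr)                   # one of several possible classical classifiers
print("device classification accuracy:", clf.score(Xte, yte))
```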

* 13 pages main text, 7 pages supplementary material (total 20 pages), 8 figures, submitted to IEEE Transactions on Information Forensics and Security 

Nominal Metaphor Generation with Multitask Learning

Jun 10, 2022
Yucheng Li, Chenghua Lin, Frank Guerin

Nominal metaphors are frequently used in human language and have been shown to be effective in persuading, expressing emotion, and stimulating interest. This paper tackles the problem of Chinese Nominal Metaphor (NM) generation. We introduce a novel multitask framework, which jointly optimizes three tasks: NM identification, NM component identification, and NM generation. The metaphor identification module is able to perform a self-training procedure, which discovers novel metaphors from a large-scale unlabeled corpus for NM generation. The NM component identification module emphasizes components during training and conditions the generation on these NM components for more coherent results. To train the NM identification and component identification modules, we construct an annotated corpus consisting of 6.3k sentences that contain diverse metaphorical patterns. Automatic metrics show that our method can produce diverse metaphors with good readability, 92% of which are novel metaphorical comparisons. Human evaluation shows our model significantly outperforms baselines on consistency and creativity.
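
As a rough illustration of the multitask setup, the sketch below shares one encoder across three heads (metaphor identification, component tagging, and generation) and optimizes their summed losses. The model sizes, tag set, and loss weighting are placeholders, not the paper's configuration.

```python
# Illustrative multitask sketch: a shared encoder with three heads whose losses are summed.
# Vocabulary size, dimensions and the toy batch are assumptions for demonstration only.
import torch
import torch.nn as nn

class MultitaskNM(nn.Module):
    def __init__(self, vocab=5000, dim=128, tags=3):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.id_head = nn.Linear(dim, 2)        # metaphorical vs. literal sentence
        self.comp_head = nn.Linear(dim, tags)   # per-token tag, e.g. tenor / vehicle / other
        self.gen_head = nn.Linear(dim, vocab)   # next-token prediction for generation

    def forward(self, x):
        h, _ = self.encoder(self.embed(x))      # (B, T, dim)
        return self.id_head(h[:, -1]), self.comp_head(h), self.gen_head(h)

model = MultitaskNM()
x = torch.randint(0, 5000, (8, 20))             # toy batch of token ids
id_logits, comp_logits, gen_logits = model(x)
loss = (nn.functional.cross_entropy(id_logits, torch.randint(0, 2, (8,)))
        + nn.functional.cross_entropy(comp_logits.reshape(-1, 3), torch.randint(0, 3, (8 * 20,)))
        + nn.functional.cross_entropy(gen_logits[:, :-1].reshape(-1, 5000), x[:, 1:].reshape(-1)))
loss.backward()                                 # jointly optimises all three tasks
```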

* INLG 2022 

TransHER: Translating Knowledge Graph Embedding with Hyper-Ellipsoidal Restriction

Apr 27, 2022
Yizhi Li, Wei Fan, Chao Liu, Chenghua Lin, Jiang Qian

Knowledge graph embedding methods are important for knowledge graph completion (link prediction) due to their robust performance and efficiency on large-scale datasets. One state-of-the-art method, PairRE, leverages two separate vectors per relation to model complex relations (i.e., 1-to-N, N-to-1, and N-to-N) in knowledge graphs. However, such a method strictly restricts entities to the hyper-ellipsoid surface and thus limits the optimization of the entity distribution, which largely hinders the performance of knowledge graph completion. To address this problem, we propose a novel score function, TransHER, which leverages relation-specific translations between head and tail entities restricted to separate hyper-ellipsoids. Specifically, given a triplet, our model first maps the entities onto two separate hyper-ellipsoids and then conducts a relation-specific translation on one of them. The relation-specific translation provides TransHER with more direct guidance in optimization and the ability to learn the semantic characteristics of entities with complex relations. Experimental results show that TransHER achieves state-of-the-art performance and generalizes to datasets in different domains and scales. All our code will be made publicly available.
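
One plausible reading of such a score function, sketched below, scales head and tail entities with relation-specific vectors (the hyper-ellipsoid restriction) and then applies a relation-specific translation before taking an L1 distance. The exact formulation is an assumption for illustration and may differ from the paper.

```python
# Rough sketch of a TransHER-style score: relation-specific element-wise scaling of the
# entities, followed by a relation-specific translation and an L1 distance.
# This exact form is an illustrative assumption, not the paper's published equation.
import numpy as np

def score(h, t, r_head, r_tail, r_trans):
    """Higher (less negative) means the triplet (h, r, t) is more plausible."""
    h_proj = h * r_head          # restrict head to a relation-specific hyper-ellipsoid
    t_proj = t * r_tail          # restrict tail to a separate hyper-ellipsoid
    return -np.abs(h_proj + r_trans - t_proj).sum()   # translation-based L1 distance

dim = 16
rng = np.random.default_rng(1)
h, t = rng.normal(size=dim), rng.normal(size=dim)
r_head, r_tail, r_trans = (rng.normal(size=dim) for _ in range(3))
print(score(h, t, r_head, r_tail, r_trans))
```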

Recent Advances in Neural Text Generation: A Task-Agnostic Survey

Mar 06, 2022
Chen Tang, Frank Guerin, Yucheng Li, Chenghua Lin

In recent years, much effort has been devoted to applying neural models to the task of natural language generation. The challenge is to generate natural, human-like text and to control the generation process. This paper presents a task-agnostic survey of recent advances in neural text generation. These advances have been achieved through numerous developments, which we group under four headings: data construction, neural frameworks, training and inference strategies, and evaluation metrics. Finally, we discuss future directions for the development of neural text generation, including neural pipelines and exploiting background knowledge.

Tell Me How to Survey: Literature Review Made Simple with Automatic Reading Path Generation

Oct 14, 2021
Jiayuan Ding, Tong Xiang, Zijing Ou, Wangyang Zuo, Ruihui Zhao, Chenghua Lin, Yefeng Zheng, Bang Liu

Recent years have witnessed a dramatic growth in paper volume, with many new research papers published every day, especially in computer science. Gleaning papers worth reading from the massive literature, whether to conduct a quick survey or to keep up with the latest advances in a specific research topic, has become a challenging task. Existing academic search engines such as Google Scholar return relevant papers by individually calculating the relevance between each paper and the query. However, such systems usually omit the prerequisite chains of a research topic and cannot form a meaningful reading path. In this paper, we introduce a new task named Reading Path Generation (RPG), which aims at automatically producing a path of papers to read for a given query. To serve as a research benchmark, we further propose SurveyBank, a dataset consisting of a large number of survey papers in the field of computer science together with their citation relationships. Each survey paper contains key phrases extracted from its title and multi-level reading lists inferred from its references. Furthermore, we propose a graph-optimization-based approach for reading path generation which takes the relationships between papers into account. Extensive evaluations demonstrate that our approach outperforms other baselines. A Real-time Reading Path Generation System (RePaGer) has also been implemented with our designed model. To the best of our knowledge, we are the first to target this important research problem. The source code of the RePaGer system and the SurveyBank dataset can be found here.
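
As a toy illustration of why prerequisite chains matter (and not the authors' RePaGer algorithm), the snippet below topologically sorts a small hand-made prerequisite graph so that every paper appears after the papers it builds on; the paper names and edges are invented for the example.

```python
# Toy reading-path illustration: given prerequisite (citation) edges among query-relevant
# papers, a topological sort yields an order that respects prerequisite chains rather than
# a flat relevance-ranked list. Papers and edges below are made up for demonstration.
from graphlib import TopologicalSorter   # standard library, Python 3.9+

prereqs = {  # paper -> papers that should be read first
    "attention_is_all_you_need": {"seq2seq", "neural_mt"},
    "bert": {"attention_is_all_you_need"},
    "gpt": {"attention_is_all_you_need"},
    "survey_on_text_generation": {"bert", "gpt"},
}
print(list(TopologicalSorter(prereqs).static_order()))
# e.g. ['seq2seq', 'neural_mt', 'attention_is_all_you_need', 'bert', 'gpt', 'survey_on_text_generation']
```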

* 16 pages, 12 figures 

On the Latent Holes of VAEs for Text Generation

Oct 07, 2021
Ruizhe Li, Xutan Peng, Chenghua Lin

In this paper, we provide the first focused study of the discontinuities (a.k.a. holes) in the latent space of Variational Auto-Encoders (VAEs), a phenomenon which has been shown to have a detrimental effect on model capacity. When investigating latent holes, existing works are exclusively centred around the encoder network and merely explore the existence of holes. We tackle these limitations by proposing a highly efficient Tree-based Decoder-Centric (TDC) algorithm for latent hole identification, with a focal point on the text domain. In contrast to past studies, our approach pays attention to the decoder network, as the decoder has a direct impact on the model's output quality. Furthermore, we provide, for the first time, an in-depth empirical analysis of the latent hole phenomenon, investigating several important aspects such as how holes impact the performance of VAE algorithms on text generation and how holes are distributed in the latent space.
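
A simplified, decoder-centric probe in the spirit of the above (but not the actual TDC algorithm) might walk along a straight line between two latent codes and flag steps where the decoded distribution jumps sharply; the stand-in linear decoder and thresholds below are assumptions for illustration.

```python
# Simplified hole-probing sketch: interpolate between two latent codes and flag steps
# where the decoder's output distribution changes abruptly (a symptom of a discontinuity).
# The random linear "decoder" and the jump threshold are placeholders, not the TDC method.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(32, 1000))                 # stand-in decoder: latent -> vocab logits

def decode_probs(z):
    logits = z @ W
    e = np.exp(logits - logits.max())
    return e / e.sum()

def kl(p, q):
    return float(np.sum(p * np.log((p + 1e-12) / (q + 1e-12))))

z_a, z_b = rng.normal(size=32), rng.normal(size=32)
path = [z_a + alpha * (z_b - z_a) for alpha in np.linspace(0, 1, 50)]
jumps = [kl(decode_probs(p), decode_probs(q)) for p, q in zip(path, path[1:])]
holes = [i for i, j in enumerate(jumps) if j > 5 * np.median(jumps)]
print("suspected discontinuities at steps:", holes)
```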

Affective Decoding for Empathetic Response Generation

Sep 03, 2021
Chengkun Zheng, Guanyi Chen, Chenghua Lin, Ruizhe Li, Zhigang Chen

Understanding a speaker's feelings and producing appropriate responses with an emotional connection is a key communicative skill for empathetic dialogue systems. In this paper, we propose a simple technique called Affective Decoding for empathetic response generation. Our method can effectively incorporate emotion signals during each decoding step, and can additionally be augmented with an auxiliary dual emotion encoder, which learns separate embeddings for the speaker and listener given the emotion base of the dialogue. Extensive empirical studies show that our models are perceived as more empathetic in human evaluations, compared to several strong mainstream methods for empathetic responding.
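
The toy sketch below illustrates the core idea of biasing each decoding step towards emotion-bearing tokens; the vocabulary, emotion lexicon, and bias strength are invented for illustration and are not the paper's actual method.

```python
# Toy affective-decoding sketch: at each decoding step, nudge the next-token distribution
# towards emotion-bearing words. Vocabulary, lexicon and bias strength are assumptions.
import numpy as np

vocab = ["the", "cat", "sad", "happy", "sorry", "table", "glad"]
emotion_words = {"sad", "happy", "sorry", "glad"}
emotion_bias = np.array([1.5 if w in emotion_words else 0.0 for w in vocab])

def affective_step(base_logits, strength=1.0):
    logits = base_logits + strength * emotion_bias   # inject the emotion signal at this step
    e = np.exp(logits - logits.max())
    return e / e.sum()

rng = np.random.default_rng(0)
probs = affective_step(rng.normal(size=len(vocab)))
print(dict(zip(vocab, probs.round(3))))
```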

* Long paper accepted to INLG 2021 

Extractive and Abstractive Sentence Labelling of Sentiment-bearing Topics

Aug 29, 2021
Mohamad Hardyman Barawi, Chenghua Lin, Advaith Siddharthan, Yinbin Liu

This paper tackles the problem of automatically labelling sentiment-bearing topics with descriptive sentence labels. We propose two approaches to the problem, one extractive and the other abstractive. Both approaches rely on a novel mechanism to automatically learn the relevance of each sentence in a corpus to sentiment-bearing topics extracted from that corpus. The extractive approach uses a sentence ranking algorithm for label selection which for the first time jointly optimises topic-sentence relevance as well as aspect-sentiment co-coverage. The abstractive approach instead addresses aspect-sentiment co-coverage by using sentence fusion to generate a sentential label that includes relevant content from multiple sentences. To our knowledge, we are the first to study the problem of labelling sentiment-bearing topics. Our experimental results on three real-world datasets show that both the extractive and abstractive approaches outperform four strong baselines in terms of facilitating topic understanding and interpretation. In addition, when comparing extractive and abstractive labels, our evaluation shows that our best performing abstractive method is able to provide more topic information coverage in fewer words, at the cost of generating less grammatical labels than the extractive method. We conclude that abstractive methods can effectively synthesise the rich information contained in sentiment-bearing topics.
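
As a toy sketch of the extractive idea (not the authors' ranking algorithm), the snippet below scores candidate sentences by a weighted mix of topic relevance and aspect-sentiment coverage and picks the best one as a label; the lexicons, sentences, and weights are illustrative assumptions.

```python
# Toy extractive-labelling sketch: score each candidate sentence by topic relevance
# (overlap with top topic words) plus coverage of an aspect-sentiment pair.
# All lexicons, sentences and weights below are invented for illustration.
topic_words = {"battery", "life", "charge", "drain", "poor"}
aspect, sentiment = "battery", {"poor", "terrible", "bad"}

sentences = [
    "The battery life is poor and drains overnight.",
    "I love the screen resolution on this phone.",
    "Battery performance could be better after the update.",
]

def label_score(sentence, w_topic=0.6, w_cov=0.4):
    tokens = set(sentence.lower().replace(".", "").split())
    topic_rel = len(tokens & topic_words) / len(topic_words)
    coverage = (aspect in tokens) + bool(tokens & sentiment)   # 0, 1 or 2 of the pair covered
    return w_topic * topic_rel + w_cov * (coverage / 2)

best = max(sentences, key=label_score)
print("extractive label:", best)
```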
