Abstract: As large-scale graphs become more widespread, they expose computational challenges in extracting, processing, and interpreting large graph data. It is therefore natural to seek ways to summarize the original graph while preserving its key characteristics. In this survey, we outline the most recent progress of deep learning on graphs for graph summarization, concentrating explicitly on Graph Neural Network (GNN) methods. We organize the surveyed methods into four categories: graph recurrent networks, graph convolutional networks, graph autoencoders, and graph attention networks. We also discuss an emerging line of research that uses graph reinforcement learning to evaluate and improve the quality of graph summaries. Finally, we conclude the survey and discuss a number of open research challenges that motivate further study in this area.
Abstract: Artificial intelligence (AI) enables machines to learn from human experience, adjust to new inputs, and perform human-like tasks. AI is progressing rapidly and is transforming the way businesses operate, from process automation to cognitive augmentation of tasks and intelligent process/data analytics. However, the main challenge for human users is to understand and appropriately trust the results of AI algorithms and methods. To address this challenge, in this paper we study and analyze recent work on Explainable Artificial Intelligence (XAI) methods and tools. We introduce a novel XAI process that facilitates producing explainable models while maintaining a high level of learning performance. We present an interactive, evidence-based approach to assist human users in comprehending and trusting the results and outputs of AI-enabled algorithms. We adopt a typical scenario from the banking domain, the analysis of customer transactions, and develop a digital dashboard to facilitate interaction with the algorithm results. We discuss how the proposed XAI method can significantly improve the confidence of data scientists in understanding the results of AI-enabled algorithms.