AI systems increasingly make autonomous decisions that impact our daily lives. Their actions might cause accidents, harm or, more generally, violate regulations, whether intentionally or not. Thus, AI systems might be considered suspects for various events. It is therefore essential to relate particular events to an AI system, its owner and its creator. Given a multitude of AI systems from multiple manufacturers, potentially altered by their owners or changing through self-learning, this is non-trivial. This paper discusses how to identify AI systems responsible for incidents as well as their motives, which might be "malicious by design". In addition to a conceptualization, we conduct two case studies based on reinforcement learning and convolutional neural networks to illustrate our proposed methods and challenges. Our cases illustrate that "catching AI systems" is often far from trivial and requires extensive expertise in machine learning. Legislative measures that enforce mandatory collection of information during the operation of AI systems, as well as means to uniquely identify systems, might alleviate the problem.
To derive explanations for deep learning models, i.e., classifiers, we propose a `CLAssifier-DECoder' architecture (\emph{ClaDec}). \emph{ClaDec} makes it possible to explain the output of an arbitrary layer. To this end, it uses a decoder that transforms the non-interpretable representation of the given layer into a representation that is more similar to the training data. One can recognize what information a layer maintains by contrasting reconstructed images of \emph{ClaDec} with those of a conventional auto-encoder (AE) serving as reference. Our extended version also allows trading off human interpretability against fidelity to customize explanations to individual needs. We evaluate our approach for image classification using CNNs. In alignment with our theoretical motivation, the qualitative evaluation highlights that reconstructed images (of the network to be explained) tend to replace specific objects with more generic object templates and provide smoother reconstructions. We also show quantitatively that visualizations reconstructed from a classifier's encodings capture more relevant information for classification than those of conventional AEs, despite the fact that the latter contain more information on the original input.
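For concreteness, the following is a minimal PyTorch sketch of the ClaDec idea, assuming 28x28 grayscale inputs and a frozen classifier truncated at the layer to be explained; all names and dimensions are illustrative rather than the reference implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Maps a (flattened) layer activation back to image space."""
    def __init__(self, feat_dim, img_channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 7 * 7 * 32), nn.ReLU(),
            nn.Unflatten(1, (32, 7, 7)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, img_channels, 4, stride=2, padding=1),
            nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

def train_cladec(classifier_layer, decoder, loader, epochs=10):
    """classifier_layer: frozen mapping x -> activation of the layer to explain."""
    opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x, _ in loader:
            with torch.no_grad():
                z = classifier_layer(x)       # non-interpretable representation
            x_hat = decoder(z.flatten(1))     # reconstruct the input from it
            loss = nn.functional.mse_loss(x_hat, x)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return decoder
\end{verbatim}
Contrasting these reconstructions with those of a conventional AE (same decoder architecture, but a jointly trained encoder) then indicates which information the classifier layer has discarded.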
Spatial data exhibits the property that nearby points are correlated. This also holds for learnt representations across layers, but not for commonly used weight initialization methods. Our theoretical analysis reveals that, for uncorrelated initialization, (i) the flow through layers decreases much more rapidly and (ii) the training of individual parameters is subject to more ``zig-zagging''. We propose multiple methods for correlated initialization. For CNNs, they yield accuracy gains of several percent in the absence of regularization. Even with properly tuned L2 regularization, gains are often possible.
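As an illustration of what a correlated initialization could look like (the paper proposes several methods; this is only one assumed variant), one can smooth i.i.d. Gaussian kernel weights with a spatial Gaussian filter so that nearby kernel entries become correlated:
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

def correlated_init_(conv: nn.Conv2d, sigma: float = 1.0):
    """Replace the weights of a conv layer (odd, square kernel assumed)
    with spatially smoothed Gaussian noise of the original scale."""
    with torch.no_grad():
        k = conv.kernel_size[0]
        w = torch.randn_like(conv.weight)                 # uncorrelated start
        coords = torch.arange(k, dtype=torch.float32) - (k - 1) / 2
        g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
        g2d = torch.outer(g, g)
        g2d = (g2d / g2d.sum()).view(1, 1, k, k)
        # smooth each kernel slice to induce spatial correlation
        w = F.conv2d(w.view(-1, 1, k, k), g2d, padding=k // 2)
        w = w.view_as(conv.weight)
        w = w * (conv.weight.std() / w.std())             # keep the original scale
        conv.weight.copy_(w)

layer = nn.Conv2d(3, 64, kernel_size=5, padding=2)
correlated_init_(layer, sigma=1.0)
\end{verbatim}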
Artificial intelligence comes with great opportunities but also great risks. We investigate to what extent deep learning can be used to create and detect deceptive explanations that either aim to lure a human into believing a decision that is not truthful to the model or provide reasoning that is non-faithful to the decision. Our theoretical insights show some limits of deception and detection in the absence of domain knowledge. For empirical evaluation, we focus on text classification. To create deceptive explanations, we alter explanations originating from GradCAM, a state-of-the-art technique for creating explanations of neural networks. We evaluate the effectiveness of deceptive explanations in a study with 200 participants. Our findings indicate that deceptive explanations can indeed fool humans. Our classifier can detect even seemingly minor attempts at deception with an accuracy exceeding 80\%, given sufficient domain knowledge encoded in the form of training data.
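As a toy illustration only (not the exact alteration procedure used in the paper), a faithful word-level attribution map can be turned into a deceptive one by redirecting relevance mass onto arbitrarily chosen tokens:
\begin{verbatim}
import numpy as np

def deceive(attributions, target_idx, strength=0.8):
    """attributions: non-negative relevance per token (e.g., GradCAM scores);
    target_idx: tokens the deceptive explanation should highlight instead."""
    fake = attributions * (1.0 - strength)          # suppress the faithful signal
    boost = strength * attributions.sum() / max(len(target_idx), 1)
    fake[target_idx] += boost                       # redirect mass to target tokens
    return fake / fake.sum()                        # renormalize to a distribution

scores = np.array([0.05, 0.40, 0.10, 0.30, 0.15])   # toy scores for 5 tokens
print(deceive(scores, target_idx=[0, 2]))
\end{verbatim}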
Artificial intelligence (AI) systems and humans communicate more and more with each other. AI systems are optimized for objectives such as the error rate in communication or effort, e.g., computation. In contrast, inputs created by humans are often treated as a given. We investigate how humans providing information to an AI can adjust their inputs to reduce miscommunication and improve efficiency while changing their behavior as little as possible. These objectives result in trade-offs that we investigate using handwritten digits. To create examples that serve as demonstrations for humans to improve, we develop a model based on a conditional convolutional autoencoder (CCAE). Our quantitative and qualitative evaluation shows that in many cases the generated proposals lead to lower error rates, require less effort to create and differ only modestly from the original samples.
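A minimal PyTorch sketch of such a conditional convolutional autoencoder, assuming 28x28 digit images and an embedded class label as the condition (architecture details are illustrative, not the paper's exact model):
\begin{verbatim}
import torch
import torch.nn as nn

class CCAE(nn.Module):
    def __init__(self, n_classes=10, latent=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),    # 28 -> 14
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),   # 14 -> 7
            nn.Flatten(), nn.Linear(32 * 7 * 7, latent),
        )
        self.embed = nn.Embedding(n_classes, latent)
        self.dec = nn.Sequential(
            nn.Linear(2 * latent, 32 * 7 * 7), nn.ReLU(),
            nn.Unflatten(1, (32, 7, 7)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x, y):
        z = self.enc(x)
        c = self.embed(y)                      # condition on the intended digit
        return self.dec(torch.cat([z, c], dim=1))

model = CCAE()
x = torch.rand(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))
proposal = model(x, y)                         # candidate "improved" sample
\end{verbatim}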
We discuss training techniques, objectives and metrics toward mass personalization of deep learning models. In machine learning, personalization means that every trained model is targeted towards an individual by optimizing one or several performance metrics, often while obeying additional constraints. We investigate three methods for personalized training of neural networks, which constitute three forms of curriculum learning. The methods are partially inspired by the "shaping" concept from psychology. Interestingly, we discover that extensive exposure to a limited set of training data in terms of class diversity \emph{early} in the training can lead to an irreversible reduction of the capability of a network to learn from more diverse training data. This is in close alignment with existing theories of human development. In contrast, training on a small data set covering all classes \emph{early} in the training can lead to better performance.
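The two contrasted curricula can be sketched as follows for MNIST (an assumed setup for illustration, not the exact experimental protocol): phase one either restricts class diversity or restricts the number of samples per class, and phase two uses the full data in both cases.
\begin{verbatim}
import torch
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, transforms

data = datasets.MNIST(".", train=True, download=True,
                      transform=transforms.ToTensor())
targets = data.targets

# (a) limited class diversity early: only digits 0-4 in the first phase
few_classes = torch.isin(targets, torch.tensor([0, 1, 2, 3, 4]))
phase1_a = DataLoader(Subset(data, few_classes.nonzero().squeeze(1).tolist()),
                      batch_size=64, shuffle=True)

# (b) all classes early, but only a small sample of each
small_all = torch.cat([(targets == c).nonzero().squeeze(1)[:100]
                       for c in range(10)])
phase1_b = DataLoader(Subset(data, small_all.tolist()),
                      batch_size=64, shuffle=True)

full = DataLoader(data, batch_size=64, shuffle=True)  # phase 2 for both curricula
\end{verbatim}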
This work investigates fundamental questions related to locating and defining features in convolutional neural networks (CNNs). Theoretical investigations guided by the locality principle show that the relevance of locations within a representation decreases with distance from the center. This is aligned with empirical findings across multiple architectures such as VGG, ResNet, Inception, DenseNet and MobileNet. To leverage our insights, we introduce Locality-promoting Regularization (LOCO-REG), which yields accuracy gains across multiple architectures and datasets.
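One plausible instantiation of a locality-promoting penalty, shown here only as a hedged sketch since the exact LOCO-REG formulation is given in the paper, is an L2-style decay whose strength grows with the distance of a kernel entry from the kernel center:
\begin{verbatim}
import torch
import torch.nn as nn

def loco_penalty(conv: nn.Conv2d, strength: float = 1e-4) -> torch.Tensor:
    """L2 decay weighted by squared distance from the kernel center."""
    k = conv.kernel_size[0]
    ys, xs = torch.meshgrid(torch.arange(k), torch.arange(k), indexing="ij")
    center = (k - 1) / 2
    dist2 = (ys - center) ** 2 + (xs - center) ** 2   # distance from center
    scale = (1.0 + dist2).to(conv.weight.dtype)       # stronger decay off-center
    return strength * (scale * conv.weight ** 2).sum()

conv = nn.Conv2d(3, 8, kernel_size=5, padding=2)
penalty = loco_penalty(conv)   # add to the task loss before backward()
\end{verbatim}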
Current topic models often suffer from discovering topics that do not match human intuition, from unnatural switching of topics within documents and from high computational demands. We address these concerns by proposing a topic model and an inference algorithm based on automatically identifying characteristic keywords for topics. Keywords influence the topic assignments of nearby words. Our algorithm learns (key)word-topic scores and self-regulates the number of topics. Inference is simple and easily parallelizable. A qualitative analysis yields results comparable to state-of-the-art models (e.g., LDA), but with different strengths and weaknesses. A quantitative analysis using 9 datasets shows gains in terms of classification accuracy, PMI score, computational performance and consistency of topic assignments within documents, while most often using fewer topics.
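The stated principle that keywords influence the topic assignments of nearby words can be illustrated with a toy sketch (this is not the paper's inference algorithm): a word's topic is picked from its own word-topic scores combined with distance-weighted scores of keywords in a local window.
\begin{verbatim}
import numpy as np

def assign_topics(doc, word_topic, keywords, window=3, kw_weight=2.0):
    """doc: list of word ids; word_topic: (V, K) score matrix;
    keywords: set of word ids treated as characteristic keywords."""
    topics = []
    for i, w in enumerate(doc):
        score = word_topic[w].copy()
        for j in range(max(0, i - window), min(len(doc), i + window + 1)):
            if j != i and doc[j] in keywords:
                # nearby keywords pull the word toward their topics,
                # with influence decaying with distance
                score += kw_weight / (1 + abs(i - j)) * word_topic[doc[j]]
        topics.append(int(np.argmax(score)))
    return topics

rng = np.random.default_rng(0)
wt = rng.random((50, 4))                    # 50 words, 4 topics
print(assign_topics([3, 7, 12, 7, 9], wt, keywords={7}))
\end{verbatim}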