
Dino Pedreschi


Social AI and the Challenges of the Human-AI Ecosystem

Jun 23, 2023
Dino Pedreschi, Luca Pappalardo, Ricardo Baeza-Yates, Albert-Laszlo Barabasi, Frank Dignum, Virginia Dignum, Tina Eliassi-Rad, Fosca Giannotti, Janos Kertesz, Alistair Knott, Yannis Ioannidis, Paul Lukowicz, Andrea Passarella, Alex Sandy Pentland, John Shawe-Taylor, Alessandro Vespignani

The rise of large-scale socio-technical systems in which humans interact with artificial intelligence (AI) systems (including assistants and recommenders, in short AIs) multiplies the opportunities for the emergence of collective phenomena and tipping points, with unexpected, possibly unintended, consequences. For example, navigation systems' suggestions may create chaos if too many drivers are directed onto the same route, and personalised recommendations on social media may amplify polarisation, filter bubbles, and radicalisation. On the other hand, we may learn how to foster the "wisdom of crowds" and collective-action effects to face social and environmental challenges. In order to understand the impact of AI on socio-technical systems and to design next-generation AIs that team with humans to help overcome societal problems rather than exacerbate them, we propose to build the foundations of Social AI at the intersection of Complex Systems, Network Science and AI. In this perspective paper, we discuss the main open questions in Social AI, outlining possible technical and scientific challenges and suggesting research avenues.
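The route-congestion example can be made concrete with a toy simulation (our own illustration, not taken from the paper): a population of drivers repeatedly chooses between two routes, and a naive recommender that sends everyone to the currently fastest route produces oscillating congestion, whereas independent noisy choices spread the load. All numbers and the congestion function are hypothetical.

```python
# Toy two-route congestion model illustrating a collective effect of a
# global recommender. Parameters are made up for illustration only.
import random

N_DRIVERS = 1000          # hypothetical population size
FREE_FLOW = (10.0, 12.0)  # free-flow travel times of routes A and B (minutes)
CAPACITY = (400, 600)     # loads at which congestion starts to dominate

def travel_time(route, load):
    # BPR-style congestion: travel time grows with the square of load/capacity.
    return FREE_FLOW[route] * (1.0 + (load / CAPACITY[route]) ** 2)

def simulate(global_recommender, steps=20, seed=0):
    rng = random.Random(seed)
    loads = [N_DRIVERS // 2, N_DRIVERS - N_DRIVERS // 2]
    avg_times = []
    for _ in range(steps):
        times = [travel_time(r, loads[r]) for r in (0, 1)]
        best = min((0, 1), key=lambda r: times[r])
        if global_recommender:
            # Every driver follows the same "fastest route right now" suggestion.
            loads = [N_DRIVERS if r == best else 0 for r in (0, 1)]
        else:
            # Each driver independently picks the currently faster route with prob. 0.7.
            n_best = sum(rng.random() < 0.7 for _ in range(N_DRIVERS))
            loads = [n_best if r == best else N_DRIVERS - n_best for r in (0, 1)]
        avg_times.append(sum(travel_time(r, loads[r]) * loads[r] for r in (0, 1)) / N_DRIVERS)
    return sum(avg_times) / len(avg_times)

print("mean travel time, everyone follows the recommender:", round(simulate(True), 1))
print("mean travel time, independent noisy choices:       ", round(simulate(False), 1))
```

In this toy setting the recommender concentrates the whole population on one route at a time, so the average travel time ends up substantially worse than under decentralised, slightly noisy choices.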


Dense Hebbian neural networks: a replica symmetric picture of supervised learning

Nov 25, 2022
Elena Agliari, Linda Albanese, Francesco Alemanno, Andrea Alessandrelli, Adriano Barra, Fosca Giannotti, Daniele Lotito, Dino Pedreschi


We consider dense associative neural networks trained by a teacher (i.e., with supervision) and investigate their computational capabilities analytically, via the statistical mechanics of spin glasses, and numerically, via Monte Carlo simulations. In particular, we obtain a phase diagram, valid in the limit of large network size and structureless datasets, summarizing their performance as a function of control parameters such as the quality and quantity of the training dataset, the network storage, and the noise: these networks may work in an ultra-storage regime (where they can handle a huge number of patterns compared with shallow neural networks) or in an ultra-detection regime (where they can perform pattern recognition at prohibitive signal-to-noise ratios compared with shallow neural networks). Guided by this theory for random datasets as a reference framework, we also numerically test the learning, storing, and retrieval capabilities of these networks on structured datasets such as MNIST and Fashion-MNIST. As technical remarks, on the analytic side we implement large-deviation and stability analyses within Guerra's interpolation to tackle the non-Gaussian distributions involved in the post-synaptic potentials, while on the computational side we insert the Plefka approximation into the Monte Carlo scheme to speed up the evaluation of the synaptic tensors, overall obtaining a novel and broad approach for investigating supervised learning in neural networks beyond the shallow limit.
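As a rough intuition for the kind of model studied here, the following is a minimal sketch (our own simplification, not the paper's exact construction): a dense associative network whose p-body energy is built from class-averaged noisy examples, i.e. a simple supervised Hebbian prescription, with zero-temperature Monte Carlo dynamics used to retrieve the hidden archetype from a corrupted cue.

```python
# Minimal sketch of a dense (p-body) associative network with a supervised
# Hebbian prescription. All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, K, M, P = 200, 3, 40, 4   # neurons, archetypes, examples per archetype, interaction order
r = 0.8                       # example quality: P(example bit == archetype bit) = (1 + r) / 2

archetypes = rng.choice([-1, 1], size=(K, N))
# Supervised setting: the teacher groups the noisy examples by archetype.
flips = rng.random((K, M, N)) < (1 - r) / 2
examples = archetypes[:, None, :] * np.where(flips, -1, 1)
stored = examples.mean(axis=1)   # class-averaged patterns entering the Hebbian couplings

def energy(sigma):
    # Dense Hebbian energy: E = -sum_mu (stored_mu . sigma)^P / (P * N^(P-1)).
    return -np.sum((stored @ sigma) ** P) / (P * N ** (P - 1))

def retrieve(sigma, sweeps=20):
    # Zero-temperature single-spin-flip Monte Carlo dynamics.
    sigma = sigma.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            flipped = sigma.copy()
            flipped[i] = -flipped[i]
            if energy(flipped) < energy(sigma):
                sigma = flipped
    return sigma

# Start from a heavily corrupted version of archetype 0 and check retrieval.
probe = archetypes[0] * rng.choice([-1, 1], size=N, p=[0.3, 0.7])
final = retrieve(probe)
print("overlap with archetype 0:", abs(final @ archetypes[0]) / N)
```

With enough examples per archetype, the class averages align with the hidden archetypes and the dynamics drive the corrupted cue back to a configuration with overlap close to one.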

* arXiv admin note: text overlap with arXiv:2211.14067 

Dense Hebbian neural networks: a replica symmetric picture of unsupervised learning

Nov 25, 2022
Elena Agliari, Linda Albanese, Francesco Alemanno, Andrea Alessandrelli, Adriano Barra, Fosca Giannotti, Daniele Lotito, Dino Pedreschi


We consider dense associative neural networks trained without supervision and investigate their computational capabilities analytically, via a statistical-mechanics approach, and numerically, via Monte Carlo simulations. In particular, we obtain a phase diagram, valid in the limit of large network size and structureless datasets, summarizing their performance as a function of control parameters such as the quality and quantity of the training dataset and the network storage. Moreover, we establish a bridge between macroscopic observables standardly used in statistical mechanics and loss functions typically used in machine learning. As technical remarks, on the analytic side we implement large-deviation and stability analyses within Guerra's interpolation to tackle the non-Gaussian distributions involved in the post-synaptic potentials, while on the computational side we insert the Plefka approximation into the Monte Carlo scheme to speed up the evaluation of the synaptic tensors, overall obtaining a novel and broad approach for investigating neural networks in general.
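To give a flavour of the observable-to-loss bridge mentioned above, here is a small numerical check (our own illustration, not the paper's derivation): for ±1 variables, the per-bit mean squared error of a reconstructed configuration is tied to the Mattis magnetization m by MSE = 2(1 − m).

```python
# Numerical check of a simple relation between a statistical-mechanics
# observable (Mattis magnetization) and a machine-learning loss (MSE).
import numpy as np

rng = np.random.default_rng(1)
N = 10_000
xi = rng.choice([-1, 1], size=N)   # archetype (ground truth)

for flip_prob in (0.0, 0.1, 0.3, 0.5):
    noisy = xi * np.where(rng.random(N) < flip_prob, -1, 1)  # reconstructed configuration
    m = (xi @ noisy) / N                                     # Mattis magnetization
    mse = np.mean((xi - noisy) ** 2)                         # quadratic loss
    print(f"flip={flip_prob:.1f}  m={m:+.3f}  MSE={mse:.3f}  2(1-m)={2 * (1 - m):.3f}")
```

The identity follows because each disagreeing bit contributes 4 to the squared error and −2/N to the magnetization; the paper's bridge is of course far more general, but this is the basic flavour.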


Benchmarking and Survey of Explanation Methods for Black Box Models

Feb 25, 2021
Francesco Bodria, Fosca Giannotti, Riccardo Guidotti, Francesca Naretto, Dino Pedreschi, Salvatore Rinzivillo


The widespread adoption of black-box models in Artificial Intelligence has heightened the need for explanation methods that reveal how these opaque models reach specific decisions. Retrieving explanations is fundamental to unveiling possible biases and resolving practical or ethical issues. The literature now offers many methods that return different kinds of explanations. We provide a categorization of explanation methods based on the type of explanation returned, present the most recent and widely used explainers, and show a visual comparison among explanations together with a quantitative benchmarking.
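As a flavour of the quantitative benchmarking, the sketch below (our own simplified example, not the paper's benchmark suite) measures one common metric, fidelity: how closely an interpretable surrogate, here a shallow decision tree standing in for an explainer, reproduces the predictions of a black-box random forest.

```python
# Minimal fidelity benchmark for a surrogate-style explainer (illustrative).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
bb_labels = black_box.predict(X_train)   # the surrogate learns the black box, not the ground truth

surrogate = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, bb_labels)

fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
accuracy = accuracy_score(y_test, surrogate.predict(X_test))
print(f"surrogate fidelity to black box: {fidelity:.3f}")
print(f"surrogate accuracy on ground truth: {accuracy:.3f}")
```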

* This work is currently under review at an international journal

GLocalX -- From Local to Global Explanations of Black Box AI Models

Jan 26, 2021
Mattia Setzu, Riccardo Guidotti, Anna Monreale, Franco Turini, Dino Pedreschi, Fosca Giannotti


Artificial Intelligence (AI) has come to prominence as one of the major components of our society, with applications in most aspects of our lives. In this field, complex and highly nonlinear machine learning models such as ensemble models, deep neural networks, and Support Vector Machines consistently show remarkable accuracy on complex tasks. Although accurate, AI models are often "black boxes" that we are not able to understand. Relying on these models has a multifaceted impact and raises significant concerns about their transparency. Applications in sensitive and critical domains are a strong motivation for trying to understand the behavior of black boxes. We propose to address this issue by providing an interpretable layer on top of black-box models, built by aggregating "local" explanations. We present GLocalX, a "local-first" model-agnostic explanation method. Starting from local explanations expressed in the form of local decision rules, GLocalX iteratively generalizes them into global explanations by hierarchically aggregating them. Our goal is to learn accurate yet simple interpretable models that emulate the given black box and, if possible, replace it entirely. We validate GLocalX in a set of experiments in standard and constrained settings with limited or no access to either data or local explanations. The experiments show that GLocalX accurately emulates several models with simple and small models, reaching state-of-the-art performance against natively global solutions. Our findings show that it is often possible to achieve a high level of both accuracy and comprehensibility in classification models, even in complex domains with high-dimensional data, without necessarily trading one property for the other. This is a key requirement for trustworthy AI, necessary for adoption in high-stakes decision-making applications.
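The sketch below illustrates the local-to-global idea in a drastically simplified form (it is not the GLocalX algorithm itself): local explanations are decision rules, i.e. sets of feature intervals with a predicted class, and the most similar pair of same-class rules is repeatedly generalized into the hull of their shared premises.

```python
# Toy local-to-global rule aggregation (illustrative, not GLocalX itself).
from dataclasses import dataclass

@dataclass
class Rule:
    premises: dict   # feature name -> (low, high) interval
    label: int       # class predicted by the rule

def similarity(a, b):
    # Jaccard similarity of the feature sets used by the two rules.
    fa, fb = set(a.premises), set(b.premises)
    return len(fa & fb) / len(fa | fb)

def merge(a, b):
    # Generalize two same-class rules into the hull of their shared premises.
    shared = set(a.premises) & set(b.premises)
    hull = {f: (min(a.premises[f][0], b.premises[f][0]),
                max(a.premises[f][1], b.premises[f][1])) for f in shared}
    return Rule(hull, a.label)

def aggregate(rules, target=2):
    rules = list(rules)
    while len(rules) > target:
        pairs = [(i, j) for i in range(len(rules)) for j in range(i + 1, len(rules))
                 if rules[i].label == rules[j].label]
        if not pairs:
            break
        i, j = max(pairs, key=lambda p: similarity(rules[p[0]], rules[p[1]]))
        merged = merge(rules[i], rules[j])
        rules = [r for k, r in enumerate(rules) if k not in (i, j)] + [merged]
    return rules

# Hypothetical local rules extracted around individual instances.
local_rules = [
    Rule({"age": (30, 40), "income": (20, 40)}, label=1),
    Rule({"age": (35, 50), "income": (30, 60)}, label=1),
    Rule({"age": (18, 25)}, label=0),
]
for r in aggregate(local_rules):
    print(r)
```

The real method additionally checks that each generalization preserves fidelity to the black box before accepting it; the toy version above only captures the hierarchical merging step.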

* 27 pages, 2 figures, submitted to "Special Issue on: Explainable AI (XAI) for Web-based Information Processing" 

Predicting seasonal influenza using supermarket retail records

Dec 17, 2020
Ioanna Miliou, Xinyue Xiong, Salvatore Rinzivillo, Qian Zhang, Giulio Rossetti, Fosca Giannotti, Dino Pedreschi, Alessandro Vespignani


Increased availability of epidemiological data, novel digital data streams, and the rise of powerful machine learning approaches have generated a surge of research activity on real-time epidemic forecast systems. In this paper, we propose the use of a novel data source, namely retail market data, to improve seasonal influenza forecasting. Specifically, we consider supermarket retail data as a proxy signal for influenza, through the identification of sentinel baskets, i.e., products bought together by a population of selected customers. We develop a nowcasting and forecasting framework that provides estimates of influenza incidence in Italy up to 4 weeks ahead, using a Support Vector Regression (SVR) model to produce the predictions of seasonal flu incidence. Our predictions outperform both a baseline autoregressive model and a second baseline based on product purchases. The results quantify the value of incorporating retail market data into forecasting models, as a proxy that can be used for the real-time analysis of epidemics.
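The following sketch reproduces the shape of this pipeline on synthetic data (the signal construction and all parameters are hypothetical): lagged sentinel-basket signals feed a Support Vector Regression model that predicts incidence a few weeks ahead.

```python
# Sketch of an SVR-based incidence forecaster on synthetic weekly data.
import numpy as np
from sklearn.metrics import mean_absolute_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(42)
weeks = 200
incidence = 50 * (1 + np.sin(np.arange(weeks) * 2 * np.pi / 52)) ** 3 + rng.normal(0, 10, weeks)
basket = 0.8 * np.roll(incidence, -2) + rng.normal(0, 15, weeks)   # purchases lead incidence by ~2 weeks

HORIZON, LAGS = 4, 3   # predict 4 weeks ahead from the last 3 weekly basket signals
X = np.column_stack([np.roll(basket, lag) for lag in range(LAGS)])
y = np.roll(incidence, -HORIZON)
X, y = X[LAGS:-HORIZON], y[LAGS:-HORIZON]   # drop rows contaminated by the wrap-around

split = int(0.8 * len(y))
model = make_pipeline(StandardScaler(), SVR(C=100.0, epsilon=1.0))
model.fit(X[:split], y[:split])
print("MAE on held-out weeks:", round(mean_absolute_error(y[split:], model.predict(X[split:])), 1))
```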

* 17 pages, 2 figures, 4 tables (1 in appendix), 1 algorithm, submitted to PLOS Computational Biology 

FairLens: Auditing Black-box Clinical Decision Support Systems

Nov 08, 2020
Cecilia Panigutti, Alan Perotti, André Panisson, Paolo Bajardi, Dino Pedreschi


The pervasive application of algorithmic decision-making is raising concerns about the risk of unintended bias in AI systems deployed in critical settings such as healthcare. Detecting and mitigating biased models is a delicate task that should be tackled with care, involving domain experts in the loop. In this paper we introduce FairLens, a methodology for discovering and explaining biases. We show how our tool can be used to audit a fictional commercial black-box model acting as a clinical decision support system. In this scenario, healthcare facility experts can use FairLens on their own historical data to discover the model's biases before incorporating it into the clinical decision flow. FairLens first stratifies the available patient data according to attributes such as age, ethnicity, gender, and insurance; it then assesses the model's performance on these subgroups of patients, identifying those in need of expert evaluation. Finally, building on recent state-of-the-art XAI (eXplainable Artificial Intelligence) techniques, FairLens explains which elements of patients' clinical histories drive the model error in the selected subgroup. FairLens thus allows experts to investigate whether to trust the model and to spotlight group-specific biases that might constitute potential fairness issues.
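A minimal sketch of the stratify-and-audit step, on made-up data with hypothetical column names: group patients by protected attributes, compute the black box's error rate in each subgroup, and rank the subgroups by how far they deviate from the overall error rate.

```python
# Illustrative subgroup audit of a black box's predictions (synthetic records).
import pandas as pd

# Hypothetical audit table: true vs. predicted outcome plus patient attributes.
df = pd.DataFrame({
    "age_group": ["<40", "<40", "40-65", "40-65", "65+", "65+", "65+", "<40"],
    "insurance": ["public", "private", "public", "private", "public", "public", "private", "public"],
    "y_true":    [1, 0, 1, 1, 0, 1, 1, 0],
    "y_pred":    [1, 0, 0, 1, 1, 0, 1, 0],
})
df["error"] = (df["y_true"] != df["y_pred"]).astype(int)

overall = df["error"].mean()
by_group = (df.groupby(["age_group", "insurance"])["error"]
              .agg(["mean", "size"])
              .rename(columns={"mean": "error_rate", "size": "n"}))
by_group["gap_vs_overall"] = by_group["error_rate"] - overall

print(f"overall error rate: {overall:.2f}")
print(by_group.sort_values("gap_vs_overall", ascending=False))
```

The subgroups with the largest positive gap are the ones a tool like FairLens would hand to domain experts, together with an explanation of which clinical-history elements drive the errors.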


Black Box Explanation by Learning Image Exemplars in the Latent Feature Space

Jan 27, 2020
Riccardo Guidotti, Anna Monreale, Stan Matwin, Dino Pedreschi


We present an approach to explaining the decisions of black-box models for image classification. While using the black box to label images, our explanation method exploits the latent feature space learned by an adversarial autoencoder. The proposed method first generates exemplar images in the latent feature space and learns a decision tree classifier. Then, it selects and decodes exemplars respecting local decision rules. Finally, it visualizes them in a manner that shows the user how the exemplars can be modified either to stay within their class or to become counterfactuals by "morphing" into another class. Since we focus on black-box decision systems for image classification, the explanation obtained from the exemplars also provides a saliency map highlighting the areas of the image that contribute to its classification and the areas that push it towards another class. We present the results of an experimental evaluation on three datasets and two black-box models. Besides providing the most useful and interpretable explanations, we show that the proposed method outperforms existing explainers in terms of fidelity, relevance, coherence, and stability.
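A much-simplified sketch of this pipeline (our own, with PCA standing in for the adversarial autoencoder the paper actually uses): sample exemplars around an instance in latent space, label them with the black box, and fit a small decision tree whose rules separate exemplars from counter-exemplars.

```python
# Simplified exemplar-based explanation pipeline; PCA is a stand-in latent space.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

latent = PCA(n_components=10, random_state=0).fit(X)   # stand-in for the learned latent space
scale = latent.transform(X).std(axis=0)                # spread of the data in latent space

instance = X[0:1]
z = latent.transform(instance)

rng = np.random.default_rng(0)
Z = z + rng.normal(0.0, 1.0, size=(500, z.shape[1])) * scale   # exemplars sampled around the instance
decoded = latent.inverse_transform(Z)                          # map exemplars back to image space
labels = black_box.predict(decoded)                            # black-box labels for the exemplars

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(Z, labels)   # local decision rules
target = black_box.predict(instance)[0]
print(f"instance class {target}: {np.sum(labels == target)} exemplars, "
      f"{np.sum(labels != target)} counter-exemplars; local tree depth {tree.get_depth()}")
```

In the full method, the decoded exemplars and counter-exemplars are visualized directly, and their pixel-wise differences from the instance yield the saliency map described above.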
