Pedro Bizarro

Adversarial training for tabular data with attack propagation

Jul 28, 2023
Tiago Leon Melo, João Bravo, Marco O. P. Sampaio, Paolo Romano, Hugo Ferreira, João Tiago Ascensão, Pedro Bizarro

Adversarial attacks are a major concern in security-centered applications, where malicious actors continuously try to mislead Machine Learning (ML) models into wrongly classifying fraudulent activity as legitimate, while system maintainers try to stop them. Adversarially training ML models that are robust against such attacks can prevent business losses and reduce the workload of system maintainers. In such applications, data is often tabular, and the space that attackers can manipulate is mapped, through complex feature engineering transformations that provide useful signals for model training, into a space attackers cannot access. We therefore propose a new form of adversarial training in which attacks are propagated between the two spaces inside the training loop. We test this method empirically on a real-world credit card fraud detection dataset. We show that our method can prevent performance drops of about 30% under moderate attacks and is essential under very aggressive attacks, at a trade-off cost of less than 7% performance under no attacks.
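
To illustrate the idea, here is a hedged sketch of such a training loop. All names (feature_engineer, perturb_raw) and the random-search attack are illustrative assumptions, not the paper's method: attacks are crafted in the raw, attacker-accessible space and each candidate is propagated through the feature pipeline before scoring and retraining.

```python
# Hedged sketch of adversarial training with attack propagation between the
# attacker-accessible raw space and the engineered feature space. All names
# and the random-search attack are illustrative, not the paper's method.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

def feature_engineer(X_raw):
    """Stand-in for a complex feature engineering pipeline."""
    return np.hstack([X_raw, np.log1p(np.abs(X_raw))])

def perturb_raw(X_raw, y, model, eps=0.2, n_trials=8):
    """Random-search attack: for each fraud row, keep the raw-space
    perturbation whose propagated features look most legitimate."""
    X_adv = X_raw.copy()
    for i in np.where(y == 1)[0]:
        best = model.predict_proba(feature_engineer(X_raw[i:i + 1]))[0, 1]
        for _ in range(n_trials):
            cand = X_raw[i] + eps * rng.normal(size=X_raw.shape[1])
            score = model.predict_proba(feature_engineer(cand[None, :]))[0, 1]
            if score < best:                  # attacker minimizes fraud score
                X_adv[i], best = cand, score
    return X_adv

X_raw = rng.normal(size=(400, 4))
y = (X_raw[:, 0] + X_raw[:, 1] > 1).astype(int)
model = GradientBoostingClassifier().fit(feature_engineer(X_raw), y)

for _ in range(3):                            # adversarial training rounds
    X_adv = perturb_raw(X_raw, y, model)      # attacks crafted in raw space...
    model.fit(feature_engineer(np.vstack([X_raw, X_adv])),   # ...propagated
              np.concatenate([y, y]))                        # before retraining
```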

The GANfather: Controllable generation of malicious activity to improve defence systems

Jul 25, 2023
Ricardo Ribeiro Pereira, Jacopo Bono, João Tiago Ascensão, David Aparício, Pedro Ribeiro, Pedro Bizarro

Machine learning methods that aid defence systems in detecting malicious activity typically rely on labelled data. In some domains, such labelled data is unavailable or incomplete. In practice, this can lead to low detection rates and high false positive rates, as is the case, for example, in anti-money laundering systems. In fact, it is estimated that 1.7 to 4 trillion euros are laundered annually and go undetected. We propose The GANfather, a method to generate samples with properties of malicious activity, without label requirements. We reward the generation of malicious samples by introducing an extra objective into the typical Generative Adversarial Network (GAN) loss. Ultimately, our goal is to enhance the detection of illicit activity using the discriminator network as a novel and robust defence system. Optionally, we may encourage the generator to bypass pre-existing detection systems, a setup that reveals defensive weaknesses for the discriminator to correct. We evaluate our method in two real-world use cases, money laundering and recommendation systems. In the former, our method moves cumulative amounts close to 350 thousand dollars through a network of accounts without being detected by an existing system. In the latter, we recommend the target item to a broad user base with as few as 30 synthetic attackers. In both cases, we train a new defence system to capture the synthetic attacks.
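
A minimal sketch of this loss construction, assuming PyTorch and treating the discriminator, the malicious-activity objective, and the optional pre-existing detector as black-box callables; all names and weightings below are illustrative assumptions:

```python
# Hedged sketch of the loss construction, assuming PyTorch. The
# discriminator, the malicious-activity objective, and the optional
# pre-existing detector are treated as black-box callables; all names and
# weightings are illustrative assumptions.
import torch

def generator_loss(fake, discriminator, malicious_objective,
                   existing_detector=None, alpha=1.0, beta=1.0):
    # Standard non-saturating GAN term: fool the discriminator.
    adv = -torch.log(torch.sigmoid(discriminator(fake)) + 1e-8).mean()
    # Extra objective rewarding malicious properties, e.g. total money moved.
    loss = adv - alpha * malicious_objective(fake).mean()
    if existing_detector is not None:
        # Optional: also reward samples the deployed detector scores as benign.
        loss = loss + beta * existing_detector(fake).mean()
    return loss

# Toy wiring (shapes only; real networks are richer):
G, D = torch.nn.Linear(8, 16), torch.nn.Linear(16, 1)
amount = lambda x: x.abs().sum(dim=1)            # toy "money moved" objective
fake = G(torch.randn(32, 8))
generator_loss(fake, lambda s: D(s).squeeze(-1), amount).backward()
```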

From random-walks to graph-sprints: a low-latency node embedding framework on continuous-time dynamic graphs

Jul 18, 2023
Ahmad Naser Eddin, Jacopo Bono, David Aparício, Hugo Ferreira, João Ascensão, Pedro Ribeiro, Pedro Bizarro

Many real-world datasets have an underlying dynamic graph structure, where entities and their interactions evolve over time. Machine learning models should consider these dynamics in order to harness their full potential in downstream tasks. Previous approaches to graph representation learning have focused on either sampling k-hop neighborhoods, akin to breadth-first search, or random walks, akin to depth-first search. However, these methods are computationally expensive and unsuitable for real-time, low-latency inference on dynamic graphs. To overcome these limitations, we propose graph-sprints, a general-purpose feature extraction framework for continuous-time dynamic graphs (CTDGs) that has low latency and is competitive with state-of-the-art, higher-latency models. To achieve this, we propose a streaming, low-latency approximation of random-walk-based features. In our framework, time-aware node embeddings summarizing multi-hop information are computed using only single-hop operations on the incoming edges. We evaluate our approach on three open-source datasets and two in-house datasets, and compare it with three state-of-the-art algorithms (TGN-attn, TGN-ID, Jodie). We demonstrate that our graph-sprints features, combined with a machine learning classifier, achieve competitive performance, outperforming all baselines on the node classification task in five datasets. Simultaneously, graph-sprints significantly reduces inference latencies, achieving close to an order-of-magnitude speedup in our experimental setting.
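
The following toy sketch shows what such a single-hop streaming update could look like; the constants and mixing rule are illustrative assumptions, not the paper's exact update:

```python
# Hedged sketch of a graph-sprints-style streaming update. Node states are
# refreshed with single-hop operations on each incoming edge, with a time
# decay, so multi-hop random-walk information accumulates implicitly.
# Constants and the mixing rule below are illustrative assumptions.
import math
from collections import defaultdict
import numpy as np

DIM, HALF_LIFE = 16, 3600.0                  # embedding size, decay half-life (s)
state = defaultdict(lambda: np.zeros(DIM))   # per-node summary vector
last_seen = defaultdict(float)               # per-node last update time

def on_edge(src, dst, t, edge_feat):
    """Process one streaming edge (src -> dst) arriving at time t."""
    decay = math.exp(-math.log(2) * (t - last_seen[dst]) / HALF_LIFE)
    # Single-hop update: dst absorbs src's current summary plus the new edge
    # features; since src's state already encodes its own history, multi-hop
    # information propagates without ever walking the graph.
    state[dst] = decay * state[dst] + 0.5 * state[src] + edge_feat
    last_seen[dst] = t
    return state[dst]                        # low-latency, time-aware embedding

emb = on_edge("acct_A", "acct_B", t=1000.0, edge_feat=np.ones(DIM))
```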

* 9 pages, 5 figures, 7 tables 

Fairness-Aware Data Valuation for Supervised Learning

Mar 29, 2023
José Pombal, Pedro Saleiro, Mário A. T. Figueiredo, Pedro Bizarro

Data valuation is an ML field that studies the value of training instances for a given predictive task. Although data bias is one of the main sources of downstream model unfairness, previous work in data valuation does not consider how training instances may influence both the performance and the fairness of ML models. Thus, we propose Fairness-Aware Data valuatiOn (FADO), a data valuation framework that can be used to incorporate fairness concerns into a series of ML-related tasks (e.g., data pre-processing, exploratory data analysis, active learning). We propose an entropy-based data valuation metric suited to our two-pronged goal of maximizing both performance and fairness, and more computationally efficient than existing metrics. We then show how FADO can be applied as the basis for unfairness-mitigation pre-processing techniques. Our methods achieve promising results -- up to a 40 p.p. improvement in fairness at a less than 1 p.p. loss in performance compared to a baseline -- and promote fairness in a data-centric way, where a deeper understanding of data quality takes center stage.
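
Purely as an illustration of a two-pronged (performance plus fairness) instance value, a toy scoring function might look like the following; the combination rule and both proxies are assumptions for illustration, not FADO's actual entropy-based metric:

```python
# Toy illustration only: a two-pronged instance value mixing a performance
# proxy with an entropy-style fairness proxy. The combination rule and both
# proxies are assumptions for illustration, not FADO's actual metric.
import numpy as np

def data_value(p_correct, group, y, fairness_weight=0.5):
    """p_correct: model probability of each instance's true label;
    group, y: integer-coded protected group and label per instance."""
    # Performance prong: low-confidence (informative) instances are valuable.
    perf = -np.log(p_correct + 1e-8)
    # Fairness prong: up-weight instances from under-represented
    # (group, label) cells, an entropy-style balancing incentive.
    cells, counts = np.unique(np.stack([group, y]), axis=1, return_counts=True)
    freq = {tuple(c): n / len(y) for c, n in zip(cells.T, counts)}
    fair = np.array([-np.log(freq[(g, t)]) for g, t in zip(group, y)])
    return (1 - fairness_weight) * perf + fairness_weight * fair

rng = np.random.default_rng(0)
values = data_value(rng.uniform(0.5, 1.0, 100),
                    rng.integers(0, 2, 100), rng.integers(0, 2, 100))
```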

* ICLR 2023 Workshop Trustworthy ML 

Turning the Tables: Biased, Imbalanced, Dynamic Tabular Datasets for ML Evaluation

Nov 28, 2022
Sérgio Jesus, José Pombal, Duarte Alves, André Cruz, Pedro Saleiro, Rita P. Ribeiro, João Gama, Pedro Bizarro

Evaluating new techniques on realistic datasets plays a crucial role in the development of ML research and its broader adoption by practitioners. In recent years, there has been a significant increase in publicly available unstructured data resources for computer vision and NLP tasks. However, tabular data -- which is prevalent in many high-stakes domains -- has been lagging behind. To bridge this gap, we present Bank Account Fraud (BAF), the first publicly available, privacy-preserving, large-scale, realistic suite of tabular datasets. The suite was generated by applying state-of-the-art tabular data generation techniques to an anonymized, real-world bank account opening fraud detection dataset. This setting carries a set of challenges that are commonplace in real-world applications, including temporal dynamics and significant class imbalance. Additionally, to allow practitioners to stress test both the performance and fairness of ML methods, each dataset variant of BAF contains specific types of data bias. With this resource, we aim to provide the research community with a more realistic, complete, and robust test bed to evaluate novel and existing methods.
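
A hedged usage sketch for one dataset variant, assuming the file and column names of the public release ("Base.csv", the "fraud_bool" label, and a "month" column enabling the temporal split the suite encourages):

```python
# Hedged usage sketch for one BAF variant. "Base.csv", "fraud_bool" and
# "month" follow the public release of the suite; treat them as assumptions.
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import roc_auc_score

df = pd.read_csv("Base.csv")
X = pd.get_dummies(df.drop(columns=["fraud_bool", "month"]))
y, month = df["fraud_bool"], df["month"]
tr, te = month < 6, month >= 6                # temporal split, not random

model = HistGradientBoostingClassifier().fit(X[tr], y[tr])
print("test AUC:", roc_auc_score(y[te], model.predict_proba(X[te])[:, 1]))
```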

* Accepted at NeurIPS 2022. https://openreview.net/forum?id=UrAYT2QwOX8 

LaundroGraph: Self-Supervised Graph Representation Learning for Anti-Money Laundering

Oct 25, 2022
Mário Cardoso, Pedro Saleiro, Pedro Bizarro

Anti-money laundering (AML) regulations mandate that financial institutions deploy AML systems based on a set of rules that, when triggered, form the basis of a suspicious alert to be assessed by human analysts. Reviewing these cases is a cumbersome and complex task that requires analysts to navigate a large network of financial interactions to validate suspicious movements. Furthermore, these systems have very high false positive rates (estimated to be over 95%). The scarcity of labels hinders the use of alternative systems based on supervised learning, limiting their applicability in real-world settings. In this work, we present LaundroGraph, a novel self-supervised graph representation learning approach to encode banking customers and financial transactions into meaningful representations. These representations are used to provide insights that assist the AML reviewing process, such as identifying anomalous movements for a given customer. LaundroGraph represents the underlying network of financial interactions as a customer-transaction bipartite graph and trains a graph neural network on a fully self-supervised link prediction task. We empirically demonstrate that our approach outperforms other strong baselines on self-supervised link prediction using a real-world dataset, improving the best non-graph baseline by 12 p.p. of AUC. The goal is to increase the efficiency of the reviewing process by supplying these AI-powered insights to the analysts upon review. To the best of our knowledge, this is the first fully self-supervised system within the context of AML detection.
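
A minimal sketch of self-supervised link prediction on a customer-transaction bipartite graph, in the spirit of the approach; the real system uses a GNN encoder, whereas plain embedding tables are used here for brevity, and all sizes are invented:

```python
# Minimal self-supervised link-prediction sketch: learn customer and
# transaction embeddings on a bipartite graph by scoring observed edges
# against random negatives. The real system uses a GNN encoder; plain
# embedding tables are used here for brevity.
import torch

n_customers, n_transactions, dim = 1000, 5000, 32
cust = torch.nn.Embedding(n_customers, dim)
txn = torch.nn.Embedding(n_transactions, dim)
opt = torch.optim.Adam(list(cust.parameters()) + list(txn.parameters()), lr=1e-2)

cust_idx = torch.randint(0, n_customers, (20000,))    # observed (customer,
txn_idx = torch.randint(0, n_transactions, (20000,))  # transaction) edges

for step in range(100):
    neg_idx = torch.randint(0, n_transactions, (20000,))  # corrupted edges
    pos = (cust(cust_idx) * txn(txn_idx)).sum(-1)         # dot-product decoder
    neg = (cust(cust_idx) * txn(neg_idx)).sum(-1)
    # Observed edges should score higher than corrupted ones (BCE objective).
    loss = torch.nn.functional.binary_cross_entropy_with_logits(
        torch.cat([pos, neg]),
        torch.cat([torch.ones_like(pos), torch.zeros_like(neg)]))
    opt.zero_grad(); loss.backward(); opt.step()
```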

* Accepted at ACM International Conference on AI in Finance 2022 (ICAIF'22) 

FairGBM: Gradient Boosting with Fairness Constraints

Sep 19, 2022
André F Cruz, Catarina Belém, João Bravo, Pedro Saleiro, Pedro Bizarro

Machine Learning (ML) algorithms based on gradient boosted decision trees (GBDT) are still favored on many tabular data tasks across various mission-critical applications, from healthcare to finance. However, GBDT algorithms are not free from the risk of bias and discriminatory decision-making. Despite GBDT's popularity and the rapid pace of research in fair ML, existing in-processing fair ML methods are either inapplicable to GBDT, incur significant training time overhead, or are inadequate for problems with high class imbalance. We present FairGBM, a learning framework for training GBDT under fairness constraints with little to no impact on predictive performance when compared to unconstrained LightGBM. Since common fairness metrics are non-differentiable, we employ a "proxy-Lagrangian" formulation using smooth convex error rate proxies to enable gradient-based optimization. Additionally, our open-source implementation shows an order-of-magnitude speedup in training time when compared with related work, a pivotal aspect for fostering the widespread adoption of FairGBM by real-world practitioners.
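
To make the proxy-Lagrangian idea concrete, here is a toy version with a logistic model standing in for the GBDT (this is not FairGBM's implementation): the model descends on its loss plus a smooth cross-entropy proxy of the group false-positive-rate gap, while the Lagrange multiplier ascends on the true, non-differentiable gap.

```python
# Toy proxy-Lagrangian sketch (not FairGBM's implementation): a logistic
# model stands in for the GBDT. The model descends on a smooth proxy of the
# group FPR gap; the multiplier ascends on the true, non-differentiable gap.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
g = rng.integers(0, 2, 2000)                     # protected group
y = (X[:, 0] + 0.8 * g + rng.normal(scale=0.5, size=2000) > 0).astype(float)
m0, m1 = (g == 0) & (y == 0), (g == 1) & (y == 0)  # negatives per group

w, lam = np.zeros(5), 0.0
for _ in range(500):
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / len(y)                # plain cross-entropy gradient
    # Gradient of the smooth FPR proxy (-log(1 - p) on negatives) is p * x.
    grad += lam * (X[m0].T @ p[m0] / m0.sum() - X[m1].T @ p[m1] / m1.sum())
    w -= 0.1 * grad                              # descent step on the model
    fpr0, fpr1 = (p[m0] > 0.5).mean(), (p[m1] > 0.5).mean()
    lam += 0.5 * (fpr0 - fpr1)                   # ascent step on the multiplier
```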

Lightweight Automated Feature Monitoring for Data Streams

Jul 19, 2022
João Conde, Ricardo Moreira, João Torres, Pedro Cardoso, Hugo R. C. Ferreira, Marco O. P. Sampaio, João Tiago Ascensão, Pedro Bizarro

Monitoring the behavior of automated real-time stream processing systems has become one of the most relevant problems in real-world applications. Such systems have grown in complexity, relying heavily on high-dimensional input data and data-hungry Machine Learning (ML) algorithms. We propose a flexible system, Feature Monitoring (FM), that detects data drift in such datasets, with a small and constant memory footprint and a small computational cost in streaming applications. The method is based on a multivariate statistical test and is data-driven by design (full reference distributions are estimated from the data). It monitors all features that are used by the system, while providing an interpretable ranking of features whenever an alarm occurs (to aid in root cause analysis). The computational and memory lightness of the system results from the use of Exponential Moving Histograms. In our experimental study, we analyze the system's behavior with respect to its parameters and, more importantly, show examples where it detects problems that are not directly related to a single feature. This illustrates how FM eliminates the need to add custom signals to detect specific types of problems and shows that monitoring the available space of features is often enough.
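
A hedged sketch of the exponential-moving-histogram idea: per feature, keep a slow (reference) and a fast (recent) histogram whose bin counts decay exponentially, and alarm when their divergence exceeds a threshold. The bin edges, decay rates, and test statistic below are simplifications, not the paper's exact test.

```python
# Sketch of the exponential-moving-histogram idea behind FM. Bin edges,
# decay rates and the test statistic here are simplifying assumptions.
import numpy as np

class EMH:
    def __init__(self, edges, half_life):
        self.edges = edges
        self.decay = 0.5 ** (1.0 / half_life)    # per-event decay factor
        self.counts = np.zeros(len(edges) - 1)

    def update(self, x):
        self.counts *= self.decay                # forget old observations
        self.counts[np.clip(np.searchsorted(self.edges, x) - 1,
                            0, len(self.counts) - 1)] += 1.0

    def density(self):
        return (self.counts + 1e-9) / (self.counts.sum() + 1e-9 * len(self.counts))

def drift_score(ref, recent):
    p, q = ref.density(), recent.density()
    return 0.5 * np.abs(p - q).sum()             # total variation distance

edges = np.linspace(-5, 5, 21)
ref, recent = EMH(edges, half_life=10000), EMH(edges, half_life=500)
for x in np.random.default_rng(0).normal(size=5000):
    ref.update(x); recent.update(x)
print("drift score:", drift_score(ref, recent))  # stays small while no drift
```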

* 10 pages, 5 figures. AutoML, KDD22, August 14-17, 2022, Washington, DC, US 

Understanding Unfairness in Fraud Detection through Model and Data Bias Interactions

Jul 13, 2022
José Pombal, André F. Cruz, João Bravo, Pedro Saleiro, Mário A. T. Figueiredo, Pedro Bizarro

In recent years, machine learning algorithms have become ubiquitous in a multitude of high-stakes decision-making applications. The unparalleled ability of machine learning algorithms to learn patterns from data also enables them to incorporate biases embedded within it. A biased model can then make decisions that disproportionately harm certain groups in society -- limiting their access to financial services, for example. Awareness of this problem has given rise to the field of Fair ML, which focuses on studying, measuring, and mitigating unfairness in algorithmic predictions with respect to a set of protected groups (e.g., race or gender). However, the underlying causes of algorithmic unfairness remain elusive, with researchers divided between blaming the ML algorithms or the data they are trained on. In this work, we maintain that algorithmic unfairness stems from interactions between models and biases in the data, rather than from isolated contributions of either of them. To this end, we propose a taxonomy to characterize data bias, and we study a set of hypotheses regarding the fairness-accuracy trade-offs that fairness-blind ML algorithms exhibit under different data bias settings. On our real-world account-opening fraud use case, we find that each setting entails specific trade-offs, affecting fairness in expected value and variance -- the latter often going unnoticed. Moreover, we show how algorithms compare differently in terms of accuracy and fairness, depending on the biases affecting the data. Finally, we note that under specific data bias conditions, simple pre-processing interventions can successfully balance group-wise error rates, while the same techniques fail in more complex settings.
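
The following toy setup illustrates the kind of measurement this analysis relies on: inject a prevalence-disparity bias, then compare group-wise false positive rates of a fairness-blind model before and after a simple pre-processing intervention. The data-generating process is invented for illustration and is unrelated to the paper's dataset.

```python
# Invented toy setup for illustration: inject a prevalence-disparity bias,
# then compare group-wise FPRs of a fairness-blind model before and after a
# simple pre-processing intervention (group-balanced resampling).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20000
g = rng.integers(0, 2, n)                        # protected group
y = (rng.random(n) < np.where(g == 1, 0.10, 0.02)).astype(int)  # biased labels
X = np.column_stack([rng.normal(size=n) + y, g])

def group_fpr(model, X, y, g):
    pred = model.predict_proba(X)[:, 1] > 0.5
    return [float(pred[(g == grp) & (y == 0)].mean()) for grp in (0, 1)]

blind = LogisticRegression().fit(X, y)
print("FPR per group:", group_fpr(blind, X, y, g))

# Pre-processing: resample so all (group, label) cells are equally sized.
idx = np.concatenate([rng.choice(np.where((g == grp) & (y == lab))[0],
                                 2000, replace=True)
                      for grp in (0, 1) for lab in (0, 1)])
fixed = LogisticRegression().fit(X[idx], y[idx])
print("after resampling:", group_fpr(fixed, X, y, g))
```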

* KDD'22 Workshop on Machine Learning in Finance 