Laura Toni

AGAR: Attention Graph-RNN for Adaptative Motion Prediction of Point Clouds of Deformable Objects

Jul 19, 2023
Pedro Gomes, Silvia Rossi, Laura Toni

This paper focuses on motion prediction for point cloud sequences in the challenging case of deformable 3D objects, such as human body motion. First, we investigate the challenges caused by the deformable shapes and complex motions present in this type of representation, with the goal of understanding the technical limitations of state-of-the-art models. From this understanding, we propose an improved architecture for point cloud prediction of deformable 3D objects. Specifically, to handle deformable shapes, we propose a graph-based approach that learns and exploits the spatial structure of point clouds to extract more representative features. We then propose a module that combines the learned features in an adaptative manner according to the point cloud movements. The proposed adaptative module controls the composition of local and global motions for each point, enabling the network to model complex motions in deformable 3D objects more effectively. We evaluate the proposed method on the following datasets: MNIST moving digits, the Mixamo synthetic human-body motions, and the JPEG and CWIPC-SXR real-world dynamic-body datasets. Simulation results demonstrate that our method outperforms the current baseline methods thanks to its improved ability to model complex movements and preserve point cloud shape. Furthermore, we demonstrate the generalizability of the proposed framework for dynamic feature learning by applying it to action recognition on the MSRAction3D dataset, where it achieves results on par with state-of-the-art methods.
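
The adaptative module described above can be pictured, at a high level, as a learned per-point gate between local and global motion features. The sketch below is an illustrative assumption (the class name, feature sizes, and the attention scorer are not taken from the paper or its released code); it only shows the kind of gating the abstract describes.

```python
# Hypothetical sketch of a per-point adaptive gate between local and global
# motion features; layer choices and dimensions are illustrative only.
import torch
import torch.nn as nn

class AdaptiveMotionGate(nn.Module):
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # Attention-style scorer: looks at both feature sets for each point
        # and outputs a per-point mixing weight in [0, 1].
        self.score = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim),
            nn.ReLU(),
            nn.Linear(feat_dim, 1),
            nn.Sigmoid(),
        )

    def forward(self, local_feat: torch.Tensor, global_feat: torch.Tensor):
        # local_feat, global_feat: (batch, num_points, feat_dim)
        w = self.score(torch.cat([local_feat, global_feat], dim=-1))  # (B, N, 1)
        # Points dominated by local deformation lean on local features,
        # points following the overall rigid motion lean on global ones.
        return w * local_feat + (1.0 - w) * global_feat

# Example: two batches of 1024 points with 64-dimensional motion features.
gate = AdaptiveMotionGate(64)
mixed = gate(torch.randn(2, 1024, 64), torch.randn(2, 1024, 64))
print(mixed.shape)  # torch.Size([2, 1024, 64])
```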

Online Network Source Optimization with Graph-Kernel MAB

Jul 07, 2023
Laura Toni, Pascal Frossard

We propose Grab-UCB, a graph-kernel multi-armed bandit algorithm that learns online the optimal source placement in large-scale networks, such that the reward obtained from a priori unknown network processes is maximized. The uncertainty calls for online learning, which however suffers from the curse of dimensionality. To achieve sample efficiency, we describe the network processes with an adaptive graph dictionary model, which typically leads to sparse spectral representations. This enables a data-efficient learning framework whose learning rate scales with the dimension of the spectral representation model rather than the dimension of the network. We then propose an online sequential decision strategy that learns the parameters of the spectral representation while optimizing the action strategy, and derive performance guarantees that depend on network parameters, which in turn influence the learning curve of the sequential decision strategy. We also introduce a computationally simplified solving method, Grab-arm-Light, an algorithm that walks along the edges of the polytope representing the objective function. Simulation results show that the proposed online learning algorithm outperforms baseline offline methods that typically separate the learning phase from the testing one. The results confirm the theoretical findings and further highlight the gains of the proposed online learning strategy in terms of cumulative regret, sample efficiency, and computational complexity.
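
To make the overall loop concrete, the sketch below is a simplified LinUCB-style strategy over graph-spectral features, written as an assumption rather than the exact Grab-UCB algorithm: the feature map, dimensions, and confidence coefficient are illustrative, but the structure (estimate a low-dimensional spectral parameter, pick the placement with the highest upper-confidence score, update) mirrors the idea in the abstract.

```python
# Simplified upper-confidence loop over graph-spectral features (illustrative,
# not the exact Grab-UCB algorithm).
import numpy as np

def spectral_features(action, eigvecs, k):
    # Represent a candidate source placement by its first k graph-spectral
    # coefficients (graph Fourier transform of the placement indicator).
    return eigvecs[:, :k].T @ action  # shape (k,)

def ucb_source_placement(actions, eigvecs, reward_fn, k=10, horizon=200, beta=1.0):
    A = np.eye(k)        # regularized design matrix in the spectral domain
    b = np.zeros(k)      # reward-weighted feature sum
    for _ in range(horizon):
        theta = np.linalg.solve(A, b)      # current spectral estimate
        A_inv = np.linalg.inv(A)
        # Upper-confidence score for every candidate placement.
        scores = [x @ theta + beta * np.sqrt(x @ A_inv @ x)
                  for x in (spectral_features(a, eigvecs, k) for a in actions)]
        chosen = actions[int(np.argmax(scores))]
        r = reward_fn(chosen)              # observe the network reward
        x = spectral_features(chosen, eigvecs, k)
        A += np.outer(x, x)
        b += r * x
    return np.linalg.solve(A, b)           # learned spectral parameters
```

The key point is that estimation happens in the k-dimensional spectral representation, so the regret and the computation scale with k rather than with the size of the network.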

MiDi: Mixed Graph and 3D Denoising Diffusion for Molecule Generation

Feb 17, 2023
Clement Vignac, Nagham Osman, Laura Toni, Pascal Frossard

This work introduces MiDi, a diffusion model for jointly generating molecular graphs and corresponding 3D conformers. In contrast to existing models, which derive molecular bonds from the conformation using predefined rules, MiDi streamlines the molecule generation process with an end-to-end differentiable model. Experimental results demonstrate the benefits of this approach: on the complex GEOM-DRUGS dataset, our model generates significantly better molecular graphs than 3D-based models and even surpasses specialized algorithms that directly optimize the bond orders for validity. Our code is available at github.com/cvignac/MiDi.

* 13 pages. Preprint 

Learning Algorithm Generalization Error Bounds via Auxiliary Distributions

Oct 02, 2022
Gholamali Aminian, Saeed Masiha, Laura Toni, Miguel R. D. Rodrigues

Generalization error bounds are essential for understanding how well machine learning models generalize. In this work, we propose a novel method, the Auxiliary Distribution Method, which derives new upper bounds on the generalization error suited to supervised learning scenarios. We show that our general upper bounds can be specialized, under some conditions, to new bounds involving the generalized $\alpha$-Jensen-Shannon and $\alpha$-R\'enyi ($0< \alpha < 1$) information between a random variable modeling the set of training samples and another random variable modeling the set of hypotheses. Our upper bounds based on the generalized $\alpha$-Jensen-Shannon information are also finite. Additionally, we demonstrate how our auxiliary distribution method can be used to derive upper bounds on the generalization error under the distribution-mismatch scenario in supervised learning algorithms, where the distributional mismatch is modeled by the $\alpha$-Jensen-Shannon or $\alpha$-R\'enyi ($0< \alpha < 1$) divergence between the distributions of the test and training data samples. We also outline the circumstances in which our proposed upper bounds can be tighter than earlier upper bounds.
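
For reference, the two information measures named above are built from the following divergences (standard definitions recalled here; the exact skewing convention and constants used in the paper may differ):

$$ D^{\alpha}_{\mathrm{JS}}(P\,\|\,Q) = \alpha\, D_{\mathrm{KL}}\big(P \,\|\, \alpha P + (1-\alpha)Q\big) + (1-\alpha)\, D_{\mathrm{KL}}\big(Q \,\|\, \alpha P + (1-\alpha)Q\big), $$

$$ D_{\alpha}(P\,\|\,Q) = \frac{1}{\alpha-1}\,\log \mathbb{E}_{Q}\!\left[\Big(\tfrac{\mathrm{d}P}{\mathrm{d}Q}\Big)^{\alpha}\right], \qquad 0<\alpha<1. $$

The first quantity is bounded for any pair of distributions, which is consistent with the finiteness of the $\alpha$-Jensen-Shannon-based bounds mentioned above.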

Semi-Counterfactual Risk Minimization Via Neural Networks

Sep 28, 2022
Gholamali Aminian, Roberto Vega, Omar Rivasplata, Laura Toni, Miguel Rodrigues

Counterfactual risk minimization is a framework for offline policy optimization with logged data consisting of a context, action, propensity score, and reward for each sample point. In this work, we build on this framework and propose a learning method for settings where the rewards of some samples are not observed, so that the logged data consist of a subset of samples with unknown rewards and a subset of samples with known rewards. This setting arises in many application domains, including advertising and healthcare. While reward feedback is missing for some samples, it is possible to leverage the unknown-reward samples in order to minimize the risk, and we refer to this setting as semi-counterfactual risk minimization. To approach this learning problem, we derive new upper bounds on the true risk under the inverse propensity score estimator. We then build upon these bounds to propose a regularized counterfactual risk minimization method in which the regularization term is based on the logged unknown-reward dataset only, and is hence reward-independent. We also propose another algorithm based on generating pseudo-rewards for the logged unknown-reward dataset. Experimental results with neural networks and benchmark datasets indicate that these algorithms can leverage the logged unknown-reward dataset in addition to the logged known-reward dataset.
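
As an illustration of the kind of objective this leads to, the sketch below shows an inverse-propensity-score (IPS) loss regularized by a term computed only on the logged samples with missing rewards, so the regularizer is reward-independent. The specific regularizer used here (keeping the importance weights on unknown-reward samples close to one) is an assumption for illustration, not necessarily the paper's exact choice.

```python
# Illustrative semi-counterfactual objective: IPS risk on known-reward samples
# plus a reward-independent regularizer on unknown-reward samples.
import torch

def semi_crm_loss(policy, known, unknown, lam=0.1):
    # known:   (contexts, actions, propensities, rewards), rewards observed
    # unknown: (contexts, actions, propensities), rewards missing
    x, a, p, r = known
    pi = policy(x).gather(1, a.unsqueeze(1)).squeeze(1)      # pi(a | x)
    ips_risk = (-r * pi / p).mean()                          # IPS estimate of negative reward

    xu, au, pu = unknown
    pi_u = policy(xu).gather(1, au.unsqueeze(1)).squeeze(1)
    # Reward-independent regularizer: keep importance weights on the
    # unknown-reward samples close to 1, i.e. stay near the logging policy.
    reg = ((pi_u / pu - 1.0) ** 2).mean()
    return ips_risk + lam * reg
```

Here `policy(x)` is assumed to return a probability distribution over actions for each context, e.g. the softmax output of a neural network.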

* Accepted in EWRL 2022 

An Information-theoretical Approach to Semi-supervised Learning under Covariate-shift

Feb 24, 2022
Gholamali Aminian, Mahed Abroshan, Mohammad Mahdi Khalili, Laura Toni, Miguel R. D. Rodrigues

A common assumption in semi-supervised learning is that the labeled, unlabeled, and test data are drawn from the same distribution. However, this assumption is not satisfied in many applications. In many scenarios, the data are collected sequentially (e.g., in healthcare) and their distribution may change over time, often exhibiting so-called covariate shifts. In this paper, we propose an approach for semi-supervised learning algorithms that is capable of addressing this issue. Our framework also recovers some popular methods, including entropy minimization and pseudo-labeling. We provide new information-theoretic generalization error upper bounds inspired by our novel framework. Our bounds are applicable to both general semi-supervised learning and the covariate-shift scenario. Finally, we show numerically that our method outperforms previous approaches proposed for semi-supervised learning under covariate shift.
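
One special case recovered by the framework is the familiar combination of a supervised loss on labeled data with entropy minimization on unlabeled data. The sketch below shows only that special case; the weighting and the absence of any covariate-shift correction are simplifying assumptions, and the paper's information-theoretic formulation is more general.

```python
# Minimal semi-supervised loss: cross-entropy on labeled data plus entropy
# minimization on unlabeled data (one special case of the general framework).
import torch
import torch.nn.functional as F

def ssl_loss(model, x_lab, y_lab, x_unlab, weight=0.5):
    sup = F.cross_entropy(model(x_lab), y_lab)

    probs = F.softmax(model(x_unlab), dim=-1)
    # Entropy minimization: push unlabeled predictions towards confident,
    # low-entropy distributions.
    ent = -(probs * torch.log(probs.clamp_min(1e-8))).sum(dim=-1).mean()
    return sup + weight * ent
```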

* Accepted at AISTATS 2022 

Characterizing and Understanding the Generalization Error of Transfer Learning with Gibbs Algorithm

Nov 02, 2021
Yuheng Bu, Gholamali Aminian, Laura Toni, Miguel Rodrigues, Gregory Wornell

We provide an information-theoretic analysis of the generalization ability of Gibbs-based transfer learning algorithms by focusing on two popular transfer learning approaches, $\alpha$-weighted-ERM and two-stage-ERM. Our key result is an exact characterization of the generalization behaviour using the conditional symmetrized KL information between the output hypothesis and the target training samples given the source samples. Our results can also be applied to provide novel distribution-free generalization error upper bounds on these two aforementioned Gibbs algorithms. Our approach is versatile, as it also characterizes the generalization errors and excess risks of these two Gibbs algorithms in the asymptotic regime, where they converge to the $\alpha$-weighted-ERM and two-stage-ERM, respectively. Based on our theoretical results, we show that the benefits of transfer learning can be viewed as a bias-variance trade-off, with the bias induced by the source distribution and the variance induced by the lack of target samples. We believe this viewpoint can guide the choice of transfer learning algorithms in practice.
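
To fix notation for readers unfamiliar with the setting (the symbols below are assumptions made for this summary, not copied from the paper): with target samples $S_T$, source samples $S_S$, prior $\pi$, inverse temperature $\gamma$, and empirical risks $\hat{L}_{S_T}$, $\hat{L}_{S_S}$, the $\alpha$-weighted-ERM Gibbs algorithm draws the output hypothesis $W$ from a posterior of the form

$$ P_{W \mid S_T, S_S}(w) \;\propto\; \pi(w)\,\exp\!\Big(-\gamma\big[\alpha\,\hat{L}_{S_T}(w) + (1-\alpha)\,\hat{L}_{S_S}(w)\big]\Big), $$

and the exact characterization mentioned above expresses its expected generalization error through the conditional symmetrized KL information $I_{\mathrm{SKL}}(W; S_T \mid S_S)$ between the output hypothesis and the target samples given the source samples.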

Characterizing the Generalization Error of Gibbs Algorithm with Symmetrized KL information

Jul 28, 2021
Gholamali Aminian, Yuheng Bu, Laura Toni, Miguel R. D. Rodrigues, Gregory Wornell

Bounding the generalization error of a supervised learning algorithm is one of the most important problems in learning theory, and various approaches have been developed. However, existing bounds are often loose and lack guarantees; as a result, they may fail to characterize the exact generalization ability of a learning algorithm. Our main contribution is an exact characterization of the expected generalization error of the well-known Gibbs algorithm in terms of the symmetrized KL information between the input training samples and the output hypothesis. Such a result can be applied to tighten existing expected generalization error bounds. Our analysis provides more insight into the fundamental role the symmetrized KL information plays in controlling the generalization error of the Gibbs algorithm.
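
Stated slightly more explicitly (notation reconstructed here, so treat it as indicative rather than verbatim): for the Gibbs posterior $P_{W \mid S}(w) \propto \pi(w)\, e^{-\gamma \hat{L}_S(w)}$ with prior $\pi$ and inverse temperature $\gamma$, the expected generalization error equals

$$ \overline{\mathrm{gen}} \;=\; \frac{I_{\mathrm{SKL}}(W; S)}{\gamma}, \qquad I_{\mathrm{SKL}}(W; S) \;=\; D_{\mathrm{KL}}\big(P_{W,S}\,\|\,P_W \otimes P_S\big) + D_{\mathrm{KL}}\big(P_W \otimes P_S\,\|\,P_{W,S}\big), $$

so the symmetrized KL information between the training samples and the output hypothesis controls the generalization error exactly, not merely as an upper bound.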

* The first and second authors contributed equally to this paper. Accepted at the ICML-21 Workshop on Information-Theoretic Methods for Rigorous, Responsible, and Reliable Machine Learning: https://sites.google.com/view/itr3/schedule 

Spatio-temporal Graph-RNN for Point Cloud Prediction

Feb 22, 2021
Pedro Gomes, Silvia Rossi, Laura Toni

In this paper, we propose an end-to-end learning network to predict future frames in a point cloud sequence. As the main novelty, an initial layer learns topological information of point clouds as geometric features, in order to form representative spatio-temporal neighborhoods. This module is followed by multiple Graph-RNN cells. Each cell learns point dynamics (i.e., RNN states) by processing each point jointly with its spatio-temporal neighbouring points. We tested the network performance on the MNIST dataset of moving digits, a synthetic human-body motion dataset, and the JPEG dynamic bodies dataset. Simulation results demonstrate that our method outperforms baselines that neglect geometric feature information.
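
The spatio-temporal neighborhoods mentioned above can be pictured as a k-nearest-neighbour search in a learned feature space that spans consecutive frames. The sketch below is an assumed illustration (not the authors' code): the feature dimensions, distance metric, and two-frame pooling are placeholders for the idea, not the paper's exact construction.

```python
# Illustrative spatio-temporal k-NN: each point in the current frame is linked
# to its nearest neighbours drawn from both the current and the previous frame.
import torch

def knn_spatio_temporal(feats_t, feats_prev, k=8):
    # feats_t, feats_prev: (num_points, feat_dim) geometric features of the
    # current and previous point cloud frames.
    pool = torch.cat([feats_t, feats_prev], dim=0)       # candidate neighbours
    dists = torch.cdist(feats_t, pool)                    # (N, 2N) pairwise distances
    # Indices < num_points refer to the current frame, the rest to the previous one.
    return dists.topk(k, largest=False).indices           # (N, k) neighbour indices

# Example: 1024 points with 32-dimensional geometric features per frame.
idx = knn_spatio_temporal(torch.randn(1024, 32), torch.randn(1024, 32))
print(idx.shape)  # torch.Size([1024, 8])
```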
