Alexander Kravberg
Elastic Context: Encoding Elasticity for Data-driven Models of Textiles

Sep 19, 2022
Alberta Longhini, Marco Moletta, Alfredo Reichlin, Michael C. Welle, Alexander Kravberg, Yufei Wang, David Held, Zackory Erickson, Danica Kragic


Physical interaction with textiles, such as assistive dressing, relies on advanced dexterous capabilities. The complexity of textile behavior under pulling and stretching stems from both the yarn material properties and the textile construction technique. Today, there are no commonly adopted, annotated datasets on which the various interaction or property-identification methods can be assessed. One important property affecting interaction is material elasticity, which results from both the yarn material and the construction technique: the two are intertwined and, if not known a priori, almost impossible to identify through the sensing commonly available on robotic platforms. We introduce Elastic Context (EC), a concept that integrates the various properties affecting elastic behavior to enable more effective physical interaction with textiles. The definition of EC relies on stress/strain curves commonly used in textile engineering, which we reformulate for robotic applications. We employ EC with Graph Neural Networks (GNNs) to learn generalized elastic behaviors of textiles. Furthermore, we explore the effect the dimension of the EC has on accurate force modeling of non-linear real-world elastic behaviors, highlighting the challenges current robotic setups face in sensing textile properties.
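One way to picture an Elastic Context of a given dimension is as a fixed-size sampling of a textile's stress/strain curve. The sketch below is an illustrative assumption, not the paper's exact parameterization: the function name `elastic_context`, the sampling scheme, and the toy stiffening curve are all hypothetical.

```python
import numpy as np

def elastic_context(stress_fn, strain_max=0.5, dim=4):
    """Sample a stress/strain curve at `dim` evenly spaced strain values
    to form a fixed-size context vector (illustrative sketch only;
    the paper's EC definition may differ)."""
    strains = np.linspace(0.0, strain_max, dim + 1)[1:]  # skip zero strain
    return np.array([stress_fn(e) for e in strains])

# Hypothetical non-linear elastic response of a stiffening textile:
# stress grows faster than linearly as strain increases.
stiffening = lambda e: 2.0 * e + 10.0 * e ** 3

ec = elastic_context(stiffening, strain_max=0.5, dim=4)
```

Varying `dim` here mirrors the paper's question of how the EC dimension affects how faithfully a non-linear elastic response can be captured: a 1-dimensional EC collapses the curve to a single stiffness number, while higher dimensions preserve its curvature.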


Active Nearest Neighbor Regression Through Delaunay Refinement

Jun 16, 2022
Alexander Kravberg, Giovanni Luca Marchetti, Vladislav Polianskii, Anastasiia Varava, Florian T. Pokorny, Danica Kragic


We introduce an algorithm for active function approximation based on nearest neighbor regression. Our Active Nearest Neighbor Regressor (ANNR) relies on the Voronoi-Delaunay framework from computational geometry to subdivide the space into cells with constant estimated function value and select novel query points in a way that takes the geometry of the function graph into account. We consider the recent state-of-the-art active function approximator called DEFER, which is based on incremental rectangular partitioning of the space, as the main baseline. The ANNR addresses a number of limitations that arise from the space subdivision strategy used in DEFER. We provide a computationally efficient implementation of our method, as well as theoretical halting guarantees. Empirical results show that ANNR outperforms the baseline for both closed-form functions and real-world examples, such as gravitational wave parameter inference and exploration of the latent space of a generative model.
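The core idea of selecting queries according to the geometry of the function graph can be illustrated in one dimension, where Delaunay cells reduce to intervals between queried points. The sketch below is a deliberate simplification of ANNR, not the paper's algorithm: it scores each interval by the arc length of the corresponding graph segment, so steep or rapidly varying regions are refined first.

```python
import numpy as np

def annr_1d(f, a, b, n_queries):
    """1-D sketch of the ANNR idea (illustrative simplification).
    In one dimension the Delaunay cells are just the intervals between
    queried points; each interval is scored by the length of the matching
    segment of the function graph, and the highest-scoring interval is
    split at its midpoint."""
    xs = [a, b]
    ys = [f(a), f(b)]
    for _ in range(n_queries):
        order = np.argsort(xs)
        x = np.array(xs)[order]
        y = np.array(ys)[order]
        # Graph-space length of each interval: sqrt(dx^2 + dy^2),
        # so flat regions score low and steep regions score high.
        scores = np.hypot(np.diff(x), np.diff(y))
        i = int(np.argmax(scores))
        mid = 0.5 * (x[i] + x[i + 1])
        xs.append(mid)
        ys.append(f(mid))
    return np.sort(np.array(xs))

# Queries concentrate around the steep transition of tanh near zero.
pts = annr_1d(lambda t: np.tanh(10 * t), -1.0, 1.0, 20)
```

A uniform-grid or purely domain-volume-based subdivision (as in rectangle-partitioning baselines) would spread these 20 queries evenly; accounting for the graph geometry instead spends the budget where the function actually changes.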

* Accepted at the International Conference on Machine Learning (ICML) 2022 