Abstract: Three-dimensional point clouds provide highly accurate digital representations of objects, essential for applications in computer graphics, photogrammetry, computer vision, and robotics. However, comparing point clouds faces significant challenges due to their unstructured nature and the complex geometry of the surfaces they represent. Traditional geometric metrics such as the Hausdorff and Chamfer distances often fail to capture global statistical structure and are sensitive to outliers, while existing Kullback-Leibler (KL) divergence approximations for Gaussian Mixture Models can produce unbounded or numerically unstable values. This paper introduces an information geometric framework for 3D point cloud shape analysis by representing point clouds as Gaussian Mixture Models (GMMs) on a statistical manifold. We prove that the space of GMMs forms a statistical manifold and propose the Modified Symmetric Kullback-Leibler (MSKL) divergence with theoretically guaranteed upper and lower bounds, ensuring numerical stability for all GMM comparisons. Through comprehensive experiments on human pose discrimination (MPI-FAUST dataset) and animal shape comparison (G-PCD dataset), we demonstrate that MSKL provides stable and monotonically varying values that directly reflect geometric variation, outperforming traditional distances and existing KL approximations.
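As a rough illustration of the pipeline summarized above, representing each point cloud as a GMM and comparing the fitted models with a symmetrized KL-type divergence, the following Python sketch fits scikit-learn GaussianMixture models and estimates the plain symmetric KL divergence by Monte Carlo sampling. The function names, component count, and sample size are illustrative assumptions; the sketch does not implement the bounded MSKL divergence proposed in the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm(points, n_components=8, seed=0):
    """Fit a full-covariance GMM to an (N, 3) point cloud.
    Component count is an illustrative choice, not the paper's setting."""
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full",
                          random_state=seed)
    gmm.fit(points)
    return gmm

def symmetric_kl_mc(gmm_p, gmm_q, n_samples=10000):
    """Monte Carlo estimate of 0.5 * (KL(p||q) + KL(q||p)) between two
    fitted GMMs.  This is the plain symmetric KL, not the bounded MSKL."""
    xp, _ = gmm_p.sample(n_samples)
    xq, _ = gmm_q.sample(n_samples)
    kl_pq = np.mean(gmm_p.score_samples(xp) - gmm_q.score_samples(xp))
    kl_qp = np.mean(gmm_q.score_samples(xq) - gmm_p.score_samples(xq))
    return 0.5 * (kl_pq + kl_qp)

# Example on two synthetic point clouds (hypothetical data).
rng = np.random.default_rng(0)
cloud_a = rng.normal(size=(2000, 3))
cloud_b = rng.normal(loc=0.5, size=(2000, 3))
d = symmetric_kl_mc(fit_gmm(cloud_a), fit_gmm(cloud_b))
print(f"symmetric KL (MC estimate): {d:.4f}")
```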
Abstract: In this paper, a distance between Gaussian Mixture Models (GMMs) is obtained from an embedding of the K-component GMM into the manifold of symmetric positive definite matrices. We prove that K-component GMMs embed into the manifold of symmetric positive definite matrices and that the image is a submanifold. We then prove that the manifold of GMMs, equipped with the pullback of the induced metric, is isometric to this submanifold with the induced metric. Through this embedding we obtain a general lower bound for the Fisher-Rao metric. This lower bound is a distance measure on the manifold of GMMs, and we employ it as a similarity measure between GMMs. The effectiveness of this framework is demonstrated through experiments on standard machine learning benchmarks, achieving accuracies of 98%, 92%, and 93.33% on the UIUC, KTH-TIPS, and UMD texture recognition datasets, respectively.
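One way such an embedding can be sketched is via the well-known Lovrić-style map of a single Gaussian N(mu, Sigma) on R^n to the SPD(n+1) matrix [[Sigma + mu mu^T, mu], [mu^T, 1]], combined with the affine-invariant Riemannian distance on SPD matrices. The Python sketch below uses these assumed ingredients; the block-diagonal assembly of K weighted components and all function names are illustrative assumptions, and the paper's actual embedding and Fisher-Rao lower bound may differ.

```python
import numpy as np
from scipy.linalg import eigh

def embed_gaussian(mu, sigma):
    """Embed N(mu, sigma) on R^n into SPD(n+1) via the (assumed)
    Lovric-style map  [[sigma + mu mu^T, mu], [mu^T, 1]]."""
    mu = np.asarray(mu, dtype=float).reshape(-1, 1)
    top = np.hstack([sigma + mu @ mu.T, mu])
    bottom = np.hstack([mu.T, np.ones((1, 1))])
    return np.vstack([top, bottom])

def affine_invariant_distance(P, Q):
    """Affine-invariant Riemannian distance on the SPD manifold:
    sqrt(sum_i log^2 lambda_i), with lambda_i the generalized
    eigenvalues of (Q, P); equals ||log(P^{-1/2} Q P^{-1/2})||_F."""
    eigvals = eigh(Q, P, eigvals_only=True)
    return np.sqrt(np.sum(np.log(eigvals) ** 2))

def gmm_block_embedding(weights, means, covs):
    """Assemble a block-diagonal SPD matrix from K weighted components.
    This combination rule is an illustrative choice, not necessarily
    the construction used in the paper."""
    blocks = [w * embed_gaussian(m, c)
              for w, m, c in zip(weights, means, covs)]
    n = blocks[0].shape[0]
    out = np.zeros((len(blocks) * n, len(blocks) * n))
    for i, b in enumerate(blocks):
        out[i * n:(i + 1) * n, i * n:(i + 1) * n] = b
    return out
```

With component parameters taken from two fitted GMMs (matched component-wise), `affine_invariant_distance(gmm_block_embedding(...), gmm_block_embedding(...))` gives an SPD-manifold distance between the embedded models under these assumptions.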