
Jinsheng Sun

AU-PD: An Arbitrary-size and Uniform Downsampling Framework for Point Clouds

Nov 02, 2022
Peng Zhang, Ruoyin Xie, Jinsheng Sun, Weiqing Li, Zhiyong Su

Figures 1–4 for AU-PD: An Arbitrary-size and Uniform Downsampling Framework for Point Clouds

Point cloud downsampling is a crucial pre-processing operation that reduces the number of points in a point cloud to lower computational cost and communication load, among other benefits. Recent research on point cloud downsampling has achieved great success by learning to sample in a task-aware way. However, existing learnable samplers cannot perform arbitrary-size sampling directly, and their sampled results often contain many overlapping points. In this paper, we introduce AU-PD, a novel task-aware sampling framework that directly downsamples a point cloud to any smaller size based on a sample-to-refine strategy. Given an arbitrary target size, we first perform task-agnostic pre-sampling of the input point cloud. We then refine the pre-sampled set to make it task-aware, driven by downstream task losses. The refinement is realized by adding to each pre-sampled point a small offset predicted by point-wise multi-layer perceptrons (MLPs). In this way, the sampled set stays close to the original distribution and therefore contains fewer overlapping points. With the attention mechanism and a proper training scheme, the framework learns to adaptively refine pre-sampled sets of different sizes. We evaluate the sampled results on classification and registration tasks. The proposed AU-PD achieves downstream performance competitive with the state-of-the-art method while being more flexible and producing fewer overlapping points in the sampled set. The source code will be publicly available at https://zhiyongsu.github.io/Project/AUPD.html.
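The sample-to-refine strategy described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: farthest point sampling is assumed as the task-agnostic pre-sampler, and `offset_net` is a hypothetical stand-in for the trained point-wise MLP that would predict the small refinement offsets.

```python
import numpy as np

def farthest_point_sampling(points, m, seed=0):
    """Task-agnostic pre-sampling: greedily pick m points that spread
    over the cloud (FPS). Works for any arbitrary target size m."""
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    chosen = [int(rng.integers(n))]
    dist = np.full(n, np.inf)
    for _ in range(m - 1):
        # Distance from every point to the nearest already-chosen point.
        dist = np.minimum(dist, np.linalg.norm(points - points[chosen[-1]], axis=1))
        chosen.append(int(np.argmax(dist)))
    return points[chosen]

def refine(pre_sampled, offset_net):
    """Task-aware refinement: add a small per-point offset predicted by a
    network, so the refined set stays close to the pre-sampled distribution."""
    return pre_sampled + offset_net(pre_sampled)

# Toy usage: downsample 1024 points to an arbitrary size of 300.
cloud = np.random.default_rng(1).random((1024, 3))
pre = farthest_point_sampling(cloud, 300)
# Stand-in for the learned point-wise MLP; a real net is trained
# against downstream task losses.
sampled = refine(pre, lambda p: 0.01 * np.tanh(p))
```

Because the offsets are small, the refined set differs only slightly from the pre-sampled one, which is why the distribution (and hence the overlap behavior) stays close to the uniform pre-sampling.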


One-way Hash Function Based on Neural Network

Jul 27, 2007
Shiguo Lian, Jinsheng Sun, Zhiquan Wang

Figures 1–4 for One-way Hash Function Based on Neural Network

A hash function is constructed based on a three-layer neural network. The three neuron layers realize data confusion, diffusion, and compression, respectively, and a multi-block hash mode is presented to support variable-length plaintext. Theoretical analysis and experimental results show that this hash function is one-way, has high key sensitivity and plaintext sensitivity, and is secure against birthday attacks and meet-in-the-middle attacks. Additionally, the neural network's structure makes it practical to realize in a parallel way. These properties make it a suitable choice for data signature or authentication.
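The three-layer structure and multi-block chaining can be sketched as a toy Python model. This is an illustrative assumption, not the paper's construction and not cryptographically secure: the layer widths, padding rule, and per-block key schedule are invented for the sketch; only the shape (confusion layer, diffusion layer, compression layer, chained over fixed-size blocks) follows the abstract.

```python
import numpy as np

BLOCK = 16  # bytes per plaintext block (illustrative)

def _layer(x, w, b):
    # One neuron layer: affine map followed by a nonlinear activation.
    return np.tanh(w @ x + b)

def hash_block(block, state, key_rng):
    """Three-layer pass over one block: confusion, diffusion, then
    compression down to the chained 8-value state."""
    x = np.concatenate([block, state])
    w1 = key_rng.standard_normal((32, x.size)); b1 = key_rng.standard_normal(32)
    w2 = key_rng.standard_normal((32, 32));     b2 = key_rng.standard_normal(32)
    w3 = key_rng.standard_normal((8, 32));      b3 = key_rng.standard_normal(8)
    return _layer(_layer(_layer(x, w1, b1), w2, b2), w3, b3)

def nn_hash(message: bytes, key: int) -> bytes:
    """Multi-block mode: pad to whole blocks, then chain each block's
    output as the next block's input state (supports any length)."""
    padded = message + b"\x80" + b"\x00" * (-(len(message) + 1) % BLOCK)
    state = np.zeros(8)
    for i in range(0, len(padded), BLOCK):
        blk = np.frombuffer(padded[i:i + BLOCK], dtype=np.uint8) / 255.0
        state = hash_block(blk, state, np.random.default_rng(key))
        key += 1  # vary the key-derived weights per block (illustrative)
    return bytes((127.5 * (state + 1)).astype(np.uint8))
```

Because each block is an independent three-layer pass whose weights depend only on the key and block index, the per-block computation is what lends itself to parallel realization; the chaining step is the sequential part.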

* 7 pages, 5 figures, submitted 