Leilei Sun

Self-optimizing Feature Generation via Categorical Hashing Representation and Hierarchical Reinforcement Crossing

Sep 14, 2023
Wangyang Ying, Dongjie Wang, Kunpeng Liu, Leilei Sun, Yanjie Fu

Feature generation aims to generate new and meaningful features to create a discriminative representation space. A generated feature is meaningful when it comes from a feature pair with inherent feature interaction. In the real world, experienced data scientists can identify potentially useful feature-feature interactions and generate meaningful dimensions from an exponentially large search space, in an optimal crossing form over an optimal generation path. Machines, however, have limited human-like abilities. We generalize such learning tasks as self-optimizing feature generation, which imposes several under-addressed challenges on existing systems: meaningful, robust, and efficient generation. To tackle these challenges, we propose a principled and generic representation-crossing framework. To achieve hashing representation, we propose a three-step approach: feature discretization, feature hashing, and descriptive summarization. To achieve reinforcement crossing, we develop a hierarchical reinforcement feature crossing approach. We present extensive experimental results to demonstrate the effectiveness and efficiency of the proposed method. The code is available at https://github.com/yingwangyang/HRC_feature_cross.git.
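The three-step hashing representation described in the abstract can be illustrated with a minimal sketch: discretize a continuous column into bin tokens, hash the tokens into a fixed-size count vector, then summarize that vector with descriptive statistics. The bin count, hash dimension, and choice of statistics below are illustrative assumptions, not the paper's exact design.

```python
import hashlib

import numpy as np


def discretize(col, n_bins=8):
    # Step 1: feature discretization -- map continuous values to bin tokens.
    edges = np.quantile(col, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(col, edges)


def hash_feature(tokens, dim=16):
    # Step 2: feature hashing -- bucket bin tokens into a fixed-size count vector.
    vec = np.zeros(dim)
    for t in tokens:
        h = int(hashlib.md5(str(t).encode()).hexdigest(), 16)
        vec[h % dim] += 1
    return vec


def summarize(vec):
    # Step 3: descriptive summarization -- compress the hashed vector into
    # a small, fixed-length state (statistics chosen here are illustrative).
    return np.array([vec.mean(), vec.std(), vec.max(), vec.min()])


rng = np.random.default_rng(0)
col = rng.normal(size=200)
state = summarize(hash_feature(discretize(col)))
print(state.shape)  # (4,)
```

The point of the pipeline is that any feature, whatever its raw cardinality, ends up as the same small fixed-length state, which a reinforcement agent can then consume when choosing crossing actions.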

Adaptive Taxonomy Learning and Historical Patterns Modelling for Patent Classification

Aug 10, 2023
Tao Zou, Le Yu, Leilei Sun, Bowen Du, Deqing Wang, Fuzhen Zhuang

Patent classification aims to assign multiple International Patent Classification (IPC) codes to a given patent. Recent methods for automatically classifying patents mainly focus on analyzing their text descriptions. However, apart from the texts, each patent is also associated with assignees, and knowledge of their previously applied patents is often valuable for classification. Furthermore, the hierarchical taxonomy formulated by the IPC system provides important contextual information and enables models to leverage the correlations between IPC codes for more accurate classification. Existing methods fail to incorporate these aspects. In this paper, we propose an integrated framework that comprehensively considers this information for patent classification. Specifically, we first present an IPC code correlations learning module to derive their semantic representations by adaptively passing and aggregating messages within the same level and across different levels of the hierarchical taxonomy. Moreover, we design a historical application patterns learning component that incorporates the corresponding assignee's previous patents through a dual-channel aggregation mechanism. Finally, we combine the contextual information of patent texts, which contains the semantics of IPC codes, with assignees' sequential preferences to make predictions. Experiments on real-world datasets demonstrate the superiority of our approach over existing methods. We also show the model's ability to capture the temporal patterns of assignees and the semantic dependencies among IPC codes.
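The within-level and cross-level message passing along the taxonomy can be sketched as follows. The toy taxonomy, the mixing weight `alpha`, and the mean aggregation are all assumptions for illustration; the paper's module uses learned, adaptive aggregation.

```python
import numpy as np

# Toy IPC-like taxonomy (parent -> children); codes are illustrative.
taxonomy = {"A": ["A01", "A61"], "B": ["B60"]}
dim = 4
rng = np.random.default_rng(1)
emb = {c: rng.normal(size=dim) for c in ["A", "B", "A01", "A61", "B60"]}


def propagate(emb, taxonomy, alpha=0.5):
    new = {}
    for parent, children in taxonomy.items():
        # Cross-level (upward): a parent aggregates messages from its children.
        child_mean = np.mean([emb[c] for c in children], axis=0)
        new[parent] = (1 - alpha) * emb[parent] + alpha * child_mean
        for c in children:
            # Cross-level (downward) plus same-level: a child mixes its
            # parent's message with the mean of its siblings.
            sib_mean = np.mean([emb[s] for s in children], axis=0)
            new[c] = (1 - alpha) * emb[c] + alpha * 0.5 * (emb[parent] + sib_mean)
    return new


emb = propagate(emb, taxonomy)
print(len(emb))  # 5
```

After one round, every code's representation reflects both its vertical position in the taxonomy and its horizontal neighbors, which is what lets the classifier exploit IPC code correlations.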

* 13 pages 
Event-based Dynamic Graph Representation Learning for Patent Application Trend Prediction

Aug 04, 2023
Tao Zou, Le Yu, Leilei Sun, Bowen Du, Deqing Wang, Fuzhen Zhuang

Accurately predicting which types of patents companies will apply for in the next period can reveal their development strategies and help them discover potential partners or competitors in advance. Although important, this problem has rarely been studied in previous research due to the challenges in modelling companies' continuously evolving preferences and capturing the semantic correlations of classification codes. To fill this gap, we propose an event-based dynamic graph learning framework for patent application trend prediction. In particular, our method is founded on the memorable representations of both companies and patent classification codes. When a new patent is observed, the representations of the related companies and classification codes are updated according to the historical memories and the currently encoded messages. Moreover, a hierarchical message passing mechanism is provided to capture the semantic proximities of patent classification codes by updating their representations along the hierarchical taxonomy. Finally, the patent application trend is predicted by aggregating the representations of the target company and classification codes from static, dynamic, and hierarchical perspectives. Experiments on real-world data demonstrate the effectiveness of our approach under various experimental conditions, and also reveal the abilities of our method in learning the semantics of classification codes and tracking the technology development trajectories of companies.
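The event-driven memory update can be sketched with a gated blend of the old memory and a candidate state computed from the newly encoded event. The GRU-style gate, the shared weight matrices `W` and `U`, and the zero-initialized memories are simplifying assumptions; in the real model these components are learned and richer.

```python
import numpy as np


def update_memory(memory, message, W, U):
    # Gated update: keep part of the historical memory and blend in a
    # candidate state computed from the encoded patent event.
    pre = W @ message + U @ memory
    z = 1.0 / (1.0 + np.exp(-pre))   # update gate in (0, 1)
    cand = np.tanh(pre)              # candidate state from the new event
    return (1.0 - z) * memory + z * cand


rng = np.random.default_rng(0)
d = 8
W = rng.normal(size=(d, d)) * 0.1
U = rng.normal(size=(d, d)) * 0.1
company_mem, code_mem = np.zeros(d), np.zeros(d)

msg = rng.normal(size=d)  # encoded "company files a patent under this code" event
# Both endpoints of the event refresh their memories from the same message.
company_mem = update_memory(company_mem, msg, W, U)
code_mem = update_memory(code_mem, msg, W, U)
print(company_mem.shape)  # (8,)
```

Because each event updates only the memories of the entities it touches, the representations stay current as companies' preferences evolve, without reprocessing the whole history.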

* 13 pages 
Learning Joint 2D & 3D Diffusion Models for Complete Molecule Generation

May 21, 2023
Han Huang, Leilei Sun, Bowen Du, Weifeng Lv

Designing new molecules is essential for drug discovery and material science. Recently, deep generative models that aim to model molecule distribution have made promising progress in narrowing down the chemical research space and generating high-fidelity molecules. However, current generative models focus on modeling either 2D bonding graphs or 3D geometries, which are two complementary descriptors of molecules. The inability to jointly model both limits the improvement of generation quality and further downstream applications. In this paper, we propose a new joint 2D and 3D diffusion model (JODO) that generates complete molecules with atom types, formal charges, bond information, and 3D coordinates. To capture the correlation between molecular graphs and geometries in the diffusion process, we develop a Diffusion Graph Transformer to parameterize the data prediction model that recovers the original data from noisy data. The Diffusion Graph Transformer lets node and edge representations interact through our relational attention mechanism, while simultaneously propagating and updating scalar features and geometric vectors. Our model can also be extended for inverse molecular design targeting single or multiple quantum properties. In our comprehensive evaluation pipeline for unconditional joint generation, experimental results show that JODO remarkably outperforms the baselines on the QM9 and GEOM-Drugs datasets. Furthermore, our model excels in few-step fast sampling, as well as in inverse molecule design and molecular graph generation. Our code is available at https://github.com/GRAPH-0/JODO.
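The core idea of attention where node and edge representations interact can be sketched as self-attention whose scores are biased by pairwise edge terms. Treating the edge representation as a scalar additive bias is a deliberate simplification; the actual Diffusion Graph Transformer also updates edge features and propagates geometric vectors, which this sketch omits.

```python
import numpy as np


def relational_attention(H, E):
    # Self-attention over atoms where the pairwise edge term E[i, j]
    # biases the attention score from atom i to atom j (sketch only).
    n, d = H.shape
    scores = (H @ H.T) / np.sqrt(d) + E
    w = np.exp(scores - scores.max(axis=1, keepdims=True))  # stable softmax
    w = w / w.sum(axis=1, keepdims=True)
    return w @ H  # updated node representations


rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))   # 5 atoms, 8-dim node features
E = rng.normal(size=(5, 5))   # scalar edge biases between atom pairs
H_new = relational_attention(H, E)
print(H_new.shape)  # (5, 8)
```

The edge bias is what lets bond information steer how atoms attend to each other, coupling the 2D graph with the learned node states.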

Towards Better Dynamic Graph Learning: New Architecture and Unified Library

Mar 23, 2023
Le Yu, Leilei Sun, Bowen Du, Weifeng Lv

We propose DyGFormer, a new Transformer-based architecture for dynamic graph learning that learns solely from the sequences of nodes' historical first-hop interactions. DyGFormer incorporates two distinct designs: a neighbor co-occurrence encoding scheme that explores the correlations of the source node and destination node based on their sequences, and a patching technique that divides each sequence into multiple patches and feeds them to the Transformer, allowing the model to effectively and efficiently benefit from longer histories. We also introduce DyGLib, a unified library with standard training pipelines, extensible coding interfaces, and comprehensive evaluation protocols to promote reproducible, scalable, and credible dynamic graph learning research. By performing extensive experiments on thirteen datasets from various domains for transductive/inductive dynamic link prediction and dynamic node classification tasks, we observe that: DyGFormer achieves state-of-the-art performance on most of the datasets, demonstrating the effectiveness of capturing nodes' correlations and long-term temporal dependencies; the results of baselines vary across different datasets, and some findings are inconsistent with previous reports, which may be caused by their diverse pipelines and problematic implementations. We hope our work can provide new insights and facilitate the development of the dynamic graph learning field. All the resources including datasets, data loaders, algorithms, and executing scripts are publicly available at https://github.com/yule-BUAA/DyGLib.
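The neighbor co-occurrence encoding can be sketched concretely: for each neighbor in a node's interaction history, record how often that neighbor appears in the source's history and in the destination's history. This is a minimal count-based sketch of the scheme; the actual model then maps these counts through learned encodings.

```python
from collections import Counter


def cooccurrence_features(src_hist, dst_hist):
    # For every neighbor in each history, pair its appearance count in the
    # source history with its count in the destination history. Shared
    # neighbors get nonzero counts on both sides, exposing correlation.
    src_cnt, dst_cnt = Counter(src_hist), Counter(dst_hist)
    src_feat = [(src_cnt[n], dst_cnt[n]) for n in src_hist]
    dst_feat = [(src_cnt[n], dst_cnt[n]) for n in dst_hist]
    return src_feat, dst_feat


src_feat, dst_feat = cooccurrence_features([1, 2, 1, 3], [2, 3, 3])
print(src_feat)  # [(2, 0), (1, 1), (2, 0), (1, 2)]
```

Neighbor 1 appears twice for the source but never for the destination, so it contributes `(2, 0)`; neighbor 3 appears once and twice, so `(1, 2)`. These per-neighbor pairs are appended to the sequence features before patching.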

* 24 pages 
PriSTI: A Conditional Diffusion Framework for Spatiotemporal Imputation

Feb 20, 2023
Mingzhe Liu, Han Huang, Hao Feng, Leilei Sun, Bowen Du, Yanjie Fu

Spatiotemporal data mining plays an important role in air quality monitoring, crowd flow modeling, and climate forecasting. However, spatiotemporal data collected in real-world scenarios is usually incomplete due to sensor failures or transmission loss. Spatiotemporal imputation aims to fill the missing values according to the observed values and their underlying spatiotemporal dependencies. Previous dominant models impute missing values autoregressively and suffer from error accumulation. As emerging powerful generative models, diffusion probabilistic models can be adopted to impute missing values conditioned on observations, avoiding the inference of missing values from inaccurate historical imputations. However, the construction and utilization of conditional information are inevitable challenges when applying diffusion models to spatiotemporal imputation. To address these issues, we propose a conditional diffusion framework for spatiotemporal imputation with enhanced prior modeling, named PriSTI. Our framework first provides a conditional feature extraction module to extract coarse yet effective spatiotemporal dependencies from conditional information as the global context prior. Then, a noise estimation module transforms random noise into realistic values, using spatiotemporal attention weights calculated from the conditional features as well as geographic relationships. PriSTI outperforms existing imputation methods under various missing patterns on different real-world spatiotemporal datasets, and effectively handles scenarios such as high missing rates and sensor failure. The implementation code is available at https://github.com/LMZZML/PriSTI.
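The conditioning idea that avoids autoregressive error accumulation can be sketched in a few lines: observed entries act as a fixed condition, while only the missing entries start from noise and get refined by the denoiser. The array shapes and missing rate below are illustrative; the full PriSTI pipeline adds the conditional feature extractor and attention-based noise estimation on top of this.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 4))        # sensors x time steps (toy block)
obs = rng.random(x.shape) > 0.3    # True where a value was actually observed

# Observed values are held fixed as the condition; missing entries are
# initialized with noise that a denoiser would iteratively refine. No
# imputed value is ever fed back as if it were ground truth, so errors
# do not accumulate as in autoregressive imputation.
state = np.where(obs, x, rng.normal(size=x.shape))
print(state.shape)  # (5, 4)
```

At every reverse diffusion step the observed part is re-imposed, so the condition never drifts while the missing part converges toward plausible values.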

* This paper has been accepted as a full paper at ICDE 2023 
Conditional Diffusion Based on Discrete Graph Structures for Molecular Graph Generation

Jan 01, 2023
Han Huang, Leilei Sun, Bowen Du, Weifeng Lv

Learning the underlying distribution of molecular graphs and generating high-fidelity samples is a fundamental research problem in drug discovery and material science. However, accurately modeling the distribution and rapidly generating novel molecular graphs remain crucial and challenging goals. To accomplish these goals, we propose a novel Conditional Diffusion model based on discrete Graph Structures (CDGS) for molecular graph generation. Specifically, we construct a forward graph diffusion process on both graph structures and inherent features through stochastic differential equations (SDE) and derive discrete graph structures as the condition for the reverse generative process. We present a specialized hybrid graph noise prediction model that extracts the global context and the local node-edge dependency from intermediate graph states. We further utilize ordinary differential equation (ODE) solvers for efficient graph sampling, based on the semi-linear structure of the probability flow ODE. Experiments on diverse datasets validate the effectiveness of our framework. In particular, the proposed method generates high-quality molecular graphs within a limited number of steps.
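The "discrete graph structures as condition" idea can be sketched as quantizing the intermediate noisy state into a discrete graph before the next reverse step. Symmetrizing and thresholding at zero are illustrative choices; the paper's actual conditioning and hybrid noise-prediction network are substantially richer.

```python
import numpy as np


def discrete_condition(noisy_A, thresh=0.0):
    # Quantize the intermediate (continuous, noisy) adjacency into a
    # discrete, symmetric graph structure to condition the reverse step on.
    sym = 0.5 * (noisy_A + noisy_A.T)          # undirected graphs
    return (sym > thresh).astype(int)


rng = np.random.default_rng(0)
noisy_A = rng.normal(size=(4, 4))              # toy intermediate diffusion state
cond = discrete_condition(noisy_A)
print(cond.shape)  # (4, 4)
```

Conditioning on a clean discrete structure, rather than on the raw continuous state, gives the noise predictor an explicit graph topology to reason over at each step.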

GraphGDP: Generative Diffusion Processes for Permutation Invariant Graph Generation

Dec 04, 2022
Han Huang, Leilei Sun, Bowen Du, Yanjie Fu, Weifeng Lv

Graph generative models have broad applications in biology, chemistry and social science. However, modelling and understanding the generative process of graphs is challenging due to the discrete and high-dimensional nature of graphs, as well as the permutation invariance to node orderings in underlying graph distributions. Current leading autoregressive models fail to capture the permutation-invariant nature of graphs due to their reliance on generation ordering, and have high time complexity. Here, we propose a continuous-time generative diffusion process for permutation-invariant graph generation to mitigate these issues. Specifically, we first construct a forward diffusion process defined by a stochastic differential equation (SDE), which smoothly converts graphs within the complex data distribution into random graphs that follow a known edge probability. By solving the corresponding reverse-time SDE, graphs can be generated from newly sampled random graphs. To facilitate the reverse-time SDE, we design a position-enhanced graph score network that captures the evolving structure and position information from perturbed graphs for permutation-equivariant score estimation. Under the evaluation of comprehensive metrics, our proposed generative diffusion process achieves competitive performance in graph distribution learning. Experimental results also show that GraphGDP can generate high-quality graphs in only 24 function evaluations, much faster than previous autoregressive models.
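The forward process that drifts a data graph toward a random graph with a known edge probability can be sketched as an interpolation controlled by time. The schedule, noise scale, and target probability `p` below are illustrative assumptions, not the paper's exact SDE parameterization.

```python
import numpy as np


def perturb(A, t, p=0.5, beta=1.0, rng=None):
    # Forward-process sketch: at small t the state stays near the data
    # adjacency A; as t grows, its mean drifts toward a random graph with
    # known edge probability p, with Gaussian perturbation on top.
    rng = rng or np.random.default_rng(0)
    alpha = np.exp(-0.5 * beta * t)            # fraction of signal surviving at t
    mean = alpha * A + (1.0 - alpha) * p
    std = np.sqrt(max(1.0 - alpha**2, 0.0))
    return mean + 0.1 * std * rng.normal(size=A.shape)


A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
near_data = perturb(A, t=0.01)    # still close to the original graph
near_prior = perturb(A, t=50.0)   # close to the known random-graph prior
```

Because the terminal distribution is known, sampling can start from a freshly drawn random graph and run the learned reverse-time SDE back toward the data distribution.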

* Accepted by the IEEE International Conference on Data Mining (ICDM) 2022 
Human-instructed Deep Hierarchical Generative Learning for Automated Urban Planning

Dec 01, 2022
Dongjie Wang, Lingfei Wu, Denghui Zhang, Jingbo Zhou, Leilei Sun, Yanjie Fu

The essential task of urban planning is to generate the optimal land-use configuration of a target area. However, traditional urban planning is time-consuming and labor-intensive. Deep generative learning gives us hope of automating this planning process and producing ideal urban plans. While remarkable achievements have been made, existing methods lack awareness of: 1) the hierarchical dependencies between functional zones and spatial grids; 2) the peer dependencies among functional zones; and 3) human regulations that ensure the usability of generated configurations. To address these limitations, we develop a novel human-instructed deep hierarchical generative model. We rethink the urban planning generative task from a unique functionality perspective, where we summarize planning requirements into different functionality projections for better urban plan generation. To this end, we develop a three-stage generation process from a target area to zones to grids. The first stage labels the grids of a target area with latent functionalities to discover functional zones. The second stage perceives the planning requirements to form urban functionality projections; we propose a novel module, the functionalizer, to project the embedding of human instructions and geospatial contexts onto the zone-level plan to obtain such projections. Each projection includes the information of land-use portfolios and the structural dependencies across spatial grids in terms of a specific urban function. The third stage leverages multi-attention to model the zone-zone peer dependencies of the functionality projections and generate grid-level land-use configurations. Finally, we present extensive experiments to demonstrate the effectiveness of our framework.
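The area-to-zones-to-grids flow can be sketched as a three-stage pipeline skeleton. Every function body here is a hypothetical placeholder (round-robin zone labels, hash-based projections, repeated values as land-use portfolios); the real model replaces each stage with a learned deep component.

```python
import numpy as np


def label_zones(grids, n_zones=3):
    # Stage 1 (placeholder): assign each grid a latent functionality label,
    # grouping grids into functional zones.
    return np.arange(len(grids)) % n_zones


def project_functionality(instruction, zone_ids):
    # Stage 2 (placeholder): turn a human instruction plus context into a
    # per-zone functionality projection (here just a dummy scalar).
    return {int(z): hash((instruction, int(z))) % 100 / 100.0
            for z in set(zone_ids)}


def generate_land_use(zone_ids, projections, n_types=4):
    # Stage 3 (placeholder): decode zone-level projections into grid-level
    # land-use configurations (one portfolio row per grid).
    return np.array([[projections[int(z)]] * n_types for z in zone_ids])


grids = list(range(9))                       # 3x3 target area, flattened
zones = label_zones(grids)
proj = project_functionality("more green space", zones)
config = generate_land_use(zones, proj)
print(config.shape)  # (9, 4)
```

The hierarchy is the point: human instructions act at the zone level, and only the final stage expands zone decisions into grid-level configurations, so regulations constrain the plan before any grid is filled in.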

* AAAI 2023 