Hao Peng

Self-organization Preserved Graph Structure Learning with Principle of Relevant Information

Dec 30, 2022
Qingyun Sun, Jianxin Li, Beining Yang, Xingcheng Fu, Hao Peng, Philip S. Yu

Sensitivity analysis of biological washout and depth selection for a machine learning based dose verification framework in proton therapy

Dec 21, 2022
Shixiong Yu, Yuxiang Liu, Zongsheng Hu, Haozhao Zhang, Pengyu Qi, Hao Peng

Self-Supervised Continual Graph Learning in Adaptive Riemannian Spaces

Nov 30, 2022
Li Sun, Junda Ye, Hao Peng, Feiyang Wang, Philip S. Yu

Ranking-based Group Identification via Factorized Attention on Social Tripartite Graph

Nov 16, 2022
Mingdai Yang, Zhiwei Liu, Liangwei Yang, Xiaolong Liu, Chen Wang, Hao Peng, Philip S. Yu

MAVEN-ERE: A Unified Large-scale Dataset for Event Coreference, Temporal, Causal, and Subevent Relation Extraction

Nov 14, 2022
Xiaozhi Wang, Yulin Chen, Ning Ding, Hao Peng, Zimu Wang, Yankai Lin, Xu Han, Lei Hou, Juanzi Li, Zhiyuan Liu, Peng Li, Jie Zhou

COPEN: Probing Conceptual Knowledge in Pre-trained Language Models

Nov 08, 2022
Hao Peng, Xiaozhi Wang, Shengding Hu, Hailong Jin, Lei Hou, Juanzi Li, Zhiyuan Liu, Qun Liu

How Much Does Attention Actually Attend? Questioning the Importance of Attention in Pretrained Transformers

Nov 07, 2022
Michael Hassid, Hao Peng, Daniel Rotem, Jungo Kasai, Ivan Montero, Noah A. Smith, Roy Schwartz

Sequential Recommendation with Auxiliary Item Relationships via Multi-Relational Transformer

Oct 28, 2022
Ziwei Fan, Zhiwei Liu, Chen Wang, Peijie Huang, Hao Peng, Philip S. Yu

DAGAD: Data Augmentation for Graph Anomaly Detection

Oct 18, 2022
Fanzhen Liu, Xiaoxiao Ma, Jia Wu, Jian Yang, Shan Xue, Amin Beheshti, Chuan Zhou, Hao Peng, Quan Z. Sheng, Charu C. Aggarwal
