Junxiang Wang

DeepGAR: Deep Graph Learning for Analogical Reasoning
Nov 19, 2022
Chen Ling, Tanmoy Chowdhury, Junji Jiang, Junxiang Wang, Xuchao Zhang, Haifeng Chen, Liang Zhao

Source Localization of Graph Diffusion via Variational Autoencoders for Graph Inverse Problems
Jun 24, 2022
Chen Ling, Junji Jiang, Junxiang Wang, Liang Zhao

An Invertible Graph Diffusion Neural Network for Source Localization
Jun 18, 2022
Junxiang Wang, Junji Jiang, Liang Zhao

Do Multi-Lingual Pre-trained Language Models Reveal Consistent Token Attributions in Different Languages?
Dec 23, 2021
Junxiang Wang, Xuchao Zhang, Bo Zong, Yanchi Liu, Wei Cheng, Jingchao Ni, Haifeng Chen, Liang Zhao

A Convergent ADMM Framework for Efficient Neural Network Training
Dec 22, 2021
Junxiang Wang, Hongyi Li, Liang Zhao

Community-based Layerwise Distributed Training of Graph Convolutional Networks
Dec 17, 2021
Hongyi Li, Junxiang Wang, Yongchao Wang, Yue Cheng, Liang Zhao

Towards Quantized Model Parallelism for Graph-Augmented MLPs Based on Gradient-Free ADMM Framework
May 20, 2021
Junxiang Wang, Hongyi Li, Zheng Chai, Yongchao Wang, Yue Cheng, Liang Zhao

Sign-regularized Multi-task Learning
Feb 22, 2021
Johnny Torres, Guangji Bai, Junxiang Wang, Liang Zhao, Carmen Vaca, Cristina Abad

Tunable Subnetwork Splitting for Model-parallelism of Neural Network Training
Sep 16, 2020
Junxiang Wang, Zheng Chai, Yue Cheng, Liang Zhao
