Guotong Xie

CandidateDrug4Cancer: An Open Molecular Graph Learning Benchmark on Drug Discovery for Cancer

Mar 02, 2022

Superpixel-Based Building Damage Detection from Post-earthquake Very High Resolution Imagery Using Deep Neural Networks

Dec 22, 2021

Pairwise Half-graph Discrimination: A Simple Graph-level Self-supervised Strategy for Pre-training Graph Neural Networks

Oct 26, 2021

Multi-institutional Validation of Two-Streamed Deep Learning Method for Automated Delineation of Esophageal Gross Tumor Volume using planning-CT and FDG-PETCT

Oct 11, 2021

SAME: Deformable Image Registration based on Self-supervised Anatomical Embeddings

Sep 23, 2021

DeepStationing: Thoracic Lymph Node Station Parsing in CT Scans using Anatomical Context Encoding and Key Organ Auto-Search

Sep 20, 2021

CBLUE: A Chinese Biomedical Language Understanding Evaluation Benchmark

Jul 06, 2021

Winner Team Mia at TextVQA Challenge 2021: Vision-and-Language Representation Learning with Pre-trained Sequence-to-Sequence Model

Jun 24, 2021

Lesion Segmentation and RECIST Diameter Prediction via Click-driven Attention and Dual-path Connection

May 05, 2021

Weakly-Supervised Universal Lesion Segmentation with Regional Level Set Loss

May 03, 2021