Jonathan Godwin

Band-gap regression with architecture-optimized message-passing neural networks

Sep 12, 2023

Learned Force Fields Are Ready For Ground State Catalyst Discovery

Sep 26, 2022

Pre-training via Denoising for Molecular Property Prediction

May 31, 2022

Learned Coarse Models for Efficient Turbulence Simulation

Jan 04, 2022

Automap: Towards Ergonomic Automated Parallelism for ML Models

Dec 06, 2021

Large-scale graph representation learning with very deep GNNs and self-supervision

Jul 20, 2021

Very Deep Graph Neural Networks Via Noise Regularisation

Jun 15, 2021

Graph Networks with Spectral Message Passing

Dec 31, 2020

Learning to Simulate Complex Physics with Graph Networks

Feb 21, 2020

Deep Semi-Supervised Learning with Linguistically Motivated Sequence Labeling Task Hierarchies

Dec 29, 2016