Robert M. Kirby


Kolmogorov n-Widths for Multitask Physics-Informed Machine Learning (PIML) Methods: Towards Robust Metrics

Feb 16, 2024
Michael Penwarden, Houman Owhadi, Robert M. Kirby

Equation Discovery with Bayesian Spike-and-Slab Priors and Efficient Kernels

Oct 09, 2023
Da Long, Wei W. Xing, Aditi S. Krishnapriyan, Robert M. Kirby, Shandian Zhe, Michael W. Mahoney

Neural Operator Learning for Ultrasound Tomography Inversion

Apr 06, 2023
Haocheng Dai, Michael Penwarden, Robert M. Kirby, Sarang Joshi

A unified scalable framework for causal sweeping strategies for Physics-Informed Neural Networks (PINNs) and their temporal decompositions

Feb 28, 2023
Michael Penwarden, Ameya D. Jagtap, Shandian Zhe, George Em Karniadakis, Robert M. Kirby

Deep neural operators can serve as accurate surrogates for shape optimization: A case study for airfoils

Feb 02, 2023
Khemraj Shukla, Vivek Oommen, Ahmad Peyvan, Michael Penwarden, Luis Bravo, Anindya Ghoshal, Robert M. Kirby, George Em Karniadakis

Batch Multi-Fidelity Active Learning with Budget Constraints

Oct 23, 2022
Shibo Li, Jeff M. Phillips, Xin Yu, Robert M. Kirby, Shandian Zhe

Meta Learning of Interface Conditions for Multi-Domain Physics-Informed Neural Networks

Oct 23, 2022
Shibo Li, Michael Penwarden, Robert M. Kirby, Shandian Zhe

Momentum Transformer: Closing the Performance Gap Between Self-attention and Its Linearization

Aug 01, 2022
Tan Nguyen, Richard G. Baraniuk, Robert M. Kirby, Stanley J. Osher, Bao Wang

Adaptive Self-supervision Algorithms for Physics-informed Neural Networks

Jul 08, 2022
Shashank Subramanian, Robert M. Kirby, Michael W. Mahoney, Amir Gholami

Infinite-Fidelity Coregionalization for Physical Simulation

Jul 01, 2022
Shibo Li, Zheng Wang, Robert M. Kirby, Shandian Zhe
