Timothy Doster

Reproducing Kernel Hilbert Space Pruning for Sparse Hyperspectral Abundance Prediction

Aug 16, 2023
Michael G. Rawson, Timothy Doster, Tegan Emerson

In What Ways Are Deep Neural Networks Invariant and How Should We Measure This?

Oct 07, 2022
Henry Kvinge, Tegan H. Emerson, Grayson Jorgenson, Scott Vasquez, Timothy Doster, Jesse D. Lew

Reward-Free Attacks in Multi-Agent Reinforcement Learning

Dec 02, 2021
Ted Fujimoto, Timothy Doster, Adam Attarian, Jill Brandenberger, Nathan Hodas

Argumentative Topology: Finding Loop(holes) in Logic

Nov 17, 2020
Sarah Tymochko, Zachary New, Lucius Bynum, Emilie Purvine, Timothy Doster, Julien Chaput, Tegan Emerson

Gradual DropIn of Layers to Train Very Deep Neural Networks

Nov 22, 2015
Leslie N. Smith, Emily M. Hand, Timothy Doster
