Andreas Krause

Department of Computer Science, ETH Zürich

Cherry-Picking Gradients: Learning Low-Rank Embeddings of Visual Data via Differentiable Cross-Approximation

May 29, 2021
Mikhail Usvyatsov, Anastasia Makarova, Rafael Ballester-Ripoll, Maxim Rakhuba, Andreas Krause, Konrad Schindler

Near-Optimal Multi-Perturbation Experimental Design for Causal Structure Learning

May 28, 2021
Scott Sussex, Andreas Krause, Caroline Uhler

DiBS: Differentiable Bayesian Structure Learning

May 25, 2021
Lars Lorch, Jonas Rothfuss, Bernhard Schölkopf, Andreas Krause

Bias-Robust Bayesian Optimization via Dueling Bandit

May 25, 2021
Johannes Kirschner, Andreas Krause

Regret Bounds for Gaussian-Process Optimization in Large Domains

Apr 29, 2021
Manuel Wüthrich, Bernhard Schölkopf, Andreas Krause

Overfitting in Bayesian Optimization: an empirical study and early-stopping solution

Apr 16, 2021
Anastasia Makarova, Huibin Shen, Valerio Perrone, Aaron Klein, Jean Baptiste Faddoul, Andreas Krause, Matthias Seeger, Cedric Archambeau

Combining Pessimism with Optimism for Robust and Efficient Model-Based Deep Reinforcement Learning

Mar 18, 2021
Sebastian Curi, Ilija Bogunovic, Andreas Krause

Information Directed Reward Learning for Reinforcement Learning

Feb 24, 2021
David Lindner, Matteo Turchetta, Sebastian Tschiatschek, Kamil Ciosek, Andreas Krause

Risk-Averse Offline Reinforcement Learning

Feb 10, 2021
Núria Armengol Urpí, Sebastian Curi, Andreas Krause

Efficient Pure Exploration for Combinatorial Bandits with Semi-Bandit Feedback

Jan 21, 2021
Marc Jourdan, Mojmír Mutný, Johannes Kirschner, Andreas Krause
