
Honglak Lee

University of Michigan, Ann Arbor

Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost

Oct 27, 2022

UniCLIP: Unified Framework for Contrastive Language-Image Pre-training

Sep 27, 2022

Grouping-matrix based Graph Pooling with Adaptive Number of Clusters

Sep 07, 2022

Learning Action Translator for Meta Reinforcement Learning on Sparse-Reward Tasks

Jul 20, 2022

Pure Transformers are Powerful Graph Learners

Jul 06, 2022

OpenSRH: optimizing brain tumor surgery using intraoperative stimulated Raman histology

Jun 16, 2022

Is Continual Learning Truly Learning Representations Continually?

Jun 16, 2022

Few-shot Subgoal Planning with Language Models

May 28, 2022

LEPUS: Prompt-based Unsupervised Multi-hop Reranking for Open-domain QA

May 25, 2022

Fast Inference and Transfer of Compositional Task Structures for Few-shot Task Generalization

May 25, 2022