Honglak Lee

University of Michigan, Ann Arbor

Hierarchical discriminative learning improves visual representations of biomedical microscopy

Mar 02, 2023

Preference Transformer: Modeling Human Preferences using Transformers for RL

Mar 02, 2023

Unsupervised Task Graph Generation from Instructional Video Transcripts

Feb 17, 2023

Multimodal Subtask Graph Generation from Instructional Videos

Feb 17, 2023

Learning to Unlearn: Instance-wise Unlearning for Pre-trained Classifiers

Jan 27, 2023

Transferring Pre-trained Multimodal Representations with Cross-modal Similarity Matching

Jan 07, 2023

Neural Shape Compiler: A Unified Framework for Transforming between Text, Point Cloud, and Program

Dec 25, 2022

Significantly improving zero-shot X-ray pathology classification via fine-tuning pre-trained image-text encoders

Dec 14, 2022

Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost

Oct 27, 2022

UniCLIP: Unified Framework for Contrastive Language-Image Pre-training

Sep 27, 2022