Honglak Lee

Lightweight feature encoder for wake-up word detection based on self-supervised speech representation

Mar 14, 2023
Hyungjun Lim, Younggwan Kim, Kiho Yeom, Eunjoo Seo, Hoodong Lee, Stanley Jungkyu Choi, Honglak Lee

Hierarchical discriminative learning improves visual representations of biomedical microscopy

Mar 02, 2023
Cheng Jiang, Xinhai Hou, Akhil Kondepudi, Asadur Chowdury, Christian W. Freudiger, Daniel A. Orringer, Honglak Lee, Todd C. Hollon

Preference Transformer: Modeling Human Preferences using Transformers for RL

Mar 02, 2023
Changyeon Kim, Jongjin Park, Jinwoo Shin, Honglak Lee, Pieter Abbeel, Kimin Lee

Unsupervised Task Graph Generation from Instructional Video Transcripts

Feb 17, 2023
Lajanugen Logeswaran, Sungryull Sohn, Yunseok Jang, Moontae Lee, Honglak Lee

Multimodal Subtask Graph Generation from Instructional Videos

Feb 17, 2023
Yunseok Jang, Sungryull Sohn, Lajanugen Logeswaran, Tiange Luo, Moontae Lee, Honglak Lee

Learning to Unlearn: Instance-wise Unlearning for Pre-trained Classifiers

Jan 27, 2023
Sungmin Cha, Sungjun Cho, Dasol Hwang, Honglak Lee, Taesup Moon, Moontae Lee

Transferring Pre-trained Multimodal Representations with Cross-modal Similarity Matching

Jan 07, 2023
Byoungjip Kim, Sungik Choi, Dasol Hwang, Moontae Lee, Honglak Lee

Neural Shape Compiler: A Unified Framework for Transforming between Text, Point Cloud, and Program

Dec 25, 2022
Tiange Luo, Honglak Lee, Justin Johnson

Significantly improving zero-shot X-ray pathology classification via fine-tuning pre-trained image-text encoders

Dec 14, 2022
Jongseong Jang, Daeun Kyung, Seung Hwan Kim, Honglak Lee, Kyunghoon Bae, Edward Choi

Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost

Oct 27, 2022
Sungjun Cho, Seonwoo Min, Jinwoo Kim, Moontae Lee, Honglak Lee, Seunghoon Hong
