Seunghoon Hong

Learning Symmetrization for Equivariance with Orbit Distance Minimization

Nov 13, 2023
Tien Dat Nguyen, Jinwoo Kim, Hongseok Yang, Seunghoon Hong

3D Denoisers are Good 2D Teachers: Molecular Pretraining via Denoising and Cross-Modal Distillation

Sep 08, 2023
Sungjun Cho, Dae-Woong Jeong, Sung Moon Ko, Jinwoo Kim, Sehui Han, Seunghoon Hong, Honglak Lee, Moontae Lee

Learning Probabilistic Symmetrization for Architecture Agnostic Equivariance

Jun 05, 2023
Jinwoo Kim, Tien Dat Nguyen, Ayhan Suleymanzade, Hyeokjun An, Seunghoon Hong

Towards End-to-End Generative Modeling of Long Videos with Memory-Efficient Bidirectional Transformers

Mar 27, 2023
Jaehoon Yoo, Semin Kim, Doyup Lee, Chiheon Kim, Seunghoon Hong

Universal Few-shot Learning of Dense Prediction Tasks with Visual Token Matching

Mar 27, 2023
Donggyun Kim, Jinwoo Kim, Seongwoong Cho, Chong Luo, Seunghoon Hong

Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost

Oct 27, 2022
Sungjun Cho, Seonwoo Min, Jinwoo Kim, Moontae Lee, Honglak Lee, Seunghoon Hong

Equivariant Hypergraph Neural Networks

Aug 22, 2022
Jinwoo Kim, Saeyoon Oh, Sungjun Cho, Seunghoon Hong

Diverse Generative Adversarial Perturbations on Attention Space for Transferable Adversarial Attacks

Aug 11, 2022
Woo Jae Kim, Seunghoon Hong, Sung-Eui Yoon

Pure Transformers are Powerful Graph Learners

Jul 06, 2022
Jinwoo Kim, Tien Dat Nguyen, Seonwoo Min, Sungjun Cho, Moontae Lee, Honglak Lee, Seunghoon Hong
