Dongyoon Han

Loss-based Sequential Learning for Active Domain Adaptation
Apr 25, 2022
Kyeongtak Han, Youngeun Kim, Dongyoon Han, Sungeun Hong

An Extendable, Efficient and Effective Transformer-based Object Detector
Apr 17, 2022
Hwanjun Song, Deqing Sun, Sanghyuk Chun, Varun Jampani, Dongyoon Han, Byeongho Heo, Wonjae Kim, Ming-Hsuan Yang

Spatiotemporal Augmentation on Selective Frequencies for Video Representation Learning
Apr 08, 2022
Jinhyung Kim, Taeoh Kim, Minho Shim, Dongyoon Han, Dongyoon Wee, Junmo Kim

Demystifying the Neural Tangent Kernel from a Practical Perspective: Can it be trusted for Neural Architecture Search without training?
Mar 28, 2022
Jisoo Mok, Byunggook Na, Ji-Hoon Kim, Dongyoon Han, Sungroh Yoon

Learning Features with Parameter-Free Layers
Feb 06, 2022
Dongyoon Han, YoungJoon Yoo, Beomyoung Kim, Byeongho Heo

Contrastive Vicinal Space for Unsupervised Domain Adaptation
Dec 05, 2021
Jaemin Na, Dongyoon Han, Hyung Jin Chang, Wonjun Hwang

Donut: Document Understanding Transformer without OCR
Nov 30, 2021
Geewook Kim, Teakgyu Hong, Moonbin Yim, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park

ViDT: An Efficient and Effective Fully Transformer-based Object Detector
Oct 08, 2021
Hwanjun Song, Deqing Sun, Sanghyuk Chun, Varun Jampani, Dongyoon Han, Byeongho Heo, Wonjae Kim, Ming-Hsuan Yang

Rethinking Spatial Dimensions of Vision Transformers
Mar 30, 2021
Byeongho Heo, Sangdoo Yun, Dongyoon Han, Sanghyuk Chun, Junsuk Choe, Seong Joon Oh