Sangjoon Park

Enhancing Demand Prediction in Open Systems by Cartogram-aided Deep Learning

Mar 24, 2024
Sangjoon Park, Yongsung Kwon, Hyungjoon Soh, Mi Jin Lee, Seung-Woo Son

Objective and Interpretable Breast Cosmesis Evaluation with Attention Guided Denoising Diffusion Anomaly Detection Model

Feb 28, 2024
Sangjoon Park, Yong Bae Kim, Jee Suk Chang, Seo Hee Choi, Hyungjin Chung, Ik Jae Lee, Hwa Kyung Byun

RO-LLaMA: Generalist LLM for Radiation Oncology via Noise Augmentation and Consistency Regularization

Nov 27, 2023
Kwanyoung Kim, Yujin Oh, Sangjoon Park, Hwa Kyung Byun, Jin Sung Kim, Yong Bae Kim, Jong Chul Ye

LLM-driven Multimodal Target Volume Contouring in Radiation Oncology

Nov 03, 2023
Yujin Oh, Sangjoon Park, Hwa Kyung Byun, Jin Sung Kim, Jong Chul Ye

Improving Medical Speech-to-Text Accuracy with Vision-Language Pre-training Model

Feb 27, 2023
Jaeyoung Huh, Sangjoon Park, Jeong Eun Lee, Jong Chul Ye

MS-DINO: Efficient Distributed Training of Vision Transformer Foundation Model in Medical Domain through Masked Sampling

Jan 05, 2023
Sangjoon Park, Ik-Jae Lee, Jun Won Kim, Jong Chul Ye

Alternating Cross-attention Vision-Language Model for Efficient Learning with Medical Image and Report without Curation

Aug 10, 2022
Sangjoon Park, Eun Sun Lee, Jeong Eun Lee, Jong Chul Ye

Multi-Task Distributed Learning using Vision Transformer with Random Patch Permutation

Apr 07, 2022
Sangjoon Park, Jong Chul Ye

AI can evolve without labels: self-evolving vision transformer for chest X-ray diagnosis through knowledge distillation

Feb 13, 2022
Sangjoon Park, Gwanghyun Kim, Yujin Oh, Joon Beom Seo, Sang Min Lee, Jin Hwan Kim, Sungjun Moon, Jae-Kwang Lim, Chang Min Park, Jong Chul Ye

Federated Split Vision Transformer for COVID-19 CXR Diagnosis using Task-Agnostic Training

Nov 03, 2021
Sangjoon Park, Gwanghyun Kim, Jeongsol Kim, Boah Kim, Jong Chul Ye