Sangjoon Park

Enhancing Demand Prediction in Open Systems by Cartogram-aided Deep Learning

Mar 24, 2024

Objective and Interpretable Breast Cosmesis Evaluation with Attention Guided Denoising Diffusion Anomaly Detection Model

Feb 28, 2024

RO-LLaMA: Generalist LLM for Radiation Oncology via Noise Augmentation and Consistency Regularization

Nov 27, 2023

LLM-driven Multimodal Target Volume Contouring in Radiation Oncology

Nov 03, 2023

Improving Medical Speech-to-Text Accuracy with Vision-Language Pre-training Model

Feb 27, 2023

MS-DINO: Efficient Distributed Training of Vision Transformer Foundation Model in Medical Domain through Masked Sampling

Jan 05, 2023

Alternating Cross-attention Vision-Language Model for Efficient Learning with Medical Image and Report without Curation

Aug 10, 2022

Multi-Task Distributed Learning using Vision Transformer with Random Patch Permutation

Apr 07, 2022

AI can evolve without labels: self-evolving vision transformer for chest X-ray diagnosis through knowledge distillation

Feb 13, 2022

Federated Split Vision Transformer for COVID-19 CXR Diagnosis using Task-Agnostic Training

Nov 03, 2021