Shih-Cheng Huang

Training Small Multimodal Models to Bridge Biomedical Competency Gap: A Case Study in Radiology Imaging

Mar 20, 2024
Juan Manuel Zambrano Chaves, Shih-Cheng Huang, Yanbo Xu, Hanwen Xu, Naoto Usuyama, Sheng Zhang, Fei Wang, Yujia Xie, Mahmoud Khademi, Ziyi Yang, Hany Awadalla, Julia Gong, Houdong Hu, Jianwei Yang, Chunyuan Li, Jianfeng Gao, Yu Gu, Cliff Wong, Mu Wei, Tristan Naumann, Muhao Chen, Matthew P. Lungren, Serena Yeung-Levy, Curtis P. Langlotz, Sheng Wang, Hoifung Poon

INSPECT: A Multimodal Dataset for Pulmonary Embolism Diagnosis and Prognosis

Nov 17, 2023
Shih-Cheng Huang, Zepeng Huo, Ethan Steinberg, Chia-Chun Chiang, Matthew P. Lungren, Curtis P. Langlotz, Serena Yeung, Nigam H. Shah, Jason A. Fries

Chat Vector: A Simple Approach to Equip LLMs With New Language Chat Capabilities

Oct 07, 2023
Shih-Cheng Huang, Pin-Zu Li, Yu-Chi Hsu, Kuang-Ming Chen, Yu Tung Lin, Shih-Kai Hsiao, Richard Tzong-Han Tsai, Hung-yi Lee

LOVM: Language-Only Vision Model Selection

Jun 15, 2023
Orr Zohar, Shih-Cheng Huang, Kuan-Chieh Wang, Serena Yeung

BenchMD: A Benchmark for Modality-Agnostic Learning on Medical Images and Sensors

Apr 17, 2023
Kathryn Wantlin, Chenwei Wu, Shih-Cheng Huang, Oishi Banerjee, Farah Dadabhoy, Veeral Vipin Mehta, Ryan Wonhee Han, Fang Cao, Raja R. Narayan, Errol Colak, Adewole Adamson, Laura Heacock, Geoffrey H. Tison, Alex Tamkin, Pranav Rajpurkar

Video Pretraining Advances 3D Deep Learning on Chest CT Tasks

Apr 02, 2023
Alexander Ke, Shih-Cheng Huang, Chloe P. O'Connell, Michal Klimont, Serena Yeung, Pranav Rajpurkar

Adapting Pre-trained Vision Transformers from 2D to 3D through Weight Inflation Improves Medical Image Segmentation

Feb 08, 2023
Yuhui Zhang, Shih-Cheng Huang, Zhengping Zhou, Matthew P. Lungren, Serena Yeung

Diagnosing and Rectifying Vision Models using Language

Feb 08, 2023
Yuhui Zhang, Jeff Z. HaoChen, Shih-Cheng Huang, Kuan-Chieh Wang, James Zou, Serena Yeung

General Framework for Self-Supervised Model Priming for Parameter-Efficient Fine-tuning

Dec 02, 2022
Shih-Cheng Huang, Shih-Heng Wang, Min-Han Shih, Saurav Sahay, Hung-yi Lee
