Automated radiology report drafting (ARRD) using vision-language models (VLMs) has advanced rapidly, yet most systems lack explicit uncertainty estimates, limiting trust and safe clinical deployment. We propose CONRep, a model-agnostic framework that integrates conformal prediction (CP) to provide statistically grounded uncertainty quantification for VLM-generated radiology reports. CONRep operates at both the label level, by calibrating binary predictions for predefined findings, and at the sentence level, by assessing uncertainty in free-text impressions via image-text semantic alignment. We evaluate CONRep using both generative and contrastive VLMs on public chest X-ray datasets. Across both settings, outputs classified as high confidence consistently show significantly higher agreement with radiologist annotations and ground-truth impressions than low-confidence outputs. By enabling calibrated confidence stratification without modifying the underlying models, CONRep improves the transparency, reliability, and clinical usability of automated radiology reporting systems.
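To make the label-level idea concrete, the sketch below shows a generic split conformal calibration for a single binary finding. It is an illustrative assumption rather than CONRep's actual procedure: the nonconformity score 1 - p(true label), the beta-distributed toy calibration scores, and the "singleton prediction set = high confidence" rule are all hypothetical choices used only to demonstrate how a calibrated threshold can stratify outputs by confidence.

```python
import numpy as np

def conformal_threshold(cal_scores, alpha=0.1):
    """Finite-sample-corrected (1 - alpha) empirical quantile of calibration scores."""
    n = len(cal_scores)
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(cal_scores, q_level, method="higher")

def prediction_set(prob_positive, threshold):
    """Conformal prediction set over {0: absent, 1: present} for one finding.

    A label y enters the set when its nonconformity score 1 - p(y) is at most
    the calibrated threshold; a singleton set is read as a high-confidence
    prediction, and any other set is flagged as low confidence for review.
    """
    scores = {1: 1.0 - prob_positive, 0: prob_positive}
    return {y for y, s in scores.items() if s <= threshold}

# Toy calibration data: scores are 1 - p(true label) on a held-out calibration split.
rng = np.random.default_rng(0)
cal_scores = 1.0 - rng.beta(8, 2, size=500)

tau = conformal_threshold(cal_scores, alpha=0.1)   # targets ~90% marginal coverage
print(prediction_set(0.97, tau))                   # confident positive finding
print(prediction_set(0.55, tau))                   # ambiguous finding
```

Under this kind of recipe, the coverage guarantee comes from the calibration split alone, so the underlying VLM classifier never needs to be retrained or modified, which is the model-agnostic property the abstract emphasizes.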