Learning-based signal processing systems increasingly support high-stakes medical decisions using heterogeneous biomedical signals, including medical images, physiological time series, and clinical records. Despite strong predictive performance, many models rely on statistical correlations that are unstable across acquisition settings, patient populations, and institutional practices, limiting robustness, interpretability, and clinical trust. We advocate a causal signal processing perspective in which biomedical signals are treated as effects of latent generative mechanisms rather than as isolated predictive inputs. Using clinical risk prediction from chest CT scans and patient risk factors as a motivating example, we show how disease-related factors generate observable biomarkers while acquisition processes act as confounders influencing signal appearance; correlational models may therefore fail under scanner changes, whereas causal abstractions remain invariant. Building on this view, we propose a unifying conceptual framework that integrates causal modeling with learning-based signal processing and neuro-symbolic reasoning. Statistical models extract multimodal representations that are mapped to interpretable causal abstractions and combined with symbolic knowledge encoding clinical risk factors and guidelines. This structure enables clinically grounded explanations, counterfactual reasoning about hypothetical interventions, and improved robustness to distribution shifts arising from changes in acquisition conditions or screening policies. Rather than introducing a specific algorithm, this article presents schematic causal structures and a comparative analysis of correlation-based, causal, and neuro-symbolic approaches to guide the design of robust and interpretable medical decision-support systems.
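The confounding argument above can be sketched as a toy simulation. The following is a minimal, hypothetical structural causal model (the variable names, the site-dependent disease prevalence, the biomarker coefficient 2.0, and the scanner intensity offsets are all illustrative assumptions, not the article's model): a scanner/site variable confounds disease status and signal appearance, so a pooled correlational estimate of the disease-to-biomarker relationship is biased, while a scanner-adjusted estimate recovers the underlying mechanism and remains invariant when the scanner offset changes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

def simulate(scanner_offset):
    # Hypothetical structural causal model (illustrative only):
    #   S: scanner/site (confounder), affects both patient mix and signal
    #   D: disease status, with prevalence depending on site S
    #   B: observed biomarker = 2.0 * D + scanner_offset * S + noise
    S = rng.integers(0, 2, n)                   # site / scanner id
    p_disease = np.where(S == 1, 0.4, 0.1)      # sicker population at site 1
    D = (rng.random(n) < p_disease).astype(float)
    B = 2.0 * D + scanner_offset * S + rng.normal(0.0, 0.5, n)
    return S, D, B

def slope(x, y):
    # Least-squares slope of y on x.
    xc = x - x.mean()
    return float((xc @ (y - y.mean())) / (xc @ xc))

# Training distribution: scanner 1 adds a +1.0 intensity offset.
S, D, B = simulate(scanner_offset=1.0)

# Correlational view: the pooled slope of B on D absorbs scanner confounding.
pooled = slope(D, B)

# Causal view: adjust for S by estimating the D -> B mechanism within each site.
adjusted = np.mean([slope(D[S == s], B[S == s]) for s in (0, 1)])

print(f"pooled slope   : {pooled:.2f}")    # biased away from the true 2.0
print(f"adjusted slope : {adjusted:.2f}")  # close to the true mechanism 2.0

# Under a scanner change (offset 3.0), the adjusted estimate stays invariant.
S2, D2, B2 = simulate(scanner_offset=3.0)
adjusted2 = np.mean([slope(D2[S2 == s], B2[S2 == s]) for s in (0, 1)])
print(f"adjusted slope after scanner change: {adjusted2:.2f}")
```

Here the within-site estimates play the role of the invariant causal abstraction: they track the disease-to-biomarker mechanism regardless of how the acquisition process shifts, whereas the pooled correlational estimate moves with the scanner-induced confounding.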