Biomedical signal processing extracts meaningful information from physiological signals such as electrocardiograms (ECGs), electroencephalograms (EEGs), and electromyograms (EMGs) to diagnose, monitor, and treat medical conditions such as cardiomyopathy, seizures, and neuromuscular disorders, respectively. Traditional manual analysis of these electrical recordings by physicians is prone to human error, as subtle anomalies may go undetected. Recently, deep learning has significantly improved the accuracy of biomedical signal analysis. This paper proposes a multi-modal deep learning model that applies discrete wavelet transforms during signal pre-processing to reduce noise. A multi-modal image fusion and feature fusion framework converts one-dimensional biomedical signals into 2D and 3D images for image-based processing using Gramian angular fields, recurrence plots, and Markov transition fields. Deep learning models are applied to ECG, EEG, and human activity signals drawn from real clinical datasets of brain and heart recordings. The results demonstrate that a multi-modal approach combined with wavelet transforms improves the accuracy of disease and disorder classification.
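To make the pipeline concrete, the sketch below illustrates the two pre-processing stages named above: discrete-wavelet-transform denoising of a 1D signal, followed by its conversion into Gramian angular field, recurrence plot, and Markov transition field images. This is not the authors' code; the library choices (PyWavelets, pyts) and parameters (db4 wavelet, decomposition level 4, 128x128 images) are illustrative assumptions.

```python
import numpy as np
import pywt
from pyts.image import GramianAngularField, MarkovTransitionField, RecurrencePlot


def dwt_denoise(signal, wavelet="db4", level=4):
    """Suppress noise by soft-thresholding the DWT detail coefficients."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Universal threshold estimated from the finest-scale detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]


def signal_to_images(signal, image_size=128):
    """Encode a 1D signal as GAF, recurrence-plot, and MTF images."""
    x = signal.reshape(1, -1)  # pyts expects shape (n_samples, n_timestamps)
    gaf = GramianAngularField(image_size=image_size, method="summation")
    rp = RecurrencePlot(threshold="point", percentage=20)  # RP size follows the trajectory length
    mtf = MarkovTransitionField(image_size=image_size)
    return gaf.fit_transform(x)[0], rp.fit_transform(x)[0], mtf.fit_transform(x)[0]


# Example: a noisy synthetic waveform stands in for a real ECG/EEG segment.
t = np.linspace(0, 1, 512)
raw = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)
clean = dwt_denoise(raw)
gaf_img, rp_img, mtf_img = signal_to_images(clean)  # candidate inputs to image-based model branches
```

In a multi-modal setup of this kind, the resulting image channels would typically be fed to separate convolutional branches (or fused into a stacked tensor) alongside features from the raw signal.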