Abstract: Audio deepfake detection has become increasingly challenging due to rapid advances in speech synthesis and voice conversion technologies, particularly under channel distortions, replay attacks, and real-world recording conditions. This paper proposes a resolution-aware audio deepfake detection framework that explicitly models and aligns multi-resolution spectral representations through cross-scale attention and consistency learning. Unlike conventional single-resolution or implicit feature-fusion approaches, the proposed method enforces agreement across complementary time--frequency scales. The framework is evaluated on three representative benchmarks: ASVspoof 2019 (LA and PA), the Fake-or-Real (FoR) dataset, and the In-the-Wild Audio Deepfake dataset under a speaker-disjoint protocol. It achieves near-perfect performance on ASVspoof LA (EER 0.16%), strong robustness on ASVspoof PA (EER 5.09%), FoR re-recorded audio (EER 4.54%), and in-the-wild deepfakes (AUC 0.98, EER 4.81%), significantly outperforming single-resolution and non-attention baselines under challenging conditions. The model remains lightweight and efficient, requiring only 159k parameters and less than 1~GFLOP per inference, making it suitable for practical deployment. Comprehensive ablation studies confirm the critical contributions of cross-scale attention and consistency learning, while gradient-based interpretability analysis reveals that the model learns resolution-consistent and semantically meaningful spectral cues across diverse spoofing conditions. These results demonstrate that explicit cross-resolution modeling provides a principled, robust, and scalable foundation for next-generation audio deepfake detection systems.
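The abstract does not specify the exact architecture, but the core idea of cross-scale attention with a consistency objective can be illustrated with a minimal sketch. The two resolutions (n_fft of 256 and 1024), the embedding size, the mean-pooling step, and the class names below are hypothetical choices, not the authors' reported configuration.

```python
# Minimal sketch of cross-scale attention with a consistency loss
# (assumed configuration; not the authors' exact architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchaudio

class CrossScaleDetector(nn.Module):
    def __init__(self, n_mels=64, d_model=128, sample_rate=16000):
        super().__init__()
        # Two complementary time--frequency resolutions (hypothetical settings):
        # a short window for fine temporal detail, a long window for fine spectral detail.
        self.spec_fine = torchaudio.transforms.MelSpectrogram(
            sample_rate=sample_rate, n_fft=256, hop_length=128, n_mels=n_mels)
        self.spec_coarse = torchaudio.transforms.MelSpectrogram(
            sample_rate=sample_rate, n_fft=1024, hop_length=512, n_mels=n_mels)
        self.enc_fine = nn.Sequential(nn.Conv1d(n_mels, d_model, 3, padding=1), nn.ReLU())
        self.enc_coarse = nn.Sequential(nn.Conv1d(n_mels, d_model, 3, padding=1), nn.ReLU())
        # Cross-scale attention: fine-resolution frames query the coarse-resolution frames.
        self.cross_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(d_model, 2)  # bona fide vs. spoof

    def forward(self, wav):                      # wav: (batch, samples)
        f = self.enc_fine(torch.log1p(self.spec_fine(wav))).transpose(1, 2)      # (B, Tf, D)
        c = self.enc_coarse(torch.log1p(self.spec_coarse(wav))).transpose(1, 2)  # (B, Tc, D)
        fused, _ = self.cross_attn(query=f, key=c, value=c)                      # (B, Tf, D)
        emb_fused, emb_coarse = fused.mean(dim=1), c.mean(dim=1)                 # pooled utterance embeddings
        # Consistency term: encourage the two scales to agree on the utterance embedding.
        consistency = F.mse_loss(emb_fused, emb_coarse)
        return self.classifier(emb_fused), consistency
```

A training step would then combine the two objectives, e.g. `loss = F.cross_entropy(logits, labels) + lam * consistency`, where `lam` is a tunable weight on the consistency term.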
Abstract: Simultaneous EEG-fMRI recording combines high temporal and spatial resolution for tracking neural activity. However, its usefulness is greatly limited by artifacts from magnetic resonance (MR) imaging, particularly gradient artifacts (GA) and ballistocardiogram (BCG) artifacts, which contaminate the EEG signal. To address this issue, we employ a denoising autoencoder (DAR), a deep learning framework designed to reduce MR-related artifacts in EEG recordings. Using paired artifact-contaminated and MR-corrected EEG from the CWL EEG-fMRI dataset, DAR uses a 1D convolutional autoencoder to learn a direct mapping from noisy to clean signal segments. Compared with traditional artifact removal methods such as principal component analysis (PCA), independent component analysis (ICA), average artifact subtraction (AAS), and wavelet thresholding, DAR performs better, achieving a root-mean-squared error (RMSE) of 0.0218 $\pm$ 0.0152, a structural similarity index (SSIM) of 0.8885 $\pm$ 0.0913, and a signal-to-noise ratio (SNR) gain of 14.63 dB. Paired t-tests confirm that these improvements are statistically significant ($p < 0.001$; Cohen's $d > 1.2$). A leave-one-subject-out (LOSO) cross-validation protocol shows that the model generalizes well, yielding an average RMSE of 0.0635 $\pm$ 0.0110 and an SSIM of 0.6658 $\pm$ 0.0880 on unseen subjects. Additionally, saliency-based visualizations show that DAR focuses on artifact-dense regions of the signal, making its outputs easier to interpret. Overall, these results position DAR as a promising and interpretable solution for real-time EEG artifact removal in simultaneous EEG-fMRI applications.
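As a rough illustration of the described noisy-to-clean mapping, a 1D convolutional denoising autoencoder trained on paired segments could look like the sketch below. The layer widths, kernel sizes, optimizer, and 1024-sample segment length are assumptions for illustration, not the published DAR configuration.

```python
# Minimal sketch of a 1D convolutional denoising autoencoder trained on paired
# (artifact-contaminated, MR-corrected) EEG segments; all hyperparameters here
# are assumed, not the paper's reported settings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoisingAutoencoder(nn.Module):
    def __init__(self, in_channels=1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 16, kernel_size=7, stride=2, padding=3, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, in_channels, kernel_size=7, stride=2, padding=3, output_padding=1),
        )

    def forward(self, x):          # x: (batch, channels, samples), artifact-contaminated EEG
        return self.decoder(self.encoder(x))

# One training step on a paired (noisy, clean) batch; the reconstruction loss
# drives the noisy-to-clean mapping described in the abstract.
model = DenoisingAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
noisy = torch.randn(8, 1, 1024)   # stand-in for artifact-contaminated segments
clean = torch.randn(8, 1, 1024)   # stand-in for MR-corrected reference segments
loss = F.mse_loss(model(noisy), clean)
loss.backward()
optimizer.step()
```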