Variational inference (VI) provides a principled framework for estimating posterior distributions over model parameters, enabling explicit modeling of weight uncertainty during optimization. By capturing this uncertainty, VI improves the reliability of predictions, yielding better-calibrated outputs. In this work, we investigate the benefits of VI for challenging multimodal understanding and reasoning by using the Improved Variational Online Newton (IVON) optimizer, a recent VI method, to fine-tune a multimodal large language model on audio question answering tasks. Our results show that VI not only enhances predictive accuracy but also significantly improves calibration, reducing the model's overconfidence. These advances further support risk-sensitive applications such as selective prediction, where reliable confidence estimates are crucial.
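
As a minimal sketch of this workflow (not the paper's actual training code), the snippet below shows how a PyTorch model could be fine-tuned with the publicly released `ivon` package, which exposes an `IVON` optimizer and a `sampled_params` context manager for drawing weights from the variational posterior. The toy model, synthetic data, and all hyperparameter values are illustrative placeholders, not the multimodal setup or tuned settings from this work.

```python
# Hypothetical minimal example: fine-tuning a toy classifier with IVON.
# The real experiments use a multimodal LLM; model and data here are stand-ins.
import torch
import torch.nn.functional as F
import ivon  # pip install ivon-opt

# Stand-in model and synthetic data (placeholders).
model = torch.nn.Linear(16, 4)
X = torch.randn(256, 16)
y = torch.randint(0, 4, (256,))
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(X, y), batch_size=32
)

# ess ("effective sample size") is commonly set to the training-set size;
# the learning rate here is illustrative, not a tuned value.
optimizer = ivon.IVON(model.parameters(), lr=0.1, ess=len(X))

train_mc_samples = 1  # weight samples per step; more samples reduce gradient variance
for xb, yb in loader:
    for _ in range(train_mc_samples):
        # Draw weights from the current variational posterior for this backward pass.
        with optimizer.sampled_params(train=True):
            optimizer.zero_grad()
            loss = F.cross_entropy(model(xb), yb)
            loss.backward()
    optimizer.step()

# At test time, average predictive probabilities over posterior weight samples;
# the averaged probabilities provide the confidence estimates used, e.g.,
# for selective prediction.
test_mc_samples = 8
with torch.no_grad():
    probs = []
    for _ in range(test_mc_samples):
        with optimizer.sampled_params():  # sample weights without updating state
            probs.append(F.softmax(model(X), dim=-1))
    predictive = torch.stack(probs).mean(dim=0)
```

Averaging predictions over several posterior samples at test time, rather than using a single point estimate, is what yields the calibrated confidence scores that make risk-sensitive uses such as selective prediction possible.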