Large language models (LLMs) have received significant attention and found diverse applications across various domains, but their development encounters challenges in real-world scenarios. These challenges arise from the scarcity of public-domain data and the need to maintain privacy with respect to private-domain data. To address these issues, federated learning (FL) has emerged as a promising technology that enables collaborative training of shared models while keeping data decentralized. We propose the concept of federated LLMs, which comprises three key components: federated LLM pre-training, federated LLM fine-tuning, and federated LLM prompt engineering. For each component, we discuss its advantages over traditional LLM training methods and propose specific engineering strategies for implementation. Furthermore, we explore the novel challenges introduced by the integration of FL and LLMs. We analyze existing solutions and identify potential obstacles these solutions face in the context of federated LLMs.
Photoacoustic imaging is a promising technique for imaging the human brain owing to its high sensitivity and functional imaging capability. However, the skull strongly attenuates and distorts photoacoustic signals, which makes non-invasive transcranial imaging difficult. In this work, the temporal bone is selected as an imaging window to minimize the influence of the skull. Moreover, non-line-of-sight photoacoustic imaging is introduced to enlarge the field of view, with the skull treated as an acoustic reflector. Simulation studies show that image quality can be improved when reflected signals are taken into account.