Abstract: The rapid expansion of distributed rooftop photovoltaic (PV) systems introduces increasing uncertainty in distribution grid planning, hosting capacity assessment, and voltage regulation. Reliable estimation of rooftop PV deployment from satellite imagery is therefore essential for accurate modeling of distributed generation at feeder and service-territory scales. However, conventional computer vision approaches rely on fixed learned representations and globally averaged visual correlations, making them sensitive to geographic distribution shifts caused by differences in roof materials, urban morphology, and imaging conditions across regions. To address these challenges, this paper proposes Solar Retrieval-Augmented Generation (Solar-RAG), a context-grounded framework for photovoltaic assessment that integrates similarity-based image retrieval with multimodal vision-language reasoning. Instead of producing predictions solely from internal model parameters, the proposed approach retrieves visually similar rooftop scenes with verified annotations and performs comparative reasoning against these examples during inference. This retrieval-guided mechanism provides geographically contextualized references that improve robustness under heterogeneous urban environments without requiring model retraining. The proposed method outperforms both conventional deep vision models and standalone vision-language models. Furthermore, feeder-level case studies show that improved PV inventory estimation reduces errors in voltage deviation analysis and hosting capacity assessment. The results demonstrate that the proposed method provides a scalable and geographically robust approach for monitoring distributed PV deployment, enabling more reliable integration of remote sensing data into distribution grid planning and distributed energy resource management.
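The retrieval-guided mechanism described above can be sketched as a two-stage pipeline: embed the query rooftop scene, retrieve the most similar annotated scenes from a reference bank, and compose a comparative-reasoning prompt for a vision-language model. This is a minimal illustration, not the paper's implementation; the embedding bank, labels, and `build_prompt` wording are all hypothetical placeholders.

```python
import numpy as np

def retrieve_references(query_emb, bank_embs, bank_labels, k=3):
    """Return the k annotated rooftop scenes most similar to the query
    under cosine similarity (stand-in for the paper's retrieval step)."""
    q = query_emb / np.linalg.norm(query_emb)
    b = bank_embs / np.linalg.norm(bank_embs, axis=1, keepdims=True)
    sims = b @ q
    idx = np.argsort(-sims)[:k]
    return [(bank_labels[i], float(sims[i])) for i in idx]

def build_prompt(query_id, references):
    """Compose a comparative-reasoning prompt: the query is assessed
    against retrieved, verified examples rather than in isolation."""
    lines = [f"Assess rooftop PV in scene {query_id}."]
    for label, sim in references:
        lines.append(f"Reference (similarity {sim:.2f}): verified annotation = {label}")
    lines.append("Compare the query against the references; report PV presence and extent.")
    return "\n".join(lines)

# Toy reference bank: 4 scenes with verified PV annotations (hypothetical data).
rng = np.random.default_rng(0)
bank = rng.normal(size=(4, 8))
labels = ["PV: 12 panels", "PV: none", "PV: 6 panels", "PV: 20 panels"]
query = bank[2] + 0.01 * rng.normal(size=8)  # nearly identical to scene 2

refs = retrieve_references(query, bank, labels, k=2)
print(refs[0][0])                     # top match is scene 2's annotation
print(build_prompt("scene_q", refs))
```

In a full system the prompt would be sent to a multimodal model together with the query and reference images; here it only illustrates how verified, geographically similar examples are surfaced as context at inference time.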
Abstract: Non-stationary power system dynamics, influenced by renewable energy variability, evolving demand patterns, and climate change, are becoming increasingly complex. Accurately capturing these dynamics requires a model capable of adapting to environmental factors. Traditional models, including Recurrent Neural Networks (RNNs), lack efficient mechanisms to encode external factors, such as time or environmental data, for dynamic adaptation. To address this, we propose the External Adaptive RNN (ExARNN), a novel framework that integrates external data (e.g., weather, time) to continuously adjust the parameters of a base RNN. ExARNN achieves this through a hierarchical hypernetwork design, using Neural Controlled Differential Equations (NCDEs) to process external data and generate RNN parameters adaptively. This approach enables ExARNN to handle inconsistent timestamps between power and external measurements, ensuring continuous adaptation. Extensive forecasting tests demonstrate ExARNN's superiority over established baseline models.
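The hypernetwork idea above can be sketched in a few lines: an external context vector (e.g., weather and time features) is mapped to the parameters of a base RNN cell at each step, so the cell's dynamics track the environment. This is a simplified sketch, not the ExARNN implementation; in particular, a per-step linear hypernetwork stands in for the paper's NCDE over the continuous external path, and all dimensions and data are hypothetical.

```python
import numpy as np

class ExternalAdaptiveRNNSketch:
    """Toy hypernetwork: external features -> flattened base-RNN parameters.
    (ExARNN instead drives this mapping with a Neural CDE, which also
    handles irregular timestamps; that part is omitted here.)"""

    def __init__(self, n_in, n_hid, n_ext, seed=0):
        rng = np.random.default_rng(seed)
        self.n_in, self.n_hid = n_in, n_hid
        n_params = n_hid * (n_in + n_hid + 1)            # W_x, W_h, b flattened
        self.H = 0.1 * rng.normal(size=(n_ext, n_params))  # hypernetwork weights

    def step(self, h, x, ext):
        # Generate the base cell's parameters from the external context.
        theta = np.tanh(ext @ self.H)
        i = self.n_hid * self.n_in
        j = i + self.n_hid * self.n_hid
        W_x = theta[:i].reshape(self.n_hid, self.n_in)
        W_h = theta[i:j].reshape(self.n_hid, self.n_hid)
        b = theta[j:]
        # Standard RNN update, but with externally generated parameters.
        return np.tanh(W_x @ x + W_h @ h + b)

rnn = ExternalAdaptiveRNNSketch(n_in=2, n_hid=4, n_ext=3)
h = np.zeros(4)
for t in range(5):
    x = np.array([np.sin(t), np.cos(t)])   # stand-in power measurement features
    ext = np.array([t / 5.0, 0.3, 0.7])    # stand-in external context (time, weather)
    h = rnn.step(h, x, ext)
print(h.shape)  # (4,)
```

Because the recurrent weights are regenerated at every step from the external signal, the same base cell can exhibit different dynamics under different environmental conditions, which is the core adaptation mechanism the abstract describes.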