Wireless sensor networks (WSNs) with energy harvesting (EH) are expected to play a vital role in intelligent 6G systems, especially in industrial sensing and control, where continuous operation and sustainable energy use are critical. Given their limited energy resources, WSNs must operate efficiently to sustain long-term performance. Their deployment, however, is challenged by dynamic environments in which EH conditions, network scale, and traffic rates change over time. In this work, we address system dynamics that give rise to different learning tasks, in which the decision variables remain fixed but the optimal strategies vary, as well as different learning domains, in which both the decision space and the strategies evolve. To handle such scenarios, we propose a cross-domain lifelong reinforcement learning (CD-L2RL) framework for energy-efficient WSN design. Our CD-L2RL algorithm leverages prior experience to accelerate adaptation across tasks and domains. Unlike conventional approaches based on Markov decision processes or Lyapunov optimization, which assume relatively stable environments, our solution achieves rapid policy adaptation by reusing knowledge from past tasks and domains, ensuring continuous operation. We validate the approach through extensive simulations under diverse conditions. The results show that our method improves adaptation speed by up to 35% over standard reinforcement learning and by up to 70% over Lyapunov-based optimization, while also increasing the total harvested energy. These findings highlight the strong potential of CD-L2RL for deployment in dynamic 6G WSNs.
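The following is a minimal illustrative sketch, not the paper's CD-L2RL algorithm: it only makes concrete the abstract's distinction between a task change (same decision space, different strategy) and a domain change (different decision space), and shows one hypothetical way prior knowledge could warm-start adaptation. All names (WSNTask, PolicyTable, the toy reward) are assumptions introduced here for illustration.

```python
# Hedged toy example: task-vs-domain changes and knowledge reuse via warm-starting.
# This is NOT the paper's method; it is an illustrative stand-in.
from dataclasses import dataclass
import random


@dataclass(frozen=True)
class WSNTask:
    n_sensors: int        # size of the decision space (how many sensors can be activated)
    harvest_rate: float   # EH condition (environment statistic)
    traffic_rate: float   # traffic load (environment statistic)


class PolicyTable:
    """Tabular value estimates over 'activate k sensors' actions (toy stand-in for a policy)."""

    def __init__(self, n_actions, init=None):
        # Knowledge reuse: copy prior estimates where the decision spaces overlap,
        # and initialize any newly added actions to zero.
        self.q = [init[a] if init and a < len(init) else 0.0 for a in range(n_actions)]

    def adapt(self, reward_fn, steps=200, eps=0.2, lr=0.1):
        # Simple epsilon-greedy bandit-style updates; a warm-started table needs fewer steps.
        for _ in range(steps):
            if random.random() < eps:
                a = random.randrange(len(self.q))
            else:
                a = max(range(len(self.q)), key=lambda i: self.q[i])
            self.q[a] += lr * (reward_fn(a) - self.q[a])


def reward(task, k):
    # Toy energy-efficiency proxy: harvested energy minus a traffic-driven cost.
    return task.harvest_rate * k - task.traffic_rate * k ** 1.5


base = WSNTask(n_sensors=8, harvest_rate=1.0, traffic_rate=0.3)
new_task = WSNTask(n_sensors=8, harvest_rate=0.5, traffic_rate=0.6)     # task change: same decision space
new_domain = WSNTask(n_sensors=16, harvest_rate=0.8, traffic_rate=0.4)  # domain change: decision space grows

source = PolicyTable(base.n_sensors)
source.adapt(lambda k: reward(base, k))

# Warm-start from the source policy: full reuse for the new task, partial reuse for the new domain.
PolicyTable(new_task.n_sensors, init=source.q).adapt(lambda k: reward(new_task, k))
PolicyTable(new_domain.n_sensors, init=source.q).adapt(lambda k: reward(new_domain, k))
```

In this sketch the warm start simply reduces how many interaction steps are needed after a change; the paper's framework addresses the same goal (faster adaptation across tasks and domains) with a full reinforcement learning formulation rather than this toy bandit.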