We consider vehicular networking scenarios in which existing vehicle-to-vehicle (V2V) links can be leveraged for efficient uploading of large volumes of data to the network. In particular, we consider a group of vehicles in which one vehicle is designated as the \textit{leader} and the other \textit{follower} vehicles can offload their data to the leader vehicle, upload it directly to the base station, or use a combination of the two. In our proposed framework, the leader vehicle receives data from the other vehicles and processes it to remove redundancy (deduplication) before uploading it to the base station. We present a mathematical framework for the considered network and formulate two separate optimization problems that minimize (i) the total time and (ii) the total energy consumed by the vehicles to upload their data to the base station. We employ deep reinforcement learning (DRL) tools to obtain solutions in a dynamic vehicular network whose parameters (e.g., vehicle locations and channel coefficients) vary over time. Our results demonstrate that the application of DRL is highly beneficial and that data offloading with deduplication can significantly reduce both time and energy consumption. Furthermore, we present comprehensive numerical results that validate our findings and compare them with alternative approaches, demonstrating the benefits of the proposed DRL methods.