With the number of tasks in vehicular networks growing year by year, optimizing task offloading is becoming increasingly critical to improving system performance and resource utilization. However, due to the dynamic network topology, the high mobility of vehicular devices, and the heterogeneous demands of mobile users, designing effective offloading strategies remains challenging. In this study, we propose a task offloading method that combines a reinforcement learning algorithm with an evolutionary one: the PPO-Enhanced NSGA-III Offloading Algorithm (PENOA). The method leverages Proximal Policy Optimization (PPO) to dynamically adjust the offloading schedules produced by an NSGA-III algorithm. PENOA learns to generate near-Pareto-optimal solution sets by jointly tuning computational resource allocation and task scheduling strategies between vehicles and servers. Experimental studies on real-world Taxi Trajectory and Telecom Base Station datasets demonstrate that PENOA outperforms benchmark algorithms across multiple performance metrics.
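To make the notion of a "near Pareto-optimal solution set" concrete, the following is a minimal sketch of the non-dominated (Pareto) filtering that underlies NSGA-III-style selection. It assumes each candidate offloading schedule is scored by a hypothetical bi-objective cost tuple `(latency, energy)`, both to be minimized; the PPO adjustment step and the full NSGA-III machinery (reference directions, niching) are not shown.

```python
def pareto_front(costs):
    """Return the non-dominated subset of cost vectors (all objectives minimized).

    A vector c is dominated if some other vector is no worse in every
    objective and strictly better in at least one.
    """
    front = []
    for i, c in enumerate(costs):
        dominated = any(
            all(o[k] <= c[k] for k in range(len(c)))
            and any(o[k] < c[k] for k in range(len(c)))
            for j, o in enumerate(costs) if j != i
        )
        if not dominated:
            front.append(c)
    return front


# Hypothetical (latency, energy) costs for four candidate schedules.
candidates = [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0), (3.0, 3.0)]
front = pareto_front(candidates)
# (3.0, 3.0) is dominated by (2.0, 2.0); the other three trade off
# latency against energy and survive as the Pareto front.
```

In PENOA, an evolutionary algorithm evolves a population of schedules toward such a front, while the learned policy refines individual schedules between generations.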