Abstract: Pareto optimization via evolutionary multi-objective algorithms has been shown to efficiently solve constrained monotone submodular optimization problems. Traditionally, when multiple problems need to be solved, the algorithm is run separately for each of them. We introduce multitasking formulations of these problems that allow multiple related problems to be solved effectively in a single run. In our setting, the given problems share a monotone submodular function $f$ but have different knapsack constraints. We examine the case where all elements within a constraint have the same cost and show that our multitasking formulations result in small Pareto fronts. This allows the population to share solutions among all problems, leading to significant improvements over running several classical approaches independently. Using rigorous runtime analysis, we bound the expected time until the introduced multitasking approaches obtain a $(1-1/e)$-approximation for each of the given problems. Our experimental investigations for the maximum coverage problem give further insight into the dynamics of the approach, showing where it succeeds and where it breaks down in practice for problems in which elements within a constraint also have varied costs.
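The multitasking algorithm itself is not detailed in the abstract; as a point of reference for the stated guarantee, the classical greedy algorithm of Nemhauser, Wolsey, and Fisher already achieves a $(1-1/e)$-approximation for maximizing a monotone submodular function under a uniform-cost (cardinality) constraint. A minimal Python sketch for the maximum coverage problem mentioned above, with hypothetical names and toy data not taken from the paper:

```python
def greedy_max_coverage(sets, k):
    """Greedy (1 - 1/e)-approximation for maximum coverage:
    pick at most k sets so their union covers as many elements
    as possible. Coverage is monotone and submodular, so the
    classical greedy guarantee applies."""
    covered, chosen = set(), []
    for _ in range(k):
        # Select the set with the largest marginal coverage gain.
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        if not sets[best] - covered:
            break  # no remaining set adds new elements
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

# Toy instance: four candidate sets, budget k = 2.
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
chosen, covered = greedy_max_coverage(sets, 2)
# chosen -> [0, 2], covering all six elements
```

The multitasking formulations studied in the paper replace such independent per-problem runs with a single evolutionary population shared across all knapsack constraints.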
Abstract: In many real-world settings, the problem instances that need to be solved are quite similar, and knowledge from previous optimization runs can potentially be reused. We explore this for the Traveling Salesperson Problem with Time Windows (TSPTW), which often arises in settings where the travel-time matrix is fixed but the time-window constraints change across related tasks. Existing TSPTW studies, however, have not systematically compared solving such task sequences independently against sequential transfer from previously solved tasks. We address this gap using a multi-task benchmark in which each base instance is expanded into five related tasks under two environments: partial time-window expansion and swap-additive time reassignment. We compare a standard from-scratch protocol with an iterative protocol that initializes each task from the best tour of the previous task, using the popular local search approaches LNS, VNS, and LKH-3 under a common penalized-score objective. Our experimental results show that the iterative protocol is consistently superior in the progressive-relaxation setting and generally competitive under swap-additive changes, with improvements growing on more difficult instances.
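The penalized-score objective is not fully specified in the abstract; a common formulation, sketched here under assumed names and a linear lateness penalty, scores a tour by its total travel time plus a penalty proportional to the total time-window violation, with early arrivals waiting until a window opens:

```python
def penalized_score(tour, travel, windows, penalty=1000.0):
    """Score a TSPTW tour: total travel time plus `penalty` times the
    total lateness over all time windows. Arriving early means waiting
    until the window opens; arriving late accrues a violation.
    `tour` starts at the depot; `windows[i] = (earliest, latest)`."""
    t, cost, violation = 0.0, 0.0, 0.0
    for prev, nxt in zip(tour, tour[1:]):
        cost += travel[prev][nxt]
        t += travel[prev][nxt]
        earliest, latest = windows[nxt]
        if t < earliest:
            t = earliest              # wait for the window to open
        elif t > latest:
            violation += t - latest   # lateness is penalized, not forbidden
    return cost + penalty * violation

# Tiny illustrative instance (assumed data, not from the paper).
travel = [[0, 2, 4], [2, 0, 3], [4, 3, 0]]
windows = [(0, 100), (0, 5), (4, 6)]
feasible = penalized_score([0, 1, 2], travel, windows)   # no violation
late = penalized_score([0, 2, 1], travel, windows)       # arrives late at 1
```

Under such an objective, the iterative protocol described above simply seeds each task's local search (LNS, VNS, or LKH-3) with the best tour found for the previous task instead of a fresh random or heuristic start.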