Traveling Salesman Problem
JB
Posted on June 13, 2020
Takeaways:
- The traveling salesman problem (TSP) is:
- Given a list of cities & the distances between each pair of cities: what is the shortest possible route/tour that visits each city and returns to the origin city?
- With vanilla TSP you can assume the following:
- The distance `D` between city `A` and city `B` is the same as the distance between city `B` and city `A`. Thus, `D[A][B] == D[B][A]`.
- If this is not true, TSP (& our graph of cities/distances) is considered asymmetric - a solution is more challenging to arrive at as there are more edges/possible routes.
- There is no given origin/start city. We only care about the shortest (optimal) route.
- If we must find the optimal route given a starting city, the solution is largely the same.
- TSP is an NP-hard problem and the brute force approach to solving it is `O(n!)` (factorial time) - as we must explore all possible routes (a tiny brute force sketch is included further down).
- Exact algorithms that solve TSP optimally can work reasonably well for small inputs, however for larger (or unknown) inputs heuristic algorithms are popular.
- Heuristic algorithms solve problems but produce solutions that are not guaranteed to be optimal. When we talk about TSP heuristic algorithms we mean algorithms that will produce an approximate solution.
- Heuristic approaches to solving TSP are typically faster than optimal approaches, meaning we can get an answer to TSP in a reasonable amount of time. If we choose the right algorithm(s), then an approximate solution can be acceptably close (and sometimes identical) to an optimal one.
- Personally, I chose to solve a variation of TSP using two heuristic approaches that result in approximate solutions.
- The variation is minor. I chose to solve TSP for a given start city (if none is provided, then I just solve starting at the first city in the given list).
- The solution will be very similar for vanilla TSP, in fact slightly less complex as we do not care about final ordering of cities. In my implementations, I had to take extra care, especially when attempting to optimize a tour, to retain the specified start city.
- The algorithms I implemented:
- Preorder Minimum Spanning Tree (PMST) using Kruskal's algorithm
- Nearest Neighbour (NN)
- Optimized using repetition & sampling.
- In addition, I chose to optimize the resulting solutions (where appropriate) using 2-opt
- 2-opt is a local search algorithm often used to improve existing TSP solutions (that are not already optimal)
- Per Skiena, in The Algorithm Design Manual: "Two-opting a tour is a fast and effective way to improve any other heuristic." (p.535).
- Here is the Wikipedia page for 2-opt. The algorithm as described is what I ended up using.
- I like the PMST approach because I had already covered minimum spanning tree (MST) in an earlier post; it also allowed me to brush up on Union-Find, which Kruskal's algorithm relies on (Kruskal's algorithm constructs an MST).
- The PMST approach has the benefit, as implemented, of being guaranteed to be no worse than 2x the optimal route (or tour). Meaning if an optimal tour is 20, at worst a PMST solution will be 40.
- An MST is a tree (a subset of the graph's edges) that connects all vertices in the graph with the minimum possible total edge weight.
- If we could rearrange an MST at no cost, we could rearrange it to be an optimal tour.
- As rearranging an MST does have a cost, we instead traverse it in a general way (preorder using DFS) to come up with an ordering that is "good enough" i.e. an approximation.
- When rearranging an MST into a tour, the most we would have to traverse an edge is twice, but instead of doing this we "skip" to the next city.
- This skip is a straight line & represents a single edge in our graph.
- Going via one edge in this way will always be the same or better than going via two edges. Why? Because of the triangle inequality.
- Triangle inequality states that for any triangle, the sum of the lengths of any two sides must be greater than or equal to the length of the remaining side.
- So if a subsection of our MST is a triangle, with each side of our triangle being an edge and each point of our triangle being a vertex (`C--A--B`):
- If we start at vertex `A` and then visit `B`, we will need to visit `C` next (to complete our tour).
- To visit `C` we can skip from `B` to `C` via a single edge `edge(B,C)`, or go via `A`, which will mean visiting two edges `edge(B,A) & edge(A,C)`.
- We will always choose to visit `C` via a single edge `edge(B,C)` because `edge(B,C) <= edge(B,A) + edge(A,C)` - meaning `edge(B,C)` (due to triangle inequality) can be shorter, but never longer.
- Therefore, preordering an MST means that in a worst case scenario our "skips" (going via one edge) will be as bad as going via two edges. Meaning at worst a preordering of vertices in an MST will produce a tour that is 2x the distance of an optimal tour (but often better than 2x).
- NN is much simpler to reason about and the code is more concise, so I decided to provide an implementation for this algorithm as well.
- At a high level, this is how the chosen algorithms work:
- PMST:
- This approach first constructs an MST from the input graph of cities.
- This MST will have its edges duplicated, to form an Eulerian graph.
- We do this so that we can perform depth-first search (DFS) on our MST and preorder its vertices (cities).
- Once an MST is constructed we can perform DFS and the resulting order of vertices (cities) is our approximate solution (a simplified sketch of this approach appears below).
- NN:
- This approach starts at any city `A` and then proceeds to visit the nearest city to it, `B`.
- The algorithm repeats this process (so will visit city `C` that is closest to `B`, etc.) until all cities have been visited, then it will return to the origin city.
- We can optimize NN by executing it for every city in our list of cities, whilst keeping track of what the shortest tour produced was. We can call this repeated nearest neighbour.
- In the same way, we can take a subset (or sample) of cities at random and execute NN for each city in our subset, then return the shortest tour. We can call this sampled repeated nearest neighbour (plain NN and both variants are sketched below).
- Both repetition and sampling improve NN, but sampling is often almost as good as repetition whilst being less expensive (quicker) - this is of course dependent on the sample size.
- 2-opt:
- 2-opt's goal is to "uncross" any part of the route and see if preventing routes from crossing over each other's paths improves the solution.
- To do this 2-opt swaps 2 edges in a tour.
- To swap the 2 edges, 2-opt takes a tour and creates 3 subsections of it.
- The middle subsection is reversed (i.e. 2 edges are swapped), with the leading and trailing subsections staying the same.
- If the resulting tour is better, we recurse and begin the 2-opt routine again (passing in our new, shorter, tour) - this way 2-opt will continue to optimize an already optimized tour, and will produce the best tour it can.
- Don't confuse this as meaning 2-opt can produce an optimal tour - most of the time it cannot, but it can improve tours.
- If the resulting tour is no better (or is worse) than the original tour, then the window size is increased and the process is repeated.
- A bigger window means a larger middle subsection will get reversed (a simplified sketch of a 2-opt pass is included below).
- So what are the time complexities of PMST, NN, & 2-opt?
- PMST:
- Time complexity for Kruskal's algorithm (used to construct our MST) is `O(e log v)` where `e` is the number of edges and `v` is the number of vertices in the graph. Space is `O(v + e)`.
- In our PMST solution of TSP we will double the edges, so space is actually worse, but constants are factored out in Big O.
- Preorder traversal of our MST is `O(v + e)`.
- Total time complexity of the PMST approach is `O(e log v)`.
- My implementations take an adjacency matrix for city distances. I transform this matrix into an edge list. This incurs an `O(v^2)` cost. I think this means that, as implemented, my solution is `O(e log v^2)`. However, if supplied an edge/adjacency list as an input, this cost is not incurred.
- NN:
- NN's time complexity is `O(n^2)` in the worst case. Space is `O(n)`. Repeated NN is `O(n^3)` and sampled NN is `O(k * n^2)` where `k` is the size of the sample.
- 2-opt:
- 2-opt's complexity varies depending on the type of tour it is given:
- In the average case, if starting with a tour made using a greedy algorithm (which our PMST & NN produce), the time complexity is `O(n)`. If the tour is more randomly arrived at, then 2-opt's time complexity is `O(n log n)`.
Extras:
- The Held-Karp algorithm is a dynamic programming algorithm that solves TSP optimally. It is `O(n^2 * 2^n)` (a rough sketch follows this list).
- 3-opt is similar to 2-opt, but it swaps 3 edges instead of 2. It is `O(n^3)` for a single run, but can produce close to an optimal tour.
- Both 2 & 3-opt are a class of K-optimal tours.
- Per Skiena, in The Algorithm Design Manual: K-optimal tours are "local refinements to an initially arbitrary tour in the hopes of improving it. In particular, subsets of k edges are deleted from the tour and the k remaining subchains rewired to form a different tour with hopefully a better cost. A tour is k-optimal when no subset of k edges can be deleted and rewired to reduce the cost of the tour." (p.535).
- When `K > 3` the time complexity increases faster than the solution improves. So we often only deal with 2-opt or 3-opt in the context of TSP.
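Since Held-Karp came up above, here is a rough Python sketch of the idea - a simplified, illustrative version of my own that only returns the length of an optimal tour, assuming a symmetric distance matrix `dist` and city `0` as the start/end city:

```python
from itertools import combinations

# Rough sketch of Held-Karp: dynamic programming over subsets of cities.
# O(n^2 * 2^n) time. Returns the length of an optimal tour that starts and
# ends at city 0. `dist` is a symmetric distance matrix.
def held_karp(dist):
    n = len(dist)
    # best[(subset, last)] = shortest path from city 0, visiting every city
    # in `subset` exactly once, and ending at `last` (a member of `subset`).
    best = {(frozenset([city]), city): dist[0][city] for city in range(1, n)}
    for size in range(2, n):
        for cities in combinations(range(1, n), size):
            subset = frozenset(cities)
            for last in subset:
                rest = subset - {last}
                best[(subset, last)] = min(
                    best[(rest, prev)] + dist[prev][last] for prev in rest
                )
    everything = frozenset(range(1, n))
    # Close the tour by returning to city 0 from the best final city.
    return min(best[(everything, last)] + dist[last][0] for last in everything)
```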
Below you will find implementations of, & test cases for, preorder minimum spanning tree, variations of nearest neighbour, & 2-opt approximate solutions for the Traveling Salesman problem:
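To make the PMST approach a little more concrete, here is a minimal Python sketch - a simplified illustration rather than the full implementation; the function names are my own and a symmetric distance matrix `dist` is assumed, where `dist[i][j]` is the distance between cities `i` and `j`:

```python
# Sketch of the PMST approach: build an MST with Kruskal's algorithm
# (using union-find), then preorder-traverse it with DFS to produce a tour.

def find(parent, x):
    # Union-find "find" with path halving.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def kruskal_mst(dist):
    n = len(dist)
    edges = sorted((dist[i][j], i, j) for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))
    adjacency = {v: [] for v in range(n)}
    for weight, a, b in edges:          # cheapest edges first
        root_a, root_b = find(parent, a), find(parent, b)
        if root_a != root_b:            # only keep edges that don't form a cycle
            parent[root_a] = root_b
            adjacency[a].append(b)
            adjacency[b].append(a)
    return adjacency

def pmst_tour(dist, start=0):
    adjacency = kruskal_mst(dist)
    tour, seen, stack = [], set(), [start]
    while stack:                        # iterative preorder DFS over the MST
        city = stack.pop()
        if city in seen:
            continue
        seen.add(city)
        tour.append(city)
        stack.extend(reversed(adjacency[city]))
    return tour + [start]               # return to the origin city

# Example with a made-up 4-city distance matrix:
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
print(pmst_tour(dist))  # [0, 1, 3, 2, 0]
```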
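Nearest neighbour and its repeated & sampled variants are even shorter. Again, this is a simplified sketch - the helper names and the default sample size are arbitrary choices of mine:

```python
import random

def tour_length(dist, tour):
    # Sum of the distances between consecutive cities in the tour.
    return sum(dist[tour[i]][tour[i + 1]] for i in range(len(tour) - 1))

def nearest_neighbour(dist, start=0):
    unvisited = set(range(len(dist))) - {start}
    tour = [start]
    while unvisited:
        current = tour[-1]
        # Greedily pick the closest city we haven't visited yet.
        closest = min(unvisited, key=lambda city: dist[current][city])
        tour.append(closest)
        unvisited.remove(closest)
    return tour + [start]               # return to the origin city

def repeated_nearest_neighbour(dist):
    # Run NN from every possible start city and keep the shortest tour.
    tours = (nearest_neighbour(dist, start) for start in range(len(dist)))
    return min(tours, key=lambda tour: tour_length(dist, tour))

def sampled_nearest_neighbour(dist, sample_size=5):
    # Run NN from a random sample of start cities instead of all of them.
    starts = random.sample(range(len(dist)), min(sample_size, len(dist)))
    tours = (nearest_neighbour(dist, start) for start in starts)
    return min(tours, key=lambda tour: tour_length(dist, tour))
```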
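And a rough sketch of 2-opt. This is the common nested-loop formulation rather than the exact windowed/recursive variant described in the takeaways, but the idea is the same: reverse a middle subsection of the tour and keep the change whenever the tour gets shorter (the start/end city is never moved):

```python
def tour_length(dist, tour):
    return sum(dist[tour[i]][tour[i + 1]] for i in range(len(tour) - 1))

def two_opt(dist, tour):
    best = tour[:]
    improved = True
    while improved:
        improved = False
        # best[i:j + 1] is the "middle" subsection that gets reversed; index 0
        # and the final index are excluded so the start city stays in place.
        for i in range(1, len(best) - 2):
            for j in range(i + 1, len(best) - 1):
                candidate = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
                if tour_length(dist, candidate) < tour_length(dist, best):
                    best = candidate
                    improved = True
    return best
```

For example, `two_opt(dist, nearest_neighbour(dist))` takes an NN tour and tries to improve it.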
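Finally, for very small inputs the `O(n!)` brute force approach mentioned at the top can be written directly - mostly useful as a test oracle to check the approximate solutions against (again, just a sketch with names of my own):

```python
from itertools import permutations

def tour_length(dist, tour):
    return sum(dist[tour[i]][tour[i + 1]] for i in range(len(tour) - 1))

def brute_force_tsp(dist, start=0):
    # Try every possible ordering of the remaining cities: O(n!) time.
    other_cities = [c for c in range(len(dist)) if c != start]
    best = None
    for ordering in permutations(other_cities):
        tour = [start, *ordering, start]
        if best is None or tour_length(dist, tour) < tour_length(dist, best):
            best = tour
    return best
```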
As always, if you found any errors in this post please let me know!