TSP: Worst-Case Ratio Growth

As part of my high-school dissertation, I am describing heuristics for the Traveling Salesman Problem. I was reading this case study (p. 8), but I cannot understand what these statements mean:

The running time for NN, as described, is Θ(N²). [...] In particular, we are guaranteed that NN(I)/OPT(I) ≤ (0.5)(log₂ N + 1).
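For reference, here is roughly how the NN (nearest-neighbor) heuristic I am describing works. This is only a minimal Python sketch assuming a full symmetric distance matrix; the repeated scans over the remaining cities are where the Θ(N²) running time comes from.

```python
def nearest_neighbor_tour(dist, start=0):
    """Nearest-neighbor TSP heuristic (minimal sketch).

    dist is an N x N symmetric matrix of non-negative distances.
    Returns (tour, length) for a tour that visits every city once
    and returns to the start.
    """
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour = [start]
    length = 0.0
    current = start
    while unvisited:                                   # N - 1 iterations ...
        nxt = min(unvisited, key=lambda c: dist[current][c])  # ... each scanning O(N) cities
        length += dist[current][nxt]
        unvisited.remove(nxt)
        tour.append(nxt)
        current = nxt
    length += dist[current][start]                     # close the tour
    return tour, length
```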

That part is perfectly clear to me. But then the text continues:

No substantially better guarantee is possible, however, as there are instances for which the ratio grows as Θ(log N).

What is the point of the "there are instances for which..." part?

The same thing happens with the greedy algorithm:

... but the worst examples known for Greedy only make the ratio grow as (log N)/(3 log log N).
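(For context, the "Greedy" heuristic referred to here is, as far as I understand, the edge-greedy construction: repeatedly add the cheapest remaining edge as long as no city ends up with degree greater than two and no cycle is closed prematurely. A rough Python sketch, just to fix ideas:)

```python
def greedy_tour(dist):
    """Edge-greedy TSP construction (rough sketch).

    Sorts all edges by weight and adds each edge unless it would give a
    city degree > 2 or close a cycle before all cities are included.
    Uses union-find to detect premature cycles.
    """
    n = len(dist)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    edges = sorted((dist[i][j], i, j) for i in range(n) for j in range(i + 1, n))
    degree = [0] * n
    chosen = []
    for w, i, j in edges:
        if degree[i] == 2 or degree[j] == 2:
            continue                                  # city already has two tour edges
        ri, rj = find(i), find(j)
        if ri == rj and len(chosen) < n - 1:
            continue                                  # would close a sub-tour too early
        degree[i] += 1
        degree[j] += 1
        parent[ri] = rj
        chosen.append((i, j))
        if len(chosen) == n:                          # final edge closes the Hamiltonian cycle
            break
    return chosen
```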

So what is the meaning of these statements? Does this have to do with non-Euclidean instances (I wouldn't think so, because the algorithm only has to read a column of the distance matrix at each step)? Or is it about instances with, for example, several nodes at the same distance from the current node, which would force the algorithm to branch its decision tree or something like that?

EDIT: Thanks to @templatetypedef (whose answer I will still accept as correct), all of this now makes sense. However, I would like to ask whether anyone knows an example (or even just a link to one) of these specific graphs, no matter for which algorithm. I don't think this is too off-topic; it would more likely add something useful to the question.

1 answer

Take a look at these two statements side by side:

In particular, we are guaranteed that NN(I)/OPT(I) ≤ (0.5)(log₂ N + 1).

No substantially better guarantee is possible, however, as there are instances for which the ratio grows as Θ(log N).

The first statement says that the NN algorithm, in the worst case, produces an answer that is (roughly) within a factor of ½ log₂ N of the true answer (to see this, just multiply both sides by OPT(I)). This is great news! The natural follow-up question is whether the actual bound is even tighter. For example, this result does not rule out the possibility that we might also have NN(I)/OPT(I) ≤ log log N or NN(I)/OPT(I) ≤ 2. Those would be much tighter bounds.
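To get a feel for what this guarantee promises in absolute terms, you can plug a few values of N into the factor (0.5)(log₂ N + 1); a quick Python check:

```python
import math

# The first statement: NN(I) <= 0.5 * (log2(N) + 1) * OPT(I).
for n in (10, 100, 1_000, 1_000_000):
    factor = 0.5 * (math.log2(n) + 1)
    print(f"N = {n:>9}: NN is guaranteed to be within a factor of {factor:.2f} of optimal")
```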

That is where the second statement comes in. It says that there are known TSP instances (i.e., specific graphs with weights on them) for which the ratio NN(I)/OPT(I) grows as Θ(log N) (i.e., the ratio is proportional to the logarithm of the number of nodes in the graph). Since such inputs are already known to exist, there is no way NN(I)/OPT(I) could be bounded from above by something like log log N or 2, because those bounds are too tight.

However, the second statement in isolation is not very useful either. We know there are inputs that make the algorithm produce an answer that is off by a logarithmic factor, but could there be inputs that make it off by far more, say an exponential factor? Thanks to the first statement, we know this cannot happen, since we never lose more than a logarithmic factor.

Think of these statements as two halves: the first gives an upper bound on how bad the approximation can be; it is never more than a factor of (0.5)(log₂ N + 1) away from optimal. The second gives a lower bound on how bad the approximation can be: there are specific instances on which the algorithm does no better than a Θ(log N) factor above the optimal answer.

(Note that the Θ(log N) here has nothing to do with running time; it is just a way of saying "something grows logarithmically.")
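If you want to see both bounds in action, you can compare NN against the exact optimum on tiny instances (brute force over all permutations, so this is only feasible for very small N). The random Euclidean instance below is just an illustration, not one of the known worst-case families; on inputs like this the ratio will typically stay well below the guaranteed upper bound:

```python
import itertools
import math
import random

def nn_length(dist, start=0):
    """Length of the nearest-neighbor tour starting from `start`."""
    n = len(dist)
    unvisited = set(range(n)) - {start}
    cur, total = start, 0.0
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist[cur][c])
        total += dist[cur][nxt]
        unvisited.remove(nxt)
        cur = nxt
    return total + dist[cur][start]

def opt_length(dist):
    """Exact optimum by brute force; only feasible for tiny N."""
    n = len(dist)
    best = float("inf")
    for perm in itertools.permutations(range(1, n)):
        tour = (0,) + perm
        length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        best = min(best, length)
    return best

random.seed(1)
n = 8
# Random points in the unit square -- an illustration only, NOT a worst-case family.
pts = [(random.random(), random.random()) for _ in range(n)]
dist = [[math.hypot(ax - bx, ay - by) for bx, by in pts] for ax, ay in pts]

ratio = nn_length(dist) / opt_length(dist)
bound = 0.5 * (math.log2(n) + 1)
print(f"NN/OPT = {ratio:.3f}   guaranteed upper bound for N={n}: {bound:.3f}")
```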

Going forward, keep an eye out for both upper bounds and lower bounds. Together they tell you much more than either one does individually.

Good luck with your dissertation!
