Essentially, it is a depth-first search that stops at a given depth or cost. For example, it might DFS over all nodes within 10 edges of the source, then 20, then 30. The difference is that instead of restarting the DFS from scratch on each iteration, I save the "perimeter" of the searched area (a list of nodes) whenever an iteration's search reaches its limit.
On the next iteration, I walk over all the nodes on that perimeter, running a DFS from each one, again up to a fixed depth/cost before stopping, and again recording the perimeter of the search area for the next iteration to start from. A rough sketch of what I mean is below.
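This is a minimal sketch of the idea as I understand it, assuming an adjacency-list graph with unit edge costs; the function names (depth_limited_dfs, explore_in_rings) are just illustrative, not from any library:

    def depth_limited_dfs(graph, frontier, visited, limit):
        """Run DFS from every node on the current frontier, stopping `limit`
        edges out. Returns the new frontier: the nodes at which the search
        hit the depth limit, to be used as start points next iteration."""
        new_frontier = []
        stack = [(node, 0) for node in frontier]
        while stack:
            node, depth = stack.pop()
            # Frontier nodes (depth 0) are already marked visited from the
            # previous iteration but still need to be expanded.
            if node in visited and depth > 0:
                continue
            visited.add(node)
            if depth == limit:
                # Save the perimeter node instead of descending further.
                new_frontier.append(node)
                continue
            for neighbor in graph.get(node, []):
                if neighbor not in visited:
                    stack.append((neighbor, depth + 1))
        return new_frontier


    def explore_in_rings(graph, source, limit):
        """Expand the explored area by `limit` edges per iteration."""
        visited = set()
        frontier = [source]
        while frontier:
            frontier = depth_limited_dfs(graph, frontier, visited, limit)
        return visited


    # Tiny example: a path graph 0-1-2-...-9, explored 3 edges at a time.
    if __name__ == "__main__":
        graph = {i: [i - 1, i + 1] for i in range(1, 9)}
        graph[0], graph[9] = [1], [8]
        print(sorted(explore_in_rings(graph, source=0, limit=3)))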
The reason I do this is that my graph (which is a tree) is split into a set of logical "pieces", each of which needs to be fully explored before its child pieces can begin to be explored. There are a large number of nodes but only a small number of pieces; I am essentially doing a BFS over the pieces, in which each individual piece (containing a large number of separate nodes) is fully explored by its own mini-DFS.
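For comparison, here is a conceptual sketch of that goal stated directly, a BFS over pieces where each piece is exhausted by its own mini-DFS before its child pieces are dequeued. The `piece_of` callback is a hypothetical stand-in for however pieces are identified in practice:

    from collections import deque

    def explore_piecewise(graph, source, piece_of):
        """BFS over pieces; each piece is fully explored by a mini-DFS
        before any of its child pieces are processed."""
        visited = set()
        start_piece = piece_of(source)
        piece_queue = deque([start_piece])
        seen_pieces = {start_piece}
        entry_points = {start_piece: [source]}
        while piece_queue:
            piece = piece_queue.popleft()
            # Mini-DFS restricted to nodes belonging to the current piece.
            stack = entry_points.pop(piece, [])
            while stack:
                node = stack.pop()
                if node in visited:
                    continue
                visited.add(node)
                for nb in graph.get(node, []):
                    if nb in visited:
                        continue
                    if piece_of(nb) == piece:
                        stack.append(nb)
                    else:
                        # Neighbor belongs to a child piece: remember it as
                        # an entry point and enqueue that piece for later.
                        entry_points.setdefault(piece_of(nb), []).append(nb)
                        if piece_of(nb) not in seen_pieces:
                            seen_pieces.add(piece_of(nb))
                            piece_queue.append(piece_of(nb))
        return visited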
Now, I simply came up with this on my own to solve my problem, and it works, but is there something similar in the literature? I could not find anything, but I am sure someone else has done this before and properly analyzed its performance, asymptotic behavior, drawbacks, pitfalls, etc. If so, I would like to read about it.