Implementing decrease-key in the STL priority queue (C++)

I am trying to implement Prim's algorithm, and for this I need a decreaseKey operation on the priority queue (to update a vertex's key while it is still in the queue). Can I implement this with the STL priority queue?

If this helps, this is the algorithm I follow:

  • for each vertex u in the graph G
    • set key(u) to INFINITY
    • set parent(u) to NIL
  • set the key of the source vertex to 0
  • enqueue all vertices of the graph into the priority queue Q with the keys set above
  • while Q is not empty
    • pop the vertex u with the smallest key in Q
    • for each vertex v adjacent to u do
      • if (v is still in Q) and (key(u) + weight(u, v) < key(v)) then
        • set u as the parent of v
        • update key(v) to key(u) + weight(u, v) // This step gives me problems, since I don't know how to implement decreaseKey on the priority queue
+4
4 answers

I do not think you can implement it with an STL container. You can always write your own binary heap on top of a vector, but there is a workaround:

Keep an array of distances, say d. In the priority queue, store pairs of (distance, vertex index). When you need to decrease a value in the queue, do not remove the old pair; just update the value in the d array and push a new pair into the queue.

Each time you pop a pair from the queue, check whether its distance still matches the current value in the d array. If it does not, the pair is stale: ignore it and pop the next one.

The running time is still O(M log M), and the queue holds O(M) entries, where M is the number of edges.

There is another approach: use a red-black tree, which supports insert, erase, and find-minimum in O(log N). The STL's red-black tree implementation is available through the std::set container.

But although the time complexity is the same, a red-black tree has a larger hidden constant, so it can be noticeably slower than a binary heap, roughly 5 times. It depends on the data, of course.

+8

A different approach, better than using std::set: you can use btree::btree_set (or btree::safe_btree_set). This is an interface-compatible replacement for std::set written by Google, backed by a B-tree instead of the red-black tree the STL uses. It performs much better than std::set while remaining O(log N), and it also has a much smaller memory footprint. See the performance comparison: http://code.google.com/p/cpp-btree/wiki/UsageInstructions

+2

I am not an expert, so I hope this is not too naive, but would a vector combined with lower_bound work well?

If you use lower_bound to find the right place to insert each new value, the vector stays sorted as it is built, so no separate sorting pass is required. And on a sorted vector, isn't lower_bound a binary search, i.e. logarithmic?

Since the vector is sorted, finding the minimum value (or the maximum) is trivial.

To decrease a key, you do a lower_bound search, erase the element, and then do another lower_bound to insert the reduced key, which makes two logarithmic searches. Still not bad.

Alternatively, you could update the key in place and re-sort the vector. I would guess that with random access this should still be cheap, but I don't know exactly what the STL does.

With a sorted vector, if you know the new key is smaller than the old one, perhaps you could even re-sort only the prefix of the vector holding the smaller values, for even better performance.

Another consideration: I think sets/maps have somewhat more memory overhead per element than vectors?

0

I think a full sort is bounded by O(N log N), so two O(log N) searches to erase and re-insert, rather than re-sorting, should be better for a decrease-key operation.

The other thing is that insertion into a vector is not so cheap, but in general, is the vector-with-lower_bound idea worth considering?

Thanks.

0
