Incidence matrix instead of adjacency matrix

What graph problems are faster (in big-O terms) to solve using an incidence matrix data structure instead of the more common adjacency matrix?

+6
math algorithm big-o data-structures graph
2 answers

Space complexity of the structures:

Adjacency: O(V^2)  Incidence: O(V·E)

Because of this, the incidence structure only saves space when the number of edges is smaller than the number of vertices.
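A minimal sketch of the space trade-off, assuming simple list-of-lists matrices (the helper names are mine, not from the answer):

```python
def adjacency_matrix(n, edges):
    # V x V matrix -> O(V^2) space
    A = [[0] * n for _ in range(n)]
    for u, v in edges:
        A[u][v] = A[v][u] = 1
    return A

def incidence_matrix(n, edges):
    # V x E matrix -> O(V*E) space
    B = [[0] * len(edges) for _ in range(n)]
    for j, (u, v) in enumerate(edges):
        B[u][j] = B[v][j] = 1
    return B

edges = [(0, 1), (1, 2)]            # sparse graph: 4 vertices, 2 edges
A = adjacency_matrix(4, edges)      # 4 * 4 = 16 cells
B = incidence_matrix(4, edges)      # 4 * 2 = 8 cells, smaller because E < V
```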

Compare the time complexity of some typical graph operations:

Find all vertices adjacent to a vertex: Adj: O(V)  Inc: O(V·E)

Check whether two vertices are adjacent: Adj: O(1)  Inc: O(E)

Compute the degree of a vertex: Adj: O(V)  Inc: O(E)

And so on. For any given algorithm, you can combine building blocks like these (sketched below) to work out which representation gives the best overall time complexity.
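A sketch of those operations on both matrices, assuming A is the V x V adjacency matrix and B the V x E incidence matrix from the previous snippet (function names are illustrative):

```python
def adjacent_adj(A, u, v):
    return A[u][v] == 1                       # O(1) lookup

def adjacent_inc(B, u, v):
    # scan every edge column for one containing both endpoints: O(E)
    return any(col[u] and col[v] for col in zip(*B))

def neighbors_adj(A, u):
    return [v for v, x in enumerate(A[u]) if x]          # O(V) row scan

def neighbors_inc(B, u):
    # for each edge incident to u, scan its column for the other
    # endpoint: O(V*E) in the worst case
    out = []
    for j in range(len(B[0])):
        if B[u][j]:
            out += [w for w in range(len(B)) if w != u and B[w][j]]
    return out

def degree_adj(A, u):
    return sum(A[u])                          # O(V)

def degree_inc(B, u):
    return sum(B[u])                          # O(E)
```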

As a final note, using any matrix representation is extremely space-inefficient for all but the densest graphs, and I recommend against using one unless you have consciously ruled out alternatives such as adjacency lists.
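For comparison, a minimal adjacency-list sketch (the recommended alternative): O(V + E) space, and the neighbours of a vertex are available in O(deg(v)).

```python
from collections import defaultdict

def adjacency_list(edges):
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    return adj

adj = adjacency_list([(0, 1), (1, 2)])
print(adj[1])   # [0, 2]
```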

+7

Personally, I have never found a real application of the incidence matrix representation in a programming competition or research problem. I think it may be useful for proving some theorems or for some very special problems. One book gives "counting the number of spanning trees" as an example of a problem where this representation is useful.
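The spanning-tree example presumably refers to Kirchhoff's matrix-tree theorem, where the graph Laplacian factors as L = B·Bᵀ with B the oriented incidence matrix. A sketch under that assumption (uses numpy):

```python
import numpy as np

def spanning_tree_count(n, edges):
    # Oriented incidence matrix: +1 / -1 per edge column (orientation arbitrary)
    B = np.zeros((n, len(edges)))
    for j, (u, v) in enumerate(edges):
        B[u, j], B[v, j] = 1, -1
    L = B @ B.T                                # graph Laplacian
    # Matrix-tree theorem: determinant of any cofactor of L
    return round(np.linalg.det(L[1:, 1:]))

print(spanning_tree_count(3, [(0, 1), (1, 2), (0, 2)]))   # triangle -> 3
```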

Another problem with this representation is that there is little point in storing it, because it is easy to compute any given cell on the fly from the edge list.
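A minimal sketch of that point, assuming the edge list stores endpoint pairs: cell (v, j) of the incidence matrix can be answered without ever materializing the matrix.

```python
def incidence_cell(edges, v, j):
    # 1 if vertex v is an endpoint of edge j, else 0
    u, w = edges[j]
    return 1 if v in (u, w) else 0

edges = [(0, 1), (1, 2)]
print(incidence_cell(edges, 1, 0))   # 1: vertex 1 is an endpoint of edge 0
print(incidence_cell(edges, 2, 0))   # 0
```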

It may, however, be more useful for hypergraphs, but only if they are dense.

So, in my opinion, it is only useful for theoretical work.

+3
