My answer will be based on the igraph package in R. The situation is genuinely rather confusing, and the questions are pertinent, because, as Newman (2004) says,
Since the publication of that work, the author has been asked a number of times whether there is an appropriate generalization of the algorithm for weighted networks.
In that article he derives an appropriate generalization of the Newman-Girvan algorithm for weighted networks.
Weights
You are correct in your interpretation of weights in the Newman-Girvan algorithm. edge_betweenness is computed with a variant of Brandes' algorithm (Brandes, 2001), where the length of a path is defined as the sum of the weights of its edges. (You can also check the source code, though that is not essential here.) In ?edge_betweenness, and in particular in ?cluster_edge_betweenness, it says:
Edge weights are used to calculate weighted edge betweenness. This means that edges are interpreted as distances, not as connection strengths.
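As a minimal sketch of how this plays out in igraph (the graph below is my own toy example, not from the question or the references):

```r
library(igraph)

# Toy graph: two triangles joined by a single bridge edge C-D.
g <- graph_from_literal(A-B-C-A, D-E-F-D, C-D)

# Unweighted edge betweenness: the bridge carries all shortest paths
# between the two triangles, so it scores highest.
edge_betweenness(g)

# With weights, path length is the sum of edge weights (distances).
# Unit weights must therefore reproduce the unweighted result.
E(g)$weight <- 1
all.equal(edge_betweenness(g), edge_betweenness(g, weights = E(g)$weight))
```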
The consequences are as follows. Let b(e, w) be the edge betweenness of an edge e with weight w. Then one can show (I can elaborate if you wish) that
b(e, w) <= b(e, w*) if and only if w >= w*.
That is, the betweenness of e and its weight are inversely related. The intuition is that given, say, w* > w, some of the shortest paths that used to cross e will now be dominated by other paths that avoid e. Consequently, a greater weight implies (weakly) lower betweenness, and lower betweenness makes it less likely that e is identified as an edge connecting two communities. That sounds odd if we view weights as distances: an inter-community edge should be "long", yet making it longer makes it less likely to be removed. On the other hand, if e lies within some community and we decrease its weight, the number of shortest paths passing through it potentially grows, and it becomes more likely to be cut, as if it joined two communities. (I am not yet claiming anything about the corresponding modularity values, though.)
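To illustrate the inverse relationship numerically (again a toy graph of my own): on a 4-cycle, raising the weight, i.e. the distance, of one edge can only lower its betweenness.

```r
library(igraph)

# A 4-cycle: there are two competing routes between opposite corners.
g <- graph_from_literal(A-B, B-C, C-D, D-A)
E(g)$weight <- 1

betweenness_of_AB <- function(w) {
  E(g)$weight[1] <- w  # vary only the weight of the first edge, A-B
  edge_betweenness(g, weights = E(g)$weight)[1]
}

# As w grows, shortest paths divert around the other side of the cycle,
# so the betweenness of A-B (weakly) decreases: 2, then 1, then 0.
sapply(c(1, 2, 5), betweenness_of_AB)
```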
Now suppose the weights do represent connection strengths. Then the stronger a connection, the fewer shortest paths pass through that edge (since the algorithm still treats weights as lengths when computing them), the lower its betweenness, and the less likely it is to be removed. So that makes sense.
What is unpleasant, or rather strange, is that the length of a path is now defined as the sum of connection strengths. We can, however, reinterpret the algorithm. Suppose the weights are >= 1 within communities and < 1 between them. Then we can read the length of a path as its privacy (a path inside a community involves many close interactions, whereas an edge connecting two communities is somewhat public, or open). Under this interpretation the algorithm looks for the least private / most public paths and computes the corresponding betweenness; we then remove the edges that belong to many of the most public paths.
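A quick check of this reading (names and weights are my own invented example): strong ties inside two communities, one weak bridge between them. cluster_edge_betweenness then removes the "short" bridge first and recovers the two groups.

```r
library(igraph)

# Two triangles with strong internal ties and a weak bridge C-D.
g <- graph_from_literal(A-B-C-A, D-E-F-D, C-D)
E(g)$weight <- 5                                   # strong, i.e. "private"
E(g)$weight[get.edge.ids(g, c("C", "D"))] <- 0.1   # weak, i.e. "public"

# The weak bridge is the shortest route between the triangles, so it
# attracts all cross-community shortest paths and is removed first.
cl <- cluster_edge_betweenness(g, weights = E(g)$weight)
membership(cl)   # A, B, C in one community; D, E, F in the other
</imports>
```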
So perhaps I have made a mistake somewhere, but it seems it would be wiser to view the weights as connection strengths.
Newman (2004) does something related:
... we will consider specifically those networks in which the weights on edges take greater values for pairs of vertices that are closer or more similar to each other.
This, it would seem, should make sense. However, in order to preserve the natural definition of the shortest path, he writes:
One can define paths on a weighted network by assuming that the "length" of an edge varies inversely with its weight, so that two vertices that are connected twice as strongly are half as far apart.
That is, shortest-path lengths are now inversely related to the weights. Since not doing this seemed to give sensible results above, we now have a problem:
To see this, notice that any two vertices that are particularly strongly connected to one another will have a particularly short distance along the edge between them. Geodesic paths will thus, all other things being equal, prefer to flow along such an edge rather than along another, longer edge between two less well connected vertices, and hence closely connected pairs will tend to attract many paths and acquire high betweenness. This means that, as a rule, we are more likely to remove edges between well connected pairs than between poorly connected pairs, which is precisely the opposite of what we would like the algorithm to do.
This is exactly the outcome I described above when weights are viewed as distances. As I mentioned at the beginning of the answer, to address this, Newman (2004) proposes mapping weighted graphs to unweighted multigraphs and then proceeding exactly as in the standard unweighted case. I believe this multigraph idea can be realized by setting weighted = NULL while supplying a non-binary (integer) adjacency matrix when defining the graph; see weighted in ?graph_from_adjacency_matrix.
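A sketch of that multigraph construction (my own toy matrix; see ?graph_from_adjacency_matrix):

```r
library(igraph)

# Integer-valued adjacency matrix; entries are meant as multiplicities.
A <- matrix(c(0, 3, 1,
              3, 0, 2,
              1, 2, 0), nrow = 3, byrow = TRUE)

# weighted = NULL (the default): each entry gives the NUMBER of edges
# between a pair of vertices, i.e. we get an unweighted multigraph.
g_multi <- graph_from_adjacency_matrix(A, mode = "undirected", weighted = NULL)
ecount(g_multi)   # 3 + 1 + 2 = 6 parallel edges

# weighted = TRUE instead gives one edge per pair with a weight attribute.
g_w <- graph_from_adjacency_matrix(A, mode = "undirected", weighted = TRUE)
ecount(g_w)       # 3
```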
Modularity
First of all, modularity can be used with weighted graphs, as Newman (2004) does, so that is not a problem. In general, though, it is not obvious how using weights affects modularity-based choices such as the number of communities. Perhaps I will add some examples in R later. There should be an improvement over the unweighted case, as Newman (2004) finds, when the interpretation of the weights matches what the algorithm does with them. Otherwise, I suspect the structure of the graph and the weights themselves matter a great deal for how far we end up from the truth.
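As one small illustration in R (toy graph and weights of my own): modularity accepts a weights argument, and up-weighting within-community edges raises the modularity of the "correct" split.

```r
library(igraph)

# Two triangles joined by a bridge; the true split is the two triangles.
g <- graph_from_literal(A-B-C-A, D-E-F-D, C-D)
memb <- c(1, 1, 1, 2, 2, 2)   # membership in vertex order A..F

m_unweighted <- modularity(g, memb)

# Strong within-community weights, weak bridge.
w <- rep(5, ecount(g))
w[get.edge.ids(g, c("C", "D"))] <- 0.1
m_weighted <- modularity(g, memb, weights = w)

c(m_unweighted, m_weighted)   # the weighted value is larger here
```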
References
Newman, M.E.J., 2004. Analysis of weighted networks. Physical Review E, 70(5).
Brandes, U., 2001. A faster algorithm for betweenness centrality. Journal of Mathematical Sociology, 25(2), pp. 163-177.