I am trying to use Kowalski's connection graph algorithm for resolution theorem proving. The description of the algorithm at http://www.doc.ic.ac.uk/~rak/ is silent about what to do with the large number of duplicate clauses that get created. I wonder if there is a well-known technique for dealing with them?
In particular, you cannot just suppress the generation of duplicates, because the links that come with them are relevant.
It seems to me that it is probably necessary to keep a set of all the clauses created so far, and when a duplicate is created, add the new links to the existing instance instead. This should probably be maintained even when a clause has nominally been deleted, in case it is later regenerated.
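For what it's worth, here is a minimal sketch of the interning idea described above; all class and attribute names (`Clause`, `ClauseRegistry`, `intern`, `links`, `deleted`) are hypothetical, not from Kowalski's description. The registry maps a canonical key to a single clause instance, merges incoming links onto it, and "undeletes" it if a nominally deleted clause is regenerated:

```python
# Hypothetical sketch: intern clauses so duplicates share one instance
# whose link set keeps growing.

class Clause:
    def __init__(self, literals):
        self.literals = frozenset(literals)  # e.g. {"P(x)", "~Q(x)"}
        self.links = set()                   # resolution links to other clauses
        self.deleted = False                 # nominally deleted, kept for reuse

class ClauseRegistry:
    def __init__(self):
        self._by_key = {}                    # canonical key -> Clause

    def intern(self, literals, links=()):
        key = frozenset(literals)            # order-insensitive canonical key
        clause = self._by_key.get(key)
        if clause is None:
            clause = Clause(literals)
            self._by_key[key] = clause
        else:
            clause.deleted = False           # regeneration revives the clause
        clause.links.update(links)           # merge links onto existing instance
        return clause

registry = ClauseRegistry()
a = registry.intern(["P(x)", "~Q(x)"], links={"link1"})
b = registry.intern(["~Q(x)", "P(x)"], links={"link2"})
assert a is b
assert a.links == {"link1", "link2"}
```

This is only the bookkeeping skeleton; the actual link computation (which literal pairs are resolvable) is left out.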
Duplication probably needs to be defined in terms of textual representation rather than object identity, since literals in different clauses are separate objects even when they are identical.
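Concretely, a textual key could look like the following sketch (the function name `clause_key` is my own invention). Sorting the literals' string forms makes the key independent of both literal order and of which objects happen to represent the literals:

```python
# Hypothetical sketch: duplicates are defined by a canonical textual key,
# not by object identity.

def clause_key(literals):
    """Canonical key: sorted tuple of the literals' textual forms."""
    return tuple(sorted(str(lit) for lit in literals))

# Two clauses built from distinct literal objects, in different orders...
c1 = ["P(x)", "~Q(x)"]
c2 = ["~Q(x)", "P(x)"]
assert clause_key(c1) == clause_key(c2)  # ...are recognised as duplicates
```

Note this does not handle alphabetic variants (clauses identical up to variable renaming); those would need variable normalisation before the key is built.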
Can someone confirm whether I am on the right track here? Also, the only significant online reference I could find on this algorithm is the link above; does anyone know of others, or of any existing code implementing it?