Kowalski's connection graph theorem proving

I am trying to use Kowalski's connection graph algorithm for resolution theorem proving. The description of the algorithm at http://www.doc.ic.ac.uk/~rak/ is silent about what to do with the large number of duplicate clauses that get generated. I wonder if there is a well-known technique for dealing with them?

In particular, you cannot simply suppress the generation of duplicates, because the links that come with them are still relevant.

It seems to me that it is probably necessary to keep a set of all the clauses created so far, and when a duplicate is created, to add the new links to the existing instance instead. This probably needs to be supported even for a clause that has nominally been deleted, because it may be regenerated later.

Duplicate detection probably needs to be defined in terms of textual representation rather than object equality, since the literals of different clauses are separate objects even when they are textually identical.
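To make the idea concrete, here is a minimal sketch (all names hypothetical, assuming clauses are collections of literals with a sensible string form) of duplicate detection keyed on a canonical textual representation rather than object identity:

```python
# Hypothetical sketch: detect duplicate clauses by canonical text, not identity.

def clause_key(clause):
    """Canonical key for a clause: the sorted tuple of literal strings,
    so {P(x), Q(y)} and {Q(y), P(x)} collide even when their literal
    objects are distinct."""
    return tuple(sorted(str(lit) for lit in clause))

class ClauseStore:
    """Registry of all clauses created so far."""

    def __init__(self):
        self._by_key = {}

    def intern(self, clause):
        """Return the canonical instance of `clause`, registering it
        if no textually identical clause has been seen yet. A caller
        that gets back an existing instance would attach its new links
        to that instance instead of the fresh duplicate."""
        key = clause_key(clause)
        return self._by_key.setdefault(key, clause)
```

On this scheme, a "deleted" clause can simply stay in the store with its links marked inactive, so that regenerating it finds the original instance again.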

Can someone confirm whether I am right about this? Also, the only significant online reference I could find for this algorithm is the link above; does anyone know of others, or of any existing code implementing it?

2 answers

It turns out that the Kowalski algorithm is simply not as useful as I thought. Basically, you need to keep everything you create, or you spend almost all of your processor time generating the same clauses over and over again. Keeping everything means you want to detect duplicates, which means you want to hash everything, which has the useful side effect that equality can then be checked by simple pointer comparison (since you keep only one copy of each expression).
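The "only one copy of each expression" idea is hash-consing. A minimal sketch (hypothetical names, assuming terms are functor/argument trees) of interning terms so that structural equality reduces to pointer comparison:

```python
# Hypothetical sketch of hash-consing: every structurally identical term
# is represented by one shared object, so equality checks become `is`.

_intern_table = {}

def mk_term(functor, *args):
    """Build a term, reusing the existing object when an identical one
    (same functor, same already-interned arguments) was built before."""
    key = (functor, args)
    term = _intern_table.get(key)
    if term is None:
        term = (functor, args)
        _intern_table[key] = term
    return term
```

Because subterms are interned bottom-up, two independently built copies of f(x) end up as the very same object, and the expensive structural comparison happens at most once, at construction time.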


That all looks entirely plausible; some googling does not turn up any obvious implementations either. I agree that you want to compare representations for equality rather than rely on object identity.

