A real-world example of unification in first-order logic?

I know this is only one small part of the bigger picture, but at the moment I'm doing a little logic programming. One thing I still don't properly understand is unification in first-order logic.

I read the Wikipedia article, and it's more or less clear that the goal is to find a substitution that makes two terms identical... There are examples in the article too, but I just don't see why this is useful. Can someone give an example with real-world objects instead of A, B, C, etc.? I hope that would help me understand. Thanks

+6
computer-science logic unification
3 answers

For a real-world example where unification is used and useful, look at the unification-based grammars used in computational linguistics, such as HPSG and LFG. At first glance they may look like a different flavor of unification, but it is the same operation.

A unification-based grammar can be thought of as a CFG (context-free grammar) whose productions are extended with unification. Each CFG term carries an AVM (Attribute-Value Matrix), which is a directed acyclic graph of attributes and values. The idea is somewhat akin to the attribute grammars used in compilers.

Imagine this toy grammar:

  S  -> NP VP
  NP -> Kim
  NP -> The cats
  VP -> V NP
  V  -> see
  V  -> sees

This grammar overgenerates slightly, because it does not enforce agreement:

* The cats sees Kim [S [NP The cats] [VP [V sees] [NP Kim]]]

To fix this, we could refine the CFG to include the notion of agreement:

  S     -> NP_sg VP_sg
  S     -> NP_pl VP_pl
  NP_sg -> Kim
  NP_pl -> The cats
  VP_sg -> V_sg NP_sg
  VP_sg -> V_sg NP_pl
  VP_pl -> V_pl NP_pl
  VP_pl -> V_pl NP_sg
  V_sg  -> sees
  V_pl  -> see

This gets rid of the overgeneration of the previous grammar, but it leads to a combinatorial explosion very quickly. Instead, we can annotate each term with an AVM and unify the AVMs as we parse:

  S -> NP VP , C = A unified with B   (A, B: the AVMs of NP and VP; C: the AVM of S)
  NP -> Kim / [ AGR sg ]              (we mark "Kim" as singular)
  NP -> The cats / [ AGR pl ]
  VP[ AGR #1 ] -> V[ AGR #1 ] NP

The #1 notation is a reentrancy, which means the value of that feature must be shared: after a successful unification, both occurrences point to the same node in the graph. In practice, we are saying here that the agreement feature of a verb phrase is the same as the agreement feature of the verb inside the phrase.

  V -> see / [ AGR pl ]
  V -> sees / [ AGR sg ]

With our extended toy grammar, "The cats sees Kim" is rejected because the AVMs of the NP and the VP fail to unify, as they carry different values for their AGR feature. As we parse, we unify the AVMs together, which makes the formalism very expressive and lets grammarians write compact grammars. Wide-coverage UBGs typically have on the order of a hundred rules, whereas an equivalent CFG, which may not even exist since some UBG formalisms are Turing-complete, would have thousands of rules or more.
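To make the AVM idea concrete, here is a minimal Python sketch of AVM unification over flat attribute-value dictionaries. Real AVMs are DAGs with reentrancies, which this toy version omits, and the name unify_avm is made up for illustration:

```python
def unify_avm(a, b):
    """Unify two flat AVMs; return the merged AVM, or None on a feature clash."""
    result = dict(a)
    for attr, value in b.items():
        if attr in result and result[attr] != value:
            return None  # feature clash, e.g. AGR sg vs. AGR pl
        result[attr] = value
    return result

# "Kim sees ...": subject NP and VP both carry AGR sg -> unification succeeds
print(unify_avm({"AGR": "sg"}, {"AGR": "sg"}))   # {'AGR': 'sg'}

# "*The cats sees ...": NP is AGR pl, VP is AGR sg -> unification fails
print(unify_avm({"AGR": "pl"}, {"AGR": "sg"}))   # None
```

Note that unification is not just an equality check: attributes present in only one AVM are carried over into the result, which is how partial information accumulates during parsing.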

See HPSG and LFG for details.

+2

Thank you for these detailed answers; now I really understand it. However, I would like to post here an example I found in the book "Artificial Intelligence: A Modern Approach" by Stuart Russell and Peter Norvig, in case someone comes across the same question. I think this answer takes a very practical approach:

Lifted inference rules require finding substitutions that make different logical expressions look identical. This process is called unification and is a key component of all first-order inference algorithms. The UNIFY algorithm takes two sentences and returns a unifier for them, if one exists.

Let's look at some examples of how UNIFY should behave. Suppose we have the query Knows(John, x): whom does John know? Answers to this query can be found by finding all sentences in the knowledge base that unify with Knows(John, x). Here are the results of unification with four different sentences that might be in the knowledge base:

 UNIFY(Knows(John, x), Knows(John, Jane))    = {x/Jane}
 UNIFY(Knows(John, x), Knows(y, Bill))       = {x/Bill, y/John}
 UNIFY(Knows(John, x), Knows(y, Mother(y)))  = {y/John, x/Mother(John)}
 UNIFY(Knows(John, x), Knows(x, Elisabeth))  = fail

The last unification fails because x cannot take the values John and Elisabeth at the same time.
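The four examples above can be reproduced with a short Python sketch of the textbook UNIFY. The term encoding is an assumption made for this example, not AIMA's own code: variables are lowercase strings, constants are capitalized strings, and compound terms are tuples. The occurs check is omitted for brevity.

```python
def is_var(t):
    """A term is a variable if it is a lowercase string (our encoding)."""
    return isinstance(t, str) and t[0].islower()

def substitute(t, s):
    """Apply substitution s to term t, recursively."""
    if is_var(t):
        return substitute(s[t], s) if t in s else t
    if isinstance(t, tuple):
        return tuple(substitute(x, s) for x in t)
    return t

def unify(x, y, s=None):
    """Return a substitution unifying x and y, or None on failure."""
    if s is None:
        s = {}
    x, y = substitute(x, s), substitute(y, s)
    if x == y:
        return s
    if is_var(x):
        return {**s, x: y}          # no occurs check, for brevity
    if is_var(y):
        return {**s, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):    # unify argument lists element by element
            s = unify(xi, yi, s)
            if s is None:
                return None
        return s
    return None                     # two different constants: clash

query = ("Knows", "John", "x")
print(unify(query, ("Knows", "John", "Jane")))        # {'x': 'Jane'}
print(unify(query, ("Knows", "y", "Bill")))           # {'y': 'John', 'x': 'Bill'}
print(unify(query, ("Knows", "y", ("Mother", "y"))))  # {'y': 'John', 'x': ('Mother', 'John')}
print(unify(query, ("Knows", "x", "Elisabeth")))      # None (fail)
```

In the last call, x is first bound to John by the first argument, so when the second argument tries to match x (now John) against Elisabeth, two different constants clash and unification fails, exactly as in the book.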

+5

Logic programming is, AFAIK, almost entirely unification. You give the interpreter a statement, and the interpreter tries to unify it with what it knows to be "true", i.e. with what is in its database.

eg.

 cat(tom) :- true. 

This asserts that tom is a cat.

Then you can query

 ?- cat(X). 

and Prolog will return

  X = tom 

Prolog scans its database and tries to unify the provided goal ( cat(X) ) with the facts it already "knows". In this case, it finds cat(tom) and can therefore tell you that X = tom .
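The scan-and-unify loop can be sketched in Python. This is a toy model of what the interpreter does, under an assumed flat-term encoding where uppercase arguments are variables; the names unify_flat and database are illustrative, not any Prolog API:

```python
def unify_flat(goal, fact):
    """Unify two flat terms like ('cat', 'X'); uppercase arguments are variables."""
    functor_g, *args_g = goal
    functor_f, *args_f = fact
    if functor_g != functor_f or len(args_g) != len(args_f):
        return None
    bindings = {}
    for a, b in zip(args_g, args_f):
        if a[0].isupper():              # a is a variable: bind it
            if bindings.get(a, b) != b:
                return None             # already bound to something else
            bindings[a] = b
        elif a != b:                    # two different constants: clash
            return None
    return bindings

database = [("cat", "tom"), ("cat", "felix"), ("dog", "rex")]

# ?- cat(X).   -- try every fact in turn, like Prolog's backtracking search
for fact in database:
    result = unify_flat(("cat", "X"), fact)
    if result is not None:
        print(result)                   # {'X': 'tom'}, then {'X': 'felix'}
```

The loop mirrors how Prolog enumerates answers: each fact that unifies with the goal yields one binding for X, and facts with a different functor simply fail to unify and are skipped.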

+1
