What is the combinatory logic equivalent of intuitionistic type theory?

I recently completed a university course that introduced Haskell and Agda (a dependently typed functional programming language), and wondered whether lambda calculus could be replaced by combinatory logic. With Haskell, this seems possible using the S and K combinators, which makes it point-free. I was wondering what the equivalent is for Agda. That is, is it possible to make a typed functional programming language equivalent to Agda without using any variables?
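For instance, in plain Haskell (just an illustrative sketch; the names s, k and i below are ad hoc, not from any library), the identity can be written without mentioning any variable:

    -- S and K as ordinary Haskell functions; the identity is recovered as S K K,
    -- so the definition of i is variable-free (point-free).
    s :: (a -> b -> c) -> (a -> b) -> a -> c
    s f g x = f x (g x)

    k :: a -> b -> a
    k x _ = x

    i :: a -> a
    i = s k k          -- S K K x = K x (K x) = x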

Also, can quantification be replaced by combinators in some way? I don't know if this is a coincidence, but universal quantification, for example, is what makes a type signature look like a lambda expression. Is there a way to remove universal quantification from a type signature without changing its meaning? For example, in:

forall a : Int -> a < 0 -> a + a < a 

Is it possible to express the same thing without using forall?

+80
types logic functional-programming haskell agda
2 answers

So I thought about it a little more and made some progress. Here's a first attempt at encoding Martin-Löf's delightfully simple (but inconsistent) Set : Set system in combinatory style. It's not the best place to finish, but it's the easiest place to start. The syntax of this type theory is just lambda calculus with type annotations, Pi types, and a universe Set.

Target type theory

For completeness, I will present the rules. Context validity just says you can build contexts from empty by adjoining fresh variables inhabiting Sets.

    --------------
      . |- valid

      G |- valid    G |- S : Set
    -----------------------------   x fresh for G
      G, x:S |- valid

And now we can say how to synthesize a type for a term in any given context, and how to change the type of something up to the computational behaviour of the terms it contains.

      G |- valid
    ----------------
      G |- Set : Set

      G |- S : Set    G |- T : Pi S \ x:S -> Set
    ---------------------------------------------
      G |- Pi S T : Set

      G |- S : Set    G, x:S |- t : T x
    ------------------------------------
      G |- \ x:S -> t : Pi S T

      G |- f : Pi S T    G |- s : S
    --------------------------------
      G |- f s : T s

      G |- valid
    --------------   x:S in G
      G |- x : S

      G |- s : S    G |- T : Set
    -----------------------------   S ={beta} T
      G |- s : T

In a small departure from the original, I have made lambda the only binding operator, so the second argument of Pi should be a function computing the way the return type depends on the input. By convention (in Agda, but sadly not in Haskell), the scope of a lambda extends as far rightwards as possible, so you can often leave abstractions unbracketed when they are the last argument of a higher-order operator: you can see I did that with Pi. Your Agda type (x : S) -> T becomes Pi S \ x:S -> T.

(Digression. Type annotations on lambda are necessary if you want to be able to synthesize the types of abstractions. If you switch to type checking as your modus operandi, you still need annotations to check a beta-redex like (\ x -> t) s, as you have no way to guess the types of the parts from the type of the whole. I advise modern designers to check types and to exclude beta-redexes from the syntax itself.)

(Aside. This system is inconsistent, as Set : Set allows the encoding of a variety of "liar paradoxes". When Martin-Löf proposed this theory, Girard sent him an encoding of it in his own inconsistent System U. The subsequent paradox due to Hurkens is the neatest toxic construction we know.)

Combinator Syntax and Normalization

In any case, we have two extra symbols, Pi and Set, so we might manage a combinatory translation with S, K and two extra symbols: I chose U for the universe and P for the product.

Now we can define the untyped combinatory syntax (with free variables):

    data SKUP = S | K | U | P deriving (Show, Eq)

    data Unty a
      = C SKUP
      | Unty a :. Unty a
      | V a
      deriving (Functor, Eq)
    infixl 4 :.

Note that I have included the means to represent free variables, drawn from type a, in this syntax. Apart from being a reflex on my part (every syntax worth the name is a free monad, with return embedding variables and >>= performing substitution), it will be handy to have free variables when representing the intermediate stages in the process of converting terms with binding into their combinatory form.

Here's the normalization:

    norm :: Unty a -> Unty a
    norm (f :. a)  = norm f $. a
    norm c         = c

    ($.) :: Unty a -> Unty a -> Unty a       -- requires first arg in normal form
    C S :. f :. a $. g = f $. g $. (a :. g)  -- S f a g = f g (a g)   share environment
    C K :. a $. g      = a                   -- K a g = a             drop environment
    n $. g             = n :. norm g         -- guarantees output in normal form
    infixl 4 $.
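As a quick sanity check (my own, assuming the definitions above and the derived Eq), normalising S K K applied to a free variable gives back that variable:

    -- sketch: S K K x normalises to x
    skkDemo :: Bool
    skkDemo = norm (C S :. C K :. C K :. V 'x') == V 'x'   -- True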

(An exercise for the reader: define a type for exactly the normal forms and sharpen the types of these operations.)

Type theory representation

Now we can define the syntax for our type theory.

    data Tm a
      = Var a
      | Lam (Tm a) (Tm (Su a))   -- Lam is the only place where binding happens
      | Tm a :$ Tm a
      | Pi (Tm a) (Tm a)         -- the second arg of Pi is a function computing a Set
      | Set
      deriving (Show, Functor)
    infixl 4 :$

    data Ze
    magic :: Ze -> a
    magic x = x `seq` error "Tragic!"

    data Su a = Ze | Su a deriving (Show, Functor, Eq)

I use a de Bruijn index representation in the manner of Bellegarde and Hook (as popularised by Bird and Paterson). The type Su a has one more element than a, and we use it as the type of free variables under a binder, with Ze as the newly bound variable and Su x as the shifted representation of the old free variable x.
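For example (an illustration of mine, not from the original development), the polymorphic identity \ A:Set -> \ a:A -> a is represented with each Ze picking out its nearest binder:

    -- the Var Ze used as the inner domain refers to A, bound by the outer Lam;
    -- the Var Ze in the body refers to a, bound by the inner Lam
    polyId :: Tm a
    polyId = Lam Set (Lam (Var Ze) (Var Ze))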

Translating Terms to Combinators

And with that in place, we acquire the usual translation, based on bracket abstraction.

    tm :: Tm a -> Unty a
    tm (Var a)   = V a
    tm (Lam _ b) = bra (tm b)
    tm (f :$ a)  = tm f :. tm a
    tm (Pi a b)  = C P :. tm a :. tm b
    tm Set       = C U

    bra :: Unty (Su a) -> Unty a            -- binds a variable, building a function
    bra (V Ze)     = C S :. C K :. C K      -- the variable itself yields the identity
    bra (V (Su x)) = C K :. V x             -- free variables become constants
    bra (C c)      = C K :. C c             -- combinators become constant
    bra (f :. a)   = C S :. bra f :. bra a  -- S is exactly lifted application
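To see it in action (again, my own check, not the author's): translating \ A:Set -> A yields the S K K identity combinator, just as the bra clauses suggest.

    -- sketch: bracket abstraction of the bound variable gives S K K
    idDemo :: Bool
    idDemo = tm (Lam Set (Var Ze)) == (C S :. C K :. C K :: Unty Int)   -- True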

Typing the Combinators

The translation shows how we use the combinators, which gives us rather a clue about what their types should be. U and P are simply constructors of sets, so, writing their types untranslated and allowing "Agda notation" for Pi, we should have

    U : Set
    P : (A : Set) -> (B : (a : A) -> Set) -> Set

The K combinator is used to lift a value of some type A to a constant function over some other type G.

      G : Set    A : Set
    -------------------------------
      K : (a : A) -> (g : G) -> A

The S combinator is used to lift applications over a type, upon which all of the parts may depend.

      G : Set
      A : (g : G) -> Set
      B : (g : G) -> (a : A g) -> Set
    ----------------------------------------------------
      S : (f : (g : G) -> (a : A g) -> B g a) ->
          (a : (g : G) -> A g) ->
          (g : G) -> B g (a g)

If you look at the type of S, you will see that it exactly states the contextualised application rule of the type theory, so that is what makes it suitable to reflect the application construct. That's its job!

Then we have application only for closed things:

      f : Pi A B    a : A
    ----------------------
      f a : B a

But there's a snag. I have written the types of the combinators in ordinary type theory, not in combinatory type theory. Fortunately, I have a machine that will translate them.

A Combinatory Type System

    ---------
      U : U

    ---------------------------------------------------------
      P : PU(S(S(KP)(S(S(KP)(SKK))(S(KK)(KU))))(S(KK)(KU)))

      G : U
      A : U
    -----------------------------------------
      K : P[A](S(S(KP)(K[G]))(S(KK)(K[A])))

      G : U
      A : P[G](KU)
      B : P[G](S(S(KP)(S(K[A])(SKK)))(S(KK)(KU)))
    --------------------------------------------------------------------------------------
      S : P(P[G](S(S(KP)(S(K[A])(SKK)))(S(S(KS)(S(S(KS)(S(KK)(K[B])))(S(KK)(SKK))))
          (S(S(KS)(KK))(KK)))))(S(S(KP)(S(S(KP)(K[G]))(S(S(KS)(S(KK)(K[A])))
          (S(S(KS)(KK))(KK)))))(S(S(KS)(S(S(KS)(S(KK)(KP)))(S(KK)(K[G]))))
          (S(S(KS)(S(S(KS)(S(KK)(KS)))(S(S(KS)(S(S(KS)(S(KK)(KS)))
          (S(S(KS)(S(KK)(KK)))(S(KK)(K[B])))))(S(S(KS)(S(S(KS)(S(KK)(KS)))(S(KK)(KK))))
          (S(KK)(KK))))))(S(S(KS)(S(S(KS)(S(KK)(KS)))(S(S(KS)(S(KK)(KK)))
          (S(S(KS)(KK))(KK)))))(S(S(KS)(S(S(KS)(S(KK)(KS)))(S(KK)(KK))))(S(KK)(KK)))))))

      M : A    B : U
    ------------------   A ={norm} B
      M : B

So there you have it, in all its unreadable glory: a combinatory presentation of Set : Set!

There's still a bit of a problem. The syntax of the system gives you no way to guess the G, A and B parameters for S (and similarly for K) just from the terms. Correspondingly, we can verify typing derivations algorithmically, but we can't simply typecheck combinator terms as we could with the original system. What might work is to require the input to the typechecker to carry type annotations on uses of S and K, effectively recording the derivation. But that's another can of worms...

This is a good place to stop, if you were keen enough to start. The rest is "behind the scenes" stuff.

Generating the Types of the Combinators

I generated those combinatory types using bracket abstraction on the corresponding type theory terms. To show how I did that, and to make this post not entirely pointless, let me offer my kit.

I can write the types of the combinators, fully abstracted over their parameters, as follows. I make use of my handy pil function, which combines Pi and lambda to avoid repeating the domain type, and which rather helpfully lets me use Haskell's function space to bind variables. Perhaps you can almost read the following!

    pTy :: Tm a
    pTy = fmap magic $
      pil Set $ \ _A -> pil (pil _A $ \ _ -> Set) $ \ _B -> Set

    kTy :: Tm a
    kTy = fmap magic $
      pil Set $ \ _G -> pil Set $ \ _A -> pil _A $ \ a -> pil _G $ \ g -> _A

    sTy :: Tm a
    sTy = fmap magic $
      pil Set $ \ _G ->
      pil (pil _G $ \ g -> Set) $ \ _A ->
      pil (pil _G $ \ g -> pil (_A :$ g) $ \ _ -> Set) $ \ _B ->
      pil (pil _G $ \ g -> pil (_A :$ g) $ \ a -> _B :$ g :$ a) $ \ f ->
      pil (pil _G $ \ g -> _A :$ g) $ \ a ->
      pil _G $ \ g -> _B :$ g :$ (a :$ g)
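(A hedged aside of my own: P's type takes no parameters, so translating pTy directly should reproduce the combinatory type of P shown in the rules above; K and S need their open subterms extracted first, as described next.)

    -- my own check, not part of the original development
    pTyCombinator :: Unty a
    pTyCombinator = tm pTy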

With these definitions in place, I extracted the relevant open subterms and ran them through the translation.

A de Bruijn Encoding Toolkit

Here's how pil is built. First, I define a class Fin of finite sets, used for variables. Every such set has a constructor-preserving embedding emb into the set above, plus a new top element, and you can tell them apart: the embd function tells you whether a value is in the image of emb.

    class Fin x where
      top  :: Su x
      emb  :: x -> Su x
      embd :: Su x -> Maybe x

We can, of course, instantiate Fin for Ze and Su.

    instance Fin Ze where
      top     = Ze                      -- Ze is the only, so the highest
      emb     = magic
      embd _  = Nothing                 -- there was nothing to embed

    instance Fin x => Fin (Su x) where
      top          = Su top             -- the highest is one higher
      emb Ze       = Ze                 -- emb preserves Ze
      emb (Su x)   = Su (emb x)         --   and Su
      embd Ze      = Just Ze            -- Ze is definitely embedded
      embd (Su x)  = fmap Su (embd x)   -- otherwise, wait and see
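A tiny check of my own, using the instances above: the top of Su (Su Ze) really is new, in that embd refuses to project it back down.

    -- sketch: 'top' is not in the image of 'emb'
    topIsNew :: Bool
    topIsNew = case embd (top :: Su (Su Ze)) of
      Nothing -> True     -- the freshly added top element
      Just _  -> False    -- an embedded (older) element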

Now I can define less-or-equal, with a weakening operation.

    class (Fin x, Fin y) => Le x y where
      wk :: x -> y

The wk function should embed the elements of x as the largest elements of y, so that the extra elements of y are smaller and therefore, in de Bruijn index terms, bound more locally.

    instance Fin y => Le Ze y where
      wk = magic    -- nothing to embed

    instance Le x y => Le (Su x) (Su y) where
      wk x = case embd x of
        Nothing -> top          -- top maps to top
        Just y  -> emb (wk y)   -- embedded gets weakened and embedded

And once you've got that sorted out, a little rank-n skulduggery does the rest.

    lam :: forall x. Tm x -> ((forall y. Le (Su x) y => Tm y) -> Tm (Su x)) -> Tm x
    lam s f = Lam s (f (Var (wk (Ze :: Su x))))

    pil :: forall x. Tm x -> ((forall y. Le (Su x) y => Tm y) -> Tm (Su x)) -> Tm x
    pil s f = Pi s (lam s f)
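For instance (a usage sketch of mine, in the same style as pTy above), here is the dependent identity \ A:Set -> \ a:A -> a, written with Haskell binding and then pushed through the combinator translation:

    -- the same term as Lam Set (Lam (Var Ze) (Var Ze)), with the indices computed for us
    idTm :: Tm a
    idTm = fmap magic $ lam Set $ \ _A -> lam _A $ \ a -> a

    idCombinator :: Unty a
    idCombinator = tm idTm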

The higher-order function doesn't just give you a term representing the variable: it gives you an overloaded thing which becomes the correct representation of the variable in any scope where the variable is visible. That is, the effort of distinguishing the different scopes by type gives the Haskell type system enough information to compute the shifting needed for the translation to de Bruijn representation. Why keep a dog and bark yourself?

+47

I guess that "bracket abstraction" also works for dependent types under some circumstances. In section 5 of the following paper, you can find some K and S types:

Outrageous but Meaningful Coincidences:
Dependent type-safe syntax and evaluation
Conor McBride, University of Strathclyde, 2010

Converting a lambda expression to a combinatory expression roughly corresponds to converting a natural deduction proof into a Hilbert-style proof.
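For instance (my gloss): the simple types of K and S are exactly the two standard Hilbert-style axiom schemes for implication, and bracket abstraction plays the role of the deduction theorem.

    K : A -> (B -> A)
    S : (A -> B -> C) -> (A -> B) -> (A -> C)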

+8
Jul 20 '12 at 20:22