Forcing a "strict / imperative" subset / version of Haskell

I really like Haskell, but one of the main things that concerns me about it is how hard it is to reason about space usage. The interplay of thunks and recursion seems to create situations where you have to be very careful, adding strictness in just the right places so that the program does not run out of memory on certain inputs.

What I like about C / C++ is that I can quickly convince myself of an upper bound on a program's space usage, particularly if I avoid recursion. The variables are plainly visible.

I was wondering whether there is a way to create a sort of "imperative" subset of Haskell that is strict and has no thunks.

I understand that Data.STRef provides mutable cells, but as far as I can tell those cells are themselves still lazy and may contain thunks. I would like the data stored in such cells to be strict, but I'm not sure how to express that in the type system.
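The best I can do at the moment is to force values on their way into the cell, e.g. with modifySTRef' or writeSTRef ref $! v, but that only guarantees WHNF and is not reflected in the types. A rough sketch of that workaround:

    import Control.Monad.ST (runST)
    import Data.STRef (newSTRef, readSTRef, modifySTRef')

    -- modifySTRef' forces the new value to WHNF before storing it,
    -- so the cell never accumulates a chain of (+) thunks.
    sumST :: [Int] -> Int
    sumST xs = runST $ do
      acc <- newSTRef 0
      mapM_ (\x -> modifySTRef' acc (+ x)) xs
      readSTRef acc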

I was thinking of something like a "strict monad", but maybe that is the wrong shape for this.

+4
3 answers

I am fairly sure this is close to impossible to implement within Haskell, whose semantics assume laziness by default. For instance, there are functions throughout the standard library that would be useless in a strict language, because they are not guaranteed to terminate if you demand their entire result. So a "strict subset" of Haskell would have a hard time interoperating with any other Haskell code, and at that point you might as well be programming in a different language.
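A minimal illustration of the kind of library function meant here (repeat and take are standard Prelude functions; the example itself is mine):

    -- An infinite list is harmless as long as it stays lazy; a strict
    -- language would loop forever trying to evaluate `repeat 1` itself.
    ones :: [Int]
    ones = repeat 1

    firstFive :: [Int]
    firstFive = take 5 ones   -- [1,1,1,1,1]

    main :: IO ()
    main = print firstFive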

Besides, I think you are looking in the wrong place by thinking about monads. Data.STRef really has nothing to do with avoiding thunks and laziness. In fact, what you want (strictness) has nothing to do with mutation, and you need neither mutable state nor monads to get it. There are programming languages that are just as pure as Haskell but strict throughout — Mercury, for example, which I have worked with.

+6

You may be interested in the BangPatterns language extension and in unboxed types and operations. You should be aware, though, that functions will in general still receive boxed values; it is the optimizer's job to remove the boxing where it can, guided by the bangs and by its strictness analysis of the surrounding code.
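A small sketch of how BangPatterns is typically used (the function name is mine), forcing an accumulator so that no chain of thunks builds up:

    {-# LANGUAGE BangPatterns #-}

    -- The bang on `acc` evaluates the accumulator at every step, so the
    -- fold runs in constant space instead of building up (+) thunks.
    sumStrict :: [Int] -> Int
    sumStrict = go 0
      where
        go !acc []     = acc
        go !acc (x:xs) = go (acc + x) xs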

+1

I've thought about this. So have other people:

Now, my own speculation.

In the discussions above, the main idea in most cases is that you can attach a ! to any type to get a strict version of that type — one whose values are guaranteed to be evaluated to WHNF. So an Int may be a thunk, but a !Int definitely is not. This raises interesting questions for the typechecker. Is !Int ~ Int ? If not — if the two are completely separate, incompatible types — then working with them will be very painful. On the other hand, if they are the same, nothing prevents you from passing an unevaluated Int where a !Int is expected — after all, they are the same type. What you ideally want is to be able to supply a !Int where an Int is expected, but not vice versa. That looks like a subtyping relation: !a <: a , where a is inhabited by both evaluated values and thunks, and !a only by evaluated values. GHC's type system has no subtyping and probably cannot have it, because other parts of the type system do not interact well with it. This would be a very limited, specific instance of subtyping, though: the programmer cannot declare arbitrary subtype relationships; there is just one hard-coded rule, forall a. !a <: a . I do not know whether that makes it any more feasible to implement.

Suppose that can be done. Further questions arise. What if you try to supply an Int where a !Int is expected? Type error? Ideally, I think, no; instead the compiler would insert code to evaluate it and carry on. Fine. What about supplying a [Int] where a [!Int] is expected, or an f a where an f !a is expected in general? How would the compiler know how to traverse an arbitrary structure to find exactly those places where it contains an a , in order to evaluate those and only those? So would that be a type error? Let's say it is. How does the programmer then convert manually — get an f !a out of an f a ? Perhaps by mapping a function eval :: a -> !a ? But that is pointless: Haskell is universally lazy, so if you apply it to an argument, eval x , the result is a thunk until its value is needed. Therefore eval x cannot have type !a — strictness annotations in return-type position make no sense. So what about data Wrap a = Wrap { unwrap :: a } and eval' :: a -> Wrap !a , with the semantics that the Wrap !a may itself be a thunk, but the compiler inserts code so that when you do evaluate it, the !a inside is definitely evaluated as well? Actually, here is a simpler formulation of that: data Wrap' a = Wrap' { unwrap' :: !a } (with eval' = Wrap' ), which is legal Haskell today and happens to fall within our new stronger typing. That is probably fine. But as soon as you try to use it, you run into problems again. How do you get from an f a to an f !a then — fmap unwrap . fmap Wrap ? But unwrap has the same problem as eval did. So none of this looks all that trivial. And what about the seemingly innocuous converse: supplying an f !a where an f a is expected — does that work? (In other words, is f !a <: f a ?) That depends on how a is used inside f . The compiler would have to know about covariant, contravariant and invariant positions of type arguments — yet another thing that comes with subtyping.
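For concreteness, here is the strict-field wrapper mentioned above written out as ordinary, runnable Haskell (the demo function is my own illustration):

    -- Forcing a Wrap' to WHNF also forces its contents, because the field
    -- carries a strictness annotation.
    data Wrap' a = Wrap' { unwrap' :: !a }

    eval' :: a -> Wrap' a
    eval' = Wrap'

    demo :: Int
    demo =
      let w = eval' (1 + 2)   -- w itself may still be a thunk...
      in w `seq` unwrap' w    -- ...but forcing w also evaluates the Int inside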

That is as far as my thinking has gone. It seems harder than it sounds.

One more interesting thing. You may or may not have heard of the concept of unlifted types: types that are not inhabited by bottom. As far as I can tell, that is the same thing. Types guaranteed to be evaluated to WHNF; types guaranteed not to be bottom — no real difference, right? GHC actually already has a bunch of unlifted types as primitives ( Int# , etc.), but they are built in (you cannot add new ones) and are also unboxed on top of being unlifted, so they have a different kind ( # instead of * ) and cannot mix with regular types. Whereas !a would be unlifted but still boxed, with kind * . Unlifted types are something I have seen mentioned a few times in type-theoretic contexts, so perhaps some research has already been done on what it would take to implement them in a more general way. I have not looked into it yet.
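A tiny example of the existing unlifted primitives mentioned here (MagicHash is needed for the # names; the function is my own illustration):

    {-# LANGUAGE MagicHash #-}
    import GHC.Exts (Int (I#), Int#, (+#))

    -- Int# is unlifted and unboxed: it can never be a thunk or bottom, but it
    -- has a different kind than ordinary types, so you cannot write [Int#]
    -- or Maybe Int#.
    plusOne :: Int -> Int
    plusOne (I# n) = I# (n +# 1#)

    main :: IO ()
    main = print (plusOne 41)   -- prints 42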

+1
