Understanding Referential Transparency

Basically, my head hurts, because something is wrong in my reasoning:

  • For one set of arguments, a referentially transparent function always returns one set of output values.

  • this means that such a function can be represented as a truth table (a table that specifies one set of output values for each set of arguments).

  • which makes the logic of such functions combinational (as opposed to sequential)

  • this means that with a purely functional language (which has only RT functions), only combinational logic can be described.

The last statement follows from the argument above, but it is clearly false, which means there is a mistake somewhere in the reasoning. [Question: where is the mistake in this reasoning?]

UPD2. You guys are saying a lot of interesting things, but you are not answering my question. I have now stated it more precisely. Sorry for the mess with the wording of the question!

+7
functional-programming referential-transparency
5 answers

The error in your reasoning is this:
"this means that such a function can be represented as a truth table."

You conclude this from the referential-transparency property of functional languages. So far the conclusion would be plausible, but you overlook the fact that a function can accept, say, collections as input and process them, in contrast to the fixed inputs of a logic gate.

Therefore, the function is not equivalent to a logic gate, but rather to a construction plan for such a gate that depends on the actual (run-time) input!

Comment on your comment: functional languages can - although statelessly - implement a state machine, creating the states from scratch every time they are accessed.

0

Question: where is the mistake in this reasoning?

A referentially transparent function may require an infinite truth table to represent its behavior. You would have a hard time constructing an infinite circuit in combinational logic.
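
As a small illustration of my own (not from the original answer), even a very simple referentially transparent function over an unbounded domain has no finite truth table:

 -- Referentially transparent, but a "truth table" for it would need one row
 -- for every possible list of Ints, and there are infinitely many of those.
 count :: [Int] -> Int
 count []     = 0
 count (_:xs) = 1 + count xs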

Another mistake: the behavior of sequential logic can be represented purely functionally, as a function from states to states. The fact that, in an implementation, these states occur one after another in time does not prevent us from defining a perfectly referentially transparent function that describes how the state evolves over time.
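
For example, here is a minimal sketch (my own names, purely for illustration) of a D flip-flop - the classic sequential element - written as a pure state-transition function:

 -- The state of a D flip-flop is just its stored bit.
 type FlipFlop = Bool

 -- One clock tick: the next state is whatever is on the D input.
 -- The same (state, input) pair always gives the same next state,
 -- so this is referentially transparent even though it describes
 -- sequential logic.
 tick :: FlipFlop -> Bool -> FlipFlop
 tick _current d = d

 -- Running the machine over a stream of inputs is just a fold:
 run :: FlipFlop -> [Bool] -> [FlipFlop]
 run = scanl tick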

+5

Edit: Although I apparently missed the mark on what the question is really about, I think my answer is pretty good, so I am keeping it :-) (see below).

I suggest that a more concise way of phrasing the question might be: can a purely functional language compute anything an imperative one can?

First of all, suppose you take an imperative language like C and make it so you cannot change variables after they are defined. For example:

 int i;
 for (i = 0;    // okay, that's one assignment
      i < 10;   // just looking, that's all
      i++)      // BUZZZ! Sorry, can't do that!

Well, there goes your for loop. Do we get to keep while ?

 while (i < 10) 

Sure, but it is not very useful: i cannot change, so the loop will either run forever or not run at all.

How about recursion? Yes, you get to keep recursion, and it is still quite useful:

 int sum(int *items, unsigned int count)
 {
     if (count) {
         // count the first item and sum the rest
         return *items + sum(items + 1, count - 1);
     } else {
         // no items
         return 0;
     }
 }

Now, with functions we do not get changing state, but variables can, well, vary. Once a variable enters our function, it is locked in place. However, we can call the function again (recursion), and that is like getting a brand-new set of variables (the old ones remain unchanged). Although there are multiple instances of items and count , sum((int[]){1,2,3}, 3) will always evaluate to 6 , so you can replace that expression with 6 if you like.

Can we do everything we want this way? I am not 100% sure, but I think the answer is yes. You certainly can if you have closures, anyway.


You are right. The idea is that once a variable is defined, it cannot be redefined. A referentially transparent expression, given the same variables, always yields the same result value.

I recommend looking at Haskell, a purely functional language. Strictly speaking, Haskell does not have an "assignment" operator. For example:

 my_sum numbers = ???
   where
     i = 0
     total = 0

Here you cannot write a "for" loop that increments i and total as it goes. However, all is not lost. Just use recursion to keep getting new i s and total s:

 my_sum numbers = f 0 0
   where
     f i total = if i < length numbers
                   then f i' total'
                   else total
       where
         i' = i + 1
         total' = total + (numbers !! i)

(Note that this is a dumb way to sum a list in Haskell, but it demonstrates the point.)
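
As an aside that is not part of the original answer: the idiomatic way is to use a fold, which is essentially the recursion above packaged up as a library function:

 my_sum :: [Int] -> Int
 my_sum numbers = foldl (+) 0 numbers
 -- or simply: my_sum = sum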

Now consider this highly imperative code:

 main = do
   a <- readLn
   b <- readLn
   print (a + b)

It is really syntactic sugar for:

 main = readLn >>= (\a -> readLn >>= (\b -> print (a + b))) 

The idea is that instead of main being a function consisting of a list of statements, main is an IO action that Haskell performs, and actions are defined and chained together with bind operations. In addition, an action that does nothing except yield an arbitrary value can be defined with the return function.

Note that bind and return are not specific to actions. They can be used with any type that calls itself a Monad to do all sorts of funky things.
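
As a small sketch of my own to underline this: the same >>= and return work with Maybe , another standard Monad, with no IO involved at all:

 -- A safe division that yields Nothing instead of crashing on zero.
 safeDiv :: Int -> Int -> Maybe Int
 safeDiv _ 0 = Nothing
 safeDiv x y = Just (x `div` y)

 -- Chain two divisions; if either fails, the whole result is Nothing.
 example :: Maybe Int
 example = safeDiv 10 2 >>= (\a -> safeDiv a 0 >>= (\b -> return (a + b)))
 -- evaluates to Nothing, because the second division fails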

To clarify, consider readLn . readLn is an action that, if performed, will read a line from standard input and yield its parsed value. To do something with that value, we cannot simply store it in a variable, as this would violate referential transparency:

 a = readLn 

If that were allowed, the value would depend on the outside world and would be different every time we used readLn ; that is, readLn would not be referentially transparent.

Instead, we bind the readLn action to a function that deals with its result, giving a new action, for example:

 readLn >>= (\x -> print (x + 1)) 

The value of this expression is an action. If Haskell got off the couch and performed this action, it would read an integer, increment it, and print it. By binding the result of an action to a function that does something with that result, we get to maintain referential transparency while still playing around in the world of state.
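
To underline the point with a sketch of my own (the name askAndIncrement is made up for illustration): an action is an ordinary value, so naming it or mentioning it repeatedly never executes anything by itself; only being reached from main does:

 -- 'askAndIncrement' always denotes the same action, no matter how often
 -- the name appears; running it from main is what actually performs it.
 askAndIncrement :: IO ()
 askAndIncrement = readLn >>= (\x -> print (x + 1 :: Integer))

 main :: IO ()
 main = askAndIncrement >> askAndIncrement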

+3

As far as I understand, referential transparency simply means: this function will always give the same result when called with the same arguments. So, the math functions you learned about at school are referentially transparent.
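
For instance, a trivial sketch:

 square :: Int -> Int
 square x = x * x   -- square 3 is always 9, so the call can be replaced by its value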

A language you could look at to find out how things are done in a purely functional setting is Haskell . There are ways to get the effect of an "updatable store", such as the Reader Monad and the State Monad . If you are interested in purely functional data structures, Okasaki is a good read.

And yes, you are right: the order of evaluation in a purely functional language such as Haskell does not matter the way it does in non-functional languages, because if there are no side effects there is no reason to do one thing before or after another - unless the input of one depends on the output of the other, or monads are used.

I do not really know about the truth tables, though.

+2

Here is my stab at answering the question:

Any system, large or small, can be described as a combinatorial function.

There is nothing wrong with the observation that pure functions can only deal with combinatorial logic - it is true; it is just that functional languages hide this from you to one degree or another.

You can even describe, say, the workings of a game engine as a truth table or a combinatorial function.

You can have a deterministic function that takes as input the "current state of the entire game" - the RAM occupied by the game engine - plus the keyboard input, and returns the "game state one frame later". The return value is completely determined by the combination of bits at the input.

Of course, in any meaningful and maintainable program the input is parsed into blocks of integers, floating-point numbers, and booleans, but the combination of bits in those values still determines the output of your function.
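
To make that concrete, here is a rough sketch of what such a function's shape might look like; the types and fields are made up purely for illustration:

 -- Hypothetical types; only the shape of the function matters here.
 data Input     = Input { keysDown :: [Char] }
 data GameState = GameState { score :: Int, frame :: Int }

 -- One frame of the game as a pure function: the output is fully
 -- determined by the input bits, so it is referentially transparent.
 stepFrame :: GameState -> Input -> GameState
 stepFrame s inp = s { frame = frame s + 1
                     , score = score s + length (keysDown inp) }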

Remember also that any basic digital logic can be described with truth tables. The only reason this is not done for anything bigger than, say, arithmetic on 4-bit integers is that the size of the truth table grows exponentially.
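
A rough back-of-the-envelope sketch of that blow-up (assuming a function of two n-bit inputs):

 -- A truth table over two n-bit inputs needs 2^(2*n) rows.
 rows :: Integer -> Integer
 rows n = 2 ^ (2 * n)
 -- rows 4  == 256
 -- rows 32 == 18446744073709551616   (about 1.8 * 10^19 rows)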

+1
