Is writing only static methods equivalent to functional programming in C#?

I have two questions related to the observed behavior of C# static methods (which I may be misunderstanding):

First: Will a recursive static method be tail-call optimized by virtue of how static methods are implemented under the covers?

Second: Would writing an entire application with static methods and no variables outside local scope be equivalent to functional programming? I ask because I still haven't wrapped my head around the term "no side effects" that I keep hearing in connection with functional programming.

Edit: Let me clarify: I do use and understand why and when to use static methods in the usual C# OO methodology, and I understand that tail call optimization will not be explicitly performed for a recursive static method. That said, I understand tail call optimization to be an attempt to avoid creating a new stack frame on each call, and on several occasions I observed what looked like a static method executing within its caller's frame, although I may have misinterpreted that observation.

+7
4 answers

Will a recursive static method be tail-call optimized by virtue of how static methods are implemented under the covers?

Static methods have nothing to do with tail recursion optimization. The rules apply equally to instance and static methods, but personally I would never rely on the JIT to optimize my tail calls. Moreover, the C# compiler does not emit the tail call instruction, yet the JIT sometimes performs the optimization anyway. In short, you never know.
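To illustrate what's at stake, here is a minimal sketch (names are mine, not from the original answer): a static method with the recursive call in tail position, which the JIT may or may not turn into a jump, alongside the loop that F#-style tail recursion compiles down to.

```csharp
// Tail position: the recursive call is the last thing the method does.
// Whether the JIT emits a tail call here is not guaranteed, so deep
// recursion may still overflow the stack.
static int SumTo(int n, int acc) =>
    n == 0 ? acc : SumTo(n - 1, acc + n);

// The guaranteed-safe equivalent: what a tail-recursion-optimizing
// compiler (like F#'s) effectively produces.
static int SumToLoop(int n)
{
    var acc = 0;
    for (; n > 0; n--) acc += n;
    return acc;
}
```

Both return the same result for the same `n`; only the stack behavior differs.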

The F # compiler supports tail recursion optimization and, when possible, compiles recursion into loops.
Read more about C # vs F # behavior in this question .

Would it be equivalent to functional programming to write an entire application with static methods and no variables outside local scope?

No, and yes.

Technically, nothing prevents you from calling Console.WriteLine from a static method (Console.WriteLine is itself a static method!), which obviously has side effects. Likewise, nothing prevents you from writing a class whose instance methods never touch any state (that is, never access instance fields). However, from a design point of view, such methods make little sense as instance methods, do they?
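A quick sketch of that point: both methods below are static, but only one is free of side effects.

```csharp
// Pure: the result depends only on the argument; calling it changes nothing.
static int Square(int x) => x * x;

// Impure despite being static: Console.WriteLine (itself a static method)
// writes to the console, which is an observable side effect.
static int SquareAndLog(int x)
{
    Console.WriteLine($"squaring {x}");
    return x * x;
}
```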

If you Add an element to a .NET List&lt;T&gt; (which has side effects), you change its state. If you append an item to an F# list, you get a new list and the original is unchanged.

Note that append is indeed a static function in the F# List module. Writing "transformation" functions in separate modules helps keep them free of side effects, since by definition the internal state is not accessible, even if the language itself allows mutation (F# does; LISP does too). However, nothing prevents you from writing a side-effect-free non-static method either.
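The contrast between the two models can be sketched in C# itself, using `System.Collections.Immutable` to mimic the F# behavior (the variable names are mine):

```csharp
using System.Collections.Generic;
using System.Collections.Immutable;

var mutable = new List<int> { 1, 2 };
mutable.Add(3);                        // mutates in place: mutable is now [1, 2, 3]

var original = ImmutableList.Create(1, 2);
var appended = original.Add(3);        // returns a NEW list of [1, 2, 3];
                                       // original is still [1, 2]
```

`ImmutableList<T>.Add` returning a fresh list is the same design as F#'s `List.append`: transformation rather than mutation.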

Finally, if you want to understand the concepts of a functional language, use one! It is much more natural to write F# modules operating on immutable F# data structures than to simulate the same thing in C#, with or without static methods.

+6

The CLR performs some tail call optimization, but only in 64-bit CLR processes. For the conditions under which it is applied, see David Broman's CLR Profiling blog: JIT conditions for tail calls.

Regarding building software with only static methods and locally scoped variables: I have done this a lot, and it actually works well. It is just another way of doing the same job as OO. In fact, because there is no state outside the function/closure, it is safer and easier to test.

However, I did first read the entire SICP book cover to cover: http://mitpress.mit.edu/sicp/

The absence of side effects simply means that a function can be called with the same arguments as many times as you like and will always return the same value. It guarantees that the function's result is consistent and does not depend on any external state. This makes it trivial to parallelize the function, cache it, test it, modify it, decorate it, and so on.
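Caching is a good concrete payoff of that property. A hypothetical sketch (the function and cache names are mine): because a side-effect-free function always returns the same value for the same argument, its results can be memoized without changing the program's behavior.

```csharp
using System.Collections.Generic;

static readonly Dictionary<int, int> Cache = new();

// Stand-in for an expensive pure computation.
static int ExpensivePure(int x) => x * x;

// Safe only because ExpensivePure has no side effects and depends on
// nothing but its argument: a cached result is indistinguishable from
// a fresh call.
static int Memoized(int x)
{
    if (!Cache.TryGetValue(x, out var result))
        Cache[x] = result = ExpensivePure(x);
    return result;
}
```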

However, a system entirely without side effects is useless, so anything that does I/O will always have side effects. It lets you carefully encapsulate everything else, though, and that is the point.

Objects are not always the best way, despite what people say. In fact, if you have ever used a LISP variant, you will no doubt conclude that typical OO sometimes gets in the way.

+3

Regarding the second question: I assume you mean the "side effects" of mutable data structures, and obviously that is not a problem in (I think) most functional languages. For example, Haskell mostly (or even exclusively!?) uses immutable data structures. So it has nothing to do with being "static".

+1

There is a very good book on this topic, Real-World Functional Programming: http://www.amazon.com/Real-World-Functional-Programming-Examples/dp/1933988924

And in the real world, using F# is unfortunately not always an option, due to team skills or existing code bases, which is another reason I like this book: it shows many ways to apply functional ideas in the code you write today. For me, at least, the significant reduction in state bugs, which take far longer to debug than simple logic errors, is worth the small loss of OOP idiom.

For the most part, keeping no static state and working in a static method only with the supplied parameters eliminates side effects, since you are limiting yourself to pure functions. One thing to watch out for is fetching the data to act on, or storing data in a database, inside such a function. Combining OOP and static methods can help here, though, if your static methods delegate to lower-level objects for state management.

Another big help in keeping functions pure is making objects immutable whenever possible. Any modification should return a new, changed instance rather than mutating the original, and the original copy is simply discarded.
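A sketch of that pattern (the `Account` type is mine, invented for illustration): every "mutating" operation returns a new instance and leaves the original untouched.

```csharp
// Immutable: the only field is read-only and set once in the constructor.
sealed class Account
{
    public decimal Balance { get; }

    public Account(decimal balance) => Balance = balance;

    // "Mutation" returns a new instance; the receiver is unchanged.
    public Account Deposit(decimal amount) => new Account(Balance + amount);
}

// var a = new Account(100m);
// var b = a.Deposit(50m);   // a.Balance is still 100; b.Balance is 150
```

Because no `Account` can ever change after construction, any method that receives one can rely on it, cache it, or hand it to another thread without defensive copying.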

+1
