F# MailboxProcessor and Functional Design

If state is considered a bad idea in functional programming, why is it fine for a MailboxProcessor to have state?

To expand: I was explaining functional programming to someone, describing how functions do not use state (there are no variables outside the function, so the same input always produces the same output) and the benefits that brings. But then I thought about MailboxProcessor and how it uses recursion to maintain state between function calls, and I can't reconcile why that is fine.

Is this a case where it is simply the least bad way to maintain state?

+7
functional-programming f# mailboxprocessor
1 answer

The evil is actually shared mutable state. In the single-threaded case, shared mutable state means that functions cannot be safely composed: one call can change some state that is then read by a second call, so you get unexpected results. In the multi-threaded case, shared mutable state means you have the potential for race conditions.
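A minimal sketch (not from the original answer; `counter`, `addTwice`, and `reset` are made-up names) of why shared mutable state breaks composition even on a single thread — the same call gives different results depending on what ran before it:

```fsharp
// Shared mutable state outside the function.
let mutable counter = 0

// This function reads and writes the shared counter,
// so its result depends on prior calls.
let addTwice x =
    counter <- counter + x
    counter <- counter + x
    counter

let reset () = counter <- 0

reset ()
printfn "%d" (addTwice 5)   // 10
printfn "%d" (addTwice 5)   // 20: same input, different output
```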

Functional programming generally avoids mutation. Functions may still share some state (for example, a closure can capture state), but it cannot be mutated. In the single-threaded case, you also get determinism. In the multi-threaded case, pretty much the only thing you can do in a purely functional style is fork-join parallelism (and data parallelism), which does not need mutable state and is fully deterministic.
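A sketch of both points, under illustrative names of my own (`makeAdder`, `addTen`): a closure captures state that cannot be mutated, so every call is deterministic, and data parallelism over immutable input needs no mutable state at all:

```fsharp
// The closure captures n, but n is immutable,
// so addTen behaves the same on every call.
let makeAdder n =
    fun x -> x + n

let addTen = makeAdder 10
printfn "%d" (addTen 5)     // always 15

// Fork-join data parallelism: each element is mapped independently,
// with no shared mutable state, so the result is fully deterministic.
let squares = Array.Parallel.map (fun x -> x * x) [| 1 .. 5 |]
printfn "%A" squares        // [|1; 4; 9; 16; 25|]
```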

Agent-based programming also avoids shared mutable state, but in a different way. You have isolated agents that exchange immutable messages. So there is some non-determinism (because they communicate by sending messages), but they only exchange immutable values. In fact, you can even use mutable state inside an agent: as long as it is not shared, you still avoid shared mutable state.
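This is exactly the pattern the question asks about. In a sketch like the following (the `Msg` type and `agent` name are my own), the state is just a parameter of a recursive loop: nothing outside the agent can touch it, and "mutation" happens by recursing with a new value:

```fsharp
type Msg =
    | Add of int
    | Get of AsyncReplyChannel<int>

let agent =
    MailboxProcessor.Start(fun inbox ->
        // The running total lives only as a parameter of this loop.
        let rec loop total = async {
            let! msg = inbox.Receive()
            match msg with
            | Add n ->
                return! loop (total + n)   // "new" state via recursion
            | Get reply ->
                reply.Reply total
                return! loop total
        }
        loop 0)

agent.Post(Add 3)
agent.Post(Add 4)
printfn "%d" (agent.PostAndReply Get)   // 7
```

Because messages are processed one at a time from the mailbox, the state is never accessed concurrently, which is why this does not count as shared mutable state.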

+13
