Perhaps this is due to the principle of command-query separation (CQS)?
CQS tends to be popular at the intersection of OO and functional programming styles, since it creates an obvious distinction between methods that have side effects (i.e., that modify the object) and those that do not. Applying CQS to variable assignments takes it further than usual, but the same idea applies.
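To make that distinction concrete before the illustration below, here is a minimal Rust sketch (my own example, not part of the original answer; the Counter type is hypothetical): commands that mutate return the unit type, while queries borrow immutably and return a value.

    // Sketch of the command/query distinction in method signatures.
    struct Counter {
        value: i64,
    }

    impl Counter {
        // Command: performs a side effect, so it returns the unit type ().
        fn increment(&mut self) {
            self.value += 1;
        }

        // Query: no side effect; borrows immutably and returns a value.
        fn value(&self) -> i64 {
            self.value
        }
    }

    fn main() {
        let mut c = Counter { value: 0 };
        c.increment();             // command: nothing useful to chain off ()
        println!("{}", c.value()); // query: safe to call any number of times
    }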
A brief illustration of why CQS is useful: consider a hypothetical hybrid functional/OO language with a List class that has Sort, Append, First, and Length methods. In the imperative OO style, you would write this function:
    func foo(x):
        var list = new List(4, -2, 3, 1)
        list.Append(x)
        list.Sort()
While in a more functional style, one would more likely write something like this:
    func bar(x):
        var list = new List(4, -2, 3, 1)
        var smallest = list.Append(x).Sort().First()
The two seem to be attempting the same thing, but obviously one of them is wrong, and without knowing more about the behavior of the methods we cannot tell which.
Using CQS, however, we would insist that if Append and Sort modify the list, they must return the unit type, which prevents us from introducing bugs by using the second form when we shouldn't. The presence or absence of side effects thus becomes implicit in the method signature.
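Rust's standard Vec happens to follow exactly this convention: Vec::push and the in-place sort both mutate and return the unit type, so the fluent form from bar simply fails to compile. A minimal sketch of the same example under that convention (my code, not part of the original answer):

    fn smallest_with(x: i32) -> i32 {
        let mut list = vec![4, -2, 3, 1];
        list.push(x); // command: side effect, returns ()
        list.sort();  // command: sorts in place, returns ()

        // The chained form from `bar` does not compile, because `()` has no
        // `sort` or `first` method:
        // let smallest = list.push(x).sort().first();

        list[0] // query: no side effect, yields the smallest element
    }

    fn main() {
        println!("{}", smallest_with(0)); // prints -2
    }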
CA McCann