Not quite an answer... but too long for a comment.
I believe it is a little misleading to think that complex differentiability simply implies infinite differentiability. In fact, it is much stronger: if a function is complex differentiable, then its derivatives at any *one* point determine the entire function. And since infinite differentiability gives you the full Taylor series, you have an analytic function that is equal to your function, i.e. *is* your function. Thus, in a sense, complex differentiable functions are analytic... because they are.
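To spell out the step being used here: infinite differentiability at a point a gives you all the coefficients of the Taylor series

```latex
f(z) \;=\; \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}\,(z-a)^n
```

and for a complex differentiable function this series actually converges to f in a neighbourhood of a, which is exactly the statement that f is analytic there.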
From the point of view of (standard) calculus, the key contrast between real differentiability and complex differentiability is that on the reals there is only one direction along which you can take the limit of the difference quotients (f(x + δ) − f(x))/δ: you merely require the left limit to equal the right limit. But since this equality is imposed only *after* taking the limit, it only has an effect locally. (Topologically speaking, the constraint compares just two discrete values, so it does not interact with continuity properties at all.)
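A small numerical illustration of this (my own sketch, with made-up names, not part of the original discussion): real differentiability only asks that the two one-sided difference quotients agree in the limit, which e.g. `abs` fails at 0 while a smooth function passes.

```haskell
-- Compare the right- and left-hand difference quotients of a real
-- function f at a point x, using a small step d.  Real differentiability
-- only demands that these two one-sided limits agree as d -> 0.
diffQuotients :: (Double -> Double) -> Double -> Double -> (Double, Double)
diffQuotients f x d = ( (f (x + d) - f x) / d         -- from the right
                      , (f (x - d) - f x) / (-d) )    -- from the left

main :: IO ()
main = do
  print (diffQuotients abs 0 1e-6)   -- (1.0, -1.0): the limits disagree
  print (diffQuotients (^2) 1 1e-6)  -- both near 2.0: the limits agree
```

Note there is nothing more to check here than those two numbers, which is what makes the real constraint so weak compared to the complex one.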
OTOH, for complex differentiability we require that the limit of the difference quotient be the same as we approach x from *any* direction in the complex plane. That is an entire continuous degree of freedom being constrained. You can then perform topological tricks (Cauchy's integral formula is essentially such a trick) to "extend" the constraint to the entire domain.
I find this philosophically problematic. Holomorphic functions are not really *functions* at all, in the sense that they are not so much determined by their result values over their whole domain as they are *written down* by analytic formulas (i.e. possibly infinite algebraic expressions/polynomials).
Most mathematicians and physicists seem to like it that way: such expressions are how they usually write down functions anyway. I actually don't: to me, a function should really be defined by its individual values, like field strengths you can measure in space, or results you can define in Haskell.
Anyway, I digress...
If we translate this problem from functions on numbers to functors on Haskell types, I believe the upshot is that complex differentiability means nothing else than this: a type can be written as a (possibly infinite?) ADT polynomial. And how to obtain infinite differentiability for such ADTs was shown in the post you linked to.
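To make "a type as an ADT polynomial" concrete (my own illustrative sketch, with made-up helper names): `Maybe a` is the polynomial 1 + a, and `[a]` satisfies the equation L(a) = 1 + a·L(a), i.e. the formally infinite polynomial 1 + a + a² + a³ + ...

```haskell
-- Maybe a ≅ 1 + a: Nothing plays the role of the constant 1,
-- Just x the role of the linear term a.
toSum :: Maybe a -> Either () a
toSum Nothing  = Left ()
toSum (Just x) = Right x

fromSum :: Either () a -> Maybe a
fromSum (Left ()) = Nothing
fromSum (Right x) = Just x

-- [a] ≅ 1 + a·[a]: unrolling this equation gives the infinite
-- polynomial 1 + a + a·a + a·a·a + ...
unconsE :: [a] -> Either () (a, [a])
unconsE []     = Left ()
unconsE (x:xs) = Right (x, xs)
```

The isomorphisms are just renamings of constructors, which is the sense in which the "polynomial" really is the type.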
Another spin... perhaps closer to an answer.
These "derivatives" of Haskell types are not actually derivatives in the sense of calculus. As in, they are not motivated by analysing the response to small perturbations†. It just so happens that one can show mathematically, for a very specific class of functions (those defined by an algebraic expression), that the calculus derivative can again be written in a simple algebraic way (given by the well-known differentiation rules). That trivially means you can differentiate infinitely often.
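For instance (a sketch with hypothetical names of my own): the pair type P(a) = a·a has algebraic derivative P'(a) = 2·a = a + a, and that is exactly the type of one-hole contexts of a pair: the remaining component, together with the information of which slot the hole was punched in.

```haskell
type Pair a = (a, a)

-- d/da (a·a) = a + a: a one-hole context of a pair keeps the other
-- component and remembers which slot the hole is in.
data PairCtx a = HoleFst a  -- hole in the first slot, second kept
               | HoleSnd a  -- hole in the second slot, first kept
  deriving (Eq, Show)

-- Plugging a value back into the hole recovers a full pair.
plug :: PairCtx a -> a -> Pair a
plug (HoleFst y) x = (x, y)
plug (HoleSnd x) y = (x, y)
```

No limits or perturbations are involved anywhere; only the differentiation *rules* survive the translation to types.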
The usefulness of this symbolic differentiation then also motivates thinking of it as a more abstract operation. And when you differentiate Haskell types, it is really this algebraic definition you are going by, not the original calculus one.
Which is fine... but once you are doing algebra rather than calculus, it does not matter much whether you call things "real" or "complex"; in fact it is neither, because you are not handling values at all, only symbolic representations of values. Untyped, if you will (and indeed, the language of Haskell types is in this sense untyped, with everything having kind `*`).
† Be it with traditional converging limits or NSA infinitesimals.