The functions of the .NET Framework Math class mostly operate on double-precision values; there are no single-precision (float) overloads. When working with single-precision data in a high-performance scenario, this leads to unnecessary casting and to computing the functions at higher precision than required, so performance suffers to some extent.
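To make the pattern concrete, here is a minimal sketch (the array contents and class name are just placeholders for illustration) of single-precision data being forced through the double-only Math API, with a widening conversion on the way in and an explicit narrowing cast on every call:

```csharp
using System;

class CastOverheadDemo
{
    static void Main()
    {
        // Single-precision input data.
        float[] data = { 0.5f, 1.5f, 2.5f };

        for (int i = 0; i < data.Length; i++)
        {
            // Math.Exp only accepts and returns double, so each float is
            // widened to double and the result narrowed back with a cast.
            data[i] = (float)Math.Exp(data[i]);
        }

        Console.WriteLine(string.Join(", ", data));
    }
}
```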
Is there a way to avoid this extra processor overhead? For example, is there an open-source math library with float overloads that calls the underlying FPU instructions directly? (I understand this would require support in the CLR.) In fact, I'm not even sure that modern processors have single-precision instructions for these functions.
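For reference, later runtimes added System.MathF (available in .NET Core 2.0 and newer), which exposes single-precision overloads of the common math functions. A minimal sketch, assuming such a runtime is available, of the same loop without the double round-trip:

```csharp
using System;

class SinglePrecisionDemo
{
    static void Main()
    {
        float[] data = { 0.5f, 1.5f, 2.5f };

        for (int i = 0; i < data.Length; i++)
        {
            // MathF.Exp takes and returns float, so no widening to double
            // and no narrowing cast are needed.
            data[i] = MathF.Exp(data[i]);
        }

        Console.WriteLine(string.Join(", ", data));
    }
}
```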
This question was partially inspired by this question about sigmoid function optimization:
Mathematical Optimization in C#
redcalx