As you've already gathered from the other answer, `double` works in binary floating point rather than decimal floating point, so the original approach does not work.
Whether it could work with a deliberately simplified formula is also unclear, since that depends on the maximum range you need and on where rounding becomes unavoidable.
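To illustrate the binary-versus-decimal point, here is a hypothetical sketch (not from the original answer) assuming, as the conversion routine further down implies, a `long` that stores its value scaled by 10^9:

```csharp
using System;

// Hypothetical illustration: a long holding the value scaled by 10^9.
long scaled = 1_100_000_000L;                 // intended to mean exactly 1.1

// Converting via binary floating point: 1.1 has no exact double representation.
double viaDouble = scaled / 1e9;
Console.WriteLine(viaDouble.ToString("G17")); // prints 1.1000000000000001
```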
The conversion itself is fast, well understood, and often supported directly by CPU instructions. Your only chances of beating the built-in conversion are:
- You have made a mathematical breakthrough worth writing serious papers about.
- You exclude enough cases that will not occur with your own data, so that while the built-in conversion is better overall, yours is better optimized for your particular use.
Unless the range of values you use is very limited, the scope for short-cuts when converting between IEEE 754 double precision and long integers gets smaller and smaller.
If you have to cover most of the cases that IEEE 754 covers, or even a significant portion of them, you will end up doing things more slowly than the built-in conversion.
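For a sense of what such a hand-rolled routine would have to beat, here is a minimal baseline sketch (the 10^9 scaling is an assumption carried over from the conversion routine below):

```csharp
// Baseline conversion, assuming the long stores the value * 10^9.
// The long -> double cast maps to a single instruction on common hardware
// (e.g. cvtsi2sd on x86-64), followed by one division.
static double ToDoubleBaseline(long scaled)
{
    return (double)scaled / 1e9;
}
```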
I would recommend either sticking with what you have, moving the cases where `double` is more convenient over to keeping a `long` despite the inconvenience, or, if necessary, using `decimal`. You can easily create a `decimal` from a `long` with:
```csharp
private static decimal DivideByBillion(long l)
{
    // decimal(lo, mid, hi, isNegative, scale): the 96-bit integer is built from
    // the low and middle 32 bits of l, and scale 9 divides it by 10^9.
    if (l >= 0)
        return new decimal((int)(l & 0xFFFFFFFF), (int)(uint)(l >> 32), 0, false, 9);
    l = -l;
    return new decimal((int)(l & 0xFFFFFFFF), (int)(uint)(l >> 32), 0, true, 9);
}
```
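For example (the values here are purely illustrative), the results are exact because `decimal` is a decimal-based type:

```csharp
Console.WriteLine(DivideByBillion(1_234_567_890L));   //  1.234567890
Console.WriteLine(DivideByBillion(-5L));              // -0.000000005
Console.WriteLine(DivideByBillion(2_500_000_000L));   //  2.500000000
```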
Now, `decimal` values are slower to do arithmetic with than `double` (precisely because `decimal` implements an approach similar to the one in your question, but with a variable exponent and a larger mantissa). But if you only need a convenient way to get a value for display or rendering to a string, then hand-rolling the conversion to `decimal` has advantages over hand-rolling the conversion to `double`, so it may be worth a look.
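A minimal sketch of that display path (the invariant-culture formatting is my own choice, not part of the original answer):

```csharp
using System.Globalization;

long scaled = 7_250_000_000L;                    // represents 7.25
string text = DivideByBillion(scaled)
    .ToString(CultureInfo.InvariantCulture);     // "7.250000000"
```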
Jon Hanna