Rarely are all the bits of a double-precision value significant.
If you have billions of values that are the result of some measurement, find the calibration and error of your measuring device. Quantize the values so that you only work with significant bits.
Often you will find that you only need 16 bits of actual dynamic range. You can compress all of this into arrays of shorts that preserve the whole of the original input.
Use a simple "Z-score" technique, where each value is really a signed fraction of the standard deviation.
Thus, a sequence of samples with a mean of m and a standard deviation of s is converted to a collection of Z-scores. A normal Z-score conversion uses a double, but you should use a fixed-point version of it: s/1000 or s/16384 or something that captures only the actual accuracy of your data, not the noise bits at the end.
    for u in samples:
        z = int( 16384*(u - m)/s )

    for z in scaled_samples:
        u = s*(z/16384.0) + m
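As a fuller sketch of the same round trip (the helper names compress/decompress, the use of statistics.fmean, statistics.stdev, and array('h') for 16-bit storage, and the clamping to the signed 16-bit range are all my own assumptions, not part of the recipe above):

    import array
    import statistics

    def compress(samples, scale=16384):
        # Quantize to signed 16-bit Z-scores. With scale=16384, anything
        # beyond roughly +/- 2 standard deviations is clamped to the
        # 16-bit limits (an assumption about the data, not a rule).
        m = statistics.fmean(samples)
        s = statistics.stdev(samples)
        z = array.array('h', (max(-32768, min(32767, int(scale * (u - m) / s)))
                              for u in samples))
        return m, s, z

    def decompress(m, s, z, scale=16384):
        # Rebuild approximate samples from the stored Z-scores.
        return [s * (v / scale) + m for v in z]

    # Hypothetical readings from an instrument good to about 0.01 units.
    samples = [9.81, 9.79, 9.83, 9.80, 9.82]
    m, s, z = compress(samples)
    restored = decompress(m, s, z)

Storing z as an array of shorts is what takes you from 8 bytes per sample down to 2, plus the two doubles m and s for the whole block.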
Your Z-scores retain a pleasant, easy-to-work-with statistical relationship with the original samples.
Say you use a signed 16-bit Z-score. You have +/- 32,768. Scale this by 16,384, and your Z-scores have an effective resolution of 0.000061 of a standard deviation.
If you use a signed 24-bit Z-score, you have +/- 8 million. Scale this by 4,194,304 and you have a resolution of 0.00000024.
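To see where those figures come from: one quantization step is 1/scale of a standard deviation, so (a quick arithmetic check, not part of the answer above):

    print(1 / 16384)      # 6.103515625e-05 -> the 0.000061 quoted for 16 bits
    print(1 / 4194304)    # ~2.384e-07      -> the 0.00000024 quoted for 24 bits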
I seriously doubt that you have measuring instruments that precise. Also, any arithmetic performed as part of filtering, calibration, or noise reduction can reduce the effective range because of noise bits introduced during the arithmetic. A badly designed division operator could make most of your decimal places nothing more than noise.
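As a small illustration of that last point (the reading, its 0.01 precision, and the calibration factor are all made up for the example):

    # A hypothetical reading from an instrument that only resolves 0.01 units.
    reading = 9.81
    calibrated = reading / 3.0   # made-up calibration divide
    # The quotient carries ~16 significant decimal digits, but only the first
    # three or four reflect the physical quantity; the rest are noise bits
    # introduced by the division.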