Do JSON libraries agree on the exact double-precision value parsed from a given JSON number?

If I need to pass floating point numbers exactly from C# to Java via JSON, can I use plain JSON numbers?

If not, why not? What information may be lost and how can I guarantee its preservation?

To be specific, I use Json.NET in C# and Jackson (through the ObjectMapper class) in Java.

It appears that double.TryParse is what is ultimately used when Json.NET parses a JSON number into a double, and Double.parseDouble is what is ultimately used when Jackson's ObjectMapper parses a JSON number into a double.

Can I expect Microsoft's double.TryParse and Java's Double.parseDouble to produce exactly the same double value for every JSON number?

I am also concerned that the number of digits in a JSON number is not limited by ECMA-404, IETF RFC 7159, or json.org. Another question (Maximum number of decimal digits that can affect a double) makes me wonder how far I can trust the common belief that the first 17 significant decimal digits (after discarding leading zeros), or indeed any bounded number of decimal digits, are enough to determine a double-precision floating point value.
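To make the concern concrete, here is a small self-contained check, purely illustrative and Java-only (the class name and the chosen value are my own, not from either library): it prints a double with 17 significant decimal digits and verifies that parsing that text back yields exactly the same bits.

    // Illustrative sketch: 17 significant decimal digits are enough for
    // Double.parseDouble to recover the exact bits of a double.
    public class SeventeenDigitRoundTrip {
        public static void main(String[] args) {
            double original = 0.1 + 0.2;                      // 0.30000000000000004...
            String text = String.format("%.17g", original);   // 17 significant digits
            double parsed = Double.parseDouble(text);
            System.out.println(text);
            System.out.println(Double.doubleToLongBits(original)
                    == Double.doubleToLongBits(parsed));      // prints true
        }
    }

Whether Json.NET's double.TryParse makes the same guarantee for the same text is exactly what I am asking about.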

+6
3 answers

The JSON part of the question can safely be ignored: in both cases it is just a string that gets parsed. What matters is:

  • whether the double data type is the same on both systems;
  • if so, whether double.TryParse and Double.parseDouble always return the same double value for a given input;
  • and a question you did not ask: whether double is really the right data type here at all.

According to Yuunas, both use the same specification for doubles, so you should be able to treat the double type as identical on both sides.

Nothing in either parsing method's documentation guarantees compatibility with the other, nor specifies how numbers that cannot be represented exactly are handled.

However, since you are using doubles here (and presumably with the knowledge that they are approximations), they are probably "good enough"; and if they are not "good enough", then double is most likely not the right data type anyway, and you should look at a fixed-point decimal format instead. How that maps onto JSON is another question. :)
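For instance (a sketch of my own, not something the question's setup requires): if what you really need is exact decimal semantics rather than doubles, Jackson can be told to keep JSON floating point numbers as BigDecimal, which preserves the decimal text exactly.

    import com.fasterxml.jackson.databind.DeserializationFeature;
    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public class DecimalNumbers {
        public static void main(String[] args) throws Exception {
            ObjectMapper mapper = new ObjectMapper();
            // Keep floating point JSON numbers as BigDecimal instead of double.
            mapper.enable(DeserializationFeature.USE_BIG_DECIMAL_FOR_FLOATS);
            JsonNode node = mapper.readTree("{\"price\": 19.9999999999999999}");
            // decimalValue() returns the exact decimal written in the JSON text.
            System.out.println(node.get("price").decimalValue());
        }
    }

You would need a comparable decimal representation on the Json.NET side as well; this only shows the Java half.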

+2

Well, for floating point numbers the number of decimal places doesn't really matter. What matters is whether the number can be represented exactly in binary form. As you may already know, some numbers cannot be represented exactly; floating point is always an approximation.

Because C# and Java use the same specification (IEEE 754), any number that Java can represent as a double should convert to the same binary form in C#.
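As a quick illustration (my own sketch, in Java), you can print the exact value the double actually stores for the text "0.1" and see the approximation:

    import java.math.BigDecimal;

    public class ExactDoubleValue {
        public static void main(String[] args) {
            double d = Double.parseDouble("0.1");
            // new BigDecimal(double) shows the exact value the double holds.
            System.out.println(new BigDecimal(d));
            // 0.1000000000000000055511151231257827021181583404541015625
        }
    }

Both languages store that same binary64 approximation; the question is only whether both parsers pick the same nearest double, which IEEE 754 round-to-nearest parsing is meant to guarantee.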

+3

No, not every JSON number corresponds exactly to one specific IEEE 754 binary64 number (double precision, a.k.a. "double").

In many situations, using JSON numbers is sufficient. It depends on what guarantees you, or the people whose data you exchange, actually need. Without an agreement outside of JSON, there is no precise definition of which JSON numbers are equivalent to which doubles. However, in many situations you can implicitly assume that the precision you need is well within what a double can represent; many floating point values are physical measurements made with readily available instruments.

Here is one way to make an exact agreement outside of JSON. One author has suggested in an online article that, if a double is really what you want, you can put into your JSON an array or object containing the three components of the structure as integers: sign, exponent, and mantissa. Consumers of the document then have to know the rules for reconstructing a double from those three integers, for example that when the "exponent" integer is 2047 the value is NaN (or an infinity), and so on.
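A rough Java sketch of that idea (the JSON field names are my own illustration; the bit layout is the standard IEEE 754 binary64 one):

    public class DoubleAsFields {
        static long sign(double d)     { return (Double.doubleToRawLongBits(d) >>> 63) & 0x1L; }
        static long exponent(double d) { return (Double.doubleToRawLongBits(d) >>> 52) & 0x7FFL; }
        static long mantissa(double d) { return Double.doubleToRawLongBits(d) & 0xFFFFFFFFFFFFFL; }

        // Rebuild the exact same double from the three integer fields.
        static double rebuild(long sign, long exponent, long mantissa) {
            return Double.longBitsToDouble((sign << 63) | (exponent << 52) | mantissa);
        }

        public static void main(String[] args) {
            double original = 0.1;
            String json = String.format("{\"sign\":%d,\"exponent\":%d,\"mantissa\":%d}",
                    sign(original), exponent(original), mantissa(original));
            System.out.println(json);
            double restored = rebuild(sign(original), exponent(original), mantissa(original));
            System.out.println(Double.doubleToRawLongBits(original)
                    == Double.doubleToRawLongBits(restored)); // prints true
        }
    }

Both sides then exchange only integers, which JSON can carry exactly, at the cost of a custom convention that every consumer must implement.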

There are more elaborate and more standard options. You can use HDF5, for example, if it matches your needs. There is already more than one specification for representing HDF5 in JSON, so your JSON document can be self-describing in HDF5 terms, annotating your number with a type such as H5T_IEEE_F64BE.

+1
