Double and string formatting

double val = 0.1;
std::stringstream ss;
ss << val;
std::string strVal = ss.str();

In the Visual Studio debugger, val is shown as 0.10000000000000001 (because 0.1 cannot be represented exactly). When val is converted using a stringstream, strVal is "0.1". However, when using boost::lexical_cast, the resulting strVal is "0.10000000000000001".
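
For reference, here is a minimal sketch that reproduces both conversions (Boost is assumed to be available, and the exact number of digits printed by lexical_cast can vary between Boost versions):

 #include <boost/lexical_cast.hpp>
 #include <iostream>
 #include <sstream>
 #include <string>

 int main() {
     double val = 0.1;

     // stringstream uses the default precision of 6 significant digits,
     // so the stored approximation of 0.1 is rounded back to "0.1".
     std::stringstream ss;
     ss << val;
     std::cout << "stringstream: " << ss.str() << '\n';

     // boost::lexical_cast uses enough digits to round-trip the value,
     // which is why the trailing ...01 becomes visible.
     std::cout << "lexical_cast: " << boost::lexical_cast<std::string>(val) << '\n';
     return 0;
 }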

Another example is the following:

double val = 12.12305000012;

In the Visual Studio debugger, val is displayed as 12.123050000119999, and converting it with a stringstream at the default precision (6) gives 12.1231. I do not quite understand why this is not 12.12305(...).

Is there a default precision, or does stringstream have a specific algorithm for converting a double value that cannot be represented exactly?

Thanks.

+6
4 answers

You can change the floating-point precision of a stringstream as follows:

 double num = 2.25149;
 std::stringstream ss(std::stringstream::in | std::stringstream::out);
 ss << std::setprecision(5) << num << std::endl;   // std::setprecision requires <iomanip>
 ss << std::setprecision(4) << num << std::endl;

Output:

 2.2515
 2.251

Please note how the numbers are also rounded when necessary.

+10

For those who get the error "setprecision was not declared in this scope": you need to #include <iomanip>, otherwise setprecision(17) will not work!
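
For completeness, a minimal sketch showing the include in context (the value 0.1 is just an example):

 #include <iomanip>    // for std::setprecision
 #include <iostream>
 #include <sstream>

 int main() {
     double val = 0.1;
     std::stringstream ss;
     ss << std::setprecision(17) << val;
     std::cout << ss.str() << '\n';   // prints 0.10000000000000001
     return 0;
 }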

+8

The problem occurs when inserting into the stream with ss << 0.1; , not when converting to a string. If you need non-default precision, you need to specify it before inserting the double:

 ss << std::setprecision(17) << val; 

On my machine, if I just use setprecision(16) , I still get "0.1" and not "0.10000000000000001" . I need a (somewhat artificial) precision of 17 to see the final 1.

Addendum
A better demonstration uses the value 1.0 / 3.0. With the default precision you get the string representation "0.333333" . This is not the string equivalent of 1/3 at double precision. Using setprecision(16) makes the string "0.3333333333333333" ; a precision of 17 gives "0.33333333333333331" .
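
A small sketch of that demonstration, using a hypothetical helper to_string_with_precision (the 17-digit value assumes IEEE 754 double precision):

 #include <iomanip>
 #include <iostream>
 #include <sstream>
 #include <string>

 // Hypothetical helper: convert a double to a string with a given precision.
 std::string to_string_with_precision(double d, int precision) {
     std::stringstream ss;
     ss << std::setprecision(precision) << d;
     return ss.str();
 }

 int main() {
     double third = 1.0 / 3.0;
     std::cout << to_string_with_precision(third, 6)  << '\n';  // 0.333333
     std::cout << to_string_with_precision(third, 16) << '\n';  // 0.3333333333333333
     std::cout << to_string_with_precision(third, 17) << '\n';  // 0.33333333333333331
     // 17 is std::numeric_limits<double>::max_digits10 for IEEE doubles,
     // i.e. enough digits for an exact round trip back to the same double.
     return 0;
 }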

+3

There are two issues you should consider. The first is the precision parameter, which defaults to 6 (but which you can set to whatever you like). The second is what this parameter means, and that depends on the format you use: if you use fixed or scientific format, it means the number of digits after the decimal point (which in turn has a different effect on what is usually meant by precision in the two formats); if you use the default format, however ( ss.setf( std::ios_base::fmtflags(), std::ios_base::floatfield ) ), it means the number of significant digits in the output, regardless of whether the output ends up formatted in scientific or fixed notation. This explains why your display is 12.1231 , for example; you are using both the default precision and the default format.

You might want to try the following with different values (and perhaps different precisions):

 std::cout.setf( std::ios_base::fmtflags(), std::ios_base::floatfield );
 std::cout << "default: " << value[i] << std::endl;
 std::cout.setf( std::ios_base::fixed, std::ios_base::floatfield );
 std::cout << "fixed: " << value[i] << std::endl;
 std::cout.setf( std::ios_base::scientific, std::ios_base::floatfield );
 std::cout << "scientific: " << value[i] << std::endl;

Viewing the actual output is likely to be clearer than any detailed description:

 default: 0.1
 fixed: 0.100000
 scientific: 1.000000e-01
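
A self-contained version of the same demonstration (the value 0.1 is chosen purely for illustration; it produces the output shown above):

 #include <iostream>

 int main() {
     double value = 0.1;   // stands in for value[i] above

     std::cout.setf( std::ios_base::fmtflags(), std::ios_base::floatfield );
     std::cout << "default: " << value << std::endl;      // 0.1

     std::cout.setf( std::ios_base::fixed, std::ios_base::floatfield );
     std::cout << "fixed: " << value << std::endl;        // 0.100000

     std::cout.setf( std::ios_base::scientific, std::ios_base::floatfield );
     std::cout << "scientific: " << value << std::endl;   // 1.000000e-01
     return 0;
 }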
+3

Source: https://habr.com/ru/post/927741/

