There are two issues you have to consider. The first is the precision parameter, which defaults to 6 (but which you can set to whatever you like). The second is what this parameter means, and that depends on the format you use: if you use fixed or scientific format, it is the number of digits after the decimal point (which in turn has a different effect on what is usually meant by precision in the two formats); however, if you use the default floating-point format ( ss.setf( std::ios_base::fmtflags(), std::ios_base::floatfield ) ), it is the number of significant digits in the output, regardless of whether the output ends up being formatted with fixed or scientific notation. That explains why your display is 12.1231, for example: you are using both the default precision and the default format.
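A minimal sketch of that last point (12.12314 is just an illustrative stand-in, not one of your actual values), assuming the stream still has its default precision of 6:

#include <iostream>

int main()
{
    double d = 12.12314;
    std::cout << d << std::endl;    // default format: 6 significant digits -> 12.1231
    std::cout.setf( std::ios_base::fixed, std::ios_base::floatfield );
    std::cout << d << std::endl;    // fixed: 6 digits after the decimal -> 12.123140
    return 0;
}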
You might want to experiment with the following, using different values (and perhaps different precisions):
std::cout.setf( std::ios_base::fmtflags(), std::ios_base::floatfield );
std::cout << "default: " << value[i] << std::endl;
std::cout.setf( std::ios_base::fixed, std::ios_base::floatfield );
std::cout << "fixed: " << value[i] << std::endl;
std::cout.setf( std::ios_base::scientific, std::ios_base::floatfield );
std::cout << "scientific: " << value[i] << std::endl;
Viewing the actual output is likely to be clearer than any detailed description:
default: 0.1
fixed: 0.100000
scientific: 1.000000e-01
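Changing the precision shifts those results accordingly. A small sketch using the std::setprecision manipulator from <iomanip> (the value 0.1 and precision 3 are just for illustration):

#include <iostream>
#include <iomanip>

int main()
{
    double value = 0.1;
    std::cout << std::setprecision( 3 );
    std::cout << "default: " << value << std::endl;                         // 0.1   (at most 3 significant digits)
    std::cout << "fixed: " << std::fixed << value << std::endl;             // 0.100 (3 digits after the decimal)
    std::cout << "scientific: " << std::scientific << value << std::endl;   // 1.000e-01
    return 0;
}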