Assigning a float to a string reference works — why?

In a recent project, I ran into a bug when I accidentally assigned a float to a string reference (instead of converting the float to a string first and then assigning it).

The code looked something like this (tested on both Xcode / Apple LLVM 7.1 and GCC 4.9.2):

 #include <iostream>
 #include <string>
 using namespace std;

 static void get_text(string &s) {
     s = 1.0f; // Legal (not even a warning!)
 }

 // This version gives a compiler error (as I'd expect)
 // static void get_text(string &s) {
 //     string out = 1.0f;
 //     s = out;
 // }

 int main() {
     string s;
     get_text(s);
     cout << s << endl; // Prints garbage
     return 0;
 }

Printing the string obviously produces garbage, but I do not understand why this did not trigger so much as a warning. (My best guess was that the compiler did some kind of implicit reinterpretation to go from float to int, to a memory address...?)

Is there a warning I could include (in Xcode, ideally) that would prevent similar things in the future?

1 answer

This happens because of this member function of std::string:

 string& operator=( char ch ); 

There is an implicit conversion from floating-point to integer types in C++ (char is an integer type), so s = 1.0f resolves to this overload.

You can usually use -Wfloat-conversion in g++ to get a warning about this kind of conversion, but I tried it here and it did not warn. (Possibly a compiler bug?)

An easy way to change the code so that unexpected floating-point/integer conversions become compile errors is to use brace initialization, which forbids narrowing conversions:

 s = { 1.0f }; // error: narrowing conversion from float to char

Another option is to make std::string the return type of the function (generally speaking, returning a value is preferable to an "out" reference parameter); the mistaken conversion then fails to compile:

 static string get_text() {
     return 1.0f; // error: no viable conversion from float to std::string
 }

Unfortunately, this is one of the many small pitfalls of std::string, which was "designed" when C++ was still very young and the undesirable long-term consequences of such implicit conversions were not yet clear.
