When does it make sense to introduce typedef'd data types?

The company's internal C++ coding standards document states that even for basic data types, such as int, char, etc., you should define your own typedef, such as typedef int Int. This is justified by the advantage of code portability. However, are there general considerations or recommendations about when (and for what kinds of projects) this really makes sense? Thanks in advance.

+6
7 answers

Typedef'ing int to Int gives almost no advantage (it provides no semantic benefit and leads to absurdities, for example typedef long Int on other platforms in order to remain compatible).

However, typedef'ing int to, for example, int32_t (along with long to int64_t, etc.) does give an advantage: you can freely choose a data type with the appropriate width in a self-documenting way, and it will be portable (just switch the typedefs on another platform).

In fact, most compilers offer stdint.h , which already contains all of these definitions.
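For example, a minimal sketch of what using those definitions looks like (the variable names are just illustrative):

 #include <cstdint>   // C++ spelling; <stdint.h> in C

 // Widths are explicit, portable and self-documenting.
 std::uint8_t  flags      = 0x0F;  // exactly 8 bits
 std::int32_t  offset     = -42;   // exactly 32 bits
 std::uint64_t byte_count = 0;     // exactly 64 bits

 // When the exact width is not required, the fast/least variants exist too.
 std::int_fast32_t counter = 0;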

+8

It depends. The example you are quoting:

 typedef int Int; 

is just stupid. It's a bit like defining a constant:

 const int five = 5; 

Just as the variable five has zero chance of ever becoming a different number, the typedef Int can only ever refer to the primitive type int.

OTOH, a typedef like this:

 typedef unsigned char byte; 

makes life easier on the fingers (although it has no portability benefit), and one like this:

 typedef unsigned long long uint64; 

is easier to type and more portable, since on Windows you would otherwise have to write (I think):

 typedef unsigned __int64 uint64; 
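A sketch of how the two spellings are commonly reconciled behind one typedef (the _MSC_VER check is the usual convention; modern MSVC accepts both spellings anyway):

 #ifdef _MSC_VER
 typedef unsigned __int64    uint64;   // MSVC's historical 64-bit type
 #else
 typedef unsigned long long  uint64;   // everywhere else
 #endif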
+3

Trash.

"Portability" does not make sense, because int always int . If they think they want something like an integer type to be 32-bit, then typedef should be typedef int int32_t; , because then you will name the real invariant and you can actually guarantee that this invariant is executed through the preprocessor, etc.

But even that is, of course, a waste of time, because you can get <cstdint> either from C++0x, or through compiler extensions, or failing that use the Boost implementation.

+3

typedef int Int is a terrible idea... people will wonder whether they are even looking at C++, it is harder to type, it is visually distracting, and the only vaguely conceivable rationale for it is wrong, but let's state that rationale explicitly so we can knock it down:

if, say, a 32-bit application is one day ported to 64-bit, and there is a lot of careless code that only works with 32-bit ints, then at least the typedef can be changed to keep Int at 32 bits.

Criticism: if the codebase is awash in code that is so poorly written (i.e. that doesn't explicitly use a 32-bit type from cstdint where it matters), it most likely also has other places that now need to be 64-bit but get stuck in 32-bit mode by the typedef. Code that interacts with library / system APIs using ints is likely to be handed Ints, leading to truncated handles that work until they stray outside the 32-bit range, etc. The code will need a full re-examination before it can be trusted anyway. Having this excuse floating around can only discourage people from using explicitly sized types where they are genuinely useful ("why are you using that?" "For portability?" "But Int is for portability, just use that").

However, coding rules can reasonably encourage typedefs for things that are logically distinct types, such as temperatures, prices, speeds, distances, etc. In that case the typedefs can be vaguely useful, because they provide an easy way to recompile the program to, say, upgrade precision from float to double, switch from a real (floating-point) type to an integral one, or substitute a user-defined type with some special behaviour. They are also convenient for containers, so that changing the container is less work and has less impact on client code, though such changes are usually still a bit painful: container APIs are deliberately a little incompatible, so the important differences have to be considered, rather than the code compiling but malfunctioning or silently performing much worse than before.

It is important to remember that a typedef is only an "alias" for the actual underlying type; it does not create a new distinct type, so people can pass any value of the same underlying type without getting any compiler warning about a type mismatch. This can be worked around with a template, for example:

 template <typename T, int N>
 struct Distinct
 {
     Distinct(const T& t) : t_(t) { }
     operator T&() { return t_; }
     operator const T&() const { return t_; }
     T t_;
 };

 typedef Distinct<float, 42> Speed;

But it's a pain to keep the N values unique... you could have a central enum listing the distinct values, or use __LINE__ if you are dealing with a single translation unit and never put multiple typedefs on one line, or take a const char* from __FILE__, but there is no really elegant solution that I know of.
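One workaround is to distinguish instances by an empty tag type instead of a magic number, which sidesteps the uniqueness problem; a sketch of that variation of the template above:

 template <typename T, typename Tag>
 struct Distinct
 {
     Distinct(const T& t) : t_(t) { }
     operator T&() { return t_; }
     operator const T&() const { return t_; }
     T t_;
 };

 // Empty structs serve purely as unique tags; no central enum needed.
 struct SpeedTag { };
 struct TemperatureTag { };

 typedef Distinct<float, SpeedTag>       Speed;
 typedef Distinct<float, TemperatureTag> Temperature;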

(One classic article from 10 or 15 years ago demonstrated how you could create templates for types that knew about several orthogonal units, keeping running "power" counts for each and deriving the resulting type for multiplications, divisions, etc. For example, you could declare something like Meters m; Time t; Acceleration a = m / t / t; and have the compiler check that all the units were sensible.)
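A minimal sketch of that idea, tracking only metre and second exponents and implementing only division:

 // Dimension exponents are carried in the type itself.
 template <int M, int S>
 struct Quantity
 {
     explicit Quantity(double v) : value(v) { }
     double value;
 };

 // Dividing quantities subtracts the exponents at compile time.
 template <int M1, int S1, int M2, int S2>
 Quantity<M1 - M2, S1 - S2> operator/(const Quantity<M1, S1>& a,
                                      const Quantity<M2, S2>& b)
 {
     return Quantity<M1 - M2, S1 - S2>(a.value / b.value);
 }

 typedef Quantity<1, 0>  Meters;        // m
 typedef Quantity<0, 1>  Time;          // s
 typedef Quantity<1, -2> Acceleration;  // m / s^2

 // Acceleration a = Meters(10.0) / Time(2.0) / Time(2.0);  // compiles
 // Acceleration b = Meters(10.0) / Time(2.0);              // compile-time error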

Is that a good idea? Most people clearly find it unnecessary, since almost nobody ever does it. Still, it can be useful, and I have used it a few times when it was easy and/or it would have been particularly dangerous if values were accidentally misassigned.

+3

Typedefs can help describe the semantics of a data type. For example, if you write typedef float distance_t;, you give developers a hint about how to interpret distance_t values. For instance, you might say that values can never be negative. What is -1.23 kilometres? In that context negative distances may simply make no sense.

Of course, a typedef in no way restricts the range of values. It is just a way to make the code (somewhat) more readable and to convey additional information.
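For example, a sketch of exactly that limitation:

 typedef float distance_t;   // conveys intent, nothing more

 distance_t d1 = 1.23f;      // fine
 distance_t d2 = -1.23f;     // also compiles: the typedef cannot forbid it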

The portability your coding standard is concerned with seems to mean that you want a particular data type to always have the same size, regardless of which compiler is used. For example,

 #ifdef TURBO_C_COMPILER
 typedef long int32;
 #elif defined(MSVC_32_BIT_COMPILER)
 typedef int int32;
 #elif ...
 #endif
+2

I believe the main reason is portability of your code. For example, once you rely on a 32-bit integer type in a program, you need to be sure that int is also 32 bits long on the other platform. A typedef in a header helps localize such a change to one place.
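A sketch of what such a header might contain (the header name and type name are illustrative; the static_assert needs C++11):

 // my_int_types.h (hypothetical project header)
 #include <climits>

 typedef int int32;   // when porting, only this line changes

 static_assert(sizeof(int32) * CHAR_BIT == 32, "int32 must be 32 bits wide");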

0

I would add that it can also be useful for people who speak another language. Say, for example, you speak Spanish and your code is written in Spanish; wouldn't you want the type definitions in Spanish as well? Just a thought.

0
