Why are copy constructors sometimes declared explicitly non-inline?

I’m having trouble understanding this passage about inline functions and binary compatibility with clients. Can someone explain?

C++ FAQs, Cline & Lomow:

When the compiler synthesizes the copy constructor, it makes it inline. If your classes are exposed to your clients (e.g., if your clients #include your header files rather than merely using executables built from your classes), your inline code is copied into your clients' executables. If your clients want to maintain binary compatibility between releases of your header files, you must not change an inline function that is visible to the clients. Because of this, you will want an explicit, non-inline version of the copy constructor, to be used directly by the clients.

+7
c++ copy-constructor
5 answers

Binary compatibility for dynamic libraries (.dll, .so) is often important.

E.g., you don’t want to have to recompile half the software on the OS because you updated some low-level library that everyone uses in an incompatible way (and consider how often security updates may be needed). Often you don’t even have access to all the source code, even if you wanted to.

For an update to your dynamic library to be compatible and actually take effect, you can hardly change anything in the public header files, because everything that was compiled directly into those other binaries must stay valid (even in C this often includes struct sizes and member layouts, and obviously you cannot remove or change any function declarations).

On top of the C issues, C++ adds many more (vtable ordering, how inheritance works, etc.), so it is quite possible to do something that changes the automatically generated C++ copy constructor, destructor, etc. while otherwise maintaining compatibility. If those are generated inline with the class/struct definition, rather than defined explicitly in your source file, their code is compiled directly into the other applications/libraries that link against your dynamic library, and those binaries will never pick up your changed version (which you might not even realize has changed!).

+2

This answer says that if the signatures of the functions involved have not changed, then "rebuilding" the program only means the object files must be linked again; you will not need to compile them again.

This document describes the concept of binary compatibility for shared libraries implemented in C++ on GNU/Linux systems. That link can also help you understand the dos and don'ts of binary compatibility when writing a library.

The question of why we do not have virtual constructors is also related to this.

You might also be interested in a tool that checks the compatibility of two given library versions: abi-compliance-checker for Linux.

+1

This refers to problems that can arise between binary versions of a library when headers in that library change. Some changes are binary compatible and some are not. Changes to inline functions, such as an inline copy constructor, are not binary compatible and require the user code to be recompiled.

You see this within a single project all the time. If you change a.cpp, you do not have to recompile all the files that include a.hpp. But if you change the interface in the header, every user of that header usually has to be recompiled. This is analogous to using shared libraries.

Maintaining binary compatibility matters when you need to change the implementation of a binary library without changing its interface, for example to ship bug fixes.

For example, say a program uses liba as a shared library. If liba contains a bug in a method of a class it provides, it can change the internal implementation and recompile the shared library, and the program can use the new binary version of liba without being recompiled itself. If, however, liba changes its public contract, such as the implementation of an inline method, or moves an inline method's definition out of the header, then it breaks the application binary interface (ABI) and the consuming program must be recompiled to use the new binary version of liba.

+1

Consider the following code compiled into a static library:

```cpp
// lib.hpp
#include <string>

class t_Something
{
private:
    ::std::string foo;

public:
    void Do_SomethingUseful(void);
};

// lib.cpp
void t_Something::Do_SomethingUseful(void)
{
    // ...
}

// user_project.cpp
int main()
{
    t_Something something;
    something.Do_SomethingUseful();
    t_Something something_else = something;
}
```

Now, if the fields of the t_Something class change in any way, for example a new one is added, all user code must be recompiled. Essentially, the constructors implicitly generated by the compiler have "leaked" out of our static library into the user code.

0

I think I understand what this passage means, but I disagree with it.

I suppose they describe a scenario where you develop a library and ship it to your clients as header files plus a pre-compiled binary part. After the client completes the initial build, they are expected to be able to swap the binary part for a newer one without recompiling their application; only relinking is required. The only way to achieve this is to guarantee that the header files are immutable, i.e. do not change between releases.

I assume this stems from the fact that build systems in '98 were not smart enough to detect a change in a header file and trigger recompilation of the affected source files.

All of this is completely moot nowadays, and in fact the trend goes the other way: a significant number of libraries now try to be header-only libraries, for several reasons.

-1
