Using shared libraries versus a single executable

My colleague claims that we must split our C++ (C++, Linux) application into shared libraries to improve code modularity, testability and reuse.

From my point of view this is an extra burden: the code we write does not have to be shared between applications on the same machine and does not have to be loaded or unloaded dynamically, so we could just ship a single monolithic executable.

In addition, wrapping C++ classes in C-function interfaces makes them, IMHO, uglier.

I also think that a single-file application will be much easier to update remotely at the client's site.

Should dynamic libraries be used when there is no need to share binary code between applications and no need to load code dynamically?

+6
c++ c shared-libraries
9 answers

I would say that splitting code into shared libraries "to improve things", without any immediate purpose in sight, is a sign of a buzzword-driven development environment. It is better to write code that can easily be split apart at some later point.

And why would you need to put C++ classes behind C-function interfaces at all, except perhaps for object creation?

In addition, splitting into shared libraries here sounds like interpreted-language thinking. In compiled languages you try not to postpone to runtime what you can do at compile time, and unnecessary dynamic linking is exactly that.

+7

Using shared libraries enforces that the libraries have no circular dependencies. It also often leads to faster link times, and link errors are detected at an earlier stage than if there is no linking at all before the final application is linked. If you want to avoid shipping multiple files to clients, you can consider linking the application dynamically in your development environment and statically when you create releases.

EDIT: I really see no reason why you would need to wrap your C++ classes in C interfaces - that is handled for you behind the scenes. On Linux you can use C++ shared libraries without any special handling; on Windows, however, you will need __declspec(dllexport) and __declspec(dllimport).
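A rough sketch of what that looks like (the MYLIB_API macro and the Widget class are invented names for illustration, not from the question): on Linux the class can usually be exported as-is, while on Windows the declaration is typically wrapped in an export/import macro.

    // mylib_export.h -- illustrative names only
    #if defined(_WIN32)
    #  ifdef MYLIB_BUILDING            // defined only while building the DLL itself
    #    define MYLIB_API __declspec(dllexport)
    #  else
    #    define MYLIB_API __declspec(dllimport)
    #  endif
    #else
    #  define MYLIB_API                // on Linux nothing special is needed by default
    #endif

    // widget.h -- an ordinary C++ class exported from the shared library
    #include <string>
    #include <utility>

    class MYLIB_API Widget {
    public:
        explicit Widget(std::string name) : name_(std::move(name)) {}
        const std::string& name() const { return name_; }
    private:
        std::string name_;
    };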

+5

Improved reuse, even though nothing will actually be reused? That does not sound like a strong argument.

Code modularity and testability should not depend on the final deployment unit. I would expect linkage to be a late decision.

If you really have a single product and do not anticipate any variation of it, then delivering it in pieces sounds like needless extra complexity.

+4

The short answer is no.

Longer answer: dynamic libraries add nothing for testability, modularity or reuse that cannot be achieved just as easily in a monolithic application. The only benefit I can think of is that they may force a team that lacks the discipline to define an API on its own to do so.

There is nothing magical about a library (dynamic or otherwise). If you have all the code to build the application and its assorted libraries, you can just as easily compile it all into one executable.

In general, we have found that the cost of dealing with dynamic libraries is not worth it unless there is a pressing need (a library shared by several applications, the ability to update the library for a number of applications without recompiling them, or letting the user add functionality to the application).

+2

Dispelling your colleague's arguments

If he believes that splitting your code into shared libraries will improve modularity, testability and code reuse, then I think he believes your code currently has some problems, and that moving to a shared-library architecture would fix them.

Modularity?

Your code supposedly has unwanted interdependencies that would be removed by a cleaner separation between "library code" and "code using the library code".

Now, this can be achieved just as well with static libraries.

Testing?

Your code could supposedly be tested more easily, for example by building unit tests for each separate shared library and running them automatically as part of every build.

Again, this can be achieved just as well with static libraries.

Code reuse?

Your colleague would like to reuse some code that is currently not exposed, because it is hidden away in the sources of your monolithic application.

Conclusion

Points 1 and 2 can still be achieved with static libraries. Point 3 is the one that would make shared libraries mandatory.

Note that if you have more than one level of library linking (I am thinking of linking together two static libraries that were themselves compiled against other libraries), things can get tricky. On Windows this can produce link errors, because some symbols (usually C/C++ runtime functions when linking statically) are defined more than once and the linker cannot choose which one to use. I don't know how this behaves on Linux, but I imagine it can happen there too.

Dispelling your own arguments

Your own arguments are somewhat biased:

The burden of compiling / linking shared libraries?

The burden of compiling and linking against shared libraries, compared to compiling and linking against static libraries, is non-existent, so this argument is moot.

Dynamic loading / unloading?

Dynamically loading/unloading a shared library can be a problem only in a very limited set of use cases. In the normal case the OS loads and unloads the library as needed without any intervention from you, and in any case your performance problems lie elsewhere.

Exposing C++ code through C interfaces?

As for using C-function interfaces for your C++ code, I don't follow: you already link static libraries that expose C++ interfaces, and linking against shared libraries is no different.

You would have a problem only if different compilers were used to build the different libraries of your application, but that is not your case, since you already link your libraries statically.

Is a single-file application easier to update?

You're right.

On Windows the difference is not significant, but there is still the DLL Hell problem, which goes away if you put the version in your library names or work on Windows XP.

On Linux, in addition to the Windows problem above, there is the fact that by default shared libraries must live in certain system directories in order to be found, so you will have to copy them there at install time (which can be a pain...) or change some default environment settings such as LD_LIBRARY_PATH (which can also be a pain...).

Conclusion: who is right?

Now, your problem is not "Is my colleague right?". He is. And so are you.

Your problem:

  • What do you really want to achieve?
  • Is the work needed to achieve it worth it?

The first question is very important, because it seems to me that both your arguments and your colleague's are biased toward the conclusion that feels more natural to each of you.

To put it another way: each of you already knows what the ideal solution should be (from your respective viewpoints), and each of you is lining up arguments to arrive at that solution.

There is no way to answer this hidden question ...

^_^

+2

Shared libraries come with their own headaches, but I think they are the right way to go. I would say that in most cases you should be able to make parts of your application modular and reusable elsewhere in your business. In addition, depending on the size of that monolithic executable, it may be easier to ship a set of updated libraries instead of one large file.

IMO, libraries in general lead to better, more testable code, and they let you build future projects more efficiently because you are not reinventing the wheel.

In short, I agree with your colleague.

+1

Do a simple cost-benefit analysis: do you really need modularity, testability, and reuse? Do you have the time to refactor the code to get those qualities? Most importantly, if you do refactor, will the benefits justify the time the refactoring takes?

If you have no testing problems now, I would recommend leaving your application as it is. Modularization is great, but Linux has its own version of "DLL hell" (see ldconfig), and you have already pointed out that reuse is not needed.

+1

On Linux (and on Windows) you can build a shared library from C++ code without having to expose it through exported C functions.

That is, you build classA.cpp into classA.so, and you build classB.cpp into classB (the executable), which links against classA.so. All you are really doing is splitting your application into multiple binaries. This has the advantages of faster compilation and easier management, and it lets you write small applications that load just that library's code for testing.

Everything is still C++, everything still links, but your .so is kept separate from your statically linked application.
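A minimal sketch of that layout, reusing the classA/classB names from above (the file contents and the build commands in the comments are illustrative, not part of the question):

    // classA.hpp -- interface of the code that lives in the shared library
    #pragma once
    class ClassA {
    public:
        int value() const;
    };

    // classA.cpp -- compiled into the library,
    // e.g.  g++ -fPIC -shared classA.cpp -o libclassA.so
    #include "classA.hpp"
    int ClassA::value() const { return 42; }

    // classB.cpp -- the executable, linked against the library at build time,
    // e.g.  g++ classB.cpp -L. -lclassA -o classB
    // (at run time the loader must find libclassA.so: rpath, LD_LIBRARY_PATH or ldconfig)
    #include <iostream>
    #include "classA.hpp"

    int main() {
        ClassA a;                        // plain C++, no C wrapper anywhere
        std::cout << a.value() << '\n';
    }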

Now, if you want to load a different object at run time (that is, you do not know which one you need to load until execution), then you do need to build a shared object with C-exported functions, and you also have to load those functions manually (typically with dlopen/dlsym on Linux); you cannot get the linker to do it for you.
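Only in that case does a C entry point appear. A sketch of the idea (plugin_interface.hpp, plugin.so and make_plugin are invented names for the example; the host may need to be linked with -ldl on older glibc):

    // plugin.cpp -- built as plugin.so; the factory is extern "C" so its symbol
    // name is not mangled and can be looked up with dlsym
    #include "plugin_interface.hpp"   // hypothetical abstract base class with run()
                                      // and a virtual destructor

    class PluginImpl : public PluginInterface {
    public:
        void run() override { /* ... */ }
    };

    extern "C" PluginInterface* make_plugin() { return new PluginImpl; }

    // host.cpp -- loads the library by hand instead of letting the linker do it
    #include <dlfcn.h>
    #include <cstdio>
    #include "plugin_interface.hpp"

    int main() {
        void* handle = dlopen("./plugin.so", RTLD_NOW);
        if (!handle) { std::fprintf(stderr, "%s\n", dlerror()); return 1; }

        using Factory = PluginInterface* (*)();
        auto make = reinterpret_cast<Factory>(dlsym(handle, "make_plugin"));
        if (!make) { std::fprintf(stderr, "%s\n", dlerror()); return 1; }

        PluginInterface* p = make();   // from here on it is ordinary C++ again
        p->run();
        delete p;
        dlclose(handle);
        return 0;
    }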

+1

If you have to ask the question and the answer is not obvious, then stay where you are. If you have not reached the point where building the monolithic application takes too long, or where it is too painful for your team to work on it together, then there is no good reason to switch to libraries. You can create a test harness that builds against the application's source files if you want one, or you can simply create another project that uses the same files but adds a testing API and builds them as a library.

For delivery purposes, if you do create libraries but still want to ship one large executable, you can always link them statically.

If modularization would help with development, i.e. you keep running into conflicts with other developers over file modifications, then libraries might help, but that is not guaranteed either. Good object-oriented code design will help regardless.

And there is no reason to wrap any functions in C-callable interfaces in order to build a library, unless you actually want them to be callable from C.

+1
