Is it true that large embedded companies are "forced" to use old programming standards / compilers?

Our lecturer told us that in our assignments we are only allowed to use the C++98 / C99 standards, and that by passing the right flags to the compiler we can guarantee that we do not violate this rule.

I understand that this way people can learn the "real" C or C++, depending on which they choose, and use that skill without any help from newer language features (I disagree, but who am I to argue).

When I asked my teacher why this rule exists, he answered (after finding out that I was not satisfied with the reasoning above): "because large old companies that work with embedded devices, such as ASML, have old code bases that can break when switching to C11 / C++11."

I asked for a specific real-world / practical code snippet that compiles under both C99 and C11 (or both C++98 and C++11), is standard-conforming, but behaves very differently once compiled into a binary. Long story short, I never got an answer. If it is true that companies stick to old compilers and standards, can someone provide such a piece of code so I can see it for myself?

+6
4 answers

I don't know much about the embedded world, but I do know a few other companies that use C++ and have old hardware / platforms.


It really depends on the company and the platform they use. But with some effort and open-minded management, there is (from a technical point of view) nothing standing in the way of modern C++ anywhere.

At several companies I know of, the developers are pushing for modern C++, and they are adopting more and more of it.

Sometimes it takes more effort than just installing a new compiler. When you have to deliver to an old platform (for example, Debian 6) and cannot change the OS, you need to build libstdc++ on that platform yourself and ship it with your product / link against that specific version (there are more details, but you get the point).


So some companies may stick with old C++ because of conservative management or developers who don't care about modern C++. But more and more companies are modernizing. And learning modern C++ is not a waste either, because the "old" style is usually discouraged at companies that use modern C++.


Code can "break" when switching compilers, but only at compile time, and usually because it relies on non-standard features / syntax that older compilers were more tolerant of. As for runtime behavior, I don't know of anything that breaks "quietly" (the standards committee actively tries to avoid this with every revision), and newer compilers also give you more and better warnings.

+4

A product at a large company is not necessarily built from a single code base. The code base may depend on many other libraries, including third-party ones.

The product code cannot be compiled with the latest compiler until all of its dependencies have been built with that compiler version. (At least this applies to static libraries.)

It is generally difficult to upgrade to the latest compilers for large products with many dependencies. There is also a lot of testing overhead after any move to a newer compiler.

Management will only agree to do this when the ROI (Return On Investment) is good, which is rarely the case with legacy code.

+2

It depends on the company, their embedded product line, and their customer base.

Some companies have an ingrained culture, so they insist on older methods even in the face of reasoned technical arguments for upgrading. Some companies are progressive and support adopting new compilers and standards whenever it makes sense (and, conversely, support sticking with older methods when that makes sense). Some companies are so eager to adopt the latest trend that they use unproven technologies and endanger the stability of their products.

Some embedded products use specific hardware components that are no longer manufactured and for which no low-cost alternative is available through any combination of newer hardware and software. For example, a hard real-time system may depend on obsolete hardware that meets critical timing constraints, while no readily obtainable new hardware meets the original requirements.

Some embedded products are highly critical (safety-critical, mission-critical, etc.), with an active regulatory body that insists on solid technical evidence before any update is approved (and the delegate who signs off on such an update without documented evidence will be legally liable if something goes wrong in the field, for example if the system kills someone because of a timing error). That documentation can be very expensive to produce. For a company with such a product, it may be more cost-effective to stay with an older development environment that the regulator has already accepted. The prospect of paying several million dollars to produce new evidence justifying a newer compiler tends to dampen a company's willingness to fork out for the upgrade.

Ultimately, to sell a product you must convince a paying customer to pay for it. If a large percentage of the old system's customers do not want to pay for an upgrade (after all, the existing system works fine), there is no incentive for the vendor to upgrade, unless they can make the change cost-neutral for the customer (which is often hard to achieve for a critical embedded system unless the supplier is willing to absorb a lot of the cost). Similarly, a key customer may have actually paid for the development and ongoing maintenance of the existing system, and may believe that continuing to pay for maintenance is better value for money than paying for an upgrade plus the evidence that it works as required.

+2

This year alone I have used two different compilers that are over 20 years old. However, this was not for new projects; it was for maintaining products that are themselves about 20 years old. Embedded products can be supported for decades. In one of my cases a hardware component became obsolete, which forced a small redesign that in turn required a software update. In another case the hardware design had to be updated to meet RoHS requirements, and that project also required a software update. In both cases I believe there would have been more work and more risk in using a modern compiler. The old source code and binaries have been proven by 20 years in the field.

In a third case the microcontroller was no longer available, and we redesigned around an FPGA with a soft processor. That required some porting, and I used a different, more modern compiler. But I was still very cautious about how much I changed the source code.

I would not say I was forced to use these old compilers and avoid newer compiler features. When you are maintaining old projects, it can simply be a matter of practicality. Or it may be a deliberate choice to make as few changes as possible, to avoid inadvertently breaking something that has been proven to work.

For new projects we use modern compilers, and I have never had to limit myself to an older C standard.

P.S. For an even more punishing experience, try limiting yourself to compilers that run only on Windows XP and a debugger that needs a parallel port on your PC.

+1
