Is C less efficient than assembly?

Possible duplicate:
When is assembler faster than C?

Hello,

This is a purely theoretical question: given "infinite" time to write a trivial program, and expert knowledge of both C and assembly, is it actually better to write it in assembly? Is "performance" lost when C is compiled to assembly (and then to machine code)?

By performance I mean: do modern C compilers produce worse code for certain tasks than programming directly in assembly would?

Thanks.

+6
c assembly compiler-construction
11 answers

Modern C compilers can often do better than hand-written assembly, because keeping track of which operations can overlap and which will block others is so difficult that it can only reasonably be tracked by a computer.

+11

C is not inefficient. C is a language, and we do not describe languages in terms of efficiency; we compare programs in terms of efficiency. C does not write programs; programmers write programs.

Assembly gives you tremendous flexibility compared with C, but it costs programming time. If you are a C guru and an assembly guru, you can most likely squeeze a bit more juice out of any given program by writing it in assembly, but the price for that will almost certainly be prohibitive.

Most of us are not gurus in either language. For most of us, handing performance tuning over to the C compiler is a double win: you get the wisdom of a number of assembly gurus, the people who wrote the C compiler, plus an enormous amount of time freed up to fix and improve your C program. You also get portability as a bonus.

+11

This question seems to stem from the fallacy that higher performance is automatically better. From a higher-level point of view, there is far too much going on for assembly to be the better choice in the general case. Even if performance is your primary concern, compilers usually do a better job of producing efficient assembly than you could by hand: they have a much broader "understanding" of all of your source code than any person can hold in their head, and many of their optimizations produce assembly no human would write, because it is anything but well structured.

Obviously, there are exceptions. If you need to access hardware directly, including special processor features (such as SSE), then assembly is the way to go. However, in that case you are probably better off using a library that addresses your problem directly (e.g., numeric libraries).

But you should only worry about such things if you have a genuine, specific need for more performance, and you can show that your assembly is actually faster. Genuine, specific needs include: observed and measured performance problems, embedded systems where performance is a primary design concern, and so on.

+9

Use C for most tasks, and write assembly for specific parts (for example, to use SSE, MMX, ...).

+5

Unless you are an assembly expert and/or use advanced opcodes the compiler does not emit, the C compiler will most likely win.

Try it for fun ;-)

A more realistic approach is often to let the C compiler do its thing, then profile and, if necessary, hand-tune specific sections; many compilers can dump some kind of IL (or even the low-level assembly) for inspection.

+4

Ignoring how long it takes to write the code, and assuming you have all the knowledge needed to do any task as efficiently as possible in both languages, assembly code can by definition always meet or beat the code generated by a C compiler. The compiler must itself produce assembly to accomplish the same task, and it cannot optimize everything; anything the compiler writes, you could (in theory) also write, and unlike the compiler you can sometimes take a shortcut because you know more about the situation than you can express in C.

However, this does not mean compilers do a poor job or that their code is too slow; just that it may be slower than it could be. The difference may be no more than a few microseconds, but it can still be slower.

What you should remember is that some optimizations the compiler performs are very complex: aggressive optimization tends to produce very unreadable assembly, and the code becomes much harder to reason about as a result if you do it by hand. That is why you first write it in C (or some other language), then profile it to find the problem areas, and then hand-optimize those pieces of code until they reach an acceptable speed, since the cost of writing everything in assembly is much higher and often brings no benefit.

+3

It depends. Intel's C compilers do pretty well these days. I was less impressed with the compilers for ARM: I could easily write an assembly version of an inner loop that ran twice as fast. You usually do not need assembly on x86 machines. If you want direct access to SSE instructions, look into your compiler's intrinsics!
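For illustration, a minimal sketch of such intrinsics (assuming an x86 target and a GCC/Clang-style compiler): you get the SSE instruction without writing any assembly.

```c
#include <immintrin.h>  /* SSE intrinsics; x86-only, assumed target */

/* Add four floats at once; _mm_add_ps compiles to a single ADDPS. */
void add4(const float *a, const float *b, float *out) {
    __m128 va = _mm_loadu_ps(a);   /* unaligned load of 4 floats */
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(out, _mm_add_ps(va, vb));
}
```

The compiler still handles register allocation and scheduling around the intrinsic, which is most of what makes hand-written assembly painful.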

+3

In fact, C can be faster than assembly in many cases, since compilers apply optimizations to your code. Even so, the difference in performance (if any) is usually negligible.

I would focus more on the readability and maintainability of the code base, and on whether what you are trying to do is supported in C. In many cases, assembly will let you do lower-level things that C simply cannot. For example, in assembly you can use the MMX or SSE instructions directly.

So, in the end, focus on what you want to achieve. Remember - assembly language code is terrible to maintain. Use it only when you have no other choice.

+2

No, compilers do not do a poor job at all. The amount of optimization that can be squeezed out by hand-written assembly is negligible for most programs.

That amount depends on how you define "modern C compiler." A brand-new compiler (for a chip that has just reached the market) can have many inefficiencies that will be ironed out over time. Just compile some simple programs (the string.h functions, say) and analyze what each emitted instruction does. You may be surprised at some wasteful things the compiler produces that a simple reading of the code reveals as mistakes. A mature, well-tested, carefully optimized compiler (think x86) will do a good job of emitting assembly, although a new one will still do a decent job.

In no case can C beat assembly outright. You can simply compare the two, and if your assembly turns out slower, compile with -S and submit the resulting assembly: you are guaranteed a tie. C compiles to assembly, which has a 1:1 correspondence with machine code. The computer cannot do anything that assembly cannot do, provided the complete instruction set is published.

In some cases, C is not expressive enough to be fully optimized. A programmer may know something about the nature of the data that simply cannot be expressed in C, so the compiler cannot take advantage of that knowledge. Of course, C is expressive, close to the metal, and very good for optimization, but full optimization is not always possible.
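One concrete case of this expressiveness gap: before C99, there was no way to tell the compiler that two pointers never alias; the C99 `restrict` qualifier closes exactly that gap. A minimal sketch (the function is a made-up example):

```c
/* With `restrict`, the compiler may assume dst and src never overlap,
   so it is free to vectorize or reorder the loads and stores. */
void scale(float *restrict dst, const float *restrict src,
           float k, int n) {
    for (int i = 0; i < n; i++)
        dst[i] = k * src[i];
}
```

Without `restrict`, the compiler must assume a store to `dst[i]` might change `src[i+1]`, which blocks many optimizations an assembly programmer would apply without thinking.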

The compiler cannot define "better" the way a person can. I realize you said trivial programs, but even in the simplest (useful) algorithms there is a trade-off between size and speed. The compiler cannot make that call at a finer granularity than the -Os/-O[1-3] flags, whereas a person can know what "better" means in the context of the program's purpose.

Some architecture-specific assembly instructions cannot be expressed in C. This is what asm() statements are for. Sometimes this is not about optimization at all; there is simply no way to express in C that a given line should use, say, an atomic test-and-set operation, or that we want to issue an SVC interrupt with parameter X encoded in it.
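As a hedged sketch of what such an asm() statement looks like (GCC extended inline assembly is assumed, with an x86 target; the ADD here is deliberately trivial so the mechanism, not the instruction, is the point):

```c
/* Add two ints via an explicit ADD instruction (GCC syntax, x86). */
static inline int add_via_asm(int a, int b) {
    int result = a;
    __asm__ ("addl %1, %0"      /* result += b */
             : "+r"(result)     /* in/out: any general-purpose register */
             : "r"(b));         /* input: any general-purpose register */
    return result;
}
```

The constraint strings tell the compiler how the instruction uses each operand, so it can still allocate registers and optimize around the embedded instruction.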

Despite the points above, C is an order of magnitude more efficient to program in and to maintain. If performance is important, assembly analysis is needed and optimizations will likely be found, but the trade-off in developer time and effort is rarely worth it for complex PC programs. For very simple programs that must be as fast as absolutely possible (e.g., an RTOS) or that have severe memory limits (e.g., an ATtiny with 1 KB of flash and 64 bytes of RAM), assembly may be the only way to go.

+2

Given infinite time and an extremely deep understanding of how a modern processor works, you could actually write the "perfect" program (that is, the best-performing one for that machine), but you would have to consider, for every instruction in your program, how the processor behaves in that context, pipelining, cache-related optimizations, and more. Compilers are built to produce the best assembly code they can. You can rarely make sense of the assembly a modern compiler generates, because it tends to be extreme. Occasionally compilers fail at this, because they cannot always foresee what will happen. They usually do fine, but sometimes they fail...

Summing up: knowing that even with complete command of C and assembly you will not do a better job than the compiler in 99.99% of cases, and considering that programming something in C can be 10,000 times faster than programming the same thing in assembly, a more sensible way to spend your time is to optimize what the compiler got wrong in the remaining 0.01% rather than reinvent the wheel.

+1

It depends on the compiler you use. This is not a property of C or of any language. In theory, you could build a compiler with such a sophisticated AI that it compiles Prolog into more efficient machine language than GCC can produce from C.

It depends 100% on the compiler and 0% on C.

That said, C was designed as a language for which it is easy to write an optimizing compiler down to assembly, where assembly here means the instructions of a von Neumann machine. It depends on the target: some languages, such as Prolog, would probably map more easily onto hypothetical "reduction machines".

But given that assembly is the target language of your C compiler (technically you could compile C to Brainfuck or to Haskell; there is no theoretical difference), then:

  • You can write an optimally fast program in assembly itself (duh).
  • You could write a C compiler that always produces the most optimal assembly. That is, there exists a function from each program to the most optimal assembly with the same I/O behavior, and this function is computable, although perhaps not deterministically.
  • The same is possible for every other programming language in the world.
0
