Are there C99 compilers where, with default settings, -1 >> 1 != -1?

In discussions of the right-shift operator, people often point out that the C standard explicitly says the effect of right-shifting a negative number is implementation-defined. I can understand the historical background of that statement, given that C compilers were used to generate code for many platforms that do not use two's-complement arithmetic. However, all new development that I am aware of centers on processors that have no native support for any integer arithmetic other than two's complement.

If code wants to perform a floored signed integer division by two, and it will only ever run on present or future architectures, is there any realistic danger that some future compiler will interpret the right shift as doing anything else? If there is a realistic possibility, is there a good way to guard against it without adversely affecting readability, performance, or both? Are there other dependencies that would justify outright assuming the operator's behavior (e.g., the code would be useless on implementations that do not support feature X, and implementations are unlikely to support X unless they also use sign-extending right shifts)?

Note: I ask under the C99 and C11 tags because I would expect the newer language features to be among the things that, if supported, would suggest the platform is likely to use an arithmetically-equivalent (sign-extending) right shift, and I would be interested to know of any C99 or C11 compilers that implement right shift any other way.
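For concreteness, here is a minimal sketch of the kind of code in question (the helper names are mine, not from any library). The shift version assumes a two's-complement implementation with a sign-extending right shift; the second version relies only on behavior the standard fully defines:

    #include <stdio.h>

    /* Floored division by two via the shift; relies on the
     * implementation-defined result of right-shifting a negative value
     * (C99 6.5.7p5), i.e. assumes a sign-extending shift. */
    static int floor_div2_shift(int x)
    {
        return x >> 1;
    }

    /* Floored division by two using only fully defined operations:
     * C99 division truncates toward zero, so subtract 1 whenever the
     * remainder is negative. */
    static int floor_div2_portable(int x)
    {
        return x / 2 - (x % 2 < 0);
    }

    int main(void)
    {
        int x;
        for (x = -4; x <= 4; x++)
            printf("%2d: shift=%2d portable=%2d\n",
                   x, floor_div2_shift(x), floor_div2_portable(x));
        return 0;
    }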

+7
c++ c bit-shift c99 c11
2 answers

This is just one of many reasons why this is so, but consider the case of signal processing:

 1111 0001 >> 1
 0000 1111 >> 1

With the arithmetic shift right (SRA) you are referring to, you get the following:

 1111 0001 >> 1 = 1111 1000   OR   -15 >> 1 = -8
 0000 1111 >> 1 = 0000 0111   OR    15 >> 1 = 7

So what's the problem? Consider a digital signal with an amplitude of 15 "units". Dividing this signal by 2 should behave the same regardless of sign. However, with SRA as shown above, a positive signal of 15 yields a signal with amplitude 7, while a negative signal of 15 yields a signal with amplitude 8. This asymmetry introduces a DC bias in the output. For this reason, some DSP processors choose to implement a round-toward-zero "arithmetic" shift, or other methods entirely. Because the C99 standard is worded the way it is, those processors can still be conforming.

On these processors -1 >> 1 == 0
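To make the DC-bias point concrete, a small demonstration, assuming the common case of a two's-complement machine whose >> sign-extends (so it rounds toward negative infinity); plain / 2 shows the round-toward-zero result a DSP-style shifter would give instead:

    #include <stdio.h>

    int main(void)
    {
        int samples[] = { 15, -15, 1, -1 };
        int i;

        for (i = 0; i < 4; i++) {
            int x = samples[i];
            /* x >> 1 : floor     -> -15 becomes -8, -1 stays -1 (DC bias)
             * x / 2  : toward 0  -> -15 becomes -7, -1 becomes  0        */
            printf("%3d:  >>1 = %3d   /2 = %3d\n", x, x >> 1, x / 2);
        }
        return 0;
    }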

Related Wikipedia link

+3
Theoretically, there are subtleties in current compiler implementations that can exploit this "implementation-defined" latitude, quite apart from what the back-end processor does with the actual integers in its registers (or register file, or memory locations, or whatever):
  • Cross-compilers are commonplace: the compiler may rely on host-specific behavior when folding simple calculations at compile time. Consider the case where the target architecture shifts one way and the host another. In your specific example, compile-time constants could come out as -1 even though the same expression emitted as assembly for the target would give 0 (I cannot think of such an architecture offhand), and vice versa. Short of a user complaint, nothing would force a compiler that does not otherwise care to handle this consistently. (See the sketch after this list.)

  • Consider Clang and other compilers that generate an abstract intermediate representation. Nothing stops that machinery from evaluating some operations down to the last bit at compile time on some code paths (i.e., wherever it can reduce code to constants via constant folding), while leaving the assembly back end to resolve the same operation at runtime, possibly differently, on other paths. In other words, you could see mixed behavior. In this abstract setting there is no obligation on the implementer beyond what the C standard requires. Think of the case where all integer math is carried out by arbitrary-precision arithmetic libraries instead of mapping directly to the processor's integers. The implementation may decide that, since the behavior is left to it, it will simply return 0. It may do the same for any signed arithmetic operation whose behavior is undefined, and there are plenty of those in the ISO C standard, particularly around overflow and wrapping.

  • Consider the (theoretical) case where, instead of emitting a full instruction to perform the low-level op, the compiler picks up a sub-operation. An example is ARM with its barrel shifter: an explicit instruction (ADD or whatever) has its own range and semantics, but the embedded shift sub-operation can work under slightly different limits. The compiler can exploit this to the point where behavior differs, e.g., one form may set the result flags and the other not. I can't think of a specific case where it matters, but it is possible that some strange instruction handles only a subset of the "otherwise normal behavior", and the compiler may consider using it a good optimization, since undefined behavior really should mean undefined :-)
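As a purely hypothetical illustration of the constant-folding and mixed-behavior points in the first two bullets (no real compiler is claimed to behave this way): the enum value below is necessarily computed inside the compiler, while volatile keeps the second shift out of the constant folder so it is normally left to a target instruction. On a "mixed" cross-compilation setup the two could in principle disagree; every mainstream compiler today prints -1 for both.

    #include <stdio.h>

    /* Folded at compile time, using whatever the compiler's own
     * (host-side) arithmetic does with -1 >> 1. */
    enum { FOLDED = -1 >> 1 };

    int main(void)
    {
        /* volatile prevents constant folding, so this shift is normally
         * emitted as a target instruction and evaluated at runtime. */
        volatile int v = -1;
        int at_runtime = v >> 1;

        printf("folded = %d, runtime = %d\n", FOLDED, at_runtime);
        return 0;
    }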

Weird architectures (where you actually get weird behavior at runtime) aside, these are some of the reasons I can think of why you cannot assume anything beyond what the standard actually promises.

Having said all this, we should also consider:

  • You asked about C99 compilers. Most weird architectures (i.e., embedded targets) do not have a C99 compiler at all.
  • Most "large-scale" compilers implement very large databases of user codes and, as a rule, nightmares with support for the face, overly optimizing minor details. So they do not. Or they do it like other players do.
  • In the specific case of signed-integer "undefined behavior", the complementary unsigned operation is usually well-defined; i.e., I have seen code cast signed to unsigned just to perform the operation and then cast the result back. A sketch of that pattern follows below.
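A sketch of that cast-to-unsigned pattern applied to right shift (the helper name asr32 is mine): the shift is done on the unsigned bit pattern and the vacated high bits are filled in by hand, so the result does not depend on the implementation-defined signed >>. The final conversion back to a signed type is itself implementation-defined in C99, but yields the expected value on every two's-complement implementation.

    #include <stdint.h>
    #include <stdio.h>

    /* Arithmetic shift right of a 32-bit value without shifting a
     * negative signed operand.  Requires n < 32. */
    static int32_t asr32(int32_t x, unsigned n)
    {
        uint32_t u = (uint32_t)x;
        uint32_t r = u >> n;                 /* logical shift */
        if (x < 0)
            r |= ~(UINT32_MAX >> n);         /* replicate the sign bit */
        return (int32_t)r;  /* impl-defined conversion; two's complement in practice */
    }

    int main(void)
    {
        printf("%d %d %d\n", asr32(-1, 1), asr32(-15, 1), asr32(15, 1));
        /* prints: -1 -8 7 */
        return 0;
    }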

I think the best direct answer I could give is "you can assume that all of this doesn't matter, but maybe you shouldn't."

+1
