Integer multiplication mod 2³² in ActionScript 3

Has anyone come across an authoritative specification of how int and uint arithmetic works in ActionScript 3? (By "authoritative" I mean either "comes from Adobe" or "was declared authoritative by Adobe".) In particular, I am looking for a supported way to do integer multiplication modulo 2³². I could not find this covered in any Adobe documentation.

ActionScript claims to be based on ECMAScript, but ECMAScript does not do integer arithmetic at all. It does everything on IEEE-754 doubles and reduces the result modulo 2³² before bitwise operations, which in most cases imitates integer arithmetic. However, this does not work for multiplication: the true result of, say, 0x10000001 * 0x0FFFFFFF is too long for the double mantissa, so the lower bits are lost if the specification is followed to the letter.
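To make the worry concrete, here is a small illustration (a sketch added for clarity, not part of the original question; the traced value assumes the product is rounded to a 64-bit double before conversion, as the ECMAScript text would have it):

 // Strict ECMAScript-style arithmetic on Number (IEEE-754 double) values.
 // 0x10000001 * 0x0FFFFFFF = (2^28 + 1)(2^28 - 1) = 2^56 - 1, which needs
 // 56 bits of mantissa and therefore rounds to 2^56 as a double.
 var a:Number = 0x10000001;
 var b:Number = 0x0FFFFFFF;
 var p:Number = a * b;
 trace(uint(p)); // 0 if the product was rounded to a double (2^56 mod 2^32);
                 // the exact low 32 bits would be 4294967295 (0xFFFFFFFF)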

Now enter ActionScript. I experimentally discovered that multiplying two int or uint variables and immediately casting the product to int or uint always gives me the exact result. However, the generated AVM2 bytecode simply contains the ordinary multiply opcode, with no indication that it should produce an integer result rather than a floating-point one; the virtual machine would have to look ahead to find that out. I am worried that I have simply been lucky in my experiments and got the extra accuracy as a bonus, not as something I can rely on.
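The kind of experiment described above looks roughly like this (a reconstruction; the exact test code is not in the question):

 var x:int = 0x10000001;
 var y:int = 0x0FFFFFFF;
 var low:uint = uint(x * y); // cast the product straight back to an integer type
 trace(low); // 4294967295 in the experiments described above, i.e. the exact
             // low 32 bits; strict double rounding would instead give 0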

(For one thing, all my experiments were performed with the x86 Flash Player. Perhaps it keeps intermediate results as Intel 80-bit doubles, or keeps a 64-bit integer on the evaluation stack until it finds out what it will be needed for. Neither would come as easily on, say, a non-x86 tablet without a 32 × 32 → 64 multiplication instruction, so the VM there might decide to reduce the accuracy to what the ECMAScript standard specifies.)

Status after 24 hours: Mike Welch did some research and provided some very useful links, but unfortunately not enough to close the question. Anyone else?

(tl;dr of the discussion in the comments: whitequark partially refutes one of my hypothetical reasons why the answer might be "no". His points have merit, but of course they are no indication that the answer is "yes".)

language-lawyer actionscript actionscript-3
1 answer

ActionScript 3 was based on ECMAScript 4, which includes true 32-bit int and uint operations. For example, the multiply_i opcode performs integer multiplication (source: AVM2 Overview).

Unfortunately, the Adobe ActionScript compiler does not seem to emit these opcodes; it emits the floating-point versions instead, e.g. multiply, which presumably treats the operands as 64-bit floats. This may be in line with the ECMAScript specification, which states that ints are promoted to double during math operations to handle overflow. If the player really does a 64-bit floating-point multiplication under the hood and then converts back to int, there should be a loss of precision.

Despite this, Flash Player does not seem to lose precision when the result is immediately cast back to int. For example:

 var n:int = 0x7FFFFFFF;
 var n2:int = n * n;
 trace(n2);

Although this code compiles to a multiply opcode, it prints 1 in Flash Player, which is the correct result if no precision is lost: 0x7FFFFFFF² = 2⁶² − 2³² + 1, whose low 32 bits are 1, whereas rounding the product to a 64-bit double would give 2⁶² − 2³², whose low 32 bits are 0. It is unclear whether this behavior is consistent across platforms. I tested it in Flash Player on several platforms, including several mobile phones, and the result seemed to be consistent. However, running the same code through the Tamarin shell in interpreted mode prints 0! (JIT mode still returned 1, so this behavior appears to be a side effect of the JIT.) It is therefore risky to rely on it.

Using the multiply_i opcode instead should behave correctly. HaXe will use this opcode when working with int, and Apparat can also be used to apply it.
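If neither of those is an option, one possible pure-ActionScript workaround (a sketch added for illustration, not part of the original answer) is to split the operands into 16-bit halves so that every intermediate product and sum fits exactly in a double's 53-bit mantissa, making the result independent of whatever extra precision the VM does or does not keep:

 // Multiply modulo 2^32 using only intermediates that are exact in a double.
 function mulmod32(a:uint, b:uint):uint
 {
     var aHi:uint = a >>> 16;
     var aLo:uint = a & 0xFFFF;
     var bHi:uint = b >>> 16;
     var bLo:uint = b & 0xFFFF;
     // The aHi*bHi*2^32 term vanishes mod 2^32; of the cross terms, only the
     // low 16 bits survive the shift into the upper half of the result.
     var cross:uint = (aHi * bLo + aLo * bHi) & 0xFFFF;
     // cross * 0x10000 <= 0xFFFF0000 and aLo * bLo < 2^32, so their sum is
     // below 2^33 and exact; the uint return type reduces it modulo 2^32.
     return cross * 0x10000 + aLo * bLo;
 }
 
 trace(mulmod32(0x10000001, 0x0FFFFFFF)); // 4294967295, the exact low 32 bits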
