
Necessary knowledge
To perform arithmetic and logical operations, a computer has a digital circuit called the ALU (Arithmetic Logic Unit) inside its CPU (Central Processing Unit). The ALU loads data from input registers. Processor registers, like the L1 cache (which serves data requests within about three CPU clock cycles), are implemented in SRAM (Static Random Access Memory) located on the CPU chip. A processor often contains several kinds of registers, which usually differ in the number of bits they can hold.
A number stored in a fixed number of bits can take only a finite number of values. Typically, a programming language provides the following primitive numeric types (shown here in terms of Haskell):
- 8-bit numbers: 256 unique representable values
- 16-bit numbers: 65,536 unique representable values
- 32-bit numbers: 4,294,967,296 unique representable values
- 64-bit numbers: 18,446,744,073,709,551,616 unique representable values
Fixed-precision arithmetic for these types is implemented at the hardware level. The word size is the number of bits a processor can handle at a time: 32 bits on the x86 architecture and 64 bits on x86-64.
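
As a quick illustration (my own sketch, not part of the original answer), Haskell's fixed-width types from Data.Int expose exactly these bounds:

import Data.Int (Int8, Int16, Int32, Int64)

main :: IO ()
main = do
  -- Each n-bit type covers exactly 2^n distinct values.
  print (minBound :: Int8,  maxBound :: Int8)   -- (-128,127)
  print (minBound :: Int16, maxBound :: Int16)  -- (-32768,32767)
  print (minBound :: Int32, maxBound :: Int32)  -- (-2147483648,2147483647)
  print (minBound :: Int64, maxBound :: Int64)  -- (-9223372036854775808,9223372036854775807)
  -- Fixed-precision arithmetic wraps around on overflow:
  print (maxBound + 1 :: Int8)                  -- -128
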
IEEE 754 defines standard floating-point formats for bit widths of {16, 32, 64, 128}. For example, a 32-bit floating-point number (with 4,294,967,296 unique bit patterns) can hold approximate values in the range [-3.402823e38, 3.402823e38] with an accuracy of about 7 significant decimal digits.
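
A small sketch (mine, not from the answer) showing that roughly 7 significant digits survive in a 32-bit Float, while a 64-bit Double keeps far more:

main :: IO ()
main = do
  -- Float is the 32-bit IEEE 754 format: about 7 significant decimal digits.
  print (123456789 :: Float)   -- 1.23456792e8 (the trailing digits are already lost)
  print (123456789 :: Double)  -- 1.23456789e8 (64-bit format, ~15-16 digits)
  -- Near the largest finite Float mentioned above:
  print (3.402823e38 :: Float)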

In addition, GMP stands for the GNU Multiple Precision Arithmetic Library; it provides software-emulated arbitrary-precision arithmetic. The Glasgow Haskell Compiler (GHC) uses it to implement the Integer type.
GMP aims to be faster than any other bignum library for all operand sizes. Some of the important factors behind this:
- Using full words as the basic arithmetic type.
- Using different algorithms for different sizes of operands; algorithms that are faster for very large numbers are usually slower for small numbers.
- Highly optimized assembly-language code for the most important inner loops, specialized for different processors.
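
To see what this buys you in GHC (again a sketch of my own, not from the original answer): Int is a fixed-precision machine word and silently wraps, while the GMP-backed Integer grows as needed:

main :: IO ()
main = do
  print (2 ^ 64 :: Int)                 -- 0 on a 64-bit machine: the result wrapped around
  print (2 ^ 64 :: Integer)             -- 18446744073709551616: GMP-backed, no overflow
  print (product [1 .. 30] :: Integer)  -- 30! = 265252859812191058636308480000000
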
Answer
For some, the Haskell syntax may be a little difficult to understand, so here is the JavaScript version:
var integer_cmm_int2Integerzh = function(word) {
  return WORDSIZE == 32
    ? goog.math.Integer.fromInt(word)
    : goog.math.Integer.fromBits([word.getLowBits(), word.getHighBits()]);
};
Here goog.math.Integer is the class from the Google Closure library being used. The functions called are:
goog.math.Integer.fromInt = function(value) {
  if (-128 <= value && value < 128) {
    var cachedObj = goog.math.Integer.IntCache_[value];
    if (cachedObj) {
      return cachedObj;
    }
  }
  var obj = new goog.math.Integer([value | 0], value < 0 ? -1 : 0);
  if (-128 <= value && value < 128) {
    goog.math.Integer.IntCache_[value] = obj;
  }
  return obj;
};

goog.math.Integer.fromBits = function(bits) {
  var high = bits[bits.length - 1];
  return new goog.math.Integer(bits, high & (1 << 31) ? -1 : 0);
};
This is not entirely correct, since the return type must be return (s, p), where s is the size/sign word and p is a pointer to the limb data.
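
For reference, the Haskell-side binding of this routine in integer-gmp declares exactly that pair as an unboxed tuple. The sketch below is written from memory, so treat the module name and the exact set of pragmas as approximate:

{-# LANGUAGE MagicHash, UnboxedTuples, UnliftedFFITypes, GHCForeignImportPrim #-}
module Int2IntegerSketch where

import GHC.Exts (ByteArray#, Int#)

-- The Cmm routine hands back an unboxed pair: the size/sign word s
-- and the ByteArray# p holding the GMP limb data.
foreign import prim "integer_cmm_int2Integerzh"
  int2Integer# :: Int# -> (# Int#, ByteArray# #)
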
To fix this, a wrapper around GMP has to be created. This was done in the Haskell-to-JavaScript compiler (source link).
Lines 5-9
ALLOC_PRIM_N (SIZEOF_StgArrWords + WDS(1), integer_cmm_int2Integerzh, val);
p = Hp - SIZEOF_StgArrWords;
SET_HDR(p, stg_ARR_WORDS_info, CCCS);
StgArrWords_bytes(p) = SIZEOF_W;
do the following:
- allocate space for a new word
- create a pointer p to it
- set the header of the newly allocated object (SET_HDR)
- set its size to one word (SIZEOF_W bytes)
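
For completeness, here is where that (s, p) pair ends up on the Haskell side. This is a rough sketch of the Integer type as it was defined in the GMP-backed integer-gmp library of that era (the S#/J# constructor names come from that library; the comments are mine):

{-# LANGUAGE MagicHash #-}
module IntegerRepSketch where

import GHC.Exts (ByteArray#, Int#)

data Integer
  = S# Int#             -- "small" integer that fits in a single machine word
  | J# Int# ByteArray#  -- "big" integer: the size/sign word s plus the
                        -- ByteArray# p holding the GMP limbs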