How to use 128-bit integers in Cython

On my 64-bit computer, the long long type has 64 bits.

 print(sizeof(long long)) # prints 8 

I need to use 128-bit integers and, fortunately, GCC supports these. How can I use them in Cython?

The following does not work. Compiling a foo.pyx file containing only

 cdef __int128_t x = 0 

gives

    $ cython foo.pyx
    Error compiling Cython file:
    ------------------------------------------------------------
    ...
    cdef __int128_t x = 0
         ^
    ------------------------------------------------------------
    foo.pyx:2:5: '__int128_t' is not a type identifier
3 answers

EDIT: this is NOT outdated; it is the right way to do this. See also @IanH's answer.

The problem is that Cython does not recognize your type, but GCC does. So we can try to trick Cython.

File helloworld.pyx:

    cdef extern from "header_int128.h":
        # This is WRONG, as this would be an int64. It is here
        # just to let Cython pass the first step, which is
        # generating the .c file.
        ctypedef unsigned long long int128

    print "hello world"

    cpdef int foo():
        cdef int128 foo = 4
        return 32

File header_int128.h:

 typedef __int128_t int128; 

File setup.py:

    from distutils.core import setup
    from Cython.Build import cythonize

    setup(ext_modules = cythonize("helloworld.pyx"))

Now, on my machine, when I run python setup.py build_ext --inplace, the first step goes through and the helloworld.c file is generated, and then the gcc compilation goes through as well.

Now, if you open the helloworld.c file, you can check that your foo variable is actually declared as int128.

Be very careful when using this workaround. In particular, Cython will not insert a conversion in the generated C code if you assign, for example, an int128 to an int64, because at that stage of the process it does not actually distinguish between the two types.
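For example, a hypothetical sketch continuing the helloworld.pyx above (the variable names are made up):

    # Cython believes int128 is an unsigned long long, so the assignment
    # below produces no conversion code and no warning; the C compiler
    # silently truncates the 128-bit value to 64 bits.
    cdef int128 big = <int128>1 << 100
    cdef long long small = big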


I'll throw in my two cents here.

First, the solution suggested in the other answers, using an external typedef, is not just a workaround; it is how the Cython docs say such things should be done. See the relevant section. To quote: "If the header file uses typedef names such as word to refer to platform-dependent flavours of numeric types, you will need a corresponding ctypedef statement, but you don't need to match the type exactly, just use something of the right general kind (int, float, etc.). For example, ctypedef int word will work okay whatever the actual size of a word is (provided the header file defines it correctly). Conversion to and from Python types, if any, will also be used for this new type."
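A minimal sketch of the pattern the docs describe (the header name myheader.h and the word type here are hypothetical):

    cdef extern from "myheader.h":
        # the header is assumed to define the real, platform-dependent "word";
        # Cython only needs a stand-in of the right general kind
        ctypedef int word

    cdef word w = 0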

In addition, there is no need to create a header file with a typedef for a type that is already pulled in by something else you include along the way. Just do this:

    cdef extern from *:
        ctypedef int int128 "__int128_t"

Or, if you want to keep the name in Cython the same as in C,

    cdef extern from *:
        ctypedef int __int128_t

Here is a test demonstrating that this works. If 128-bit arithmetic works, a > 1, and a is representable as a 64-bit integer, the first function will print the same number back. If not, integer overflow should make it print 0 instead. The second function shows what happens when 64-bit arithmetic is used.

Cython file

    # cython: cdivision = True

    cdef extern from *:
        ctypedef int int128 "__int128_t"

    def myfunc(long long a):
        cdef int128 i = a
        # set c to be the largest positive integer possible
        # for a signed 64 bit integer
        cdef long long c = 0x7fffffffffffffff
        i *= c
        cdef long long b = i / c
        print b

    def myfunc_bad(long long a):
        cdef long long i = a
        # set c to be the largest positive integer possible
        # for a signed 64 bit integer
        cdef long long c = 0x7fffffffffffffff
        i *= c
        cdef long long b = i / c
        print b

In Python, after importing both functions, myfunc(12321) prints the correct value, and myfunc_bad(12321) prints 0.
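For reference, a minimal session sketch (the module name int128_test is an assumption; use whatever name you built the .pyx under):

    >>> from int128_test import myfunc, myfunc_bad
    >>> myfunc(12321)
    12321
    >>> myfunc_bad(12321)
    0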


Here is an example using the hack suggested by @Giulio Ghirardo.

The cbitset.h file contains:

 typedef unsigned __int128 bitset; 

The bitset.pyx file contains:

    from libc.stdlib cimport malloc
    from libc.stdio cimport printf

    cdef extern from "cbitset.h":
        ctypedef unsigned long long bitset

    cdef char* bitset_tostring(bitset n):
        cdef char* bitstring = <char*>malloc(8 * sizeof(bitset) * sizeof(char) + 1)
        cdef int i = 0
        while n:
            if (n & <bitset>1):
                bitstring[i] = '1'
            else:
                bitstring[i] = '0'
            n >>= <bitset>1
            i += 1
        bitstring[i] = '\0'
        return bitstring

    cdef void print_bitset(bitset n):
        printf("%s\n", bitset_tostring(n))

The main.pyx file contains:

    from bitset cimport print_bitset

    cdef extern from "cbitset.h":
        ctypedef unsigned long long bitset

    # x contains a number consisting of more than 64 1's
    cdef bitset x = (<bitset>1 << 70) - 1
    print_bitset(x)
    # 1111111111111111111111111111111111111111111111111111111111111111111111

The setup.py file contains:

    from distutils.core import setup
    from Cython.Build import cythonize

    setup(
        name="My app that used 128 bit ints",
        ext_modules=cythonize('main.pyx')
    )
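Note that for from bitset cimport print_bitset to compile, Cython also needs a declaration file for the bitset module, and bitset.pyx itself must be built as well. Neither is shown in the answer, so treat the following bitset.pxd (and the widened cythonize call) as an assumed sketch:

    # bitset.pxd -- declarations that let main.pyx cimport from bitset.pyx
    cdef extern from "cbitset.h":
        ctypedef unsigned long long bitset

    cdef char* bitset_tostring(bitset n)
    cdef void print_bitset(bitset n)

In setup.py, cythonize(['bitset.pyx', 'main.pyx']) would then build both modules.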

Compile it with the command

 python3 setup.py build_ext --inplace 

and run the command

 python3 -c 'import main' 
