If performance matters, avoid building ctypes arrays by star-unpacking a list (e.g. (ctypes.c_float * size)(*t)).
In my pack test, the fastest way to get a ctypes pointer is to use the array module together with the raw address from buffer_info() (or with the from_buffer function).
import timeit

repeat = 100
setup = ("from struct import pack; "
         "from random import random; "
         "import numpy; "
         "from array import array; "
         "import ctypes; "
         "t = [random() for _ in range(2 * 1000)]")

print(timeit.timeit(stmt="v = array('f', t); addr, count = v.buffer_info(); x = ctypes.cast(addr, ctypes.POINTER(ctypes.c_float))", setup=setup, number=repeat))
print(timeit.timeit(stmt="v = array('f', t); a = (ctypes.c_float * len(v)).from_buffer(v)", setup=setup, number=repeat))
print(timeit.timeit(stmt="x = (ctypes.c_float * len(t))(*t)", setup=setup, number=repeat))
print(timeit.timeit(stmt="x = pack('f' * len(t), *t)", setup=setup, number=repeat))
print(timeit.timeit(stmt="x = (ctypes.c_float * len(t))(); x[:] = t", setup=setup, number=repeat))
print(timeit.timeit(stmt="x = numpy.array(t, numpy.float32).data", setup=setup, number=repeat))
In my test, the array.array approach is slightly faster than Jonathan Hartley's approach, while the numpy method runs at about half that speed:
$ python3 convert.py
0.004665990360081196
0.004661010578274727
0.026358536444604397
0.0028003649786114693
0.005843495950102806
0.009067213162779808
The overall winner is struct.pack.
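To make the two fast paths concrete, here is a minimal sketch (not from the original answer; the sample values are mine) of converting a list of Python floats without the slow star-unpacking constructor:

```python
import ctypes
from array import array
from struct import pack

t = [0.5, 1.5, 2.5]  # sample values, exactly representable as float32

# array + from_buffer: the ctypes array shares memory with the
# array('f'), so no element-by-element copy is performed.
v = array('f', t)
c_arr = (ctypes.c_float * len(v)).from_buffer(v)

# struct.pack: the fastest option in the benchmark above; it returns
# an immutable bytes object (4 bytes per float).
packed = pack('%df' % len(t), *t)

print(list(c_arr))   # the same values, now viewed through ctypes
print(len(packed))   # 12 bytes for three floats
```

Note that c_arr is a writable view over v's buffer, whereas packed is a standalone bytes copy, so which one you want depends on whether the C side needs to mutate the data.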
Daniel Lemire