Using the IPython %timeit magic, I get:
    In [218]: X = np.ones((100,100))
    In [219]: timeit X.T
    1000000 loops, best of 3: 379 ns per loop
    In [220]: timeit X.transpose()
    1000000 loops, best of 3: 470 ns per loop
    In [221]: timeit np.transpose(X)
    1000000 loops, best of 3: 993 ns per loop
    In [222]: timeit X+1
    10000 loops, best of 3: 21.6 µs per loop
So yes, .T is the fastest and the np.transpose function is the slowest. But compare these times with the time for a simple addition, or for a copy or a slice:
    In [223]: timeit X.copy()
    100000 loops, best of 3: 10.8 µs per loop
    In [224]: timeit X[:]
    1000000 loops, best of 3: 465 ns per loop
Transposing, in all its forms, returns a new array object with new shape and strides, but with a shared data buffer (look at the .__array_interface__ dictionary to see this). So it takes about the same time as other actions that return a view. None of the transpose variants copies the data or iterates through it; the time differences are just call overhead.
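A quick check (my own illustration, not from the timings above) that a transpose shares its parent's buffer: the 'data' pointer in __array_interface__ is identical, while the shape and strides are reversed:

```python
import numpy as np

X = np.ones((100, 100))
Y = X.T

# Same underlying memory address...
print(X.__array_interface__['data'][0] == Y.__array_interface__['data'][0])
# ...but reversed strides, so no data was copied or moved.
print(X.strides, Y.strides)
```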
Again with IPython magic:

    np.transpose??
    def transpose(a, axes=None):
        try:
            transpose = a.transpose
        except AttributeError:
            return _wrapit(a, 'transpose', axes)
        return transpose(axes)
So np.transpose(X) ends up calling X.transpose().
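A side effect of that dispatch (my own illustration): the _wrapit fallback means np.transpose also accepts objects with no .transpose method, such as plain nested lists, and for arrays the function and the method agree:

```python
import numpy as np

# A list has no .transpose attribute, so np.transpose falls back to
# converting it to an array first.
a = np.transpose([[1, 2], [3, 4]])
print(a)  # transposed 2x2 array

# For an ndarray, the function just delegates to the method.
X = np.arange(6).reshape(2, 3)
print(np.array_equal(np.transpose(X), X.transpose()))  # True
print(np.array_equal(X.T, X.transpose()))              # True
```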
I'd have to look at the numpy code, but I recall that .T is implemented as an attribute (not quite the same as a property). I suspect it is faster because it does not handle the axes parameter and thus saves a C-level function call or two.
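That axes parameter is the one real difference in behavior (a sketch of my own): .T takes no arguments and always reverses the axes, whereas .transpose() additionally accepts an explicit axes permutation, which .T never has to parse:

```python
import numpy as np

x = np.arange(24).reshape(2, 3, 4)

print(x.T.shape)                   # axes reversed
print(x.transpose().shape)         # same as .T when no axes given
print(x.transpose(1, 0, 2).shape)  # explicit permutation, .T can't do this
```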
hpaulj