I'm curious about how memory management differs between a `bytearray` and a `list` in Python.
I found several related questions, such as "The difference between bytearray and a list", but none of them exactly answered mine. Specifically:
```python
>>> from array import array
>>> x = array("B", (1, 2, 3, 4))
>>> x.__sizeof__()
36
>>> y = bytearray((1, 2, 3, 4))
>>> y.__sizeof__()
32
>>> z = [1, 2, 3, 4]
>>> z.__sizeof__()
36
```
As we can see, there is a size difference between the `list`/`array.array` (36 bytes for 4 elements) and the `bytearray` (32 bytes for 4 elements). Can someone explain why? For a `bytearray` it makes sense to me that it takes 32 bytes of memory for 4 elements (`4 * 8 == 32`), but how should this be interpreted for a `list` and an `array.array`?
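To make the comparison less sensitive to one-off overheads, I also tried it with more elements (a rough probe; my assumption, possibly wrong, is that a list stores a pointer per element while `bytearray`/`array("B")` store one raw byte per element, so the gap should widen):

```python
import sys
from array import array

vals = list(range(100))

# Compare per-container footprint for the same 100 small values.
# Note: for the list this does NOT count the int objects it points to,
# only the container itself.
for name, obj in (("list", list(vals)),
                  ("bytearray", bytearray(vals)),
                  ("array('B')", array("B", vals))):
    print(name, sys.getsizeof(obj))
```

On my machine the list is several times larger than the other two, which seems consistent with the pointer-per-element guess.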
```python
# Let's take the case of bytearray (which makes more sense to me at least :p)
>>> for i in y:
...     print(i, ": ", id(i))
1 : 499962320
2 : 499962336
```
Why is the difference between the ids of two adjacent elements 16 here, when each element takes up only 8 bytes? Does this mean that each memory-address pointer points to a nibble?
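While experimenting, I noticed something that may be relevant (CPython-specific behavior, if I understand it correctly): iterating a `bytearray` yields ordinary `int` objects, and CPython caches the small ints, so the ids look like addresses of the cached int objects rather than of bytes inside the buffer:

```python
# Probe: are the ids those of cached int objects, not buffer bytes?
# CPython caches small ints (-5..256) as singletons, so indexing the
# bytearray should hand back the very same cached object as the literal.
y = bytearray((1, 2, 3, 4))

ids = [id(i) for i in y]
print([b - a for a, b in zip(ids, ids[1:])])  # constant spacing on my build

print(id(y[0]) == id(1))  # element is the cached int object itself
```

If that's right, the 16-unit spacing would be the size of a cached int object in that build, not the address granularity of the bytearray's storage, but I'd like confirmation.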
Also, what are the criteria for allocating memory to an integer? I have read that Python allocates more memory based on the value of the integer (correct me if I am wrong): the larger the number, the more memory.
For instance:
```python
>>> y = 10
>>> y.__sizeof__()
14
>>> y = 1000000
>>> y.__sizeof__()
16
>>> y = 10000000000000
>>> y.__sizeof__()
18
```
What are the criteria by which Python allocates memory?
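From what I could gather (an assumption on my part, not verified against the CPython source), an int is stored as a fixed object header plus a variable number of fixed-size internal "digits", so the size should grow in steps as the value needs more digits:

```python
import sys

# Size grows stepwise as the value needs more internal "digits";
# the exact byte counts differ between 32-bit and 64-bit builds.
for n in (10, 1000000, 10000000000000, 10**30):
    print(n, sys.getsizeof(n))
```

That would explain the 14 / 16 / 18 progression above (each step being one more digit on a 32-bit build), but I'd appreciate confirmation of the actual rule.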
And why does Python take up so much more memory, while C takes up only 8 bytes (mine is a 64-bit machine), when the values are perfectly within the range of a 64-bit integer (`2 ** 64`)?
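To make that comparison concrete, I sketched this (using `ctypes` just to get the C size; my guess is that the extra cost is per-object bookkeeping, but I'm not sure that accounts for all of it):

```python
import ctypes
import sys

# A C int64 is exactly 8 bytes of raw storage...
print(ctypes.sizeof(ctypes.c_int64))  # 8

# ...while a Python int is a full heap object: reference count,
# type pointer, digit count, and then the value's digits.
print(sys.getsizeof(2**60))
```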
Metadata:
Python version: '3.4.3 (v3.4.3:9b73f1c3e601, Feb 24 2015, 22:43:06) [MSC v.1600 32 bit (Intel)]'
Machine Architecture: 64-bit
PS: Please point me to a good article where Python's memory management is explained in more depth. I spent almost an hour trying to understand these things and ended up asking this question on SO. :(