Well, I experimented a little and looked at the numbers:
```python
import sys

dct = {'four': 3, 'three': 2, 'two': 1, 'one': 0}
print(sys.getsizeof(dct))                             # 272
print(sys.getsizeof(dict(dct)))                       # 272
print(sys.getsizeof({k: v for k, v in dct.items()}))  # 272

dct = {'four': 3, 'three': 2, 'five': 4, 'two': 1, 'one': 0}
print(sys.getsizeof(dct))                             # 272
print(sys.getsizeof(dict(dct)))                       # 272
print(sys.getsizeof({k: v for k, v in dct.items()}))  # 272

dct = {'six': 5, 'three': 2, 'two': 1, 'four': 3, 'five': 4, 'one': 0}
print(sys.getsizeof(dct))                             # 1040
print(sys.getsizeof(dict(dct)))                       # 656
print(sys.getsizeof({k: v for k, v in dct.items()}))  # 1040

dct = {'seven': 6, 'six': 5, 'three': 2, 'two': 1, 'four': 3, 'five': 4, 'one': 0}
print(sys.getsizeof(dct))                             # 1040
print(sys.getsizeof(dict(dct)))                       # 656
print(sys.getsizeof({k: v for k, v in dct.items()}))  # 1040

dct = {'seven': 6, 'six': 5, 'three': 2, 'two': 1, 'four': 3, 'five': 4, 'eight': 7, 'one': 0}
print(sys.getsizeof(dct))                             # 656
print(sys.getsizeof(dict(dct)))                       # 1040
print(sys.getsizeof({k: v for k, v in dct.items()}))  # 1040
```
I’m not sure what kind of optimization is going on here, but I assume these construction paths use different sizing heuristics — that is, different rules for when to allocate how much memory for the hash table. For example, with eleven or more elements you get another odd mismatch:
```python
dct = {1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 6, 7: 7, 8: 8, 9: 9, 10: 10, 11: 11}
print(sys.getsizeof(dct))
```
So this is probably just some “optimization” of memory consumption when dictionaries are created in different ways. Why the non-monotonic outlier exists for literal syntax with 6 or 7 elements, I don’t know — maybe a memory optimization went wrong, and it’s a bug that allocates too much memory? I haven’t read the source code yet.
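One way to see the allocation steps directly is to build a dict one item at a time and watch its reported size. This is only a sketch of the general resize behavior (CPython over-allocates the hash table and grows it in jumps when a load-factor threshold is crossed), not an explanation of the literal-syntax outlier above; the exact byte counts also differ across CPython versions and platforms, so only the step pattern matters:

```python
import sys

# Incrementally build a dict and record its size after each insertion.
# getsizeof() stays flat between resizes and then jumps when the hash
# table grows, so the list shows plateaus punctuated by steps.
d = {}
sizes = []
for i in range(16):
    d[i] = i
    sizes.append(sys.getsizeof(d))
print(sizes)

# A fresh copy is sized for the current number of items, so it can be
# smaller than a dict whose table grew more generously (it may also be
# the same size — this depends on where the thresholds fall).
print(sys.getsizeof(d), sys.getsizeof(dict(d)))
```

The pattern of plateaus and jumps is the sizing heuristic at work; the mismatches above presumably come from the literal, `dict()`, and comprehension paths hitting those thresholds differently.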