In most cases, @Martijn Pieters' answer is correct, at least in theory. In practice, however, there are many things to consider when it comes to performance.
I recently ran into the problem of hashing long strings as keys: in an exercise I was working on, a timeout error was caused purely by the hashing of long Python dictionary keys. I knew this because I solved the issue by using a JavaScript object as a "dictionary" instead, and it worked just fine, with no timeout.
Then, since my keys were actually long strings representing lists of numbers, I made them tuples of numbers instead (a dictionary key can be any immutable object). That worked fine as well.
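To illustrate, here is a minimal sketch of that conversion; the comma-separated string format is my assumption, not necessarily the exact format from my exercise:

```python
# Hypothetical example: replacing a long string key with a tuple key.
# The comma-separated format is an assumption; adapt it to your data.
numbers = [3, 141, 59, 26, 535, 897, 932, 384, 626]

string_key = ",".join(str(n) for n in numbers)  # "3,141,59,26,..."
tuple_key = tuple(numbers)                      # (3, 141, 59, 26, ...)

counts = {}
counts[tuple_key] = counts.get(tuple_key, 0) + 1  # tuples are hashable
```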
At the same time, I timed the hashing that @Martijn Pieters describes in his answer, using a long string of a list of numbers as the key, against the tuple version. On repl.it, their online Python interpreter, the tuple version took longer. I am not talking about a difference of 0.1; it was the difference between 0.02 and 12.02.
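For reference, here is a sketch of how such a comparison could be timed. The key contents, the repeat count, and the use of `timeit` are my choices for illustration, not the exact benchmark I ran:

```python
# Hedged timing sketch: compare hashing a long string key vs. a tuple key.
import timeit

numbers = list(range(1000))

# str objects cache their hash after the first call, so we rebuild the
# key on every round to measure the actual hashing cost each time.
t_str = timeit.timeit(lambda: hash(",".join(map(str, numbers))), number=10_000)
t_tup = timeit.timeit(lambda: hash(tuple(numbers)), number=10_000)

print(f"string key: {t_str:.4f}s, tuple key: {t_tup:.4f}s")
```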
Strange, isn't it?! :>
Now the thing is, every environment is different, and the volume of your operations accumulates. So you CANNOT simply say whether a particular operation will be fast or slow. Even if a single operation takes only 0.01 s, repeating it just 1,000 times makes the user wait 10 s.
For any production environment, you really want to optimize your algorithm where necessary and always aim for the best design. For regular software, this saves your users valuable time. For cloud services, it is dollar bills we are talking about.
Finally, given the conflicting results I got in different environments, I definitely DO NOT recommend using long strings as keys. You generally want to use short identifiers as keys and, if needed, iterate over the string values to find an identifier. But if you must use long strings as keys, consider limiting the number of operations on the dictionary. Storing two versions is certainly a waste of space/RAM; performance vs. memory is a lesson of its own.
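A minimal sketch of the identifier-as-key approach (the names and data here are illustrative, not from my original code):

```python
# Short integer ids key the dictionary; the long strings live as values.
records = {
    0: "3,141,59,26,535,897,932,384,626",
    1: "2,718,281,828,459,045",
}

def find_id(long_string):
    """Iterate over the values to recover the id for a given long string."""
    for key, value in records.items():
        if value == long_string:
            return key
    return None

print(find_id("2,718,281,828,459,045"))  # 1
```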