As all the previous answers say, the hash code is used for hashing in collections, and it can also serve as a quick negative check for equality: if two hash codes differ, the objects cannot be equal. So yes, a slow hashCode() can slow down your application. There are, of course, more use cases.
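A minimal sketch of that negative check, using a hypothetical Point class (not from the question) whose equals() bails out early when the hash codes differ:

```java
import java.util.Objects;

// Hypothetical value class, only to illustrate the "negative check" idea:
// different hash codes imply the objects are not equal, so equals() can return early.
final class Point {
    private final int x;
    private final int y;

    Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public int hashCode() {
        return Objects.hash(x, y);
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point other = (Point) o;
        // Negative check: different hash codes guarantee "not equal".
        // (Only worthwhile if the hash code is cached or very cheap to compute.)
        if (hashCode() != other.hashCode()) return false;
        return x == other.x && y == other.y;
    }
}
```

Note that the reverse does not hold: equal hash codes do not imply equal objects, so the full comparison is still needed on a hash match.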
First of all, I would say that the approach (and whether to override hashCode() at all) depends on the kind of objects you are talking about.
- The default implementation of hashCode() is as fast as it gets, since it is effectively unique per instance. That is good enough in many cases.
- It is not good enough if you use a HashSet and expect it not to store two "same" objects. The whole point lies in the word "same".
"Same" may mean "the same instance", it may mean an object with the same (database) identifier when your object is an entity, or it may mean an object whose properties are all equal. This is where performance can be affected.
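To make the three meanings concrete, here is a rough sketch; the SessionHandle, Customer and Money classes are made up for illustration and are not part of the question:

```java
import java.util.Objects;

// "Same instance": do nothing -- the default Object.hashCode()/equals()
// already compare identity, and they are as fast as it gets.
final class SessionHandle { }

// "Same identifier": a hypothetical entity compared only by its (database) id.
final class Customer {
    private final long id;
    private String name;          // intentionally not part of equality

    Customer(long id, String name) { this.id = id; this.name = name; }

    @Override public int hashCode() { return Long.hashCode(id); }

    @Override public boolean equals(Object o) {
        return o instanceof Customer && ((Customer) o).id == id;
    }
}

// "All properties equal": a hypothetical value object compared field by field.
final class Money {
    private final String currency;
    private final long cents;

    Money(String currency, long cents) { this.currency = currency; this.cents = cents; }

    @Override public int hashCode() { return Objects.hash(currency, cents); }

    @Override public boolean equals(Object o) {
        if (!(o instanceof Money)) return false;
        Money m = (Money) o;
        return cents == m.cents && Objects.equals(currency, m.currency);
    }
}
```

The more fields the equality definition covers, the more work each hashCode() and equals() call does.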
But one of those properties can itself be an object that computes its hashCode() on demand, so a single call to hashCode() on the root object can end up evaluating the hash codes of a whole object tree.
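A sketch of that cascading cost, using hypothetical Order and OrderLine classes (equals() omitted for brevity):

```java
import java.util.List;
import java.util.Objects;

// Calling hashCode() on an Order ends up calling hashCode() on every
// OrderLine it holds, so one call walks the whole object tree.
final class OrderLine {
    private final String sku;
    private final int quantity;

    OrderLine(String sku, int quantity) { this.sku = sku; this.quantity = quantity; }

    @Override public int hashCode() { return Objects.hash(sku, quantity); }
}

final class Order {
    private final String number;
    private final List<OrderLine> lines;

    Order(String number, List<OrderLine> lines) { this.number = number; this.lines = lines; }

    @Override public int hashCode() {
        // List.hashCode() iterates over the lines and calls OrderLine.hashCode()
        // for each of them -- the cost grows with the size of the tree.
        return Objects.hash(number, lines);
    }
}
```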
So what would I recommend? Identify and clarify what you actually need. Do you really need to distinguish between different instances, is the identifier what matters, or is it a value object?
It also depends on immutability. You can compute the hash code once when the object is constructed (all properties set via the constructor, with getters only) and return that cached value from every hashCode() call. The other option is to recompute the hash code whenever a property changes. You need to decide whether reads or writes dominate in your case.
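Both options, sketched with hypothetical ImmutablePerson and MutablePerson classes (the field names are assumptions, not anything from the question):

```java
import java.util.Objects;

// Option 1: an immutable object computes its hash code once in the
// constructor and just returns the cached value afterwards.
final class ImmutablePerson {
    private final String firstName;
    private final String lastName;
    private final int cachedHash;

    ImmutablePerson(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.cachedHash = Objects.hash(firstName, lastName);  // paid once, at construction
    }

    @Override public int hashCode() { return cachedHash; }

    @Override public boolean equals(Object o) {
        if (!(o instanceof ImmutablePerson)) return false;
        ImmutablePerson p = (ImmutablePerson) o;
        return Objects.equals(firstName, p.firstName) && Objects.equals(lastName, p.lastName);
    }
}

// Option 2: a mutable object re-computes the hash code whenever a property
// changes, so reads stay cheap and writes pay the cost.
final class MutablePerson {
    private String firstName;
    private String lastName;
    private int cachedHash;

    MutablePerson(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
        recomputeHash();
    }

    void setFirstName(String firstName) { this.firstName = firstName; recomputeHash(); }
    void setLastName(String lastName)   { this.lastName = lastName;   recomputeHash(); }

    private void recomputeHash() { cachedHash = Objects.hash(firstName, lastName); }

    @Override public int hashCode() { return cachedHash; }
}
```

Remember that mutating an object after it has been put into a HashSet or used as a HashMap key will break lookups regardless of which option you pick, so option 1 is usually the safer design.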
The last thing I would say: override hashCode() only when you know that you need it, and when you know what you are doing.
Martin Podval