Removing a random item from a hashset or dictionary is pretty easy. Everything is amortized O(1), which in the real world means O(1). Example:
    public class MyNode { ... }

    public class MyDataStructure
    {
        private HashSet<MyNode> nodes = new HashSet<MyNode>();
        ...
    }
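Completing the fragment above into something runnable, a minimal sketch (the Add/Remove members, the Name field, and the demo are my own, not from the original; "removing a random item" is taken here to mean removing any given item):

```csharp
using System;
using System.Collections.Generic;

public class MyNode
{
    public string Name;                        // illustrative payload field
    public MyNode(string name) { Name = name; }
}

public class MyDataStructure
{
    private HashSet<MyNode> nodes = new HashSet<MyNode>();

    // Both operations are amortized O(1) on a hash set.
    public bool Add(MyNode node)    { return nodes.Add(node); }
    public bool Remove(MyNode node) { return nodes.Remove(node); }
    public int Count                { get { return nodes.Count; } }
}

public static class Demo
{
    public static void Main()
    {
        var ds = new MyDataStructure();
        var a = new MyNode("a");
        var b = new MyNode("b");
        ds.Add(a);
        ds.Add(b);
        ds.Remove(a);                          // amortized O(1)
        Console.WriteLine(ds.Count);           // prints 1
    }
}
```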
In my experience, almost everything that can be compared can also be hashed :) I would be curious to hear if anyone knows of something that cannot be hashed.
In my experience, this even holds for floating point comparisons with a tolerance, using special techniques.
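One such technique is a sketch like the following (my own illustration, not from the original): quantize each value to a grid of the given tolerance and hash the grid index. Values equal within the tolerance land in the same or an adjacent bucket, so a lookup must also probe the neighbors:

```csharp
using System;
using System.Collections.Generic;

public static class ToleranceHash
{
    // Map a double to a grid cell of size `tolerance`; the cell index is hashable.
    public static long Bucket(double value, double tolerance)
    {
        return (long)Math.Round(value / tolerance);
    }

    // Values within the tolerance can fall into an adjacent cell,
    // so probe the cell and its two neighbors.
    public static bool Contains(HashSet<long> set, double value, double tolerance)
    {
        long b = Bucket(value, tolerance);
        return set.Contains(b) || set.Contains(b - 1) || set.Contains(b + 1);
    }
}

public static class ToleranceDemo
{
    public static void Main()
    {
        var set = new HashSet<long>();
        set.Add(ToleranceHash.Bucket(1.0001, 0.001));
        Console.WriteLine(ToleranceHash.Contains(set, 1.0002, 0.001)); // prints True
    }
}
```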
The hash function for a hash table does not have to be perfect, it just has to be good enough. Also, if your data is very complex, hash functions are usually less complex than red-black trees or AVL trees. Those trees are useful because they preserve the ordering of the items, but here you do not need that.
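As an illustration of "good enough" (my own example, not from the original; the Point type and its fields are invented), a simple hash that combines the fields with a prime multiplier is usually all a hash table needs:

```csharp
using System;

public struct Point : IEquatable<Point>
{
    public readonly int X;
    public readonly int Y;

    public Point(int x, int y) { X = x; Y = y; }

    public bool Equals(Point other) { return X == other.X && Y == other.Y; }
    public override bool Equals(object obj) { return obj is Point && Equals((Point)obj); }

    public override int GetHashCode()
    {
        unchecked
        {
            // Not perfect: collisions are allowed. It only needs to spread
            // typical values across the buckets reasonably well.
            return X * 31 + Y;
        }
    }
}
```

Equal values must produce equal hash codes; beyond that, the table's collision handling absorbs the imperfection.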
To show how to write a simple hashset, I will consider a simple dictionary with integer keys. This implementation is very fast and well suited to sparse arrays, for example. I did not write code for growing the bucket table, because it is tedious and usually a source of big mistakes, but since this is a proof of concept it should be enough. I did not write an iterator either.
I wrote it off the top of my head, so there may be errors.
    public class FixedIntDictionary<T> { ... }
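A hedged sketch of what such a class might contain, using the bucket/entries layout described in the surrounding text: chained hashing through index arrays, a fixed bucket table with no growth, and no iterator. All member names below are my own guesses, not the original code, and removal is omitted to keep the sketch short:

```csharp
using System;

public class FixedIntDictionary<T>
{
    private struct Entry
    {
        public int Key;
        public int Next;      // index of the next entry in the chain, -1 = end
        public T Value;
    }

    private readonly int[] buckets;    // bucket -> index of first entry, -1 = empty
    private readonly Entry[] entries;
    private int count;

    public FixedIntDictionary(int capacity)
    {
        buckets = new int[capacity];
        entries = new Entry[capacity];
        for (int i = 0; i < capacity; ++i) buckets[i] = -1;
    }

    private int BucketOf(int key)
    {
        return (key & 0x7FFFFFFF) % buckets.Length;  // strip the sign bit
    }

    public void Add(int key, T value)
    {
        if (count == entries.Length)
            throw new InvalidOperationException("Table full (no growth in this sketch).");
        int bucket = BucketOf(key);
        entries[count].Key = key;
        entries[count].Value = value;
        entries[count].Next = buckets[bucket];  // prepend to this bucket's chain
        buckets[bucket] = count;
        ++count;
    }

    public bool TryGetValue(int key, out T value)
    {
        // Walk the chain for this bucket; average O(1) with a decent load factor.
        for (int i = buckets[BucketOf(key)]; i >= 0; i = entries[i].Next)
        {
            if (entries[i].Key == key)
            {
                value = entries[i].Value;
                return true;
            }
        }
        value = default(T);
        return false;
    }

    public int Count { get { return count; } }
}
```

Collisions are handled by chaining entries together through the `Next` indices, so two keys that land in the same bucket simply share a chain.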
If you wonder whether this implementation is good or not: it is very similar to what the .NET framework does for its Dictionary class :)
To turn it into a hashset, just remove T and you have a hashset of integers. If you need hash codes for generic objects, just use x.GetHashCode() or provide your own hash function.
To write iterators you would need to change a few things, but I don't want to add too much more to this post :)