Yesterday morning I had an interview, and I was asked: "Give an algorithm to remove duplicates from a list of integers." This is a pretty standard question, so I was sure I could answer it.
I'm paraphrasing, but I said something along the lines of: "You can use a hash table. Start with the first integer and insert it into the hash table. Then, for each subsequent integer, look it up in the hash table: if it isn't there, insert it; if it's already there, throw it away because it's a duplicate. Iterate through the whole list this way. If the hash table is designed correctly, lookups and inserts are constant time on average."
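For concreteness, here is a minimal sketch of the approach I described, in Python. It uses the built-in `set`, which is hash-based; the function name `dedupe` is just my own label for illustration:

```python
def dedupe(nums):
    """Remove duplicates from a list of integers, preserving first-seen order."""
    seen = set()            # hash-based set: average O(1) lookup and insert
    result = []
    for n in nums:
        if n not in seen:   # average-case constant-time membership test
            seen.add(n)
            result.append(n)
        # else: already seen, so throw the duplicate away
    return result

print(dedupe([3, 1, 3, 2, 1]))  # [3, 1, 2]
```

Each element is processed once, with an average-case O(1) set operation per element, so the whole pass is O(n) on average.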
Then the interviewer replied (again, I'm paraphrasing): "But a hash table lookup is not constant time; it depends on how many elements are already in it. The algorithm you described will be O(n^2)."
Then I answered: "Really? I thought that if you designed a good hash function, lookups would be constant time, making the whole algorithm O(n)."
Then the interviewer replied: "So you're saying that the lookup time will be the same for a hash table with many records as for a hash table with few records?"
Then I said: "Yes, if it's designed correctly."
Then the interviewer said: "That's not the case."
So, I'm very confused right now. If someone can point out where I'm wrong, I would be very grateful.