Algorithm riddle: a sequence with random access, insert, and delete

Describe the data structure where:

  • Any element is indexed by an integral value, as in an array
    • an integer indexes a single value
    • the integers used to index elements are contiguous: they go from 1 to n with no holes
  • Getting the item at position i (i.e., the item associated with the integer i) should be as fast as possible
    • random access
  • Inserting a new item at position i should be as fast as possible
    • this shifts every element at position i and beyond one place to the right
  • Deleting the item at position i should also be as fast as possible
    • this shifts every element from position i+1 onward one place to the left

EDIT: a little thing I forgot: item indices can only shift as a consequence of insertions and deletions; an element can never be silently overwritten.

In this description, n is the size of the structure (i.e., the number of elements it contains), and i is any integer with 1 <= i <= n, of course.


I heard this from a guy I met at my faculty. I don't know whether it is an interview question, an exam question, or just a puzzle, but it could be any of those.

If I remember correctly (though I may not), he said that such a data structure can be implemented either with O(sqrt n) insertion / deletion and O(1) access time, or with O(log n) for every operation.


EDIT: Some correct answers have been given. Read them if you don't want to think about the problem anymore.

+6
language-agnostic algorithm data-structures
2 answers

Well, any self-balancing binary tree (such as a red-black tree) provides all three operations in O(log n). The red-black tree behind C++'s std::map is one example (unless I have forgotten my C++ entirely).

As for the indexing (get) operation, there is a standard trick: store in each node the number of nodes in its left subtree. Then we can find the element at position i in O(log n) time like this:

 Data get(Node root, int i) {
     if (i <= root.leftCount) {
         return get(root.left, i);
     } else if (i == root.leftCount + 1) {
         return root.data;  // the i-th element is this node itself
     } else {
         return get(root.right, i - root.leftCount - 1);
     }
 }

Obviously, each time an element is added or deleted, the leftCount values must be updated, but only in O(log n) nodes. (Think about which nodes gained or lost one element in their left subtree: only those on the path between the element and the root.)
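The left-subtree counting described above can be sketched in Python. This is a hypothetical, unbalanced BST (so operations are O(depth) rather than a guaranteed O(log n); a real implementation would add red-black rebalancing), just to show the leftCount bookkeeping; the names are mine:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None
        self.left_count = 0   # number of nodes in the left subtree

def insert_at(root, i, data):
    """Insert data so it becomes the i-th element (1-based),
    updating left_count on the way down."""
    if root is None:
        return Node(data)
    if i <= root.left_count + 1:
        root.left_count += 1          # new node lands in the left subtree
        root.left = insert_at(root.left, i, data)
    else:
        root.right = insert_at(root.right, i - root.left_count - 1, data)
    return root

def get(root, i):
    """Return the i-th element (1-based) in O(depth)."""
    if i <= root.left_count:
        return get(root.left, i)
    if i == root.left_count + 1:
        return root.data
    return get(root.right, i - root.left_count - 1)
```

Note that only the nodes on the path from the root to the insertion point ever touch their left_count, which is what keeps the update cost proportional to the tree height.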

+3

A skip list can do insert / delete / get-by-index each in O(log n): http://en.wikipedia.org/wiki/Skip_list#Indexable_skiplist
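A minimal sketch of the indexable-skip-list idea: every forward link additionally stores its width (how many positions it jumps), so walking to the i-th element takes O(log n) expected time. Class names and MAX_LEVEL are my own choices; only insert and get are shown, since delete is the symmetric width bookkeeping:

```python
import random

MAX_LEVEL = 8

class SkipNode:
    def __init__(self, data, level):
        self.data = data
        self.forward = [None] * level
        self.width = [1] * level      # positions skipped by each link

class IndexableSkipList:
    def __init__(self):
        self.head = SkipNode(None, MAX_LEVEL)
        self.size = 0

    def get(self, i):                 # 1-based, 1 <= i <= size
        node, pos = self.head, 0
        for lvl in range(MAX_LEVEL - 1, -1, -1):
            # descend while consuming as much width as fits into i
            while node.forward[lvl] is not None and pos + node.width[lvl] <= i:
                pos += node.width[lvl]
                node = node.forward[lvl]
        return node.data

    def insert(self, i, data):        # new element becomes the i-th
        update, rank = [None] * MAX_LEVEL, [0] * MAX_LEVEL
        node, pos = self.head, 0
        for lvl in range(MAX_LEVEL - 1, -1, -1):
            # stop at the rightmost node strictly before position i
            while node.forward[lvl] is not None and pos + node.width[lvl] < i:
                pos += node.width[lvl]
                node = node.forward[lvl]
            update[lvl], rank[lvl] = node, pos
        lvl_new = 1                   # usual coin-flip level choice
        while random.random() < 0.5 and lvl_new < MAX_LEVEL:
            lvl_new += 1
        new = SkipNode(data, lvl_new)
        for lvl in range(lvl_new):
            new.forward[lvl] = update[lvl].forward[lvl]
            update[lvl].forward[lvl] = new
            # split the old width around the new node
            new.width[lvl] = update[lvl].width[lvl] - (i - rank[lvl]) + 1
            update[lvl].width[lvl] = i - rank[lvl]
        for lvl in range(lvl_new, MAX_LEVEL):
            update[lvl].width[lvl] += 1   # link now spans one more element
        self.size += 1
```

The correctness of get and insert does not depend on which random levels are chosen; the levels only affect the expected running time.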

For O(n^1/2) insert / delete and O(1) indexing, I imagine something like a C++ deque, but made of n^1/2 contiguous blocks of n^1/2 elements each (+/- 1 around each perfect square, of course). To delete, you shift part of the block containing the deleted element (O(n^1/2)), and then fix up all the higher blocks; you do that by adjusting a stored offset per block (at most O(n^1/2) blocks) rather than moving anything. To insert, you may need to shift within a block (O(n^1/2)) and adjust offsets the same way. In both cases you may also need to move one or two elements from the beginning / end of some blocks to the end / beginning of the neighbouring block, again at most O(n^1/2) times, to preserve the invariant that no two blocks differ in size by more than 1, except the last, which serves as slack. Finally, you occasionally have to create or destroy a block. Indexing is O(1) because a single division lands you within one block of the element you are looking for, and checking the stored offsets of one or two blocks pins it down.
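The scheme above can be sketched with Python deques. This simplified version (class name and fixed block size B are mine) cascades one element between neighbouring blocks instead of keeping per-block offsets, which still costs O(n / B) per insert or delete; a real tiered vector would also rebuild with a new B as n drifts away from B^2:

```python
from collections import deque

class TieredSeq:
    """Sqrt-decomposition sequence: every block holds exactly B items
    except possibly the last, so position i lives at
    blocks[i // B][i % B] -- constant-time lookup."""
    def __init__(self, B=4):
        self.B = B                    # fixed block size for this sketch
        self.blocks = [deque()]

    def get(self, i):                 # 0-based index
        return self.blocks[i // self.B][i % self.B]

    def insert(self, i, x):
        b = i // self.B
        if b == len(self.blocks):     # appending just past the end
            self.blocks.append(deque())
        self.blocks[b].insert(i % self.B, x)
        # cascade the overflow rightwards, one element per block
        while len(self.blocks[b]) > self.B:
            if b + 1 == len(self.blocks):
                self.blocks.append(deque())
            self.blocks[b + 1].appendleft(self.blocks[b].pop())
            b += 1

    def delete(self, i):
        b = i // self.B
        del self.blocks[b][i % self.B]
        # pull one item left from each later block to refill the hole
        while b + 1 < len(self.blocks) and self.blocks[b + 1]:
            self.blocks[b].append(self.blocks[b + 1].popleft())
            b += 1
        if not self.blocks[-1] and len(self.blocks) > 1:
            self.blocks.pop()         # destroy the emptied last block
```

With B fixed at sqrt(n), the cascade touches O(sqrt n) blocks at O(1) deque work each, matching the claimed bounds (modulo the occasional rebuild).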

I do not know where this question originally comes from, so I may have missed some important detail that means it does not work the way I described.

+2
