A red-black tree is a binary search tree. It's just a flavor of BST with fancier versions of the insert and delete operations that reorganize the tree as they run, so the tree never gets "tall and skinny." The taller and skinnier a tree gets, the more it behaves like a linked list. On average, an operation on a linked list has to touch half the list (or the whole list in the worst case), so running time grows linearly — O(n) in the number of elements n. When the tree is "bushy," or nearly balanced, each operation is much cheaper: O(log n). The red-black algorithms guarantee that the tree stays bushy.
To make this concrete, here are two trees storing the keys A through G. The one on the left is tall and skinny — notice how much it resembles a list. In the worst case, searching it for an item takes 4 key comparisons. The tree on the right is bushy; it needs only 3. The difference is small here, but for a tree of 1023 elements, a stringy tree needs 512 comparisons on average, while a bushy one needs only 10. That is the power of log n.
      B                  _D_
     / \                /   \
    A   D              B     F
       / \            / \   / \
      C   F          A   C E   G
         / \
        E   G
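The comparison counts above fall straight out of the BST search algorithm: each step costs one comparison and descends one level, so cost equals depth. A minimal sketch (the Node class and comparisons helper are hypothetical, written just for this illustration):

```java
// The number of comparisons a BST search makes equals the depth at which
// the key is found: O(n) in a stringy tree, O(log n) in a bushy one.
class Node {
    char key;
    Node left, right;
    Node(char key, Node left, Node right) {
        this.key = key; this.left = left; this.right = right;
    }
}

public class BstSearch {
    // Returns the number of key comparisons needed to find 'key',
    // or -1 if the key is absent.
    static int comparisons(Node root, char key) {
        int count = 0;
        for (Node n = root; n != null; n = key < n.key ? n.left : n.right) {
            count++;
            if (key == n.key) return count;
        }
        return -1;
    }

    public static void main(String[] args) {
        // The bushy tree from the figure: D at the root.
        Node bushy = new Node('D',
                new Node('B', new Node('A', null, null), new Node('C', null, null)),
                new Node('F', new Node('E', null, null), new Node('G', null, null)));
        // The stringy tree: B at the root, with the spine B-D-F-G.
        Node stringy = new Node('B',
                new Node('A', null, null),
                new Node('D', new Node('C', null, null),
                        new Node('F', new Node('E', null, null), new Node('G', null, null))));
        System.out.println(comparisons(stringy, 'G')); // 4 comparisons
        System.out.println(comparisons(bushy, 'G'));   // 3 comparisons
    }
}
```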
A red-black tree is not guaranteed to be perfectly bushy (the proper term is "perfectly balanced"), but the red-black rules guarantee, in a mathematically rigorous way, that it is bushy enough for operation times to grow like log n rather than linearly in n.
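To put a number on "bushy enough": the standard result is that a red-black tree with n keys has height at most 2·log2(n+1), versus about log2(n+1) for a perfectly balanced tree. A quick back-of-the-envelope check for the 1023-element example (the helper here is hypothetical, just for the arithmetic):

```java
// Comparing the worst-case search depth of a perfectly balanced tree
// with the red-black height bound of 2*log2(n+1).
public class HeightBound {
    // Smallest integer b with 2^b >= x, i.e. ceil(log2(x)).
    static int log2Ceil(long x) {
        int bits = 0;
        while ((1L << bits) < x) bits++;
        return bits;
    }

    public static void main(String[] args) {
        long n = 1023;
        int perfect = log2Ceil(n + 1);   // 10: perfectly balanced worst case
        int redBlack = 2 * perfect;      // 20: red-black worst-case bound
        // Either way, a far cry from the 1023 of a fully stringy tree.
        System.out.println(perfect + " vs " + redBlack);
    }
}
```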
The genius of the red-black rules is that they are "local." When an insertion or deletion violates the rules, you can restore them by adjusting only a constant number of nodes per node touched by the operation. That makes the fixups cheap and reasonably simple to implement.
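The basic local move those fixups are built from is the rotation, which relinks a constant number of nodes while preserving the BST ordering. A hedged sketch of a single left rotation (the Node class is hypothetical; a real red-black implementation would also update colors and parent links):

```java
// A left rotation: the elementary "local" step of red-black fixups.
// It touches only x, its right child y, and y's left subtree.
class Node {
    int key;
    Node left, right;
    Node(int key) { this.key = key; }
}

public class Rotation {
    // Rotate left around x; x's right child y becomes the subtree root.
    //
    //     x                y
    //    / \              / \
    //   a   y    ==>     x   c
    //      / \          / \
    //     b   c        a   b
    //
    static Node rotateLeft(Node x) {
        Node y = x.right;   // precondition: y must not be null
        x.right = y.left;   // subtree b moves across: its keys are > x, < y
        y.left = x;
        return y;           // caller re-links y where x used to hang
    }

    public static void main(String[] args) {
        Node x = new Node(1);
        x.right = new Node(2);
        x.right.right = new Node(3);
        Node root = rotateLeft(x);
        System.out.println(root.key); // prints 2
    }
}
```

Note that the in-order sequence of keys is the same before and after the rotation — that invariance is exactly why rotations are safe building blocks for rebalancing.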
And when the red-black rules hold for the whole tree, a clever mathematical proof shows that the tree is bushy enough, as described above.
What about tree maps? A map is a function with a domain called the key set K and a range called the value set V. To implement a map with a tree, each node stores a key-value pair <k, v>, where k ∈ K and v ∈ V. The figures above (and most presentations) show only the keys (the letters A–G). In a map, insertion, search, and deletion work on these pairs in a fairly obvious way. For example, search looks up a key using the usual BST algorithm; when the key is found, the value is found too, since it lives in the same pair, and it is returned. In Java, such a pair is called a Map.Entry. You can check this in the Java source code.
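In fact, Java's java.util.TreeMap is implemented as a red-black tree, so you can try all of this from the standard library without writing any tree code yourself:

```java
import java.util.Map;
import java.util.TreeMap;

// TreeMap is Java's red-black tree map; its entries are the
// Map.Entry<K,V> pairs mentioned above.
public class TreeMapDemo {
    public static void main(String[] args) {
        TreeMap<String, Integer> map = new TreeMap<>();
        map.put("D", 4);   // insert pairs <k, v>; keys stay in sorted order
        map.put("B", 2);
        map.put("F", 6);

        System.out.println(map.get("B")); // search: prints 2
        map.remove("F");                  // delete

        // Iterating visits the Map.Entry pairs in key order.
        for (Map.Entry<String, Integer> e : map.entrySet()) {
            System.out.println(e.getKey() + "=" + e.getValue()); // B=2, then D=4
        }
    }
}
```

All of put, get, and remove run in O(log n) time, exactly the guarantee the red-black rules buy you.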
I won't go into the details of how the red-black rules work, because I couldn't improve on the Wikipedia page. I assume and hope the "big picture" above is enough that the discussion there will now make sense. The good news is that almost every language provides a thoroughly tested, performance-tuned tree implementation, so knowing the secret details of the rotations is not required. Of course, if you're curious and just want to know, have at it!
For what it's worth, Top Coder's explanations of these algorithms are not always the clearest. Hunt around for others until one "clicks." Expensive CS textbooks are respected for a reason: their explanations are excellent. Cormen and Rivest's is a recognized favorite.