There is a lot of heat here but not much light, so let's see if we can provide some.
First, the RB tree is an associative data structure, unlike an array, which cannot take a key and return the associated value, unless, that is, the "key" happens to be an integer index into a 0%-sparse range of adjacent integers. An array also cannot grow in size (yes, I know about realloc(), but under the covers that can require a new array and then a memcpy()), so if you have either of those requirements, an array won't do. An array's memory efficiency is perfect, zero waste, but it is not very smart or flexible; realloc() notwithstanding.
Second, unlike bsearch() on an array of elements, which is an associative data structure, the RB tree can grow (and shrink) in size dynamically. bsearch() works great for indexing a data structure of known size that stays that size. So if you don't know the size of your data in advance, or if elements need to be added and removed, bsearch() is out. bsearch() and qsort() are well supported in classic C and have good memory efficiency, but they are not dynamic enough for many applications. They are my personal favorites, though, because they are fast, light, and, if you are not dealing with real-time applications, quite often flexible enough. In addition, in C/C++ you can sort an array of pointers to data records, keying on the struct{} member you want to compare, and then rearrange the pointers in the pointer array so that reading the pointers in order at the end of the sort yields your data in sorted order. Using this with memory-mapped data files is extremely memory-efficient, fast, and easy. All you have to do is add a couple of "*"s to your comparison function.
Third, unlike a hash table, which must also be a fixed size and cannot be grown once filled, the RB tree grows automatically and rebalances itself to maintain its O(log(n)) performance guarantee. Especially if the RB tree's key is an int, it can be faster than a hash, because even though a hash table's complexity is O(1), that 1 can involve a very expensive hash calculation. A few single-instruction integer compares in a tree often beat a 100-clock-plus hash calculation, not to mention rehashing and malloc()ing space to handle hash collisions. Finally, if you want ISAM-style (ordered) access to your data as well as keyed access, a hash is ruled out, since there is no data ordering inherent in a hash table, unlike the natural data ordering in any tree implementation. The classic use for a hash table is keyed access to a compiler's reserved-word table. Its memory efficiency is excellent.
Fourth, and very low on any list, is the linked or doubly linked list, which, unlike an array, naturally supports element insertion and deletion, and, as follows from that, resizing. It is the slowest of all the data structures, since each element only knows how to get to the next element, so you have to follow (element_knt / 2) links on average to find your datum. It is mostly used where insertions and deletions somewhere in the middle of the list are common, and especially where the list is circular and feeds an expensive process, which makes the time spent reading links relatively small. My general Rx is to use an arbitrarily large array instead of a linked list if your only requirement is that it grow in size. If you run out of room with an array, you can realloc() a larger array. The STL does this for you under the covers when you use a vector. Crude, but potentially 1,000 times faster if you don't need insertions, deletions, or keyed lookups. Memory efficiency is poor, especially for doubly linked lists. In fact, a doubly linked list, which requires two pointers per element, is exactly as memory-inefficient as a red-black tree while having NONE of its attractive fast, ordered retrieval characteristics.
Fifth, trees support many more operations on their sorted data than any other data structure. For example, many database queries exploit the fact that a range of leaf values can be specified simply by naming their common parent and then focusing subsequent processing on the part of the tree that parent "owns". The potential for multithreading offered by this approach should be obvious, since only a small region of the tree needs to be locked, namely, the nodes the parent owns and the parent itself.
In short, trees are the Cadillac of data structures. You pay a high price in memory usage, but you get a completely self-maintaining data structure. That's why, as pointed out in other answers, transaction databases use trees almost exclusively.