Disclaimer: although the points below should be checked independently, the first basic sanity check, given such a sharp difference in performance for such a simple case, is to make sure compiler optimizations are turned on. With that out of the way ...
unordered_map is ultimately designed as a rather large-scale container, pre-allocating a potentially large number of buckets.
See here: std::unordered_map very high memory usage
And here: How does C++ STL unordered_map resolve collisions?
Even when computing the hash index is trivial, the amount of memory (and the strides between accesses) touched by such small unordered_maps can very easily become a cache-miss bottleneck for something accessed as often as retrieving a component interface from an entity.
For entity-component systems, you usually do not have many components attached to an entity - perhaps a dozen or so at the upper end, and often only a few. As a result, std::vector is actually a far more suitable structure here, above all in terms of locality of reference (small arrays that are likely to be accessed over and over, every time you fetch a component interface from an entity). As a smaller point, std::vector::operator[] is also a trivially inlinable function.
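For illustration, here is a minimal sketch of that layout; the Component base class, its id() method, and Entity::find are hypothetical names invented for this example, not something from your code or the standard library:

    #include <vector>

    struct Component
    {
        virtual ~Component() = default;
        virtual int id() const = 0; // hypothetical component-type identifier
    };

    struct Entity
    {
        std::vector<Component*> components; // small, contiguous, cache-friendly

        // A plain linear scan: for a handful of components this typically
        // beats hashing into an unordered_map.
        Component* find(int component_id) const
        {
            for (Component* c : components)
                if (c->id() == component_id)
                    return c;
            return nullptr;
        }
    };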
If you want to do even better than std::vector here (but I recommend this only after profiling and determining that it matters for you), and provided you can come up with some reasonable upper bound, N, on the number of components typically attached to an entity, something like this might work even better:
    struct ComponentList
    {
        Component* components[N]; // inline storage, used while num <= N
        Component** ptr;          // points to 'components' or to a heap block
        int num;                  // number of components stored
    };
Start by pointing ptr at components and then access the elements through ptr. Inserting a new component increments num. When num would exceed N (a rare case), switch ptr to point at a dynamically allocated block of a larger size. When destroying a ComponentList, free the dynamically allocated memory if ptr != components. This wastes a little memory if you store fewer than N elements (although std::vector usually does this too, given its initial capacity and how it grows), but it keeps your entity and its component list fully contiguous unless num exceeds N. As a result, you get better locality of reference and possibly even better results than where you started (given the significant drop in frame rate, I assume you fetch components from entities quite often, inside loops, which is not uncommon in an ECS).
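A minimal sketch of that small-buffer scheme, assuming N = 8 and extending the struct above with a capacity field; copying is disabled to keep the example short, so treat this as an outline rather than a drop-in implementation:

    #include <cstring> // std::memcpy

    struct Component; // whatever your component interface is

    static const int N = 8; // assumed upper bound on components per entity

    struct ComponentList
    {
        Component* components[N]; // inline storage, used while num <= N
        Component** ptr;          // points to 'components' or to a heap block
        int num;                  // number of components stored
        int cap;                  // capacity of the block 'ptr' points to

        ComponentList() : ptr(components), num(0), cap(N) {}

        void push(Component* c)
        {
            if (num == cap)
            {
                // Rare case: spill to a larger, dynamically allocated block.
                int new_cap = cap * 2;
                Component** block = new Component*[new_cap];
                std::memcpy(block, ptr, num * sizeof(Component*));
                if (ptr != components)
                    delete[] ptr;
                ptr = block;
                cap = new_cap;
            }
            ptr[num++] = c;
        }

        ~ComponentList()
        {
            // Only free if we actually spilled to the heap.
            if (ptr != components)
                delete[] ptr;
        }

        // Non-copyable in this sketch to avoid double frees.
        ComponentList(const ComponentList&) = delete;
        ComponentList& operator=(const ComponentList&) = delete;
    };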
Given how often component interfaces are fetched from an entity, and very often in very tight loops, this can be worth the effort.
However, your initial choice of std::vector was actually the right one given the typical scale of the data (the number of components attached to an entity). With very small data sets, basic linear sequential searches often outperform more complex data structures, and you generally want to focus on memory/cache efficiency instead.
I tried to use const char*, std::string, type_info, and finally an enum type as the unordered_map key, but nothing really helped: the whole implementation still got me 15-16 FPS.
Just on this note: for the keys you want something that compares in constant time, like an integer. One structure that can be convenient here is an interned string, which simply stores an int, striking a balance between convenience and performance (clients can create keys from strings, but comparisons during component lookups use the int).
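A minimal sketch of what such an interned string could look like; the class name and the global registry are assumptions made for this example, not an existing library:

    #include <string>
    #include <unordered_map>

    class InternedString
    {
    public:
        explicit InternedString(const std::string& s) : id_(intern(s)) {}

        // Constant-time comparison: only the integer ids are compared.
        bool operator==(const InternedString& other) const { return id_ == other.id_; }
        int id() const { return id_; }

    private:
        int id_;

        // Global registry mapping each distinct string to a small integer.
        // The string hashing cost is paid once, when the key is created.
        static int intern(const std::string& s)
        {
            static std::unordered_map<std::string, int> table;
            auto it = table.find(s);
            if (it != table.end())
                return it->second;
            int new_id = static_cast<int>(table.size());
            table.emplace(s, new_id);
            return new_id;
        }
    };

Client code can still construct keys from readable strings, but the hashing cost is paid once at key creation, and component lookups only compare ints.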