I came across this while working on some performance-sensitive code:
user> (use 'criterium.core)
nil
user> (def n (into {} (for [i (range 20000) :let [k (keyword (str i))]] [k {k k}])))
user> (quick-bench (-> n :1 :1))
WARNING: Final GC required 32.5115186521176 % of runtime
Evaluation count : 15509754 in 6 samples of 2584959 calls.
Execution time mean : 36.256135 ns
Execution time std-deviation : 1.076403 ns
Execution time lower quantile : 35.120871 ns ( 2.5%)
Execution time upper quantile : 37.470993 ns (97.5%)
Overhead used : 1.755171 ns
nil
user> (quick-bench (get-in n [:1 :1]))
WARNING: Final GC required 33.11057826481865 % of runtime
Evaluation count : 7681728 in 6 samples of 1280288 calls.
Execution time mean : 81.023429 ns
Execution time std-deviation : 3.244516 ns
Execution time lower quantile : 78.220643 ns ( 2.5%)
Execution time upper quantile : 85.906898 ns (97.5%)
Overhead used : 1.755171 ns
nil
It is surprising to me that get-in is more than twice as slow as threading through keyword lookups here, since get-in is apparently intended as the idiomatic abstraction for exactly this kind of nested access.
Does anyone have an idea why this is so (both technically and philosophically)?
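For context on what each form actually does: the threaded version macroexpands to two direct keyword invocations, while get-in walks the key sequence generically. The following is a sketch of that difference, not clojure.core's exact source; my-get-in is a hypothetical name for illustration:

```clojure
;; (-> n :1 :1) expands to nothing more than nested keyword calls:
;;   (:1 (:1 n))
;; Each keyword lookup is a single direct map access.

;; get-in, by contrast, behaves roughly like reducing get over the
;; key sequence, which adds per-call seq traversal overhead:
(defn my-get-in
  "Sketch of get-in's behavior: fold get over the keys."
  [m ks]
  (reduce get m ks))

;; Both forms resolve to the same value on the map n from above:
;; (my-get-in n [:1 :1])
;; (-> n :1 :1)
```

If this sketch is accurate, the extra cost in the benchmark would come from seq-ing the key vector and dispatching through get on each step, rather than from the lookups themselves.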