An elegant way to reduce the list of dictionaries?

I have a list of dictionaries that all contain exactly the same keys. I want to compute the average value for each key. Is there a shorthand for this, or at least something more elegant than nested for loops?

Here is the list:

[
  {
    "accuracy": 0.78,
    "f_measure": 0.8169374016795885,
    "precision": 0.8192088044235794,
    "recall": 0.8172222222222223
  },
  {
    "accuracy": 0.77,
    "f_measure": 0.8159133315763016,
    "precision": 0.8174754717495807,
    "recall": 0.8161111111111111
  },
  {
    "accuracy": 0.82,
    "f_measure": 0.8226353934130455,
    "precision": 0.8238175920455686,
    "recall": 0.8227777777777778
  }, ...
]

I would like to get back a single dictionary, like this:

{
  "accuracy": 0.81,
  "f_measure": 0.83,
  "precision": 0.84,
  "recall": 0.83
}

Here is what I have so far, but I don't like it:

folds = [ ... ]

keys = folds[0].keys()
results = dict.fromkeys(keys, 0)

for fold in folds:
    for k in keys:
        results[k] += fold[k] / len(folds)

print(results)
5 answers

You could use pandas, especially if you are already using it for something else (otherwise it is a heavy dependency for just this...):

import pandas as pd

data = [
  {
    "accuracy": 0.78,
    "f_measure": 0.8169374016795885,
    "precision": 0.8192088044235794,
    "recall": 0.8172222222222223
  },
  {
    "accuracy": 0.77,
    "f_measure": 0.8159133315763016,
    "precision": 0.8174754717495807,
    "recall": 0.8161111111111111
  },
  {
    "accuracy": 0.82,
    "f_measure": 0.8226353934130455,
    "precision": 0.8238175920455686,
    "recall": 0.8227777777777778
  }, # ...
]

result = pd.DataFrame.from_records(data).mean().to_dict()

Result:

{'accuracy': 0.79000000000000004,
 'f_measure': 0.8184953755563118,
 'precision': 0.82016728940624295,
 'recall': 0.81870370370370382}
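
The desired output in the question is rounded to two decimal places, while pandas (like every other approach here) returns full-precision floats, so you may want a final rounding pass. A minimal sketch over a hypothetical result dict:

```python
# Hypothetical full-precision result, as produced by .mean().to_dict()
result = {'accuracy': 0.79000000000000004, 'f_measure': 0.8184953755563118}

# Round every value to two decimals to match the desired output
rounded = {k: round(v, 2) for k, v in result.items()}
# {'accuracy': 0.79, 'f_measure': 0.82}
```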

You could sum the dictionaries with reduce():

from functools import reduce  # Python 3 compatibility

summed = reduce(
    lambda a, b: {k: a[k] + b[k] for k in a},
    list_of_dicts,
    dict.fromkeys(list_of_dicts[0], 0.0))
result = {k: v / len(list_of_dicts) for k, v in summed.items()}

The 0.0 initial value makes sure the sums (and therefore the averages) are floats even if the input values are integers. It also gives reduce() a well-defined starting dictionary with all the keys in place.
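
To see the effect of that 0.0 initializer, here is a minimal sketch with hypothetical integer-valued folds:

```python
from functools import reduce

# Hypothetical integer-valued folds
list_of_dicts = [{"hits": 1, "misses": 2}, {"hits": 3, "misses": 4}]

summed = reduce(
    lambda a, b: {k: a[k] + b[k] for k in a},
    list_of_dicts,
    dict.fromkeys(list_of_dicts[0], 0.0))  # floats from the start

result = {k: v / len(list_of_dicts) for k, v in summed.items()}
# summed is {'hits': 4.0, 'misses': 6.0} -- floats, despite int inputs
```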

Demo:

>>> from functools import reduce
>>> list_of_dicts = [
...   {
...     "accuracy": 0.78,
...     "f_measure": 0.8169374016795885,
...     "precision": 0.8192088044235794,
...     "recall": 0.8172222222222223
...   },
...   {
...     "accuracy": 0.77,
...     "f_measure": 0.8159133315763016,
...     "precision": 0.8174754717495807,
...     "recall": 0.8161111111111111
...   },
...   {
...     "accuracy": 0.82,
...     "f_measure": 0.8226353934130455,
...     "precision": 0.8238175920455686,
...     "recall": 0.8227777777777778
...   }, # ...
... ]
>>> summed = reduce(
...     lambda a, b: {k: a[k] + b[k] for k in a},
...     list_of_dicts,
...     dict.fromkeys(list_of_dicts[0], 0.0))
>>> summed
{'recall': 2.4561111111111114, 'precision': 2.4605018682187287, 'f_measure': 2.4554861266689354, 'accuracy': 2.37}
>>> {k: v / len(list_of_dicts) for k, v in summed.items()}
{'recall': 0.8187037037037038, 'precision': 0.820167289406243, 'f_measure': 0.8184953755563118, 'accuracy': 0.79}
>>> from pprint import pprint
>>> pprint(_)
{'accuracy': 0.79,
 'f_measure': 0.8184953755563118,
 'precision': 0.820167289406243,
 'recall': 0.8187037037037038}

A Counter can do the summing for you:

from collections import Counter

summed = sum((Counter(d) for d in folds), Counter())
averaged = {k: v/len(folds) for k, v in summed.items()}

Or, if you prefer a one-liner:

averaged = {
    k: v/len(folds)
    for k, v in sum((Counter(d) for d in folds), Counter()).items()
}

This is essentially the same fold as the reduce() answer; sum() just performs the reduction for you.
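
One caveat with the Counter approach that is worth knowing: Counter addition keeps only positive totals, so any key whose sum is zero or negative is silently dropped. A sketch with hypothetical folds that trigger the problem:

```python
from collections import Counter

# Hypothetical folds containing zero and negative values
folds = [{"accuracy": 0.0, "loss": -0.5}, {"accuracy": 0.0, "loss": -0.1}]

# Counter.__add__ discards counts <= 0, so both keys vanish here
summed = sum((Counter(d) for d in folds), Counter())
# summed == Counter() -- both keys were dropped!
```

For typical metrics in [0, 1] this never bites, but for signed values the plain dict-comprehension approach is safer.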

If a one-liner is what you're after, this one is hard to beat for readability:

averaged = {
    k: sum(d[k] for d in folds)/len(folds)
    for k in folds[0]
}

It needs no imports at all (pandas, for this?!) and reads exactly like what it computes.

You could also swap the sum(...)/len(...) for statistics.mean() (in the standard library since Python 3.4), though it is roughly 10 times slower.
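
A sketch of that variant; statistics.mean() accepts any iterable of numbers, and the folds here are abbreviated from the question:

```python
from statistics import mean

folds = [
    {"accuracy": 0.78, "recall": 0.8172222222222223},
    {"accuracy": 0.77, "recall": 0.8161111111111111},
    {"accuracy": 0.82, "recall": 0.8227777777777778},
]

# Average each key across all folds
averaged = {k: mean(d[k] for d in folds) for k in folds[0]}
```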


Here is a terrible one-liner using zip() and nested list comprehensions. You should probably not use it; among other things, it silently relies on every dictionary yielding its values in the same key order.

final = dict(zip(lst[0].keys(), [n/len(lst) for n in [sum(i) for i in zip(*[tuple(x1.values()) for x1 in lst])]]))

for key, value in final.items():
    print (key, value)

# Output:
recall 0.818703703704
precision 0.820167289406
f_measure 0.818495375556
accuracy 0.79

Here's another way, a little more step by step:

from functools import reduce

d = [
  {
    "accuracy": 0.78,
    "f_measure": 0.8169374016795885,
    "precision": 0.8192088044235794,
    "recall": 0.8172222222222223
  },
  {
    "accuracy": 0.77,
    "f_measure": 0.8159133315763016,
    "precision": 0.8174754717495807,
    "recall": 0.8161111111111111
  },
  {
    "accuracy": 0.82,
    "f_measure": 0.8226353934130455,
    "precision": 0.8238175920455686,
    "recall": 0.8227777777777778
  }
]

key_arrays = {}
for item in d:
  for k, v in item.items():
    key_arrays.setdefault(k, []).append(v)

ave = {k: reduce(lambda x, y: x+y, v) / len(v) for k, v in key_arrays.items()}

print(ave)
# {'accuracy': 0.79, 'recall': 0.8187037037037038,
#  'f_measure': 0.8184953755563118, 'precision': 0.820167289406243}
