I have a 2D NumPy array of integers, for example:
a = np.array([[ 3, 0, 2, -1], [ 1, 255, 1, 2], [ 0, 3, 2, 2]])
and I have a dictionary with integer keys and values that I would like to use to replace the values of a with new values. It might look like this:
d = {0: 1, 1: 2, 2: 3, 3: 4, -1: 0, 255: 0}
I want to replace each value of a that appears as a key in d with the corresponding value from d . In other words, d defines a mapping between the old (current) and new (desired) values in a . The result for the toy example above would be the following:
a_new = np.array([[ 4, 1, 3, 0], [ 2, 0, 2, 3], [ 1, 4, 3, 3]])
What would be an efficient way to implement this?
This is a toy example, but in practice the arrays will be large, with a shape of, say, (1024, 2048) , and the dictionary will have on the order of a few dozen entries (34 in my case). While the keys are integers, they are not necessarily consecutive, and they can be negative (as in the example above).
I need to do this replacement on hundreds of thousands of such arrays, so it needs to be fast. However, the dictionary is known in advance and remains constant, so any one-time cost of preprocessing the dictionary or transforming it into a more suitable data structure is amortized and does not matter.
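For instance, one preprocessing step I could afford would be to split the dictionary once into parallel key/value arrays (just a sketch of what such a transformation might look like; the names keys and vals are made up for illustration):

keys = np.array(sorted(d))                   # e.g. array([ -1, 0, 1, 2, 3, 255])
vals = np.array([d[k] for k in sorted(d)])   # values in the same key order

Only the per-array replacement itself has to be fast.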
I'm currently looping over the array entries with two nested for loops (over the rows and columns of a ), but there should be a better way.
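For reference, my current approach is roughly the following (a minimal sketch using the toy example above):

import numpy as np

a = np.array([[3, 0, 2, -1], [1, 255, 1, 2], [0, 3, 2, 2]])
d = {0: 1, 1: 2, 2: 3, 3: 4, -1: 0, 255: 0}

a_new = np.empty_like(a)
for i in range(a.shape[0]):          # loop over rows
    for j in range(a.shape[1]):      # loop over columns
        a_new[i, j] = d[a[i, j]]     # per-element dictionary lookup

This works, but the Python-level loops are far too slow for arrays of shape (1024, 2048).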
If there were no negative values in the mapping (such as the -1 in the example), I would simply build a list or array from the dictionary, with the keys serving as indices into that array, and then use regular NumPy fancy indexing for an efficient replacement. But since there are negative keys, this does not work directly.
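To make that idea concrete, here is the kind of lookup-array indexing I have in mind, written for a hypothetical dictionary with only non-negative keys (the names d_nonneg, lut and a_small are made up for this sketch):

d_nonneg = {0: 1, 1: 2, 2: 3, 3: 4}                 # hypothetical: no negative keys
lut = np.zeros(max(d_nonneg) + 1, dtype=int)        # lookup array indexed by key
for k, v in d_nonneg.items():
    lut[k] = v
a_small = np.array([[3, 0, 2, 1], [1, 0, 1, 2]])    # toy input containing only those keys
a_small_new = lut[a_small]                          # vectorized replacement by fancy indexing

With a negative key such as -1, indexing lut[a] would silently wrap around and pick elements from the end of the lookup array instead of raising an error, so this straightforward version breaks down.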