The idea of loading a billion images into a single montage process is a non-starter. Your question is not entirely clear, but the approach should be: work out how many pixels each source image will occupy in the final image, extract that many pixels from each image in parallel, and then assemble those pixels into the final image.
So, if each image is to be represented by a single pixel in the final image, you need the mean value of each image, which you can get as follows:
convert image1.png image2.png ... -format "%[fx:mean.r],%[fx:mean.g],%[fx:mean.b]:%f\n" info:
Sample output:
0.423529,0.996078,0:image1.png
0.0262457,0,0:image2.png
You can do this very quickly in parallel with GNU Parallel, using something like
find . -name \*.png -print0 | parallel -0 convert {} -format "%[fx:mean.r],%[fx:mean.g],%[fx:mean.b]:%f\n" info:
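Since %f embeds each filename in its own output line, you can simply redirect the whole run into a single results file for the assembly step. A minimal sketch of that, where the file name means.txt is my choice rather than part of the original command:

find . -name \*.png -print0 | parallel -0 convert {} -format "%[fx:mean.r],%[fx:mean.g],%[fx:mean.b]:%f\n" info: > means.txt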
Then you can build the final image by setting one pixel per source image.
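One way to do that, staying with ImageMagick, is to turn those per-image means into a txt: pixel-enumeration file and let convert render it. This is only a sketch under my own assumptions: it reads the hypothetical means.txt from above, uses an arbitrary mosaic width of 1000 pixels, scales the 0-1 channel means up to 0-255, and assumes no filename contains a comma or colon:

W=1000                                          # chosen mosaic width in pixels (assumption)
H=$(( ( $(wc -l < means.txt) + W - 1 ) / W ))   # rows needed for one pixel per image
# build an ImageMagick pixel enumeration, one pixel per source image
awk -F'[,:]' -v w=$W -v h=$H '
  BEGIN { printf "# ImageMagick pixel enumeration: %d,%d,255,srgb\n", w, h }
        { printf "%d,%d: (%d,%d,%d)\n", (NR-1)%w, int((NR-1)/w), $1*255, $2*255, $3*255 }
' means.txt > pixels.txt
# render the enumeration as the final mosaic
convert txt:pixels.txt mosaic.png

If each source image is to occupy more than one pixel, the same idea applies: resize each image down to its tile size in the parallel step and composite the tiles instead of single pixels.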
Scanning even 1,000,000 PNG files is likely to take many hours...
You don't say how large your images are, but if they are of the order of 1 MB each and you have 1,000,000,000 of them, you will need to do a petabyte of I/O just to read them. Even with an ultra-fast SSD sustaining 500 MB/s, that is 10^15 bytes ÷ 500 MB/s ≈ 2,000,000 seconds, or roughly 23 days.