When you say

> compare each section with a subset of Unicode

this is not entirely clear, because there is more than one way to do it. I would bring the comparison down to the pixel level. In a grayscale image, each pixel has a gray value. Suppose you want to replace each pixel with a corresponding character: in what sense does a character correspond to a pixel? If you look at a character from a very long distance, all you see is a gray spot. So when you replace a pixel with a character, you should select the character whose gray value is closest to that pixel's.
In a monospaced font, every character occupies a box of the same size. If you draw a character into such a box, you can calculate its average gray value, which is nothing more than the white area of the box compared to the total area of the box. A space has a gray value of 1, and the dollar sign is probably one of the darkest characters you will find.
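To make this "average gray of a letter box" concrete, here is a minimal Pillow sketch. The font file name is an assumption; point it at any monospaced `.ttf` on your system, and the box size is just a plausible value for a 12pt font:

```python
from PIL import Image, ImageDraw, ImageFont

def char_gray(ch, font, box=(8, 14)):
    """Render ch black-on-white into a fixed box and return the mean
    gray in [0, 1], i.e. roughly the white fraction of the box."""
    img = Image.new("L", box, color=255)            # all-white letter box
    ImageDraw.Draw(img).text((0, 0), ch, fill=0, font=font)
    data = list(img.getdata())
    return sum(data) / (255 * len(data))

font = ImageFont.truetype("DejaVuSansMono.ttf", 12)  # assumed font path
print(char_gray(" ", font))   # 1.0: a space leaves the box all white
print(char_gray("$", font))   # clearly lower: lots of ink in the box
```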
So here is what I would do:
- Take your character set, whether you use ASCII or Unicode, and calculate the amount of white for each character. Obviously this differs from font to font, but you should use a monospaced font anyway.
- You now have a list that maps each character to a gray value. Scale these gray values to the target range: for an 8-bit image, your brightest character (the space) should correspond to 255 and your darkest to gray level 0.
- Now downscale the original image so that it is not too large, because even with a very small font you won't fit 2,000 characters on one line.
- Replace each pixel with the character whose gray value is closest to the pixel's gray value.
In Mathematica this is only a few lines of code. In Python it will probably be a bit longer, but it should work fine too.
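For example, a rough Python/Pillow sketch of the four steps above; the font path, character set, and box size are all assumptions you would adjust for your system:

```python
from PIL import Image, ImageDraw, ImageFont

FONT = ImageFont.truetype("DejaVuSansMono.ttf", 12)   # any monospaced font
CHARSET = " .,:;i1tfLG08@$"                           # an arbitrary sample set

def build_palette(charset, font, box=(8, 14)):
    """Steps 1 and 2: map each character to a gray value, rescaled so the
    brightest character (the space) is 255 and the darkest is 0."""
    grays = {}
    for ch in charset:
        img = Image.new("L", box, color=255)          # white letter box
        ImageDraw.Draw(img).text((0, 0), ch, fill=0, font=font)
        data = list(img.getdata())
        grays[ch] = sum(data) / len(data)
    lo, hi = min(grays.values()), max(grays.values())
    return {ch: 255 * (g - lo) / (hi - lo) for ch, g in grays.items()}

def to_ascii(img, palette, width=100):
    """Steps 3 and 4: downscale, then replace each pixel with the
    character whose gray value is closest."""
    img = img.convert("L")
    # Halve the row count because character boxes are taller than wide.
    height = max(1, int(img.height * width / img.width / 2))
    img = img.resize((width, height))
    lines = []
    for y in range(height):
        line = []
        for x in range(width):
            p = img.getpixel((x, y))
            line.append(min(palette, key=lambda ch: abs(palette[ch] - p)))
        lines.append("".join(line))
    return "\n".join(lines)

palette = build_palette(CHARSET, FONT)
print(to_ascii(Image.open("input.png"), palette))     # placeholder file name
```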
With this method you get pretty amazing results: from afar the text looks like the image, and as you come closer you see that it all consists of characters.

Update
If you want to create an image of the same size as the original, the approach is not very different, but here too you have, as Mark already pointed out, to create a bitmap image of each letter you use. I really do not see a faster way than comparing each patch of your image with each letter tile to decide which one fits best.
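A sketch of that brute-force tile matching, again with Pillow and an assumed font path; `best_char` simply picks the letter whose rendered tile differs least, pixel by pixel, from an image patch:

```python
from PIL import Image, ImageDraw, ImageFont

FONT = ImageFont.truetype("DejaVuSansMono.ttf", 12)   # assumed font path

def render_tiles(charset, font, box=(8, 14)):
    """Pre-render every character black-on-white as a flat list of grays."""
    tiles = {}
    for ch in charset:
        img = Image.new("L", box, color=255)
        ImageDraw.Draw(img).text((0, 0), ch, fill=0, font=font)
        tiles[ch] = list(img.getdata())
    return tiles

def best_char(patch, tiles):
    """Brute force: the letter whose tile has the smallest summed squared
    difference to the image patch (patch is a flat list of gray values)."""
    return min(tiles, key=lambda ch: sum((a - b) ** 2
                                         for a, b in zip(patch, tiles[ch])))
```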
One hint, perhaps: if you use this approach, the letters will be clearly visible in your image, because with a 12pt font, for example, each letter occupies an image area of about 10x15 pixels. When you convert a 1000x1500 image, which is not that small, you therefore use only about 100x100 letters. It may be worth considering matching not the image itself but its gradients. This can give better results, because then a letter is chosen that follows the edges fairly well. Using gradients alone, the Google logo doesn't look too bad.
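The gradient hint as code, reusing `to_ascii` and `palette` from the earlier sketch; Pillow's `FIND_EDGES` filter is a crude stand-in for a proper gradient, and the file name is a placeholder:

```python
from PIL import Image, ImageFilter, ImageOps

def to_ascii_edges(img, palette, width=100):
    # Edge-detect first (bright edges on black), then invert so the
    # edges are dark and the "inky" characters end up tracing them.
    edges = img.convert("L").filter(ImageFilter.FIND_EDGES)
    return to_ascii(ImageOps.invert(edges), palette, width)

# print(to_ascii_edges(Image.open("google_logo.png"), palette))
```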
