Ok, I'm a little late. Let me share my thoughts on this:
You can speed it up by using dynamic programming to compute the local means (see the summed-area-table sketch after the timings below), but it's much easier and faster to let scipy and numpy do all the dirty work. (Note that I'm using Python 3 for my code, so I changed the `xrange` in your code to `range`.)
```python
#!/usr/bin/env python3
import numpy as np
from scipy import ndimage
from PIL import Image
import copy
import time

def faster_bradley_threshold(image, threshold=75, window_r=5):
    percentage = threshold / 100.
    window_diam = 2*window_r + 1
    # convert image to numpy array of grayscale values
    img = np.array(image.convert('L')).astype(float)  # float for mean precision
    # matrix of local means with scipy
    means = ndimage.uniform_filter(img, window_diam)
    # result: 0 for entry less than percentage*mean, 255 otherwise
    height, width = img.shape[:2]
    result = np.zeros((height, width), np.uint8)  # initially all 0
    result[img >= percentage * means] = 255       # numpy magic :)
    # convert back to PIL image
    return Image.fromarray(result)

def bradley_threshold(image, threshold=75, windowsize=5):
    ws = windowsize
    image2 = copy.copy(image).convert('L')
    w, h = image.size
    l = image.convert('L').load()
    l2 = image2.load()
    threshold /= 100.0
    for y in range(h):
        for x in range(w):
            # find neighboring pixels
            neighbors = [(x+x2, y+y2) for x2 in range(-ws, ws) for y2 in range(-ws, ws)
                         if x+x2 > 0 and x+x2 < w and y+y2 > 0 and y+y2 < h]
            # mean of all neighboring pixels
            mean = sum([l[a, b] for a, b in neighbors]) / len(neighbors)
            if l[x, y] < threshold*mean:
                l2[x, y] = 0
            else:
                l2[x, y] = 255
    return image2

if __name__ == '__main__':
    img = Image.open('test.jpg')

    t0 = time.process_time()
    threshed0 = bradley_threshold(img)
    print('original approach:', round(time.process_time()-t0, 3), 's')
    threshed0.show()

    t0 = time.process_time()
    threshed1 = faster_bradley_threshold(img)
    print('w/ numpy & scipy :', round(time.process_time()-t0, 3), 's')
    threshed1.show()
```
This gives a significant speed-up on my machine:
```
$ python3 bradley.py
original approach: 3.736 s
w/ numpy & scipy : 0.003 s
```
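For reference, the "dynamic programming" route mentioned above is essentially a summed-area table (integral image). Here is a minimal sketch of computing the local means that way; it is my own illustration (the helper name `local_means_integral` is made up), not part of your code or the scipy approach:

```python
import numpy as np

def local_means_integral(img, window_r=5):
    """Mean of a (2*window_r+1)^2 window around each pixel, clamped at the borders."""
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    # integral image with a zero row/column in front, so any window sum is 4 lookups
    ii = np.zeros((h + 1, w + 1), dtype=np.float64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)

    ys, xs = np.mgrid[0:h, 0:w]
    y0 = np.clip(ys - window_r, 0, h)      # top edge (inclusive)
    y1 = np.clip(ys + window_r + 1, 0, h)  # bottom edge (exclusive)
    x0 = np.clip(xs - window_r, 0, w)
    x1 = np.clip(xs + window_r + 1, 0, w)

    # window sum from the integral image in O(1) per pixel
    sums = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
    counts = (y1 - y0) * (x1 - x0)         # actual window size near the borders
    return sums / counts
```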
PS: Note that the mean values I get from scipy behave slightly differently at the borders than your code does (for positions where the averaging window is no longer fully contained in the image). However, I don't think this should be a problem.
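If you want to see that border behaviour concretely, here is a tiny comparison sketch (my own example with made-up values; it relies on `ndimage.uniform_filter`'s default `mode='reflect'`):

```python
import numpy as np
from scipy import ndimage

a = np.arange(25, dtype=float).reshape(5, 5)  # toy 5x5 "image", values made up

# scipy pads the array (default mode='reflect') and always divides by the full
# window area, so the border means include reflected pixels
means_scipy = ndimage.uniform_filter(a, size=3)

# your loop only averages neighbours that actually lie inside the image, so the
# divisor shrinks at the borders; a clamped 2x2 corner mean stands in for that here
clamped_corner_mean = a[:2, :2].mean()

print(means_scipy[0, 0], clamped_corner_mean)  # the two corner means differ
```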
Another minor difference is that the window in your for loops is not exactly centered on the pixel: with ws = 5, `range(-ws, ws)` yields -5, -4, ..., 3, 4, so the offsets average to -0.5. That was probably not intended.
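A quick check of that off-by-one (just an illustration, not part of your code):

```python
ws = 5
offsets = list(range(-ws, ws))        # -5, -4, ..., 3, 4  -> not centered
centered = list(range(-ws, ws + 1))   # -5, -4, ..., 4, 5  -> centered on 0
print(sum(offsets) / len(offsets))    # -0.5
print(sum(centered) / len(centered))  #  0.0
```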