Given the sample input, it is clear that simply shearing or rotating the image will not correct the distortion you have; instead, you need to perform a perspective transformation. This is clearly seen in the following figure. The four white rectangles represent the borders of your four black boxes, and the yellow lines are the result of connecting those boxes. The yellow quadrilateral is a distorted version of the red one (the one you want to achieve).

So, if you can actually obtain the figure above, the problem becomes much simpler. If the image did not contain these four corner boxes, you would need four other control points, so the boxes help you a lot. Once you have the image above, you know the four yellow corners, and then you just match them to the four red corners. This is the perspective transform you need to perform, and your library may already provide a function for it (there is at least one; check the comments on your question).
There are several ways to get to the image above, so I will just describe a relatively simple one. First, binarize the grayscale image. To do this, I chose a simple global threshold of 100 (your image is in the range [0, 255]): intensities greater than or equal to 100 are set to 255, those below 100 are set to 0. Note that this threshold also keeps other details of the image (for example, strong lines around it). Also, since this is a scanned image, how dark the boxes appear can vary dramatically, so you might need a better method here; something as simple as a morphological gradient could potentially work better. The second step is to eliminate the irrelevant details. To do this, perform a morphological closing with a 7x7 square (about 1% of the minimum of the width and height of the input image). To get the border of the boxes, apply a morphological erosion, as in current_image - erosion(current_image), using a 3x3 elementary square. Now you have an image with the four white contours described above (this assumes everything except the boxes has been eliminated, a simplification of the other inputs I expect). To get the pixels of these white contours, you can label the connected components. Using these 4 components, determine which is the top-right, top-left, bottom-right, and bottom-left one. Now you can easily find the points needed to obtain the corners of the yellow quadrilateral. All these operations are readily available in AForge, so porting the following code to C# is only a matter of translation:
```python
import sys

import numpy
from PIL import Image, ImageOps, ImageDraw
from scipy.ndimage import morphology, label

# Read input image and convert to grayscale (if it is not yet).
orig = Image.open(sys.argv[1])
img = ImageOps.grayscale(orig)

# Convert PIL image to numpy array (minor implementation detail).
im = numpy.array(img)

# Binarize.
im[im < 100] = 0
im[im >= 100] = 255

# Eliminate undesired details.
im = morphology.grey_closing(im, (7, 7))

# Border of boxes.
im = im - morphology.grey_erosion(im, (3, 3))

# Find the boxes by labeling them as connected components.
lbl, amount = label(im)
box = []
for i in range(1, amount + 1):
    py, px = numpy.nonzero(lbl == i)  # Points in this connected component.
    # Corners of the boxes.
    box.append((px.min(), px.max(), py.min(), py.max()))
box = sorted(box)

# Now the first two elements in the box list contain the two left-most
# boxes, and the other two are the right-most boxes. It remains to
# establish which ones are at the top, and which at the bottom.
top = []
bottom = []
for index in [0, 2]:
    if box[index][2] > box[index + 1][2]:
        top.append(box[index + 1])
        bottom.append(box[index])
    else:
        top.append(box[index])
        bottom.append(box[index + 1])

# Pick the top left corner, top right corner,
# bottom right corner, and bottom left corner.
reference_corners = [
        (top[0][0], top[0][2]), (top[1][1], top[1][2]),
        (bottom[1][1], bottom[1][3]), (bottom[0][0], bottom[0][3])]

# Convert the image back to PIL (minor implementation detail).
img = Image.fromarray(im)

# Draw lines connecting the reference_corners for visualization purposes.
visual = img.convert('RGB')
draw = ImageDraw.Draw(visual)
draw.line(reference_corners + [reference_corners[0]], fill='yellow')
visual.save(sys.argv[2])

# Map the current quadrilateral to an axis-aligned rectangle.
min_x = min(x for x, y in reference_corners)
max_x = max(x for x, y in reference_corners)
min_y = min(y for x, y in reference_corners)
max_y = max(y for x, y in reference_corners)

# The red rectangle.
perfect_rect = [(min_x, min_y), (max_x, min_y),
                (max_x, max_y), (min_x, max_y)]

# Use these points to do the perspective transform.
print(reference_corners)
print(perfect_rect)
```
Running the code above with your input image, the final output is:
[(55, 30), (734, 26), (747, 1045), (41, 1036)] [(41, 26), (747, 26), (747, 1045), (41, 1045)]
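The second list is simply the axis-aligned bounding box of the first; the relationship can be sketched with the corner coordinates copied from the output above:

```python
# Corner coordinates copied from the output above.
reference_corners = [(55, 30), (734, 26), (747, 1045), (41, 1036)]

xs = [x for x, y in reference_corners]
ys = [y for x, y in reference_corners]

# The red rectangle is the axis-aligned bounding box of the yellow corners.
perfect_rect = [(min(xs), min(ys)), (max(xs), min(ys)),
                (max(xs), max(ys)), (min(xs), max(ys))]
print(perfect_rect)  # [(41, 26), (747, 26), (747, 1045), (41, 1045)]
```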
The first list of points describes the four corners of the yellow quadrilateral, and the second the red rectangle. To perform the perspective transform, you can use AForge's built-in functionality. I used ImageMagick for simplicity, as in:
convert input.png -distort Perspective "55,30,41,26 734,26,747,26 747,1045,747,1045 41,1036,41,1045" result.png
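If you would rather stay in Python instead of shelling out to ImageMagick, a similar warp can be done with PIL's `Image.transform` in `PERSPECTIVE` mode, which expects 8 coefficients mapping each output pixel back to its input position. A sketch of that route, using the corner lists printed above (the helper name `find_coeffs` is mine, not from any library):

```python
import numpy as np
from PIL import Image

def find_coeffs(source_quad, target_rect):
    """Solve for the 8 coefficients PIL's PERSPECTIVE transform expects.

    PIL maps each OUTPUT pixel (x, y) back to the INPUT pixel
    ((a*x + b*y + c) / (g*x + h*y + 1), (d*x + e*y + f) / (g*x + h*y + 1)),
    so we build an 8x8 linear system from the four corner pairs.
    """
    matrix = []
    for (x, y), (u, v) in zip(target_rect, source_quad):
        matrix.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        matrix.append([0, 0, 0, x, y, 1, -v * x, -v * y])
    A = np.array(matrix, dtype=float)
    rhs = np.array([coord for point in source_quad for coord in point],
                   dtype=float)
    return np.linalg.solve(A, rhs)

# Corner lists taken from the output printed earlier.
reference_corners = [(55, 30), (734, 26), (747, 1045), (41, 1036)]
perfect_rect = [(41, 26), (747, 26), (747, 1045), (41, 1045)]

coeffs = find_coeffs(reference_corners, perfect_rect)

# Applying it to the original image ('input.png' is a placeholder name):
# orig = Image.open('input.png')
# result = orig.transform(orig.size, Image.PERSPECTIVE, coeffs, Image.BICUBIC)
# result.save('result.png')
```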
Which gives the aligned result (with blue lines, found as before, drawn to better show the result):

You may notice that the left vertical blue line is not perfectly straight; in fact, the two left-most boxes are misaligned by 1 pixel along the x axis. This could be corrected by using a different interpolation method during the perspective transform.
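To see that the interpolation choice matters, here is a small synthetic sketch (both the image data and the warp coefficients are made up for illustration, not taken from your scan) comparing nearest-neighbor and bicubic resampling in PIL's perspective transform:

```python
from PIL import Image

# Synthetic 20x20 grayscale gradient (made-up data for illustration).
img = Image.new('L', (20, 20))
img.putdata([(x * 13 + y * 7) % 256 for y in range(20) for x in range(20)])

# Mild, made-up perspective warp: coefficients (a..h) mapping each output
# pixel (x, y) to input ((a*x+b*y+c)/(g*x+h*y+1), (d*x+e*y+f)/(g*x+h*y+1)).
coeffs = (1.0, 0.15, 0.0, 0.0, 1.0, 0.0, 0.0, 0.001)

nearest = img.transform(img.size, Image.PERSPECTIVE, coeffs, Image.NEAREST)
bicubic = img.transform(img.size, Image.PERSPECTIVE, coeffs, Image.BICUBIC)

# The two filters resample sub-pixel positions differently,
# so the results are not identical.
print(list(nearest.getdata()) == list(bicubic.getdata()))  # False
```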