I honestly don't know how to describe perspective distortion mathematically off the top of my head. You can try to find literature on it (e.g. on Google Scholar). See also the OpenGL documentation for glFrustum.
EDIT. Interestingly, starting with version 8, Mathematica has ImagePerspectiveTransformation. The relevant part of the documentation says:
For a 3*3 matrix m, ImagePerspectiveTransformation[image, m] applies LinearFractionalTransform[m] to the image.
This is a transformation that, for some matrix a, vectors b and c, and scalar d, maps a vector r to (a.r+b)/(c.r+d). In the 2D case, this corresponds to the homogeneous matrix:
    a_11  a_12  b_1
    a_21  a_22  b_2
    c_1   c_2   d
To apply the transformation, you multiply this matrix by a column vector extended with z=1, then take the first two elements of the result and divide them by the third:
    v = {{a11, a12, b1}, {a21, a22, b2}, {c1, c2, d}}.{x, y, 1};
    v[[1 ;; 2]]/v[[3]]
which gives:
{(b1 + a11 x + a12 y)/(d + c1 x + c2 y), (b2 + a21 x + a22 y)/(d + c1 x + c2 y)}
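As a cross-check, here is a small Python sketch (the function name is mine, not from Mathematica) of the same multiply-then-divide rule, applied to the example matrix used below:

```python
def apply_homography(m, x, y):
    """Apply a 3x3 homogeneous (linear fractional) transform to the point (x, y)."""
    # Multiply the matrix by the column vector (x, y, 1).
    xh = m[0][0] * x + m[0][1] * y + m[0][2]
    yh = m[1][0] * x + m[1][1] * y + m[1][2]
    w  = m[2][0] * x + m[2][1] * y + m[2][2]
    # Divide the first two components by the third.
    return (xh / w, yh / w)

# The example values from the text: a = {{0.9, 0.1}, {0.3, 0.9}},
# b = {0, -0.1}, c = {0, 0.1}, d = 1.
m = [[0.9, 0.1,  0.0],
     [0.3, 0.9, -0.1],
     [0.0, 0.1,  1.0]]
print(apply_homography(m, 1.0, 1.0))
```

Plugging x and y into the symbolic result above gives the same values.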
In the example, with:

    a = {{0.9, 0.1}, {0.3, 0.9}}
    b = {0, -0.1}
    c = {0, 0.1}
    d = 1
you get this transformation:
    im = Import["/home/cataphract/Downloads/so_q.png"];
    orfun = BSplineFunction[ImageData[im], SplineDegree -> 1];
    (* transf = TransformationFunction[
         {{0.9, 0.1, 0.}, {0.3, 0.9, -0.1}, {0., 0.1, 1.}}] -- let's expand this: *)
    transf = {(0.9 x + 0.1 y)/(1. + 0.1 y),
              (-0.1 + 0.3 x + 0.9 y)/(1. + 0.1 y)} /.
             {x -> #[[1]], y -> #[[2]]} &;
    ParametricPlot[transf[{x, y}], {x, 0, 1}, {y, 0, 1},
      ColorFunction -> (orfun[1 - #4, #3] &), Mesh -> None,
      FrameTicks -> None, Axes -> False, ImageSize -> 200,
      PlotRange -> All, Frame -> False]

Once you have a map that relates the points of the final image to the points of the original image, it is simply a matter of evaluating it for each point of the new image.
There is one more difficulty: since an image is discrete (it has pixels rather than continuous values), you must first make it continuous.
Say you have a transform that doubles the size of the image. The function that computes the point {x, y} of the final image will then look up the point {x/2, y/2} in the original. That point does not exist, because images are discrete, so you have to interpolate it. There are several possible strategies for that.
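As an illustrative sketch (plain Python, my own helper names): bilinear interpolation, which is what a degree-1 B-spline amounts to on a regular grid, blends the four pixels surrounding the requested fractional position:

```python
import math

def bilinear(img, x, y):
    """Sample a 2D grid `img` (a list of rows of floats) at a fractional (x, y).

    x indexes columns, y indexes rows; 0 <= x <= width-1, 0 <= y <= height-1.
    """
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    x1 = min(x0 + 1, len(img[0]) - 1)  # clamp at the right/bottom edges
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    # Blend the four surrounding pixels by their distance to (x, y).
    top    = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bottom = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

# Doubling the image: pixel (x, y) of the result samples (x/2, y/2).
img = [[0.0, 1.0],
       [2.0, 3.0]]
print(bilinear(img, 0.5, 0.5))  # → 1.5, halfway between all four pixels
```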
In this Mathematica example, I do a simple two-dimensional rotation and use a degree-1 spline function for interpolation:
    im = Import["d:\\users\\cataphract\\desktop\\img.png"]
    orfun = BSplineFunction[ImageData[im], SplineDegree -> 1];
    transf = Function[{coord}, RotationMatrix[20. Degree].coord];
    ParametricPlot[transf[{x, y}], {x, 0, 1}, {y, 0, 1},
      ColorFunction -> (orfun[1 - #4, #3] &), Mesh -> None,
      FrameTicks -> None, Axes -> False, ImageSize -> 200,
      PlotRange -> All, Frame -> False]
This gives:

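In plain Python terms (an illustrative sketch, not the Mathematica code), the rotation step just multiplies each coordinate pair by a 2D rotation matrix:

```python
import math

def rotate(coord, degrees):
    """Rotate a 2D point about the origin, like RotationMatrix[20. Degree].coord."""
    t = math.radians(degrees)
    x, y = coord
    # Standard counterclockwise rotation matrix applied to (x, y).
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t))

print(rotate((1.0, 0.0), 20.0))
```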
PHP:
For the interpolation, google "B-spline". The rest goes as follows.
First, pick a coordinate reference for the original image: say, if the image is 200x200, pixel (1,1) maps to (0,0) and pixel (200,200) maps to (1,1).
Then you have to guess where your final image will land once the transformation is applied. This depends on the transformation; you can, for example, apply it to the corners of the image, or just guess.
Say you think it will be mapped to somewhere between (-.5, 0) and (1, 1.5), as I do here, and that your final image should also be 200x200. Then:
    $sizex = 200;
    $sizey = 200;
    $x = array("min" => -.5, "max" => 1);
    $y = array("min" => 0, "max" => 1.5);
    // keep $sizex/$sizey == $rangex/$rangey
    $rangex = $x["max"] - $x["min"];
    $rangey = $y["max"] - $y["min"];

    for ($xp = 1; $xp <= $sizex; $xp++) {
        for ($yp = 1; $yp <= $sizey; $yp++) {
            $value = transf(
                (($xp - 1) / ($sizex - 1)) * $rangex + $x["min"],
                (($yp - 1) / ($sizey - 1)) * $rangey + $y["min"]);
            /* $value should be in the form array(r, g, b), for instance */
        }
    }
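The same loop can be sketched in Python (the grid size, coordinate range, and the trivial `transf` passed in are my placeholders, mirroring the PHP above): for every output pixel we map its indices into the chosen coordinate range and hand the result to the transform, which would sample the source image there.

```python
def sample_grid(transf, sizex=200, sizey=200,
                xmin=-0.5, xmax=1.0, ymin=0.0, ymax=1.5):
    """Evaluate `transf` on a sizex-by-sizey grid covering the given range.

    Mirrors the PHP loop: output pixel (1, 1) maps to (xmin, ymin) and
    pixel (sizex, sizey) maps to (xmax, ymax). `transf` should return the
    color for that point, e.g. an (r, g, b) tuple.
    """
    rangex = xmax - xmin  # keep sizex/sizey == rangex/rangey
    rangey = ymax - ymin
    rows = []
    for yp in range(1, sizey + 1):
        row = []
        for xp in range(1, sizex + 1):
            x = (xp - 1) / (sizex - 1) * rangex + xmin
            y = (yp - 1) / (sizey - 1) * rangey + ymin
            row.append(transf(x, y))
        rows.append(row)
    return rows

# A placeholder transf that just records the coordinates it was given.
grid = sample_grid(lambda x, y: (x, y), sizex=3, sizey=3)
print(grid[0][0], grid[2][2])  # corners: (-0.5, 0.0) and (1.0, 1.5)
```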