Interestingly, you will get completely different results from np.linalg.lstsq and np.linalg.solve:
x1 = np.linalg.lstsq(A_star, B_star)[0]
x2 = np.linalg.solve(A_star, B_star)
Both should offer a solution for the equation Ax = B. However, they give two completely different arrays:
In [37]: x1
Out[37]:
array([[  2.00000000e+01,   2.00000000e+01],
       [  1.20000000e+01,   7.99999999e+00],
       [  8.00000001e+00,   1.20000000e+01],
       [ -1.15359111e-15,   7.94503352e-16],
       [ -4.00000001e+00,   3.99999999e+00],
       [  4.00000001e+00,  -3.99999999e+00]])

In [39]: x2
Out[39]:
array([[  2.00000000e+01,   2.00000000e+01],
       [ -4.42857143e+00,   2.43809524e+01],
       [  2.44285714e+01,  -4.38095238e+00],
       [ -2.88620104e-18,   1.33158696e-18],
       [ -1.22142857e+01,   1.21904762e+01],
       [  1.22142857e+01,  -1.21904762e+01]])
Both should give an exact (up to floating-point accuracy) solution of a system of linear equations, and for a nonsingular matrix there is exactly one solution.
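To see the expected behavior first, here is a minimal sketch with a made-up, well-conditioned matrix (not the A_star from the question), where the two routines do agree:

```python
import numpy as np

# A made-up, well-conditioned 4x4 system: boosting the diagonal
# keeps the matrix far from singular.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 4.0 * np.eye(4)
b = rng.standard_normal(4)

x1 = np.linalg.lstsq(A, b, rcond=None)[0]
x2 = np.linalg.solve(A, b)

# For a nonsingular, well-conditioned matrix the two solutions
# coincide up to floating-point accuracy.
print(np.allclose(x1, x2))  # True
```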
Something must be wrong. Let's see if these two candidates can be solutions to the original equation:
In [41]: A_star.dot(x1)
Out[41]:
array([[ -1.11249392e-08,   9.86256055e-09],
       [  1.62000000e+05,  -1.65891834e-09],
       [  0.00000000e+00,   1.62000000e+05],
       [ -1.62000000e+05,  -1.62000000e+05],
       [ -3.24000000e+05,   4.47034836e-08],
       [  5.21540642e-08,  -3.24000000e+05]])

In [42]: A_star.dot(x2)
Out[42]:
array([[ -1.45519152e-11,   1.45519152e-11],
       [  1.62000000e+05,  -1.45519152e-11],
       [  0.00000000e+00,   1.62000000e+05],
       [ -1.62000000e+05,  -1.62000000e+05],
       [ -3.24000000e+05,   0.00000000e+00],
       [  2.98023224e-08,  -3.24000000e+05]])
Both products essentially coincide with B_star, as they should. This suggests an explanation. With simple linear algebra we can predict that A . (x1 - x2) should be very close to zero:
In [43]: A_star.dot(x1 - x2)
Out[43]:
array([[ -1.11176632e-08,   9.85164661e-09],
       [ -1.06228981e-09,  -1.60071068e-09],
       [  0.00000000e+00,  -2.03726813e-10],
       [ -6.72298484e-09,   4.94765118e-09],
       [  5.96046448e-08,   5.96046448e-08],
       [  2.98023224e-08,   5.96046448e-08]])
And indeed it is. So it seems that the equation Ax = 0 has a nontrivial solution, x = x1 - x2, which means the matrix is singular and, therefore, there are infinitely many different solutions to Ax = B.
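The singular case can be illustrated with a toy example (a hypothetical 2x2 matrix, unrelated to A_star): lstsq still returns a least-squares solution, and adding any null-space vector to it gives another, equally valid one.

```python
import numpy as np

# Toy singular system: the second column is twice the first, so A has
# a nontrivial null space; solve() would raise LinAlgError here.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
b = np.array([3.0, 6.0])

# lstsq returns the minimum-norm least-squares solution.
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(rank)          # 1: the matrix is rank-deficient
print(A @ x - b)     # close to zero: x really solves the system

# Adding any multiple of a null-space vector gives another solution.
null_vec = np.array([2.0, -1.0])     # A @ null_vec == 0
print(A @ (x + 5 * null_vec) - b)    # still close to zero
```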
Thus, the problem is not in NumPy or MATLAB, but in the matrix itself.
However, in the case of this matrix the situation is a bit complicated. By the argument above, A_star seems to be singular (Ax = 0 for some x != 0). On the other hand, its determinant is nonzero, so it is not singular.
The resolution is that A_star is ill-conditioned rather than singular. The solve method solves the system essentially by multiplying by the inverse, and here that is a bad choice: the inverse contains both very large and very small values, which makes the computation prone to rounding errors. This can be seen by looking at the condition number of the matrix:
In [65]: np.linalg.cond(A_star)
Out[65]: 1.3817810855559592e+17
This is a very high condition number: the matrix is badly ill-conditioned.
In this case, solving via the inverse is a bad approach; as you can see, the least-squares method gives better results. However, the real fix is to rescale the input values so that x and x^2 are in the same range. A very good choice is to scale everything into [-1, 1].
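As a rough sketch of that rescaling idea (the data here is made up; x_raw stands in for whatever your raw coordinates are):

```python
import numpy as np

# Hypothetical raw coordinates with a large spread, so x and x**2
# live on wildly different scales.
x_raw = np.linspace(0.0, 400.0, 6)

def rescale(v):
    """Map values linearly into [-1, 1]."""
    lo, hi = v.min(), v.max()
    return 2.0 * (v - lo) / (hi - lo) - 1.0

x = rescale(x_raw)

# Design matrices with columns 1, x, x**2; after rescaling, every
# entry lies in [-1, 1].
A_raw = np.column_stack((np.ones_like(x_raw), x_raw, x_raw**2))
A_scaled = np.column_stack((np.ones_like(x), x, x**2))

print(np.linalg.cond(A_raw))     # huge
print(np.linalg.cond(A_scaled))  # orders of magnitude smaller
```

The fit coefficients then refer to the rescaled variable, so remember to apply the same transformation to any new inputs before evaluating the model.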
One more thing to consider is using NumPy's indexing capabilities. For example:
cam_x = np.array([i[0] for i in camera_vectors])
equivalent to:
cam_x = camera_vectors[:,0]
and you can build your array A as follows:
A_matrix = np.column_stack((np.ones_like(cam_x), cam_x, cam_y, cam_x*cam_y, cam_x**2, cam_y**2))
There is no need to create lists of lists or any loops.
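A self-contained sketch of those two snippets side by side (camera_vectors here is a made-up stand-in for the real data):

```python
import numpy as np

# Hypothetical stand-in for camera_vectors: one (x, y) pair per row.
camera_vectors = np.array([[0.0, 1.0],
                           [2.0, 3.0],
                           [4.0, 5.0]])

# List comprehension vs. slicing: same result, but slicing is
# vectorized and returns a view rather than a copy.
cam_x_loop = np.array([i[0] for i in camera_vectors])
cam_x = camera_vectors[:, 0]
cam_y = camera_vectors[:, 1]
print(np.array_equal(cam_x_loop, cam_x))  # True

# The full design matrix in one call, no loops:
A_matrix = np.column_stack((np.ones_like(cam_x), cam_x, cam_y,
                            cam_x * cam_y, cam_x**2, cam_y**2))
print(A_matrix.shape)  # (3, 6)
```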