I wrote a simple script that projects 3D points into image coordinates given the camera's intrinsic and extrinsic parameters. But when I place the camera at the origin, looking down the negative z axis, and put a 3D point further down the negative z axis, the point is reported as being behind the camera, not in front of it. Here is my script; I have tested it many times.
import numpy as np

def project(point, P):
    # Project a homogeneous 3D point with the 3x4 camera matrix P.
    Hp = P.dot(point)
    if Hp[-1] < 0:
        print('Point is behind camera')
    # Normalize by the homogeneous coordinate to get pixel coordinates.
    Hp = Hp / Hp[-1]
    print(Hp[0][0], 'x', Hp[1][0])
    return Hp[0][0], Hp[1][0]

if __name__ == '__main__':
    # Camera at the origin with identity orientation.
    Rc = np.eye(3)
    C = np.array([0, 0, 0])
    R = Rc.T
    t = -R.dot(C).reshape(3, 1)
    # Intrinsics: focal length 2000, principal point at (2000, 1500).
    K = np.array([
        [2000, 0, 2000],
        [0, 2000, 1500],
        [0, 0, 1],
    ])
    # A point 10 units down the negative z axis, in homogeneous coordinates.
    point = np.array([[0, 0, -10, 1]]).T
    P = K.dot(np.hstack((R, t)))
    project(point, P)
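Running this, I get:

    Point is behind camera
    2000.0 x 1500.0

so the point is flagged as behind the camera even though I expected it to be in front.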
The only thing I can think of is that the identity rotation matrix does not actually correspond to a camera looking down the negative z axis with the up vector along the positive y axis. But I don't see how that can be: if I built Rc with a gluLookAt-style function and gave it a camera at the origin looking down the negative z axis, I would get the identity matrix.
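To illustrate what I mean, here is a minimal sketch of that gluLookAt-style construction (the helper name look_at_rotation is just something I made up for this example; it stacks the camera's right, up, and negated forward axes as the rows of the world-to-camera rotation, the way gluLookAt builds its view matrix):

    import numpy as np

    def look_at_rotation(eye, target, up):
        # Forward axis from eye to target.
        f = np.asarray(target, dtype=float) - np.asarray(eye, dtype=float)
        f /= np.linalg.norm(f)
        # Right axis, then the recomputed (orthogonal) up axis.
        s = np.cross(f, up)
        s /= np.linalg.norm(s)
        u = np.cross(s, f)
        # Rows: right, up, negated forward.
        return np.vstack((s, u, -f))

    # Camera at the origin, looking down the negative z axis, up is +y:
    print(look_at_rotation(eye=[0, 0, 0], target=[0, 0, -1], up=[0, 1, 0]))
    # prints the 3x3 identity matrix, matching the Rc I use above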