I am working on epipolar geometry with OpenCV. (The ultimate goal is to estimate world coordinates, i.e. depth, of the scene from two images taken from different points of view with the same camera.)
However, I am struggling with the estimation of the fundamental matrix and the essential matrix that are used in epipolar geometry.

As shown above, I captured two images from different viewpoints with one camera. The colored dots mark the corresponding points.
Here are the exact coordinates, in case you want to check them:
points1 (left image):
points1.push_back(Point2f(157,223));
points1.push_back(Point2f(190,312));
points1.push_back(Point2f(157,541));
points1.push_back(Point2f(136,443));
points1.push_back(Point2f(355,374));
points1.push_back(Point2f(346,406));
points1.push_back(Point2f(252,410));
points1.push_back(Point2f(254,379));
points2 (right image):
points2.push_back(Point2f(190,188));
points2.push_back(Point2f(303,284));
points2.push_back(Point2f(252,512));
points2.push_back(Point2f(166,420));
points2.push_back(Point2f(422,311));
points2.push_back(Point2f(412,339));
points2.push_back(Point2f(341,366));
points2.push_back(Point2f(345,335));
Using these corresponding points, I computed the fundamental matrix with OpenCV:
Mat FundamentalMatrix = findFundamentalMat(points1, points2, FM_RANSAC, 1.0, 0.99);
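For reference, a self-contained sketch of this step looks roughly like the following (assuming the OpenCV 3.x/4.x C++ API; the extra inlierMask output is only there to see which pairs RANSAC keeps):

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <iostream>
#include <vector>

int main()
{
    // The eight correspondences listed above (left image, then right image).
    std::vector<cv::Point2f> points1 = { {157,223}, {190,312}, {157,541}, {136,443},
                                         {355,374}, {346,406}, {252,410}, {254,379} };
    std::vector<cv::Point2f> points2 = { {190,188}, {303,284}, {252,512}, {166,420},
                                         {422,311}, {412,339}, {341,366}, {345,335} };

    // FM_RANSAC with a 1-pixel threshold and 0.99 confidence, as in the call above.
    cv::Mat inlierMask;
    cv::Mat FundamentalMatrix =
        cv::findFundamentalMat(points1, points2, cv::FM_RANSAC, 1.0, 0.99, inlierMask);

    std::cout << "F =\n" << FundamentalMatrix << std::endl;
    std::cout << "inlier mask: " << inlierMask.t() << std::endl;  // 1 = pair kept by RANSAC
    return 0;
}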

(left: the result with FM_RANSAC, right: the result with CV_FM_8POINT)
The fundamental matrix should satisfy the epipolar constraint x'^T F x = 0,
where x' = (u2, v2, 1)^T and x = (u1, v1, 1)^T,
and (u1, v1) are the coordinates of a point in image 1 while (u2, v2) are the coordinates of the corresponding point in image 2.
However, when I substitute my corresponding points, x'^T F x does not come out as 0 (it is not even close to zero).
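The check I do is roughly the following fragment, placed inside the same main() as the sketch above (it just evaluates x'^T F x for each pair and prints the residual):

    for (size_t i = 0; i < points1.size(); ++i)
    {
        // Homogeneous coordinates x = (u1, v1, 1)^T and x' = (u2, v2, 1)^T.
        cv::Mat x1 = (cv::Mat_<double>(3, 1) << points1[i].x, points1[i].y, 1.0);
        cv::Mat x2 = (cv::Mat_<double>(3, 1) << points2[i].x, points2[i].y, 1.0);
        // Epipolar residual x'^T F x -- should be close to 0 for a correct F.
        cv::Mat residual = x2.t() * FundamentalMatrix * x1;  // 1x1 matrix
        std::cout << "pair " << i << ": " << residual.at<double>(0, 0) << std::endl;
    }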
Because of this, I also computed the fundamental matrix with the 8-point algorithm:
Mat FundamentalMatrix = findFundamentalMat(points1, points2, CV_FM_8POINT);
But the problem is the same with this matrix as well: the epipolar constraint is still far from zero.
Is the fundamental matrix itself wrong, or is the way I am checking it wrong? (I used 8 corresponding points, which I thought should be enough for the estimation.)
Next, I want to compute the essential matrix. I calibrated the camera beforehand, so I have E1 (the extrinsic parameters for view 1), E2 (the extrinsic parameters for view 2), and K (the intrinsic parameters). (That is, E1, E2 and K are already known from the calibration step.)
From the fundamental matrix, the essential matrix should then be E = K^T F K.
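In code this step is just a couple of lines; the intrinsic values below are placeholders standing in for my real calibrated K:

    // Placeholder intrinsic matrix -- the actual values come from calibration.
    cv::Mat K = (cv::Mat_<double>(3, 3) << 800.0,   0.0, 320.0,
                                             0.0, 800.0, 240.0,
                                             0.0,   0.0,   1.0);
    // Essential matrix from the fundamental matrix and the intrinsics: E = K^T F K.
    cv::Mat E = K.t() * FundamentalMatrix * K;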
If E is a valid essential matrix, its SVD should give singular values of the form (1, 1, 0), that is, two equal non-zero values and one zero. But the singular values I get for my E do not look like that at all.
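The check itself is just a cv::SVD call, roughly:

    // A valid essential matrix has singular values (s, s, 0): two equal values and a zero.
    cv::SVD svd(E);
    std::cout << "singular values of E: " << svd.w.t() << std::endl;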
Why does this happen, and what am I doing wrong?