I am working on an independent image-processing and robotics project where, instead of the usual task of a robot detecting colors and picking up an object, the robot tries to detect holes (shaped like different polygons) on a board. For a better understanding of the setup, here is an image:
As you can see, I need to detect these holes, find out their shapes, and then use the robot to fit an object into the holes. I am using a Kinect depth camera to get a depth image, shown below:

I was stuck on how to detect the holes with the camera. First I tried masking out part of the background and part of the foreground based on the depth measurements, but that did not work: at certain camera orientations the holes merge with the board when I apply something like inRange (everything turns completely white). Then I came across the adaptiveThreshold function:
adaptiveThreshold(depth1, depth3, 255, ADAPTIVE_THRESH_GAUSSIAN_C, THRESH_BINARY, 7, -1.0);
Combined with noise removal using blurring, dilation, and Gaussian smoothing, this detected the holes much better, as shown in the image below. Then I used the cvCanny edge detector to get the edges, but so far that has not worked well, as shown in the figure below. After that I tried various feature detectors (SIFT, SURF, ORB, goodFeaturesToTrack) and found that ORB gave the best detection time and features. I then tried to get the relative camera pose of the query image by finding its keypoints, matching them, and passing the good matches to the findHomography function. The results are shown in the diagram below:

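For reference, here is a minimal sketch of the pipeline described above; the function names, blur kernel, Canny thresholds, and match-distance cutoff are values of my own and are assumptions, not necessarily what is in my actual code:

#include <opencv2/opencv.hpp>
using namespace cv;

// Preprocess the depth image: smooth it, adaptive-threshold it, then run Canny.
Mat preprocess(const Mat& depth)
{
    Mat blurred, thresh, edges;
    GaussianBlur(depth, blurred, Size(5, 5), 0);  // kernel size is an assumption
    adaptiveThreshold(blurred, thresh, 255,
                      ADAPTIVE_THRESH_GAUSSIAN_C, THRESH_BINARY, 7, -1.0);
    Canny(thresh, edges, 50, 150);                // thresholds are assumptions
    return edges;
}

// Detect and match ORB keypoints between two images, then estimate a homography.
Mat matchAndFindHomography(const Mat& query, const Mat& train)
{
    Ptr<ORB> orb = ORB::create();
    std::vector<KeyPoint> kp1, kp2;
    Mat desc1, desc2;
    orb->detectAndCompute(query, noArray(), kp1, desc1);
    orb->detectAndCompute(train, noArray(), kp2, desc2);

    BFMatcher matcher(NORM_HAMMING, true);  // cross-check for cleaner matches
    std::vector<DMatch> matches;
    matcher.match(desc1, desc2, matches);

    // Keep only the good matches; the distance cutoff is an assumption.
    std::vector<Point2f> pts1, pts2;
    for (const DMatch& m : matches) {
        if (m.distance < 40) {
            pts1.push_back(kp1[m.queryIdx].pt);
            pts2.push_back(kp2[m.trainIdx].pt);
        }
    }
    return findHomography(pts1, pts2, RANSAC);
}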
In the end, I want to get the relative camera pose between the two images and move the robot to that position using the rotation and translation vectors obtained from solvePnP.
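A sketch of that last step, assuming I already have the matched 2D points, their 3D coordinates (e.g. back-projected from the depth image), and the camera matrix from calibration; the function name and parameters are placeholders:

#include <opencv2/opencv.hpp>
using namespace cv;

// objectPoints: 3D feature positions (e.g. back-projected via the depth image).
// imagePoints:  the corresponding 2D keypoints in the query image.
// cameraMatrix, distCoeffs: from camera calibration (assumed available).
void estimateCameraPose(const std::vector<Point3f>& objectPoints,
                        const std::vector<Point2f>& imagePoints,
                        const Mat& cameraMatrix, const Mat& distCoeffs,
                        Mat& rvec, Mat& tvec)
{
    solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);

    // rvec is a Rodrigues rotation vector; convert to a 3x3 matrix if the
    // robot controller needs one.
    Mat R;
    Rodrigues(rvec, R);
}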
So, is there any other method by which I could improve the quality of the detected holes for keypoint detection and matching?
I also tried edge detection and approxPolyDP, but the approximated shapes are not very good:

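The approximation step looks roughly like this; the epsilon factor is a value I experimented with, not a definitive choice:

#include <opencv2/opencv.hpp>
using namespace cv;

// Approximate each contour of the edge image with a polygon and classify
// the hole shape by the number of vertices.
void classifyHoles(const Mat& edges)
{
    std::vector<std::vector<Point>> contours;
    findContours(edges, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);

    for (const std::vector<Point>& contour : contours) {
        std::vector<Point> poly;
        double eps = 0.02 * arcLength(contour, true);  // epsilon factor is an assumption
        approxPolyDP(contour, poly, eps, true);
        // poly.size() == 3 -> triangle, 4 -> quadrilateral, and so on.
    }
}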
I have tried tuning the input parameters of the thresholding and blurring functions, but this is the best result I can get.
Also, is my approach to setting up the camera the right one?
UPDATE: No matter what I tried, I could not get good, repeatable features to match. Then I read online that the depth image has a low resolution and is meant only for things like masking and obtaining distances. It struck me that the features were poor because of this low-resolution image with its ragged edges, so I thought of detecting the features in the RGB image and using the depth image only to get the distances of those features. The quality of the features I got was literally off the charts. It even found the screws on the board! Here are the keypoints detected with goodFeaturesToTrack:
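The detection call itself is simple; a minimal sketch, where the corner count, quality level, and minimum distance are values I picked:

#include <opencv2/opencv.hpp>
using namespace cv;

// Detect strong corners in the RGB image after converting it to grayscale.
std::vector<Point2f> detectCorners(const Mat& rgb)
{
    Mat gray;
    cvtColor(rgb, gray, COLOR_BGR2GRAY);

    std::vector<Point2f> corners;
    goodFeaturesToTrack(gray, corners,
                        500,    // maxCorners   (assumption)
                        0.01,   // qualityLevel (assumption)
                        10);    // minDistance  (assumption)
    return corners;
}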
Then I ran into another obstacle: getting the distances of the points did not work out properly. I looked for possible reasons, and after a while it occurred to me that the RGB and depth images are misaligned because of the physical offset between the two cameras. You can see this in the first two images. I then searched online for how to compensate for this offset but could not find a working solution.
If any of you could help me compensate for the offset, that would be great!
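From what I have read so far, the compensation amounts to registering the depth image to the RGB camera: back-project each depth pixel to 3D with the depth camera's intrinsics, transform it by the rotation/translation between the two cameras (from stereoCalibrate or published Kinect calibration data), and project it into the RGB image. A minimal per-point sketch, where every camera parameter is an assumed calibration value:

#include <opencv2/opencv.hpp>
using namespace cv;

// Map a pixel (u, v) of the depth image, with metric depth z, to the
// corresponding pixel in the RGB image.
// fx_d, fy_d, cx_d, cy_d: depth camera intrinsics (assumed from calibration)
// fx_c, fy_c, cx_c, cy_c: RGB camera intrinsics   (assumed from calibration)
// R, t: rotation and translation from the depth frame to the RGB frame
Point2f depthPixelToRgb(float u, float v, float z,
                        float fx_d, float fy_d, float cx_d, float cy_d,
                        float fx_c, float fy_c, float cx_c, float cy_c,
                        const Matx33f& R, const Vec3f& t)
{
    // Back-project the depth pixel to a 3D point in the depth camera frame.
    Vec3f p_d((u - cx_d) * z / fx_d, (v - cy_d) * z / fy_d, z);

    // Move the point into the RGB camera frame using the extrinsics.
    Vec3f p_c = R * p_d + t;

    // Project the point onto the RGB image plane.
    return Point2f(p_c[0] * fx_c / p_c[2] + cx_c,
                   p_c[1] * fy_c / p_c[2] + cy_c);
}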
UPDATE: I could not use the goodFeaturesToTrack function effectively. It returns the corners as Point2f; to compute descriptors we need KeyPoints, and converting Point2f to KeyPoint with the snippet below results in the loss of scale and rotation invariance.
for (size_t i = 0; i < corners1.size(); i++) {
    // Wrap each corner in a KeyPoint with a fixed size of 1 and no orientation.
    keypoints_1.push_back(KeyPoint(corners1[i], 1.f));
}
The ugly result of the feature matching is shown below.
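One alternative I am going to try is letting ORB detect the keypoints itself, since it assigns a real size and orientation to each KeyPoint; a minimal sketch, with the feature count as an assumption:

#include <opencv2/opencv.hpp>
using namespace cv;

// Let ORB detect the keypoints itself, so every KeyPoint carries a proper
// size (scale) and angle (orientation) instead of the fixed size of 1
// from the Point2f conversion above.
void detectWithOrb(const Mat& gray,
                   std::vector<KeyPoint>& keypoints, Mat& descriptors)
{
    Ptr<ORB> orb = ORB::create(500);  // feature count is an assumption
    orb->detectAndCompute(gray, noArray(), keypoints, descriptors);
}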
Now I have to experiment with different combinations of feature detectors and descriptors. I will post further updates. It would be very helpful if someone could help with fixing the offset problem.