In order for the iOS application to recognize 1 €, 2 € and 0.50 € coins, I tried to use opencv_createsamples and opencv_traincascade to build my own .xml classifier. So, I cropped 60 images of 2 € coins from a short video, for example:

Then I combined them with random backgrounds using opencv_createsamples. I got 12,000 images similar to this:

and I ran the following commands:
opencv_createsamples -img positives/i.jpg -bg negatives.txt -info i.txt -num 210 -maxidev 100 -maxxangle 0.0 -maxyangle 0.0 -maxzangle 0.9 -bgcolor 0 -bgthresh 0 -w 48 -h 48 (run once for each i from 0 to 60)
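The per-image step above can be scripted as a loop. A minimal sketch, assuming the crops are named positives/0.jpg through positives/60.jpg as in the command; it is shown as a dry run (each command is echoed rather than executed), so remove the leading echo to actually run opencv_createsamples:

```shell
# Dry run: prints one opencv_createsamples invocation per cropped coin image.
# Delete the "echo" to execute for real (requires the OpenCV apps installed).
for i in $(seq 0 60); do
  echo opencv_createsamples -img positives/$i.jpg -bg negatives.txt \
    -info $i.txt -num 210 -maxidev 100 -maxxangle 0.0 -maxyangle 0.0 \
    -maxzangle 0.9 -bgcolor 0 -bgthresh 0 -w 48 -h 48
done
```

With -num 210 per image and 61 images, this yields 61 annotation files and up to 12,810 generated samples, which matches the order of magnitude quoted above.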
cat *.txt > positives.txt (careful: if negatives.txt or positives.txt itself sits in the same directory, this wildcard will pull them in as well)
opencv_createsamples -info positives.txt -bg negatives.txt -vec 2.vec -num 12600 -w 48 -h 48
opencv_traincascade -data final -vec 2.vec -bg negatives.txt -numPos 12000 -numNeg 3000 -numStages 20 -featureType LBP -precalcValBufSize 2048 -precalcIdxBufSize 2048 -minHitRate 0.999 -maxFalseAlarmRate 0.5 -w 48 -h 48
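One thing worth double-checking before launching opencv_traincascade: numPos needs headroom in the .vec file, because every stage after the first consumes some extra positives. A commonly cited rule of thumb is vec_samples >= numPos + (numStages - 1) * (1 - minHitRate) * numPos + S, where S is the number of samples already rejected by earlier stages. A quick sketch of the resulting upper bound on numPos, using the parameters from the command above (ignoring S):

```shell
# Upper bound on numPos for vec=12600, numStages=20, minHitRate=0.999:
# numPos <= vec / (1 + (numStages - 1) * (1 - minHitRate))
awk 'BEGIN { vec = 12600; stages = 20; hr = 0.999;
             printf "%d\n", vec / (1 + (stages - 1) * (1 - hr)) }'
# prints 12365
```

So -numPos 12000 leaves only a few hundred samples of slack here; if training aborts with an "insufficient count" style error, lowering numPos (or enlarging the vec) is the usual fix.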
The training stopped at the 13th stage. As soon as I got cascade.xml, I tried it right away (with detectMultiScale()) on a simple image taken with my smartphone, but nothing was detected:

and if I use one of the images that was used for training as input, it works very well:

I can't understand why this is happening, and it is driving me crazy, mainly because I have been trying to make it work for weeks... Could you tell me where I am making a mistake?
The short program I wrote is here:
#include "opencv2/opencv.hpp"

using namespace cv;

int main(int, char**)
{
    Mat src = imread("2b.jpg");
    Mat src_gray;
    std::vector<cv::Rect> money;
    CascadeClassifier euro2_cascade;

    // Preprocess the same way the training samples were prepared:
    // grayscale plus histogram equalization
    cvtColor(src, src_gray, CV_BGR2GRAY);
    equalizeHist(src_gray, src_gray);

    if (!euro2_cascade.load("cascade.xml")) {
        printf("--(!)Error loading\n");
        return -1;
    }

    euro2_cascade.detectMultiScale(src_gray, money, 1.1, 3,
                                   0 | CASCADE_SCALE_IMAGE,
                                   cv::Size(10, 10), cv::Size(2000, 2000));
    printf("%d\n", int(money.size()));

    // Draw an ellipse around each detection
    for (size_t i = 0; i < money.size(); i++) {
        cv::Point center(money[i].x + money[i].width * 0.5,
                         money[i].y + money[i].height * 0.5);
        ellipse(src, center,
                cv::Size(money[i].width * 0.5, money[i].height * 0.5),
                0, 0, 360, Scalar(255, 0, 255), 4, 8, 0);
    }

    imwrite("result.jpg", src);
}
I also tried to take into account 
UPDATE 2
As someone suggested, following this tutorial, I created a .vec file using only the cropped positive images, the ones that contain nothing but a coin. I used this command:
opencv_createsamples -vec i.vec -w 48 -h 48 -num 210 -img ./positives/i.jpg -maxidev 100 -maxxangle 0 -maxyangle 0 -maxzangle 0.9 -bgcolor 0 -bgthresh 0 (run once for each i from 0 to 60)
So, as you can see, no background image is used to create the samples. Then, after downloading mergevec.py, I merged all the vector files. Now I am going to start another LBP training... I hope it works better.
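For reference, a sketch of the merging step. The directory name vec_samples and the mergevec.py flags (-v for the input directory, -o for the output file, as in the commonly used mergevec repository) are assumptions; the mv and python lines are echoed as a dry run, so drop the echo to execute:

```shell
# Collect the 61 per-image .vec files in one directory, then merge them.
mkdir -p vec_samples
echo mv ./*.vec vec_samples/                          # dry run
echo python mergevec.py -v vec_samples -o positives.vec  # dry run
```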