Access a specific RGB pixel value in OpenCV

I have searched the internet and Stack Overflow thoroughly, but I did not find the answer to my question:

How can I get and set the RGB value of a specific pixel (given by its x, y coordinates) in OpenCV? Importantly, I am writing in C++ and the image is stored in a cv::Mat variable. I know there is an IplImage() operator, but IplImage is not very convenient to use; as far as I know, it comes from the C API.

Yes, I know that pixel access in OpenCV 2.2 has already been covered, but that was only about single-channel (grayscale) bitmaps.

EDIT:

Thank you very much for all your answers. I see that there are many ways to get/set the RGB value of a pixel. I got one more idea from my close friend, thanks Benny! It is very simple and effective. I think it is a matter of taste which one you choose.

Mat image; 

(...)

 Point3_<uchar>* p = image.ptr<Point3_<uchar> >(y,x); 

You can then read/write the RGB values with:

    p->x   // B
    p->y   // G
    p->z   // R
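For completeness, a minimal sketch of this approach, assuming an 8-bit, 3-channel BGR image and in-range coordinates (the file name and coordinates below are just placeholders):

    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::Mat image = cv::imread("input.png");   // loads as 8-bit, 3-channel BGR by default
        if (image.empty()) return 1;

        int x = 10, y = 20;                        // must lie inside the image
        cv::Point3_<uchar>* p = image.ptr<cv::Point3_<uchar> >(y, x);

        uchar blue  = p->x;   // B
        uchar green = p->y;   // G
        uchar red   = p->z;   // R

        p->z = 255;           // set the red channel of that pixel
        return 0;
    }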
+70
c++ opencv
Jan 19 '12 at 20:35
6 answers

Try the following:

 cv::Mat image = ...do some stuff...; 

image.at<cv::Vec3b>(y,x) gives you the pixel as a cv::Vec3b; note that OpenCV usually stores it in BGR order rather than RGB. To set a new value:

    image.at<cv::Vec3b>(y,x)[0] = newval[0];
    image.at<cv::Vec3b>(y,x)[1] = newval[1];
    image.at<cv::Vec3b>(y,x)[2] = newval[2];
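Putting it together, a small sketch (the file name, coordinates, and new value are placeholders, and the image is assumed to be 8-bit, 3-channel):

    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::Mat image = cv::imread("input.png");       // 8-bit, 3-channel BGR by default
        if (image.empty()) return 1;

        int x = 5, y = 7;                              // must lie inside the image
        cv::Vec3b pixel = image.at<cv::Vec3b>(y, x);   // read: pixel[0] = B, pixel[1] = G, pixel[2] = R

        cv::Vec3b newval(0, 0, 255);                   // pure red in BGR order
        image.at<cv::Vec3b>(y, x) = newval;            // write the whole pixel at once
        return 0;
    }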
+90
Jan 19 '12 at 20:55

The low-level way is direct access to the matrix data. For an RGB image (which OpenCV usually stores as BGR), and assuming your cv::Mat variable is called frame, you can get the blue value at location (x, y) (counting from the top left) as follows:

 frame.data[frame.channels()*(frame.cols*y + x)]; 

Similarly, to get B, G and R:

    uchar b = frame.data[frame.channels()*(frame.cols*y + x) + 0];
    uchar g = frame.data[frame.channels()*(frame.cols*y + x) + 1];
    uchar r = frame.data[frame.channels()*(frame.cols*y + x) + 2];

Note that this code assumes the matrix data is continuous, i.e. that the row step equals the image width times the number of channels.
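If the matrix might not be continuous (for example, a ROI of a larger image), a safer variant, sketched below, indexes each row through frame.ptr(), which accounts for the real row step:

    uchar b = frame.ptr<uchar>(y)[frame.channels() * x + 0];
    uchar g = frame.ptr<uchar>(y)[frame.channels() * x + 1];
    uchar r = frame.ptr<uchar>(y)[frame.channels() * x + 2];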

+16
Jan 19 '12 at 20:52

Here is a piece of code for people who have this kind of problem; I am sharing my code so you can use it directly (it assumes src_, vHeight_, vWidth_, and windowName_ are defined elsewhere). Note that OpenCV stores pixels as BGR.

    cv::Mat vImage_;
    if (src_)
    {
        // Allocate a 32-bit float, 3-channel image before writing into it.
        vImage_.create(vHeight_, vWidth_, CV_32FC3);

        cv::Vec3f vec_;
        for (int i = 0; i < vHeight_; i++)
        {
            for (int j = 0; j < vWidth_; j++)
            {
                // Scale each channel to [0, 1]. Please note that OpenCV stores pixels as BGR.
                vec_ = cv::Vec3f((*src_)[0] / 255.0f, (*src_)[1] / 255.0f, (*src_)[2] / 255.0f);
                // Write into row (vHeight_ - 1 - i), i.e. flip the image vertically while copying.
                vImage_.at<cv::Vec3f>(vHeight_ - 1 - i, j) = vec_;
                ++src_;
            }
        }
    }

    if (!vImage_.data)   // check for invalid input
        printf("Failed to read image by OpenCV.\n");
    else
    {
        cv::namedWindow(windowName_, CV_WINDOW_AUTOSIZE);
        cv::imshow(windowName_, vImage_);   // show the image
    }
+2
Aug 26 '13 at 0:34

The current version of OpenCV allows the cv::Mat::at function to handle 3 dimensions, so for a three-dimensional Mat object m, m.at<uchar>(0,0,0) should work.
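For illustration, a minimal sketch of a genuinely three-dimensional Mat (which is different from a two-dimensional, 3-channel image; the sizes below are arbitrary):

    int sizes[3] = {4, 5, 3};                  // a 4 x 5 x 3, single-channel 8-bit matrix
    cv::Mat m(3, sizes, CV_8U, cv::Scalar(0));

    m.at<uchar>(0, 0, 2) = 255;                // write one element
    uchar v = m.at<uchar>(0, 0, 2);            // read it back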

0
Jan 19 '18
    uchar* value = img2.data;   // pointer to the first byte of the pixel data (continuous BGR layout assumed)
    int r = 0;                  // channel index within the current pixel: 0 = B, 1 = G, 2 = R
    for (size_t i = 0; i < (size_t)img2.rows * img2.cols * img2.channels(); i++)
    {
        if (r > 2) r = 0;
        if (r == 0) value[i] = 0;     // blue
        if (r == 1) value[i] = 0;     // green
        if (r == 2) value[i] = 255;   // red
        r++;
    }
    // The result is an image filled with pure red (BGR = 0, 0, 255).
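As a side note, the same red fill can be written in one call, assuming img2 is an 8-bit BGR image:

    img2.setTo(cv::Scalar(0, 0, 255));   // B = 0, G = 0, R = 255 for every pixel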
0
Jan 12 '16 at 7:15
    const double pi = boost::math::constants::pi<double>();

    // Colors the pixels lying inside the given rotated ellipse green (sets the G channel to 255).
    cv::Mat distance2ellipse(cv::Mat image, cv::RotatedRect ellipse)
    {
        float distance = 2.0f;
        float angle = ellipse.angle;
        cv::Point ellipse_center = ellipse.center;
        float major_axis = ellipse.size.width / 2;
        float minor_axis = ellipse.size.height / 2;

        for (int x = 0; x < image.cols; x++)
        {
            for (int y = 0; y < image.rows; y++)
            {
                // Rotate the point into the ellipse's coordinate frame.
                auto u =  cos(angle*pi/180) * (x - ellipse_center.x) + sin(angle*pi/180) * (y - ellipse_center.y);
                auto v = -sin(angle*pi/180) * (x - ellipse_center.x) + cos(angle*pi/180) * (y - ellipse_center.y);

                // Normalized ellipse equation: a value <= 1 means the point lies inside the ellipse.
                distance = (u/major_axis)*(u/major_axis) + (v/minor_axis)*(v/minor_axis);

                if (distance <= 1)
                    image.at<cv::Vec3b>(y, x)[1] = 255;   // set the green channel
            }
        }
        return image;
    }
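A hypothetical usage sketch (the file name, center, axis lengths, and angle are only illustrative):

    cv::Mat img = cv::imread("input.png");              // assumed 8-bit, 3-channel BGR
    cv::RotatedRect ell(cv::Point2f(120.f, 80.f),       // center
                        cv::Size2f(100.f, 60.f),        // full width and height of the ellipse
                        30.f);                          // rotation angle in degrees
    img = distance2ellipse(img, ell);                   // pixels inside the ellipse get a green channel of 255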
-5
Jan 30 '14 at 8:04


