Quickly calculate "dirty" areas between two similar images

I have two very similar images (specifically, two screenshots), and I'm trying to find the best (fastest) way to determine which areas of the image have changed, ideally as an array of rectangles covering the changed regions.

A few criteria:

  • It need not be precise, as long as it includes all changes, however small (i.e. it is acceptable for a one-pixel change to be reported as a much larger dirty area)
  • It should be fast (ideally, comparing two 1920x1080 images should take no more than 20 ms on a typical consumer machine bought today)
  • It should not require a tunable threshold (though a solution that optionally supports one would be a nice bonus)
  • It can be assumed that the input images are always perfect, lossless captures

I already have two working solutions, but neither is fast enough: one compares the images pixel by pixel, which is of course very slow; for the other, I split both images into tiles of various sizes and computed a checksum for each tile, but that is also fairly slow.
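
For reference, here is a simplified sketch of the tiling approach (the 64-pixel tile size is arbitrary, I'm assuming raw 24bpp buffers with no row padding, and this version compares tile bytes directly instead of hashing them):

using System;
using System.Collections.Generic;
using System.Drawing;

static class DirtyTiles
{
    const int TileSize = 64; // arbitrary; I tried several sizes

    // Returns one rectangle per tile whose bytes differ between the images.
    public static List<Rectangle> Find(byte[] a, byte[] b, int width, int height)
    {
        var dirty = new List<Rectangle>();
        for (int ty = 0; ty < height; ty += TileSize)
        {
            for (int tx = 0; tx < width; tx += TileSize)
            {
                int w = Math.Min(TileSize, width - tx);
                int h = Math.Min(TileSize, height - ty);
                if (!TileEqual(a, b, tx, ty, w, h, width))
                    dirty.Add(new Rectangle(tx, ty, w, h));
            }
        }
        return dirty;
    }

    static bool TileEqual(byte[] a, byte[] b, int x, int y, int w, int h, int width)
    {
        for (int row = 0; row < h; row++)
        {
            int offset = ((y + row) * width + x) * 3; // 3 bytes per pixel
            for (int i = 0; i < w * 3; i++)
                if (a[offset + i] != b[offset + i])
                    return false;
        }
        return true;
    }
}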

For those wondering what I'm building: it's a sort of dumb (and slower) remote desktop that can be used in a browser without any plugins.

+8
c# image-processing
2 answers

You will need to do a pixel-by-pixel comparison, but I don't think it has to be that slow. For example, this code:

int size = 1920 * 1080 * 3; // 24-bit RGB
byte[] image1 = new byte[size];
byte[] image2 = new byte[size];
byte[] diff = new byte[size];

var sw = new System.Diagnostics.Stopwatch();
sw.Start();
for (int i = 0; i < size; i++)
{
    // Non-zero means this channel changed between the two images.
    diff[i] = (byte)(image1[i] - image2[i]);
}
sw.Stop();
Console.WriteLine(sw.ElapsedMilliseconds);

runs in about 40 ms on my laptop, and in under 20 ms if the images are grayscale only. With real image data, diff[i] != 0 indicates a change between the two images.
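
If you need rectangles rather than a raw diff buffer, you could then fold the diff into a bounding rectangle of all changed pixels, something like this (a rough sketch, again assuming 24bpp data with no row padding):

using System.Drawing;

static Rectangle? BoundingDirtyRect(byte[] diff, int width, int height)
{
    int minX = int.MaxValue, minY = int.MaxValue, maxX = -1, maxY = -1;
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            int i = (y * width + x) * 3;
            // A pixel is dirty if any of its three channels changed.
            if (diff[i] != 0 || diff[i + 1] != 0 || diff[i + 2] != 0)
            {
                if (x < minX) minX = x;
                if (x > maxX) maxX = x;
                if (y < minY) minY = y;
                if (y > maxY) maxY = y;
            }
        }
    }
    return maxX < 0 ? (Rectangle?)null
                    : new Rectangle(minX, minY, maxX - minX + 1, maxY - minY + 1);
}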

Your solution may be slow if you read pixel values using Bitmap.GetPixel or some other slow method. If so, I suggest looking into Bitmap.LockBits or using unsafe code.
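
For example, a minimal LockBits sketch (assuming the bitmaps can be read as Format24bppRgb) that copies the raw pixel bytes into an array you can diff as above:

using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

static byte[] GetPixelBytes(Bitmap bmp)
{
    var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
    BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadOnly,
                                   PixelFormat.Format24bppRgb);
    try
    {
        // Stride can include padding at the end of each row, so copy the
        // whole buffer rather than assuming Width * 3 bytes per row.
        var bytes = new byte[data.Stride * data.Height];
        Marshal.Copy(data.Scan0, bytes, 0, bytes.Length);
        return bytes;
    }
    finally
    {
        bmp.UnlockBits(data);
    }
}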

+3

My previous answer was deleted because of its formatting, so I will write it again as best I can.

I asked whether you had considered using the GPU to compute the differences between the two images. This approach can significantly reduce computation time, since GPUs are massively parallel compared to CPUs.

Using C#, you can try XNA for this purpose. In fact, I did a little test using a single HLSL pass (HLSL is the language used to program the GPU under Direct3D) with a pixel shader:

 texture texture1;
 texture texture2;

 sampler textureSampler1 = sampler_state { Texture = <texture1>; };
 sampler textureSampler2 = sampler_state { Texture = <texture2>; };

 float4 pixelShaderFunction(float2 TextureCoordinate : TEXCOORD0) : COLOR0
 {
     float4 color1 = tex2D(textureSampler1, TextureCoordinate);
     float4 color2 = tex2D(textureSampler2, TextureCoordinate);

     // Output black where the pixels match and white where they differ
     // (colors are normalized to [0, 1] in HLSL).
     if ((color1.r == color2.r) && (color1.g == color2.g) && (color1.b == color2.b))
     {
         color1.rgb = 0;
     }
     else
     {
         color1.rgb = 1;
     }
     return color1;
 }

 technique Compare
 {
     pass Pass1
     {
         PixelShader = compile ps_2_0 pixelShaderFunction();
     }
 }

The XNA side of the computation is very simple. Starting from the basic XNA project template in Visual Studio, I simply wrote the Draw function as:

 protected override void Draw(GameTime gameTime)
 {
     Stopwatch sw = new Stopwatch();
     sw.Start();

     GraphicsDevice.Clear(Color.CornflowerBlue);

     // Hand both screenshots to the effect parameters declared in the .fx file.
     e.Parameters["texture1"].SetValue(im1);
     e.Parameters["texture2"].SetValue(im2);

     // Draw a full-size quad through the comparison shader.
     spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
                       SamplerState.LinearClamp, DepthStencilState.Default,
                       RasterizerState.CullNone, e);
     spriteBatch.Draw(im1, new Vector2(0, 0), Color.White);
     spriteBatch.End();

     base.Draw(gameTime);

     sw.Stop();
     Console.WriteLine(sw.ElapsedMilliseconds);
 }

im1 and im2 are the two 1920x1080 BMP images loaded as Texture2D objects, and e is the .fx file loaded as an Effect.
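
For completeness, the loading step looks roughly like this (asset names here are illustrative, assuming the two screenshots and the .fx file were added to the content project):

 protected override void LoadContent()
 {
     spriteBatch = new SpriteBatch(GraphicsDevice);
     im1 = Content.Load<Texture2D>("im1");  // first screenshot
     im2 = Content.Load<Texture2D>("im2");  // second screenshot
     e = Content.Load<Effect>("Compare");   // the HLSL effect above
 }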

Using this technique, I get a computation time of 17-18 ms on a fairly ordinary computer (a laptop with an i5-2410M at 2.3 GHz, 4 GB of RAM and an Nvidia GeForce GT 525M).
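
Note that this draws straight to the backbuffer; to actually use the result you would render into an off-screen target and read it back, along these lines (a sketch assuming XNA 4.0; this readback is not included in the timing above):

 // Render the comparison into an off-screen target instead of the backbuffer.
 RenderTarget2D target = new RenderTarget2D(GraphicsDevice, 1920, 1080);
 GraphicsDevice.SetRenderTarget(target);
 GraphicsDevice.Clear(Color.Black);

 spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
                   SamplerState.PointClamp, DepthStencilState.Default,
                   RasterizerState.CullNone, e);
 spriteBatch.Draw(im1, Vector2.Zero, Color.White);
 spriteBatch.End();

 GraphicsDevice.SetRenderTarget(null);

 // Copy the pixels back to the CPU; non-black pixels mark changed areas.
 Color[] pixels = new Color[1920 * 1080];
 target.GetData(pixels);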

Here is the program's output with the difference image shown (sorry, it is heavily scaled because I do not have a 1920x1080 screen :>), along with the two input images im1 and im2, which have slight differences between them: http://img526.imageshack.us/img526/2345/computationtime.jpg

I am new to GPU programming, so if I made a big mistake in how the time should be measured, or anything else, feel free to say so!

Edit: One thing to note: I just read that "it will be a non-trivial operation, since GPUs do not handle branching very well."

Best wishes

+3
