Bilinear Interpolation - DirectX vs. GDI+

I have a C# application for which I've written GDI+ code that uses Bitmap/TextureBrush rendering to present 2D images, which can have various image processing functions applied. This code is a new path in an application that mimics existing DX9 code, and they share a common library to perform all vector and matrix (e.g. ViewToWorld/WorldToView) operations. My test bed consists of DX9 output images that I compare against the output of the new GDI+ code.

A simple test case that renders to a viewport matching the Bitmap dimensions (i.e. no scaling or panning) does match pixel-perfect (no binary diff) - but as soon as the image is zoomed in (magnified), I get very minor differences in 5-10% of the pixels. The magnitude of the difference is 1 (occasionally 2)/256. I suspect this is due to interpolation differences.

Question: For a DX9 orthographic projection (and identity world space), with the camera perpendicular to and centered on a textured quad, is it reasonable to expect DirectX.Direct3D.TextureFilter.Linear to generate identical output to a GDI+ TextureBrush-filled rectangle/polygon when using the System.Drawing.Drawing2D.InterpolationMode.Bilinear setting?

For this (magnification) case, the DX9 code uses (MinFilter and MipFilter are set similarly):
Device.SetSamplerState(0, SamplerStageStates.MagFilter, (int)TextureFilter.Linear);

and the GDI+ path uses: g.InterpolationMode = InterpolationMode.Bilinear;
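For reference, here is a minimal sketch of how a GDI+ TextureBrush magnification test like this might look (the class, method, and variable names are placeholders, not the poster's actual code):

using System.Drawing;
using System.Drawing.Drawing2D;

static class GdiPlusPath
{
    // Draws 'source' magnified into 'dest' with bilinear filtering.
    // Placeholder sketch only - names and structure are assumed.
    public static void DrawMagnified(Graphics g, Bitmap source, Rectangle dest)
    {
        g.InterpolationMode = InterpolationMode.Bilinear;   // or HighQualityBilinear

        using (var brush = new TextureBrush(source))
        {
            // Scale texture space up to the destination size, then move it into place.
            brush.TranslateTransform(dest.X, dest.Y);
            brush.ScaleTransform((float)dest.Width / source.Width,
                                 (float)dest.Height / source.Height);
            g.FillRectangle(brush, dest);
        }
    }
}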

I thought "bilinear interpolation" was a fairly specific filter definition, but then I noticed that GDI+ has another option, "HighQualityBilinear" (which I've tried, with no difference - which makes sense given its description, "adds prefiltering for shrinking").

Follow-up question: Is it reasonable to expect pixel-for-pixel matching output between DirectX and GDI+ (assuming all external coordinates passed in are equal)? If not, why not?

Clarification: The images I'm using are opaque grayscale (R = G = B, A = 1) using Format32bppPArgb.

Finally, there are a number of other APIs I could be using (Direct2D, WPF, GDI, etc.) - and this question applies generally to comparing the output of "equivalent" bilinear-interpolated images across any two of these. Thanks!

3 answers

DirectX runs mostly on the GPU, and DX9 may be running shaders. GDI+ runs on completely different algorithms. I don't think it's reasonable to expect the two to produce exactly matching pixels.

I would expect DX9 to have better quality than GDI+, which is a step up from the old GDI but not by much. GDI+ has long been known to have problems with anti-aliasing lines and also with maintaining quality when scaling images (which seems to be your problem). To get something similar in quality to the latest generation of GPU texture processing, you would need to move to WPF graphics. That gives quality similar to DX.

WPF also uses the GPU (if available) and falls back to software rendering (if there is no GPU), so the output of the GPU and software rendering paths is fairly close.

EDIT: Although this was selected as the answer, it is only an early attempt at an explanation and does not really address the true cause. The reader is referred to the discussions laid out in the comments on the question and the answers.

Why do you make the assumption that they use the same formula?

Even if they do use the same formula, given that the implementations are different, would you expect the output to be the same?

At the end of the day, this code is designed to work for perception, not mathematical accuracy. You can get that with CUDA if you want, though.

Rather than being surprised that you get different results, I would be very surprised if you got pixel-perfect matches.

The way they represent color is different, too... I know that nvidia uses a float (possibly a double) to represent color, whereas GDI uses an int, I believe.

http://en.wikipedia.org/wiki/GPGPU

Shader Model 2.0 appeared with DX9, which is when the color implementation switched from int to 24- and 32-bit floats.
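To make the int-versus-float point concrete, here's a hypothetical illustration (neither DX9 nor GDI+ publishes its exact arithmetic) of how the same 50/50 blend of two 8-bit grays can land on neighboring values depending on where the math is done and how the result is converted back to 8 bits:

// Hypothetical illustration only - not either API's actual code.
byte a = 100, b = 103;
float t = 0.5f;

// Blend in floating point, round to nearest at the end:
byte viaFloat = (byte)Math.Round(a + (b - a) * t);      // 101.5 -> 102

// Blend in 8.8 fixed point with truncation, as an optimized path might:
int w = (int)(t * 256);                                 // weight = 128
byte viaFixed = (byte)((a * (256 - w) + b * w) >> 8);   // 101.5 -> 101

// Same inputs, same nominal weight, and the outputs differ by 1/256 -
// the same effect compounds across the two axes of a full bilinear sample.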

Try comparing ATI/AMD rendering with nvidia rendering and you can clearly see that the color is very different. I first noticed this in Quake 2... the difference between the two cards was staggering - of course this is due to many things, the least of which is their bilinear interpolation implementation.

EDIT: The information about how this is specified came after I answered. In any case, I think the data types used to store it will be different no matter how you specify it. Moreover, float implementations differ from one another. Maybe I'm wrong, but I'm pretty sure C# implements float differently from the C compiler nvidia uses. (And that assumes GDI+ doesn't just convert the float to the equivalent int....)

Even if I am wrong, I would consider it exceptional to expect two different implementations of the algorithm to be identical. They are optimized for speed, and the differences in optimization translate directly into differences in image quality, since that speed comes from different approaches to cutting corners/approximating.

There are two possibilities for round-off differences. The first is the obvious one, when the RGB value is computed as a fraction of the values on either side. The second is more subtle: computing the ratio to use when determining the fraction between the two points.

In fact, I would be very surprised if two different implementations of the algorithm matched pixel for pixel - there are so many places for a +/-1 difference to occur. Without the exact implementation details of both methods it's impossible to be more precise than that.
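As a rough sketch of where those two round-off points sit, here is a simplified 1D magnification step (neither implementation documents its exact arithmetic, so the coordinate convention and rounding below are assumptions):

// Hypothetical 1D sketch marking the two round-off sources described above.
static byte SampleLinear(byte[] src, int dstX, float scale)
{
    // Round-off source #2 (the ratio): mapping the destination pixel back to a
    // source coordinate. A different half-pixel convention or float precision
    // here shifts the interpolation weight slightly.
    float srcX = (dstX + 0.5f) / scale - 0.5f;

    int x0 = (int)Math.Floor(srcX);
    float frac = srcX - x0;                      // interpolation weight

    int x1 = Math.Min(x0 + 1, src.Length - 1);
    x0 = Math.Max(x0, 0);

    // Round-off source #1 (the value): blending the neighbors and converting
    // back to 8 bits. Truncating vs. rounding here is already worth +/-1.
    float v = src[x0] * (1 - frac) + src[x1] * frac;
    return (byte)(v + 0.5f);                     // round to nearest
}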
