Motion Blur of an OpenGL Dynamic Object

I am following the GPU Gems 3 tutorial on how to blur based on camera movement. However, I want to implement blur based on moving objects. The solution is presented in the article (see the quotation below), but I am curious how to actually implement it.

At the moment, I multiply the object's model matrix by the current view-projection matrix, and separately by the previous frame's view-projection matrix, and then pass both transformed positions to the pixel shader to compute the velocity there, rather than passing just the view-projection matrices.

If this really is the correct method, why can't I simply pass the combined model-view-projection matrix instead? I would have assumed they would produce the same values.

GPU Gems 3 on Motion Blur

To generate a velocity texture for rigid dynamic objects, transform the object by using the current frame's view-projection matrix and the last frame's view-projection matrix, and then compute the difference in viewport positions the same way as for the post-processing pass. This velocity should be computed per-pixel by passing both transformed positions down to the pixel shader and computing the velocity there.
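The per-object velocity pass described in the quotation can be sketched in GLSL roughly as follows. This is a minimal illustration, not the article's actual code; all uniform and varying names here are my own, and I assume the application uploads both the current and the previous frame's combined model-view-projection matrices.

```glsl
// Vertex shader (sketch): transform the vertex by both this frame's and
// last frame's model-view-projection matrices, and pass both clip-space
// positions down to the fragment shader.
uniform mat4 uCurrMVP;  // projection * view * model, this frame (illustrative name)
uniform mat4 uPrevMVP;  // projection * view * model, last frame (illustrative name)

attribute vec3 aPosition;
varying vec4 vCurrPos;
varying vec4 vPrevPos;

void main() {
    vCurrPos = uCurrMVP * vec4(aPosition, 1.0);
    vPrevPos = uPrevMVP * vec4(aPosition, 1.0);
    gl_Position = vCurrPos;
}
```

```glsl
// Fragment shader (sketch): perspective-divide both interpolated positions
// and take the difference in normalized device coordinates as the velocity.
varying vec4 vCurrPos;
varying vec4 vPrevPos;

void main() {
    vec2 curr = vCurrPos.xy / vCurrPos.w;
    vec2 prev = vPrevPos.xy / vPrevPos.w;
    vec2 velocity = (curr - prev) * 0.5;  // scale NDC delta into texture space
    gl_FragColor = vec4(velocity, 0.0, 1.0);
}
```

Note that this also answers why both positions are passed down rather than divided in the vertex shader: the perspective divide is non-linear, so it must happen per-pixel, after interpolation, for the velocity to be correct across the triangle.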

1 answer

Check out the research I did on this topic a few months ago: http://www.stevenlu.net/files/motion_blur_2d/Fragment_shader_dynamic_blur.pdf

standard rendering of tower smash
(source: stevenlu.net)

blur rendering of tower smash
(source: stevenlu.net)

Unfortunately, I did not use textured objects when creating this material, but use your imagination. I'm working on a game engine, so when it finally sees the light of day you can be sure I will come back and leave breadcrumbs here. It primarily covers how to achieve this effect in 2D, and in cases where objects do not overlap. It turns out there is no clean way to sweep samples in a fragment shader to produce an exact blur: while the effect approaches pixel-perfection as the sample count increases, the geometry that must be generated to cover the swept area has to be constructed by hand using some rather ugly methods.

In full 3D, it is quite difficult to determine which pixels a dynamic object will sweep over during a frame. Even with static geometry and a moving camera, the solution proposed in the GPU Gems article is not correct when you move past things quickly, because it cannot handle the blending required over the area swept out by something moving across the screen...

However, if this approximation, which neglects the sweep, is sufficient (and it may well be), then the way to extend it to dynamic objects is to account for their motion. You will of course need to work out the details, but look at lines 2 and 5 in the second block of code in the article you linked: they are the current and previous screen-space "positions". You just need to somehow pass in matrices that let you compute each pixel's previous position, taking the dynamic motion of your object into account.

This should not be too bad. In the pass where you render your dynamic object, you send an additional matrix representing its motion over the last frame.
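Concretely, that "additional matrix" can just be the object's model matrix as it was last frame, cached by the application each frame. A hedged GLSL sketch of the velocity-pass vertex shader under that assumption (all names here are illustrative, not from the article):

```glsl
// Velocity-pass vertex shader (sketch) for a rigid dynamic object.
// Besides the usual matrices, the application uploads the model matrix
// it saved from the previous frame, so the object's own motion is
// captured in addition to the camera's.
uniform mat4 uViewProj;      // projection * view, this frame
uniform mat4 uPrevViewProj;  // projection * view, last frame
uniform mat4 uModel;         // object's model matrix, this frame
uniform mat4 uPrevModel;     // object's model matrix, cached from last frame

attribute vec3 aPosition;
varying vec4 vCurrPos;
varying vec4 vPrevPos;

void main() {
    vCurrPos = uViewProj * uModel * vec4(aPosition, 1.0);
    vPrevPos = uPrevViewProj * uPrevModel * vec4(aPosition, 1.0);
    gl_Position = vCurrPos;
}
```

For static geometry, uPrevModel equals uModel and this degenerates to the camera-only case from the article, which is why passing a single combined MVP is not enough for moving objects: the object's transform differs between frames as well as the camera's.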

Update: I found that this paper describes an elegant and efficient approach that achieves fairly high-quality, physically plausible blur in a 3D pipeline. It will be hard to do much better than this, given the constraint of rendering the full scene no more than once for performance reasons.

I noticed that in some of their examples the quality of the velocity buffer could be better: for example, a spinning wheel should show some curvature in velocity space. I believe that if these cases were handled properly (custom fragment shaders to render the velocity may be required...), the results would look intuitively correct, like the spinning cube seen above from my 2D exploration of dynamic motion blur.

