Multiple shaders versus multiple techniques in DirectX

I have looked through all the Rastertek DirectX tutorials, which, by the way, are very good, and the author tends to use a separate shader for each different task. In a later tutorial he even introduces a shader manager class.

Based on some other sources, however, I believe it would be more efficient to use a single shader with several techniques inside it. Are multiple shaders used in the tutorials just for simplicity, or are there scenarios in which several shaders really would be better than one big one?

+4
3 answers

I think they are used in the tutorials for simplicity.

Whether to group shaders into techniques or keep them separate is a design decision. There are scenarios in which having multiple shaders is useful, because you can combine them however you like.

Note that as of DirectX 11 on Windows 8, the D3DX library is deprecated, so you will find this changing. You can see an example in the source code of the DirectX Tool Kit, http://directxtk.codeplex.com/, and how it handles its effects.

Usually you write distinct vertex shaders, pixel shaders, and so on; a technique simply groups them together, so when you compile a shader file, a specific vertex and pixel shader are compiled for that technique. The effect object then takes care of setting that vertex/pixel shader pair on the device whenever technique X with pass Y is selected.

You can also do this manually, for example compile only a pixel shader and set it on the device yourself.
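
As a rough sketch of how that manual route might look with the plain Direct3D 11 API (the pixel.hlsl file name and PSMain entry point are placeholders, not something from the tutorials):

    #include <d3d11.h>
    #include <d3dcompiler.h>
    #include <wrl/client.h>
    #pragma comment(lib, "d3dcompiler.lib")

    using Microsoft::WRL::ComPtr;

    // Compile just a pixel shader and bind it to the pipeline ourselves,
    // instead of letting an effect technique do it for us.
    HRESULT SetPixelShaderManually(ID3D11Device* device, ID3D11DeviceContext* context)
    {
        ComPtr<ID3DBlob> bytecode, errors;
        HRESULT hr = D3DCompileFromFile(L"pixel.hlsl", nullptr, nullptr,
                                        "PSMain", "ps_5_0", 0, 0,
                                        &bytecode, &errors);
        if (FAILED(hr))
            return hr;

        ComPtr<ID3D11PixelShader> ps;
        hr = device->CreatePixelShader(bytecode->GetBufferPointer(),
                                       bytecode->GetBufferSize(), nullptr, &ps);
        if (FAILED(hr))
            return hr;

        // Only the pixel shader stage is touched; whatever vertex shader is
        // currently bound stays in place. (In real code you would keep the
        // ID3D11PixelShader around rather than recompiling per frame.)
        context->PSSetShader(ps.Get(), nullptr, 0);
        return S_OK;
    }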

+4

Basically the answer is: it depends.

The Effects framework has the big advantage that you can set up the whole pipeline in one go using Pass->Apply, which can make things very easy, but it can also lead to pretty slow code if not used properly, which is probably why it is being abandoned. That said, you can do just as badly or even worse with separate shaders; DirectXTK is a pretty good example of that (it is really only adequate for phone development).
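
For context, the Pass->Apply flow with the Effects11 library looks roughly like this (assuming an already-created ID3DX11Effect and a technique that happens to be named "Render" in the .fx file):

    #include <d3dx11effect.h>   // Effects11 library header

    // One Apply call sets every stage and state the pass declares in the .fx file.
    void DrawWithEffect(ID3DX11Effect* effect, ID3D11DeviceContext* context, UINT indexCount)
    {
        ID3DX11EffectTechnique* tech = effect->GetTechniqueByName("Render");
        ID3DX11EffectPass* pass = tech->GetPassByIndex(0);

        pass->Apply(0, context);                 // binds VS/PS and pipeline state
        context->DrawIndexed(indexCount, 0, 0);  // geometry is assumed to be bound already
    }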

In most cases the Effects framework makes a few extra API calls that you could avoid with separate shaders (which, I agree, can be significant if you are draw-call bound, but then you should also look at optimizing that part, for example by sorting draws so a technique is set less often). With separate shaders you have to manage all the states and constant buffers yourself, and you can possibly do that more efficiently if you know what you are doing.

What I really like about the fx framework is its very nice reflection and its use of semantics, which can be really useful during development (for example, if you declare float4x4 tP : PROJECTION, your engine can automatically bind the camera projection to the shader).
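
A small sketch of what that engine-side lookup could be with Effects11 reflection (the variable name tP and the PROJECTION semantic come from the example above; the function name is made up):

    #include <DirectXMath.h>
    #include <d3dx11effect.h>

    // The .fx file declares:  float4x4 tP : PROJECTION;
    // The engine finds the variable by its semantic, so the shader author can
    // call it whatever they like.
    void BindCameraProjection(ID3DX11Effect* effect, const DirectX::XMFLOAT4X4& projection)
    {
        ID3DX11EffectVariable* var = effect->GetVariableBySemantic("PROJECTION");
        if (var->IsValid())
            var->AsMatrix()->SetMatrix(reinterpret_cast<const float*>(&projection));
    }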

Also, the compile-time validation of layouts between shader stages (another fx framework feature) is really convenient during development.

One big advantage of individual shaders is that you can easily swap only the stages you need, which can save a decent number of permutations without touching the rest of the pipeline.
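
As an illustration (with hypothetical shader objects created beforehand), swapping just the pixel shader stage between draws gives two variants while the vertex shader, geometry, and states stay untouched:

    // Same vertex shader and pipeline state for both draws; only the pixel
    // shader stage changes between them.
    void DrawTwoVariants(ID3D11DeviceContext* context,
                         ID3D11VertexShader* vs,
                         ID3D11PixelShader* psLit,
                         ID3D11PixelShader* psUnlit,
                         UINT indexCount)
    {
        context->VSSetShader(vs, nullptr, 0);

        context->PSSetShader(psLit, nullptr, 0);    // first variant
        context->DrawIndexed(indexCount, 0, 0);

        context->PSSetShader(psUnlit, nullptr, 0);  // second variant
        context->DrawIndexed(indexCount, 0, 0);
    }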

+4

It is never recommended to load lots of separate fx files. Combine your fx files when you can, and use global variables where possible for anything that does not need to live in your VInput structure.

That way you can grab the effects you need and feed them whatever you set up, with your own Shader class handling the rest, including the technique.

Make yourself an abstract ShaderClass and an abstract ModelClass.

More precisely, initialize your shaders in your Graphics class separately from your models. If you create a TextureShader class around your texture.fx file, there is no need to initialize a second instance of it; instead, share that one TextureShader object among the appropriate model(s), then create a Renderer structure/class that holds both the shader pointer and the model pointer, using the virtual interface where you need to.
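
One hedged reading of that design, with hypothetical class names modelled on the Rastertek-style classes mentioned above:

    #include <d3d11.h>

    // Abstract interfaces in the spirit of the tutorials' classes.
    class ShaderClass {
    public:
        virtual ~ShaderClass() = default;
        virtual void Render(ID3D11DeviceContext* context, int indexCount) = 0;
    };

    class ModelClass {
    public:
        virtual ~ModelClass() = default;
        virtual void Render(ID3D11DeviceContext* context) = 0;  // bind vertex/index buffers
        virtual int  GetIndexCount() const = 0;
    };

    // Pairs one shared shader with one model; many Renderers can point at the
    // same TextureShader instance, so each shader is initialized only once.
    struct Renderer {
        ShaderClass* shader = nullptr;  // not owned, shared between renderers
        ModelClass*  model  = nullptr;

        void Render(ID3D11DeviceContext* context) const {
            model->Render(context);                           // virtual dispatch per model
            shader->Render(context, model->GetIndexCount());  // virtual dispatch per shader
        }
    };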

0
