I am trying to figure out the best way to implement multi-pass post-processing. Basically I want to:
- Render the scene to color buffer A & depth buffer AD.
- Process buffer A + depth buffer AD in a shader, result in color buffer B.
- Process buffer A + buffer B in another shader, result is the final buffer I want to display on screen.
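The last pass above could look roughly like this in GLSL (just a sketch; the sampler and varying names are placeholders, and the combine operation would depend on the effect):

```glsl
// Final pass: combine original scene color (buffer A)
// with the result of the previous pass (buffer B).
// Sampler / varying names here are hypothetical.
uniform sampler2D scene_color; // buffer A
uniform sampler2D pass_result; // buffer B
varying vec2 tex_coord;

void main (void)
{
  vec4 base   = texture2D(scene_color, tex_coord);
  vec4 result = texture2D(pass_result, tex_coord);
  // Simple additive blend; a real SSR composite would weight
  // the reflection by reflectivity / Fresnel.
  gl_FragColor = base + result;
}
```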
Background: I am trying to see if I can implement Screen Space Reflection (it would be nice if CGE had this built-in, alongside SSAO). I looked at Screen Effects and the related documents/source code, but it looks like they only allow reading from, and writing back to, the same color buffer.
Edit: Pull request: https://github.com/castle-engine/castle-engine/pull/181
The ideal way would be to extend our screen effects ( https://castle-engine.io/x3d_extensions_screen_effects.php ) to enable this. That is, to access screen contents from “the one before previous” shader run.
This will need some modification to CastleScreenEffects. Currently it deliberately keeps only 2 screen states, and when the screen is processed by multiple shaders it does SwapValues(ScreenEffectTextureDest, ScreenEffectTextureSrc) as many times as necessary. This is quite similar to how double-buffering works: we only use 2 screens to implement “any number of screen effects”.
This optimization will of course have to be disabled for your case.
The ScreenEffect X3D node could have an additional field, needsPreviousScreen, by default 1. When it is 2 or more, you would be able to use screenf_get_depth_2 to get the color/depth of the screen “before the previous screen”, while screenf_get_depth continues to get the previous screen.
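With such an extension, a screen effect could look like the sketch below. Note that the `*_2` functions and the needsPreviousScreen field are only proposed here and do not exist in CGE; the non-suffixed functions follow the existing screen effects API:

```glsl
// Sketch, assuming the proposed API: the *_2 variants would sample
// the screen "before the previous one" when needsPreviousScreen >= 2.
void main (void)
{
  vec4 prev  = screenf_get_color(screenf_position());
  vec4 older = screenf_get_color_2(screenf_position());
  float d    = screenf_get_depth_2(screenf_position());
  // Example only: show the older pass where geometry is close.
  gl_FragColor = d < 0.5 ? older : prev;
}
```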
With the way ScreenEffect is implemented, there is no way to change the buffer size for additional optimization.
In the case of SSR, I want to render it to a separate buffer so that I can apply filters to the final result. It would be a waste of processing power (and of memory, if a new buffer is created) to render it to a screen-sized buffer (4K) and then blur the result. Something like a 1024x1024, or even 512x512, buffer would do the job just fine :).
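For illustration, the blur over such a low-resolution buffer is a very small shader. This is a sketch of one horizontal pass of a separable box blur; the sampler and uniform names are made up for the example:

```glsl
// Horizontal pass of a separable box blur over a low-resolution
// SSR buffer (e.g. 512x512). A vertical pass would do the same
// with vec2(0.0, float(i) * texel_height).
uniform sampler2D ssr_lowres;
uniform float texel_width; // 1.0 / buffer width, e.g. 1.0 / 512.0
varying vec2 tex_coord;

void main (void)
{
  vec4 sum = vec4(0.0);
  for (int i = -2; i <= 2; i++)
    sum += texture2D(ssr_lowres,
      tex_coord + vec2(float(i) * texel_width, 0.0));
  gl_FragColor = sum / 5.0; // average of 5 taps
}
```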
Hm, got it. I don’t have a perfect answer yet.
Aside from screen effects (implemented in TCastleScreenEffects), another way to render to a texture (and then use it for anything) is TGLRenderToTexture. However, this is much more low-level, and actually requires you to set up OpenGL textures and other things yourself to use it, so it’s not how I would see the final CGE API.
So I want to extend the existing “screen effects” approach (in TCastleScreenEffects, in the ScreenEffect X3D node), as it has a clean and simple API, and I can see how to make such effects nicely visible in the CGE editor too. But I don’t know yet how to address this particular need.