I am looking for solutions to a problem and am considering Castle for an OSVR-based project. OSVR's environment is OpenGL-based: the scene is rendered into the headset's context, and the OSVR libraries handle the viewports.
Can anyone give me an outline of the complexity of redirecting the engine's output to the OSVR context?
Castle Game Engine uses OpenGL (on desktops), and it also expects to set its own cameras (since that's mandatory when you use OpenGL directly).
I admit I don't know the OSVR API; I would need to investigate how it can be integrated with CGE. I'm sure it can be :), but without knowing OSVR I'm not sure about the complexity.
For rendering 3D worlds, we initialize a camera and pass it to TCastleAbstractViewport (in src/game/castlescenemanager.pas). Note that this expects TCastleAbstractViewport to be a 2D user-interface control (so it has some place on the screen), and the camera is reduced to a matrix inside TRenderingCamera, which is then passed downward (used by the shaders rendering the objects). From your description, this would need to change: TCastleAbstractViewport would somehow need to send the scene to OSVR.
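To make the idea concrete, the per-frame flow could look roughly like this. This is a pseudocode sketch only: every `Osvr*` name is a placeholder for a hypothetical Pascal binding of the OSVR client API (no such binding exists that I know of), and `RenderWithCamera` is an invented method standing in for however TCastleAbstractViewport would accept externally supplied per-eye matrices instead of computing them from its own 2D placement.

```pascal
{ Hypothetical sketch, not working code. The CGE type names
  (TCastleAbstractViewport, TRenderingCamera) are real, but all
  Osvr* routines and RenderWithCamera are assumptions. }
procedure RenderFrameToOsvr(const Viewport: TCastleAbstractViewport);
var
  Eye: Integer;
begin
  { Poll the HMD for the latest head-tracking state. }
  OsvrClientUpdate(OsvrContext);

  for Eye := 0 to 1 do
  begin
    { Ask OSVR for this eye's viewport rectangle and matrices,
      instead of deriving them from a 2D UI control on screen. }
    SetOpenGLViewport(OsvrEyeViewport(Eye));

    { Feed the per-eye view/projection matrices into the
      TRenderingCamera that the viewport passes down to shaders. }
    Viewport.RenderWithCamera(
      OsvrEyeViewMatrix(Eye), OsvrEyeProjectionMatrix(Eye));
  end;

  { Hand the finished stereo frame to the HMD. }
  OsvrPresentFrame(OsvrContext);
end;
```

The key design point is the same one described above: today the camera matrix originates inside the viewport; for OSVR it would have to originate outside (from the headset's tracking), with the viewport merely consuming it.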