What is the overhead in the engine of dynamically adding and removing TCastleScenes?

Thank you for such a detailed answer - much appreciated :slight_smile:

What I have in mind is an open world (like a lot of people do…). For me, this is a revival of a project I worked on back in 2010: an “infinite” heightmap terrain based on recursive subdivision (mountains, valleys, lakes, oceans), without even using an engine at the time - just direct OpenGL. It had just about enough performance to allow unlimited movement around the terrain with basic physics, but there were no 3D models, and it was impractical to turn into a game once further overheads were added.

It sounds like we have quite similar ideas - and I like your Mazer game!

With what I imagine now, the only noticeable loading delay would be at the start, when nothing is loaded yet. After that:

- The management system first registers cells that may potentially become visible, up to a maximum number of maintainable cells determined dynamically from the framerate. Each registered cell is instantiated in the Castle engine containing only a simple axis-aligned cube, and the set is ordered by 3D distance from the camera.
- When these are rendered, an occlusion query tells me whether a cube is actually visible. If a visible cell still contains only the dummy cube, it needs to be populated with the terrain graphics (or other graphics such as buildings, caves etc.).
- Cells are populated at a level of detail appropriate to their distance from the camera, minimising seam artifacts between different detail levels.
- As the camera moves, further cells in the distance are added to the potential set, and the system makes space for them by deallocating the furthest cells currently registered. Cells that were generated at the wrong detail level may also need to be regenerated according to their distance from the camera, as a gradual process spread over many frames.
- The terrain is procedurally generated, but this can be overridden per cell to allow modifiable terrain; in that case the entire cell is instead loaded from some other source, either stored locally or fetched from a server.
- There will be a delay during loading / generation, during which the dummy cube is displayed, possibly for several frames. Most of these should occur in the distance and be obscured by e.g. fog. Cells that are close to the camera but hidden behind, say, a mountain should be generated gradually and only made renderable as the framerate allows; until then they remain dummy cubes. The aim is to minimise the number of dummy cubes that actually get rendered to screen.

Overall, the number of cells to render and maintain would be managed dynamically to hold a minimum framerate, accepting minor artifacts caused by loading delays but trying to hide them in the distance. I've sketched the cell management below.
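To make that concrete, here is a very rough (untested) Pascal sketch of the cell manager I have in mind. TCell, TCellManager, BuildTerrainChunk, CellSize and FMaxCells are just names I've made up, and the WasVisible check is my assumption about how the engine reports occlusion-query results per transform (I'd also need to enable occlusion culling on the viewport) - I haven't verified the exact API, so the names may need adjusting:

```pascal
uses Generics.Collections,
  CastleVectors, CastleTransform, CastleScene, CastleViewport;

const
  CellSize = 100.0; // made-up world-space size of one cell

type
  { One registered cell: a placeholder cube until real content is ready. }
  TCell = class
    Origin: TVector3;        // world-space centre of the cell
    Placeholder: TCastleBox; // the dummy axis-aligned cube
    Content: TCastleScene;   // nil until the cell has been populated
    DetailLevel: Integer;    // LOD chosen from the distance to the camera
  end;

  TCellManager = class
  private
    FViewport: TCastleViewport;
    FCells: TObjectList<TCell>; // owns the cells
    FMaxCells: Integer;         // tuned at runtime from the measured framerate
    { Stand-in for the procedural generator, or for loading an
      overridden cell from local storage / a server. }
    function BuildTerrainChunk(const Origin: TVector3;
      const DetailLevel: Integer): TCastleScene;
  public
    constructor Create(const AViewport: TCastleViewport);
    procedure RegisterCell(const Origin: TVector3);
    procedure Update(const CameraPos: TVector3);
  end;

constructor TCellManager.Create(const AViewport: TCastleViewport);
begin
  inherited Create;
  FViewport := AViewport;
  FCells := TObjectList<TCell>.Create(true);
  FMaxCells := 200; // placeholder; would be adjusted from the framerate
end;

function TCellManager.BuildTerrainChunk(const Origin: TVector3;
  const DetailLevel: Integer): TCastleScene;
begin
  { The real version would generate or load geometry at this detail level. }
  Result := TCastleScene.Create(FViewport);
  Result.Translation := Origin;
end;

procedure TCellManager.RegisterCell(const Origin: TVector3);
var
  Cell: TCell;
begin
  Cell := TCell.Create;
  Cell.Origin := Origin;
  Cell.Placeholder := TCastleBox.Create(FViewport); // cell starts as a bare box
  Cell.Placeholder.Size := Vector3(CellSize, CellSize, CellSize);
  Cell.Placeholder.Translation := Origin;
  FViewport.Items.Add(Cell.Placeholder);
  FCells.Add(Cell);
end;

procedure TCellManager.Update(const CameraPos: TVector3);
var
  Cell, Nearest, Furthest: TCell;

  function DistSqr(const C: TCell): Single;
  begin
    Result := (C.Origin - CameraPos).LengthSqr;
  end;

begin
  { Make space: drop the furthest cells while over the maintainable count. }
  while FCells.Count > FMaxCells do
  begin
    Furthest := FCells[0];
    for Cell in FCells do
      if DistSqr(Cell) > DistSqr(Furthest) then
        Furthest := Cell;
    FViewport.Items.Remove(Furthest.Placeholder);
    if Furthest.Content <> nil then
      FViewport.Items.Remove(Furthest.Content);
    Furthest.Placeholder.Free;
    Furthest.Content.Free;   // safe when nil
    FCells.Remove(Furthest); // the list owns the TCell and frees it
  end;

  { Populate the nearest cell whose dummy cube was actually seen last frame;
    one cell per frame keeps the cost bounded. WasVisible is an assumption. }
  Nearest := nil;
  for Cell in FCells do
    if (Cell.Content = nil) and Cell.Placeholder.WasVisible and
       ((Nearest = nil) or (DistSqr(Cell) < DistSqr(Nearest))) then
      Nearest := Cell;
  if Nearest <> nil then
  begin
    Nearest.Content := BuildTerrainChunk(Nearest.Origin, Nearest.DetailLevel);
    FViewport.Items.Add(Nearest.Content);
    Nearest.Placeholder.Exists := false; // hide the cube, keep it for eviction
  end;
end;
```

In this sketch only one cell is populated per frame, which is the crudest form of "only as the framerate allows"; a real version would measure the actual frame time and populate as many cells as the budget permits.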

The terrain would be logically divided into equally sized chunks. But objects such as trees, buildings etc. need not be constrained to this, allowing the graphics for, say, a building to be designed in Blender without regard to the cellular structure. Otherwise they would be treated the same as the terrain in terms of occlusion and being loaded / unloaded. Perhaps these would exist in the engine as separate scenes, able to be repositioned individually (sketched below). But I wonder whether the terrain itself should be treated as a single scene, with the cellular structure handled at a lower level, or not…
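For the separate-scene option, a building designed in Blender could be exported to glTF and dropped in as its own scene, placed wherever it belongs rather than snapped to a cell. The URL and procedure name here are made up, and I'm going from memory on PreciseCollisions, so double-check the property name:

```pascal
uses CastleVectors, CastleScene, CastleViewport;

procedure PlaceBuilding(const Viewport: TCastleViewport; const WorldPos: TVector3);
var
  Building: TCastleScene;
begin
  Building := TCastleScene.Create(Viewport);              // owned by the viewport
  Building.Load('castle-data:/buildings/farmhouse.gltf'); // hypothetical asset
  Building.PreciseCollisions := true;                     // collide with the mesh, not just its bounding box
  Building.Translation := WorldPos;                       // free placement, not tied to any cell
  Viewport.Items.Add(Building);
end;
```

Unloading it again would just be Viewport.Items.Remove(Building) followed by Building.Free, the same as for a terrain cell - which is really the heart of my question about the overhead of dynamically adding and removing scenes.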