Can an index buffer validly have unused space in it?

In TIndexedFaceSetNode, an index of “-1” is intended to act as a delimiter between separate faces. But it also works to supply a list of indices such as [0, 1, 2, 3, -1, -1, -1, -1], and this correctly results in only the 4 vertices at the beginning of the buffer being referenced.

But how does this affect the final index buffers that are sent to the GPU? Are they recomposed into triangle indices anyway, so that this kind of padding is simply ignored?

I am looking for a way to treat the buffers as statically allocated on the GPU, while using padding so they can sometimes represent geometry with fewer vertices. This could be thought of as a kind of dynamic allocation inside a fixed-size buffer, allowing fast updates to the buffer indices without ever changing the buffer size.
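Roughly, the update pattern I have in mind would be something like this (just a conceptual sketch of the idea, not using any engine API):

```pascal
program PaddingSketch;
{ Sketch of the idea: a fixed-size index array where the currently
  used face indices live at the front and the unused tail is filled
  with -1 "padding", so the buffer size never has to change. }
const
  MaxIndexes = 8;
var
  CoordIndex: array [0 .. MaxIndexes - 1] of Int32;
  Used, I: Integer;
begin
  { Describe one quad using the first 4 vertices... }
  CoordIndex[0] := 0;
  CoordIndex[1] := 1;
  CoordIndex[2] := 2;
  CoordIndex[3] := 3;
  Used := 4;
  { ...and pad the rest of the fixed-size buffer with -1. }
  for I := Used to MaxIndexes - 1 do
    CoordIndex[I] := -1;
end.
```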

At what point is the decomposition into triangles performed? On the CPU, in the engine, before sending buffers to the GPU? Or in a vertex shader?

The decomposition is performed on the CPU. So the GPU never sees the “-1” indexes, and for the GPU it doesn’t matter whether you have a longer sequence of consecutive “-1” indexes.
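To illustrate (just a sketch of the idea in plain Pascal, not the actual engine code, which also has to handle concave and non-planar polygons): the faces are split into triangles on the CPU, and any run of consecutive “-1” entries simply produces empty faces that are skipped, so nothing of it reaches the GPU:

```pascal
program TriangulateSketch;
{$mode objfpc}
{ Sketch: turn an IndexedFaceSet-style coordIndex list (faces separated
  by -1) into a flat triangle index list via simple fan triangulation.
  Consecutive -1 entries just yield empty faces and are ignored. }
type
  TInt32List = array of Int32;

function Triangulate(const CoordIndex: array of Int32): TInt32List;
var
  FaceStart, FaceEnd, I, J, N: Integer;
begin
  Result := nil;
  FaceStart := 0;
  { Iterate one step past the end to flush a last face without a trailing -1. }
  for I := 0 to Length(CoordIndex) do
    if (I = Length(CoordIndex)) or (CoordIndex[I] < 0) then
    begin
      FaceEnd := I; { exclusive end of the current face }
      for J := FaceStart + 1 to FaceEnd - 2 do
      begin
        N := Length(Result);
        SetLength(Result, N + 3);
        Result[N]     := CoordIndex[FaceStart]; { fan around the first vertex }
        Result[N + 1] := CoordIndex[J];
        Result[N + 2] := CoordIndex[J + 1];
      end;
      FaceStart := I + 1; { empty faces (runs of -1) add no triangles }
    end;
end;

var
  Triangles: TInt32List;
  I: Integer;
begin
  Triangles := Triangulate([0, 1, 2, 3, -1, -1, -1, -1]);
  for I := 0 to High(Triangles) do
    Write(Triangles[I], ' '); { prints: 0 1 2 0 2 3 }
  WriteLn;
end.
```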

In general, for a list like [0, 1, 2, 3, -1, -1, -1, -1], it should not affect performance whether the “-1” occurs once or a few times. Only if you have a really huge number of “-1” indexes (say, 1000 consecutive “-1” entries) may it start to affect CPU processing. But I doubt you can reach a point where the speed difference is noticeable in any real-world application.

So, the simple advice: you can freely use TIndexedFaceSetNode with a sequence of “-1” in the indexes, without any worry.

And for the best performance, you can instead use geometries that don’t allow polygons and thus can load index arrays straight into the GPU, like TIndexedTriangleSet. In most cases (but not all), TIndexedTriangleSet index arrays can be loaded straight into the GPU.
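For illustration, here is the same quad expressed in both layouts (just data, no engine API); the TIndexedTriangleSet list is already 3 indices per triangle with no “-1”, so it can serve as a GPU index buffer directly:

```pascal
program IndexLayouts;
{ The same quad in both representations (plain data, no engine API):
  - IndexedFaceSet coordIndex: faces of any size, separated by -1,
    must be triangulated on the CPU before reaching the GPU;
  - IndexedTriangleSet index: already 3 indices per triangle, no -1,
    usable directly as a GPU element (index) buffer. }
const
  FaceSetIndexes:     array [0 .. 4] of Int32 = (0, 1, 2, 3, -1);
  TriangleSetIndexes: array [0 .. 5] of Int32 = (0, 1, 2, 0, 2, 3);
begin
  { Nothing to run; the point is the layout of the two constants above. }
end.
```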

Actually, in most cases the TIndexedFaceSetNode coordinates (but not the indexes) can also be loaded directly into the GPU. And this already gives enough performance for most real-life cases.

Thank you, @michalis