Happy New Year everyone 
I really do enjoy the skinned animations on GPU, and the way I see it, they should also make implementing morph targets (Blender’s shape keys) easier.
Is there any ongoing work on morph targets, or plans to implement them?
I understand that the glTF loader already reads the position, normal, and tangent deltas. CGE has most of the requirements implemented already. Vertex attributes and animating morph weights are basically a copy/paste of skinning. So if no one is tackling the issue already, I thought about coding it myself.
I admit I asked AI for a brief summary (the CGE code is quite vast) to assess the amount of work. And even though the AI is very optimistic about how easy it is, I’m not so sure.
Well, it’s not very complicated, but it needs changes in a few units, which is not always straightforward… CGE receives tons of updates and I may break compatibility.
I also think about necessary changes to shadows and bounding boxes. Skinned animations provide the necessary tools, but will need some additional work.
- For shadow purposes, I guess I should apply the morphs first (for example a human smile) and then the skinned animation (like an open jaw). That would make shadows work exactly the same way they would without morphs, if I’m not wrong.
- To calculate the bounding box I’ll need to start with the base mesh and apply the morph weights on the CPU. Then CGE can apply the bone transforms as it already does. So again, morphs first, then skinned bones.
Although the bounding box formula wouldn’t slow down the existing bounding box calculation much, having dozens of morphs per model could affect the FPS when many models are present and animated. But most morphs could be calculated only once, when the shape changes (e.g. a human model is loaded and their body weight or jaw size are set, and aren’t changing after that). Which means calculating the bounding box could be done on-demand (by explicit call) rather than with every animation frame, as long as the overall shape/size doesn’t change much. It’s a compromise that would limit CPU usage. Or maybe it would be better to give users a flag to control this behaviour? If I’m to submit a PR then it may be beneficial to use the flag, but it’ll require more changes, and it would have to be serialised, adding more properties in the editor.
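To make the order of operations discussed above concrete, here is a minimal, language-neutral sketch (Python with NumPy, all names hypothetical, not CGE’s actual API): morph deltas are blended on the CPU first, the bounding box is computed from the morphed positions, and bone transforms are left to the existing skinning path.

```python
import numpy as np

def apply_morphs(base_positions, morph_deltas, weights):
    """Blend morph targets into the base mesh on the CPU.

    base_positions: (N, 3) array of base vertex positions.
    morph_deltas:   list of (N, 3) arrays, one delta set per morph target.
    weights:        list of floats, one weight per morph target.
    """
    morphed = base_positions.copy()
    for delta, weight in zip(morph_deltas, weights):
        morphed += weight * delta  # glTF morph targets are additive deltas
    return morphed

def bounding_box(positions):
    """Axis-aligned bounding box of the morphed positions,
    computed before any bone transforms are applied."""
    return positions.min(axis=0), positions.max(axis=0)

# Example: one morph target that pushes a single vertex up, at weight 0.5.
base = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
delta = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
morphed = apply_morphs(base, [delta], [0.5])
bb_min, bb_max = bounding_box(morphed)
```

Since the weights rarely change in the “set once at load” scenario, the morphed positions (and hence the bounding box) can be cached and only recomputed on an explicit call.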
I know I could replace morphs (shape keys) with bones, but with dozens of morphs on every model that would be quite a burden. And since CGE already has most of what’s needed, I think it wouldn’t hurt to add morphs.
I’m open to suggestions and welcome any advice.
We indeed plan to support morph targets.
Useful reading to understand the musings below: how animation is done using “interpolator” nodes in X3D.
Two ideas for how to handle morph targets (at glTF → X3D conversion time):
- Internally, the X3D graph already supports a node, TCoordinateInterpolatorNode, which can be used to “drive” the animation of morphing. This addresses most concerns you mention:
- It means that morph targets will be applied before the skin, which is what one wants when using both animation techniques on one model; it’s also what Blender does (from what I know; a test to confirm it 100% is welcome!).
- It should automatically work, without any extra work necessary, when combined with skinned animation on GPU.
- When skinned animation is done on CPU (as a fallback, e.g. because the GPU is ancient, or shadow volumes force us) we may need to adjust it: we store things like Geometry.InternalOriginalCoords, Geometry.InternalOriginalNormals, Geometry.InternalOriginalTangents (see here). If they are non-empty, and TCoordinateInterpolatorNode tries to animate the FdCoord, it should animate Geometry.InternalOriginalCoords instead.
- Bounding box of coordinates on CPU:
- … would change and be recalculated on-demand (when TShapeNode.BoundingBox is not specified). It should just work.
- Unless the shape includes TShapeNode.BoundingBox, which we auto-calculate when importing glTF animation. I think this auto-calculation may need to account for all possible morph targets… so this will get a bit complicated. Anyhow, you can “cross that bridge once you get to it” :), for many use-cases simply ignoring this problem may be OK, as small morph targets (e.g. moving an eyelid) will not change the bbox enough to cause any problems (breaking frustum culling).
- It is performed on CPU, in a simple way. This is acceptable for a start, as the amount of data that would need to be passed to the GPU for morph targets may be large. (Though glTF has “sparse accessors” to address this.) We should add a TODO to implement it on the GPU some day, but for a start, the current approach will “just work”. The code is optimized so that TCoordinateInterpolatorNode just updates the VBO of the geometry’s vertexes, so it should be fast.
- While the most common morph target usage animates coordinates, normals, and tangents, I see glTF also allows animating colors and texture coordinates this way. We could do this using existing nodes too. There’s ColorSetInterpolator that can animate colors like CoordinateInterpolator does. Texture coordinates can be animated using PositionInterpolator2D (2D) or PositionInterpolator (3D texture coords, useful for 3D textures, though this is not possible in glTF).
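The core of the interpolator-driven idea above is a per-vertex linear interpolation between full coordinate poses. A minimal sketch in Python (names hypothetical, in the spirit of X3D’s CoordinateInterpolator, not CGE’s actual code):

```python
import numpy as np

def coordinate_interpolate(keys, key_values, fraction):
    """Per-vertex linear interpolation between coordinate poses.

    keys:       sorted list of key fractions in 0..1.
    key_values: list of (N, 3) arrays, one full vertex pose per key.
    fraction:   current animation fraction in 0..1.
    """
    if fraction <= keys[0]:
        return key_values[0]
    if fraction >= keys[-1]:
        return key_values[-1]
    # Find the surrounding pair of keys and lerp between their poses.
    for i in range(1, len(keys)):
        if fraction <= keys[i]:
            t = (fraction - keys[i - 1]) / (keys[i] - keys[i - 1])
            return (1.0 - t) * key_values[i - 1] + t * key_values[i]

# Two poses of a one-vertex "mesh"; sample the animation halfway through.
pose_a = np.array([[0.0, 0.0, 0.0]])
pose_b = np.array([[2.0, 0.0, 0.0]])
mid = coordinate_interpolate([0.0, 1.0], [pose_a, pose_b], 0.5)
```

In the real engine the result of this interpolation would go straight into the geometry’s VBO, which is why the approach can “just work” for a start.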
- Another approach, and after some thinking I would actually recommend starting with it from the beginning, would be to introduce a node like (new) TMorphTargetNode to express it, and (already existing) TScalarInterpolatorNode animating a simple float value that determines the influence of this morph target. In this case you don’t use TCoordinateInterpolatorNode.
To support multiple morph targets, we should probably just add a list (MFNode in X3D terms) of MorphTarget nodes to each node. Each morph target is coordinates (MFVec3f), normals (MFVec3f), tangents (MFVec4f), and a weight in 0..1 (SFFloat). This matches the basic morph target possibilities in glTF ( glTF™ 2.0 Specification ). To also account for color / texture coord animation this way, we could extend the MorphTarget node with them too, but that’s likely not necessary for an initial implementation (as they are more seldom used).
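A sketch of the imagined MorphTarget node’s fields and the weighted blend they imply, in Python (the class, field names, and `blend` helper are all hypothetical illustrations of the proposal, not existing CGE code):

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class MorphTarget:
    """Hypothetical MorphTarget node: per-vertex deltas plus a weight."""
    coord_deltas: np.ndarray    # MFVec3f: per-vertex position deltas
    normal_deltas: np.ndarray   # MFVec3f: per-vertex normal deltas
    tangent_deltas: np.ndarray  # MFVec4f: per-vertex tangent deltas
    weight: float = 0.0         # SFFloat: influence in 0..1

def blend(base_coords, base_normals, targets: List[MorphTarget]):
    """Apply all morph targets to the base mesh, weighted and additive."""
    coords = base_coords.copy()
    normals = base_normals.copy()
    for t in targets:
        coords += t.weight * t.coord_deltas
        normals += t.weight * t.normal_deltas
    # Blended normals are no longer unit length; renormalize them.
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    normals = normals / np.where(lengths > 0, lengths, 1.0)
    return coords, normals

# Example: one fully-weighted target on a one-vertex mesh.
base_c = np.array([[0.0, 0.0, 0.0]])
base_n = np.array([[0.0, 0.0, 1.0]])
target = MorphTarget(coord_deltas=np.array([[1.0, 0.0, 0.0]]),
                     normal_deltas=np.array([[0.0, 1.0, 0.0]]),
                     tangent_deltas=np.zeros((1, 4)),
                     weight=1.0)
coords, normals = blend(base_c, base_n, [target])
```

Because each target’s weight is just a single float, an existing ScalarInterpolator-style node can animate it directly, and several targets can be mixed at runtime.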
To understand the difference:
- glTF morph targets are like Blender shape keys: the artist defines a number of “morph targets” and then animates (typically in 0..1) how much they influence the base shape. The imagined TMorphTargetNode + TScalarInterpolatorNode would express this most directly in X3D.
- In contrast, TCoordinateInterpolatorNode “flattens” this thinking. It just animates the vertexes through specified poses. So you would need to calculate the animation, at loading time, from the morph targets and their floating point weights.
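The “flattening” at loading time could be sketched like this (Python, hypothetical names; it precomputes full coordinate poses from the morph targets and their per-key weights, which is exactly the precalculation the TCoordinateInterpolatorNode approach would need):

```python
import numpy as np

def flatten_to_keyframes(base, deltas, weight_tracks, key_times):
    """Precompute full coordinate poses from morph targets and weight curves.

    base:          (N, 3) base vertex positions.
    deltas:        list of (N, 3) morph target delta arrays.
    weight_tracks: for each key time, a list of weights (one per morph target).
    key_times:     list of animation key fractions.
    Returns (keys, key_values) usable by a CoordinateInterpolator-style node.
    """
    key_values = []
    for weights in weight_tracks:
        pose = base.copy()
        for delta, w in zip(deltas, weights):
            pose += w * delta  # additive glTF-style deltas
        key_values.append(pose)
    return key_times, key_values

# Example: one morph target animated from weight 0 to weight 1.
base = np.array([[0.0, 0.0, 0.0]])
delta = np.array([[1.0, 0.0, 0.0]])
keys, poses = flatten_to_keyframes(base, [delta], [[0.0], [1.0]], [0.0, 1.0])
```

Note how each keyframe stores a full copy of all vertex positions; this is the extra memory and loading-time work mentioned below, compared to storing only the deltas and a handful of animated weights.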
This would be an X3D extension, only in CGE (quite like Skin actually), but it makes sense because 1. it allows animating morph targets comfortably from Pascal code too, and 2. it will result in smaller X3D files (as TCoordinateInterpolatorNode would need more memory and more work to “precalculate” animations at loading).
I hesitated, but from my experience with our Skin, I would say it makes sense to do this “proper implementation”, as in AD 2, from the start. Doing TCoordinateInterpolatorNode as a first step is… tempting, but ultimately it will result in some useless code that we’ll need to throw away later (precalculating the animation at loading). However, if someone wants to contribute, I would be fine with either the AD 1 or AD 2 approach; in the case of AD 1, just plan on upgrading it to AD 2 later, but it will absolutely be useful for most practical purposes already.
I hope this is all helpful – to explain how I imagine implementing it. As always, help is appreciated, and if you want to tackle this task and submit a PR – you’re absolutely welcome and thank you in advance! 
Note: I’ve done some edits to the above explanation after first posting. If you read it ~15 minutes ago, please reread, it’s better now 
Thank you for the detailed answer. I think option AD 2 makes a lot of sense, especially since morphs can be mixed together at runtime.
It also aligns well with my draft; however, I was keen to use a “normal” class instead of an X3D node. But having it as a TX3dChildNode descendant opens up important additional options:
- Fully compatible with the CGE philosophy,
- Serialisation. If stored in an external X3D file it can be read by any X3D editing tool. It also allows extracting the morph data from one model and reusing it for many, as long as indexes and vertex counts are unchanged (which is always true by design for my parametric humans, at least at the closest LOD level where morphs matter most).
I’ll have a look at it. I need it anyway, but I can’t promise a short time frame.