I haven’t had a chance to download and test the “Skinned animation on GPU” branch yet. However, from a rather quick look I see the introduction of .castle-image.
My question #1 is: Does this file format save textures as linear RGB or sRGB?
If it’s sRGB, then it’ll be easy to create the file out of a standard texture ourselves, but it’ll require conversion on the GPU during load (fast enough, though). Which leads me to question #2:
How does CGE or the developer distinguish between non-color data, or linear vs sRGB data, in this file format? I know CGE is smart when it comes to normal maps (they’re uploaded as-is, without correction), but having a simple flag in the .castle-image header could make it more universal, in my opinion. I may be wrong.
And finally, question/suggestion #3:
At the moment all supported modes (RGB8, RGB float, grayscale) clamp values to [0..255] or [0..1], which is expected. However, HDR textures can have values exceeding 255 or 1.
I understand HDR has no support in this file format yet, but it’s only one additional class away (or two, counting integer and float). If we had another flag, HDR, then implementing HDR textures would be a breeze, without the need to update the file header and increase the version number.
#4 I myself would also add 2 new fields at the end: ExtraDataSize and an ExtraData array. This would let individual developers extend the header for non-standard things.
#5 Also, the header should start with a HeaderSize field. A newer subversion of the format could then add new fields after the standard data, but before ExtraDataSize, which means an older reader could still read what it understands and skip what it doesn’t.
I think I first encountered this trick in the RIFF format in ancient times, but I still think it’s quite usable, especially for game development, where backward compatibility of resources is often required. A rough sketch of what I mean is below.
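To make suggestions #2–#5 concrete, here is a purely hypothetical sketch of such a header. The field names and sizes are invented by me for illustration; this is not the actual .castle-image layout:

type
  TImageHeaderSketch = packed record
    HeaderSize: UInt32;     // #5: total header size; an older reader can Seek past fields it doesn't know
    Width, Height: UInt32;
    PixelFormat: UInt32;    // e.g. 0 = RGB8, 1 = RGB float, 2 = grayscale, ...
    ColorSpaceFlag: UInt32; // #2: 0 = linear, 1 = sRGB, 2 = non-color data (e.g. normal maps)
    HdrFlag: UInt32;        // #3: non-zero means values may exceed 1.0 (or 255)
    ExtraDataSize: UInt32;  // #4: size of app-specific data stored right after the header
  end;

// An older reader that only knows the fields above could do:
//   Stream.ReadBuffer(Header, SizeOf(Header));
//   Stream.Seek(Header.HeaderSize - SizeOf(Header), soCurrent); // skip newer fields it doesn't understand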
I know it’s a bunch of random thoughts, I plead guilty!
Oh! You found my secret. Actually, I plan to remove .castle-image support soon, likely even tomorrow, because yesterday evening I improved our KTX support, and KTX is now better than or equal to .castle-image in every practical aspect. Luckily, I haven’t announced .castle-image support anywhere yet, so I hope no one actually uses it (effectively, it’s a format that has existed in the repo for only about a week).
To be clear, it was a nice experiment with .castle-image, and I’m happy it pushed me to improve KTX. I have stats showing that these formats (both .castle-image and KTX) can significantly optimize loading time. But in the end, there’s probably no point maintaining .castle-image. The practical way forward is KTX.
The main point of .castle-image was super-efficient loading and saving, and now we have it (and more) also with KTX.
As for distinguishing linear vs sRGB (your question #2): .castle-image has no such option.
In contrast, KTX has an option to distinguish between sRGB and RGB – it specifies OpenGL format flags that tell this. So, you see, KTX is already better.
But beware: right now the CGE implementation of KTX ignores the sRGB vs RGB difference when reading. The color space of a texture is determined by where it is used, and nothing more. See the TODO in the KTX implementation:
{ TODO: for now SRGB8 is read the same way,
we ignore information that it is in sRGB color-space.
Whether the image is assumed to be in sRGB or linear color-space
depends in CGE on the image function (whether it is placed in
e.g. TPhysicalMaterial.BaseTexture or NormalTexture)
and global @link(ColorSpace) variable. }
To expand on the previous answer: the usage of a texture determines whether it’s sRGB or linear. More precisely, when the ColorSpace global variable (see Color Space (Gamma Correction) | Manual | Castle Game Engine) tells the engine to perform calculations in the “linear” color space (thus, be more correct), then we assume that most color textures are encoded in sRGB and decode them to linear values. This applies e.g. to the emissive and base textures of TPhysicalMaterialNode. These textures are effectively accessed using shader code like this:
texture_color = pow(texture2D(someTexture, texCoord).rgb, vec3(2.2));
… though it could be optimized in the future by actually using sRGB texture formats on GPU.
Other textures, like normalTexture, are just read without this conversion (they will never use sRGB texture formats on GPU either).
This is actually standard behavior; the glTF spec also says to do this:
" Any colorspace information (such as ICC profiles, intents, gamma values, etc.) from PNG or JPEG images MUST be ignored. Effective transfer function (encoding) is defined by a glTF object that refers to the image (in most cases it’s a texture that is used by a material)."
" The base color texture MUST contain 8-bit values encoded with the sRGB opto-electronic transfer function so RGB values MUST be decoded to real linear values before they are used for any computations."
“The emissive texture. It controls the color and intensity of the light being emitted by the material. This texture contains RGB components encoded with the sRGB transfer function.”
It remains to be seen whether we will want to make this customizable going forward, and how:
Honor the RGB / sRGB setting in the image file? As mentioned above, KTX has this. But this is not such an attractive solution in general, since some image formats don’t have this flag, and some images don’t have it set correctly.
Or expose an X3D field like ColorSpace at TImageTextureNode, which would allow configuring this per-texture? And maybe allow saying “use the color space setting from the texture file, e.g. distinguish sRGB vs RGB for KTX”?
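To illustrate what “actually using sRGB texture formats on GPU” (mentioned above) would mean: a rough sketch using plain OpenGL calls, not the actual CGE renderer code. It assumes an OpenGL binding unit in scope, a texture object already bound, and Width / Height / Pixels holding the usual texture data:

// Color (sRGB-encoded) textures: the GPU decodes sRGB -> linear when sampling,
// so the shader no longer needs the pow(..., 2.2):
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8,
  Width, Height, 0, GL_RGBA, GL_UNSIGNED_BYTE, Pixels);

// Non-color data (normal maps etc.): keep a plain linear format:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,
  Width, Height, 0, GL_RGBA, GL_UNSIGNED_BYTE, Pixels);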
As for HDR values: we have float-based textures. KTX supports supplying the data either as 16-bit integers (where it’s effectively still limited to the 0..1 range) or as 32-bit floats (in which case the true float value is transferred to CGE and the GPU, so it can be > 1.0).
IOW, if you want to have values > 1, use the family of:
TGrayscaleFloatImage
TGrayscaleAlphaFloatImage
TRGBFloatImage
TRGBAlphaFloatImage
These float-based image classes also worked with the .castle-image format, so it supported values > 1 too.
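For example, a minimal sketch, assuming the LoadImage overload taking allowed image classes from CastleImages (the URL and the raw pixel access are just made up for illustration):

uses SysUtils, CastleImages, CastleVectors;

procedure ShowFirstPixel;
var
  Img: TRGBFloatImage;
begin
  { Force loading into the float-based class. With 32-bit float KTX data
    the values are passed through unchanged, so they may be > 1.0. }
  Img := LoadImage('castle-data:/example.ktx', [TRGBFloatImage]) as TRGBFloatImage;
  try
    Writeln('R of first pixel: ', PVector3(Img.RawPixels)^.X);
  finally
    FreeAndNil(Img);
  end;
end;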
The KTX format has key-value pairs that allow exactly such extensions; effectively, you can add any key-value information (with a binary value) to a KTX file.
So, another point for KTX: it already has the feature you’re asking for.
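For reference, the key-value pairs are easy to read too. Here is a minimal sketch following the public KTX 1 specification (not the actual CGE reader; it assumes the stream is positioned right after the fixed KTX header and that endianness already matches):

uses Classes;

procedure SkimKtxKeyValuePairs(const Stream: TStream; const BytesOfKeyValueData: UInt32);
var
  Remaining, KeyAndValueByteSize, Padding: UInt32;
  KeyAndValue: array of Byte;
begin
  Remaining := BytesOfKeyValueData;
  while Remaining > 0 do
  begin
    Stream.ReadBuffer(KeyAndValueByteSize, SizeOf(KeyAndValueByteSize));
    SetLength(KeyAndValue, KeyAndValueByteSize);
    if KeyAndValueByteSize > 0 then
      Stream.ReadBuffer(KeyAndValue[0], KeyAndValueByteSize);
    { KeyAndValue = NUL-terminated UTF-8 key, followed by an arbitrary binary value. }
    Padding := (4 - KeyAndValueByteSize mod 4) mod 4; // each pair is padded to a 4-byte boundary
    Stream.Seek(Padding, soCurrent);
    Remaining := Remaining - SizeOf(KeyAndValueByteSize) - KeyAndValueByteSize - Padding;
  end;
end;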
This makes sense, but in light of the above: long live KTX, goodbye .castle-image. Your post and questions confirmed that KTX is better.
P.S. I will likely announce the KTX improvements next weekend (for this weekend, I’ll post about something else soon), but here’s the draft news text about the KTX improvements:
“”" KTX support improvements, ability to load images faster
Our support for the KTX image format has been improved, to make it the best image format for some cases (when you don’t care about the image size on disk, but care about loading time).
KTX loading has been optimized for certain (common) cases when the entire image contents can be loaded from disk to memory in one Stream.ReadBuffer call. This makes reading KTX lightning-fast in many cases.
Compile your applications with CASTLE_KTX_LOG_SPEED to see whether the fast loading is used for your particular KTX files.
TODO: Change CASTLE_KTX_LOG_SPEED to global boolean CastleInternalCompositeImage.LogVerboseKtx ?
Support for 16-bit KTX images, which previously resulted in 8-bit data (losing precision) and only worked for grayscale data, has been extended: such images now result in float-based (32-bit) data, and all channel variants (G, GA, RGB, RGBA) are supported.
Note that I didn’t have testcases for all possibilities. You’re welcome to test the code on your KTX files, and report a bug if it fails!
We can now save KTX files (without any extra libraries necessary, just pure Pascal code).
This also means we can auto-generate trivial/downscaled KTX files from your files, which is great to speed up loading time (at the cost of extra disk space), see the description here.
“”"