Corrupted texture display

Am I doing something stupid?

Could be my Video driver I guess…

The grab below is a full screen (1920 x 1080) window

The scene comprises …
The ‘playmat’ (angel image) - texture image = 1920 x 1080
The cards x 72 - all card textures are 448 x 680
43 of the cards are rotated 180° in Y and stacked, forming the ‘deck’

Nothing touches anything else (the deck has a tiny offset so the cards are marginally floating above each other)

The artefacts only happen at specific rotations of the scene as a whole

It seems like the playmat scene has duplicated geometry, i.e. two rectangles at the same position. Due to limited z-buffer precision, such artifacts can occur and be visible only at some angles.

Check the geometry of the scene.

If you generate it by code, note that you can save the current scene state to x3d file, using TCastleScene.Save, like MyScene.Save('aaa.x3d'). You can open the resulting file in a text editor and view3dscene, and investigate.

And check that you add it to Viewport.Items only once :slight_smile:

I did a Scene.Save on the board and it’s fine in view3dscene.

Is there a Viewport.Save(…) by any chance?

ATM I’m playing around so the code is very hacky. I only create one Scene object, then load 73 x3dv models into it one by one, each time adding the new object to the Viewport.

In case the board being different from the cards mattered, I tried giving it its own Scene object. Same results…

The code is extremely basic - a hello world with loading stuff from another thread

If you wanna see what’s going on it’s here


Forgot -’s source includes the use of etpackage

The problem is that your board.x3dv contains both front and back faces. While they don’t exactly overlap, they are really close (one at Z = 0.000049999998736893758, one at Z = -0.000049999998736893758). The GPU makes these calculations with rather poor precision, and in this case it is doing linear interpolation of depths for the front and for the back face. It seems they are detected as overlapping at some pixels, at some angles.

One solution is just to remove the back face from your model. You can simply comment out the last Shape in your board.x3dv. Note that the board is visible from the back anyway (since you use “solid FALSE” in your X3D file).

I would also remove the sides of the board (1st shape of board.x3dv) – they are extremely thin and visible only as occasional pixels.

An alternative solution would be to make the board thicker. In my tests, changing the coordinates to be 100x larger (so Z = -0.0049999998736893758 or 0.0049999998736893758) is enough to eliminate all artifacts, and doesn’t change the look of the model in any visible way.
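As a side note on why such a tiny offset breaks down: with a 24-bit depth buffer, the two faces can land on the same (or an adjacent) quantized depth value, while the 100x thicker version ends up several depth steps apart. A rough back-of-the-envelope check in Python (the near/far planes and the camera distance of 50 units are made-up illustration values, not taken from the project):

```python
# Rough illustration of 24-bit depth buffer precision (not CGE code).
# Assumed values: near = 0.1, far = 1000, camera ~50 units from the board.

NEAR, FAR = 0.1, 1000.0
BITS = 24

def window_depth(z):
    """Standard perspective depth mapping of eye-space distance z to [0, 1]."""
    return (1.0 / NEAR - 1.0 / z) / (1.0 / NEAR - 1.0 / FAR)

def quantized(z):
    """Depth value as stored in a 24-bit integer depth buffer."""
    return round(window_depth(z) * (2 ** BITS - 1))

camera_distance = 50.0

# Original model: faces at Z = +-0.00005, i.e. 0.0001 apart.
thin = abs(quantized(camera_distance - 0.00005) -
           quantized(camera_distance + 0.00005))

# 100x thicker model: faces at Z = +-0.005.
thick = abs(quantized(camera_distance - 0.005) -
            quantized(camera_distance + 0.005))

print(thin)   # 0 or 1 depth steps apart -> z-fighting territory
print(thick)  # several (6-7) depth steps apart -> no fighting
```

With these numbers the thin faces are less than one depth step apart, so rounding during interpolation can flip which face wins per pixel, which matches the “only at some angles” symptom.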

Oh, and I’m honestly not sure why the problem is visible in TCastleControlBase (on a Lazarus form), but not in view3dscene. While view3dscene initializes the OpenGL context in a different way (using TCastleWindowBase), the default context bit depths are the same, and they result in the same default DepthBits = 24, at least on my system. Yet, just like you, I can see the artifacts in your project1, but not in view3dscene – I really don’t know why.

Some additional comments, not related to the problem at hand:

Is there a Viewport.Save(…) by any chance?

You can save any TCastleUserInterface descendant using UserInterfaceSave from the CastleComponentSerialize unit (this is used to save/load designs by our editor too). And TCastleViewport is a descendant of TCastleUserInterface. This will include Viewport.Items, which is a hierarchy of TCastleTransform descendants (usually containing TCastleScene instances as leaves).

Like UserInterfaceSave(Viewport, 'aaa.castle-user-interface').

You can also save TCastleTransform using TransformSave, like TransformSave(Viewport.Items, 'aaa.castle-transform').

You can open the resulting files in a text editor (these are just JSON files) or in the CGE editor.

Although I’m afraid this advice isn’t really useful to you – we don’t save the X3D nodes graph of each TCastleScene. We only save the TCastleScene.URL, which is empty, since you load the scene using TCastleScene.Load(TX3DRootNode, ...).

So, the answer is “yes, but it isn’t really useful to you at this point”, sorry :)

Forgot -’s source includes the use of etpackage

Note that we have a way to measure time in CastleTimeUtils already :slight_smile: See the Timer routine. It’s used throughout the engine, and is suitable for measuring times precisely, even to make a simple “manual” profiler (see TCastleProfiler, used through the singleton Profiler).

P.S. Testing your code, after adding InitializeLog in the Unit1 initialization, I see that you have many non-power-of-2 textures. By default the engine resizes them before loading to the GPU, reporting this to the log:

Textures: Resizing 2D texture "file:///mnt/data-linux/sources/castle-engine/castle-engine-priv/contrib/card_game_forum_peardox/test01/board/artcache/angel.jpg" from 1920x1090 to 2048x2048
Textures: Resizing 2D texture "file:///mnt/data-linux/sources/castle-engine/castle-engine-priv/contrib/card_game_forum_peardox/test01/card/artcache/70652.jpg" from 488x680 to 512x1024
Textures: Resizing 2D texture "file:///mnt/data-linux/sources/castle-engine/castle-engine-priv/contrib/card_game_forum_peardox/test01/card/back.jpg" from 672x936 to 1024x1024
Textures: Resizing 2D texture "file:///mnt/data-linux/sources/castle-engine/castle-engine-priv/contrib/card_game_forum_peardox/test01/card/artcache/69235.jpg" from 488x680 to 512x1024
Textures: Resizing 2D texture "file:///mnt/data-linux/sources/castle-engine/castle-engine-priv/contrib/card_game_forum_peardox/test01/card/artcache/70267.jpg" from 488x680 to 512x1024
Textures: Resizing 2D texture "file:///mnt/data-linux/sources/castle-engine/castle-engine-priv/contrib/card_game_forum_peardox/test01/card/artcache/69587.jpg" from 488x680 to 512x1024
Textures: Resizing 2D texture "file:///mnt/data-linux/sources/castle-engine/castle-engine-priv/contrib/card_game_forum_peardox/test01/card/artcache/70764.jpg" from 488x680 to 512x1024
Textures: Resizing 2D texture "file:///mnt/data-linux/sources/castle-engine/castle-engine-priv/contrib/card_game_forum_peardox/test01/card/artcache/70386.jpg" from 488x680 to 512x1024
Textures: Resizing 2D texture "file:///mnt/data-linux/sources/castle-engine/castle-engine-priv/contrib/card_game_forum_peardox/test01/card/artcache/70636.jpg" from 488x680 to 512x1024
Textures: Resizing 2D texture "file:///mnt/data-linux/sources/castle-engine/castle-engine-priv/contrib/card_game_forum_peardox/test01/card/artcache/70285.jpg" from 488x680 to 512x1024
Textures: Resizing 2D texture "file:///mnt/data-linux/sources/castle-engine/castle-engine-priv/contrib/card_game_forum_peardox/test01/card/artcache/68576.jpg" from 488x680 to 512x1024
Textures: Resizing 2D texture "file:///mnt/data-linux/sources/castle-engine/castle-engine-priv/contrib/card_game_forum_peardox/test01/card/artcache/70294.jpg" from 488x680 to 512x1024
Textures: Resizing 2D texture "file:///mnt/data-linux/sources/castle-engine/castle-engine-priv/contrib/card_game_forum_peardox/test01/card/artcache/70271.jpg" from 488x680 to 512x1024
Textures: Resizing 2D texture "file:///mnt/data-linux/sources/castle-engine/castle-engine-priv/contrib/card_game_forum_peardox/test01/card/artcache/69944.jpg" from 488x680 to 512x1024

This is not a problem. But if you’d like to have faster loading times (it may matter if you add thousands of Magic the Gathering cards :slight_smile: ), you could resize the textures to power-of-2 sizes yourself, ahead of time.
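For reference, the resizing visible in the log above is each dimension rounded up to the next power of two. A quick sketch of that rule (my own illustration, not the engine’s actual code; the engine’s policy may have more options, this just reproduces the log lines above):

```python
def next_power_of_2(n):
    """Smallest power of two that is >= n."""
    p = 1
    while p < n:
        p *= 2
    return p

def gpu_size(width, height):
    """Texture size after rounding each dimension up to a power of two."""
    return next_power_of_2(width), next_power_of_2(height)

print(gpu_size(1920, 1090))  # (2048, 2048), matching the log
print(gpu_size(488, 680))    # (512, 1024)
print(gpu_size(672, 936))    # (1024, 1024)
```

So e.g. pre-scaling the card scans to 512x1024 (or another power-of-2 size) would let the engine skip this step at load time.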

P.P.S. Note that you get nice specular highlights on the board now, because it uses Phong shading, which is implicitly requested by “solid FALSE” (two-sided lighting requires Phong shading). You can compare by removing “solid FALSE”; the front face of the board will then be lit in a worse way.

So, to make sure you always use Phong shading, I would suggest adding shading "PHONG" inside all shapes in your board.x3dv. See .
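For illustration, a minimal made-up shape in X3D classic encoding – only the shading "PHONG" line is the point here, the rest (including the texture URL) is a generic placeholder, not your actual board.x3dv:

```
Shape {
  shading "PHONG"  # CGE extension field: force Phong (per-pixel) shading for this shape
  appearance Appearance {
    material Material { }
    texture ImageTexture { url "board.jpg" }
  }
  geometry IndexedFaceSet {
    solid FALSE
    # ... coordinates and indices ...
  }
}
```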

Ahh - my models were being stupid then :slight_smile:

I want to get rid of the shiny card effect, the cards need to be matte so they’re legible. I’ll play with that when I get to lighting I guess…

The board is really just a placeholder for my initial experiments anyway. I needed it to check the card placement. If you run test01 and look along the Z axis, everything is placed relative to Z = 0 - actually everything is relative to (0, 0, 0).

The cards are also very thin, so I just changed the Z values in board.obj to the same value of 0.003429, exported the board as an x3dv from view3dscene again, and finally modified the x3dv to name the texture node.

Everything works now…

I know textures should be a power of 2 but there’s a problem…

The textures are publicly available. A site called makes them generally available in six sizes. They are all high quality scans of retail Magic the Gathering cards and there are thousands (177+k) of the things. I’m using a subset of 3250 English cards from a pool of 50,220 at the moment.

And a solution I’ve already planned for…

What I aim to do is download the images on demand, resample them to e.g. 1024x1024, then save the result in an LRU cache on the device (storage being an issue on phones + tablets). The full set of 3250 images takes about 500 MB (the high-res PNGs are 5 GB).
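That cache idea could be sketched roughly like this (a pure-Python illustration; all the names and the 500 MB budget are mine, and the download + resample step is stubbed out as a fetch() callback):

```python
import os

class CardImageCache:
    """Tiny LRU disk cache sketch: keep at most max_bytes of card images,
    evicting the least recently used files first."""

    def __init__(self, cache_dir, max_bytes=500 * 1024 * 1024):
        self.cache_dir = cache_dir
        self.max_bytes = max_bytes
        os.makedirs(cache_dir, exist_ok=True)

    def _path(self, card_id):
        return os.path.join(self.cache_dir, card_id + '.img')

    def get(self, card_id, fetch):
        """Return cached bytes for card_id; on a miss, call fetch() to
        download + resample, store the result, and evict if over budget."""
        path = self._path(card_id)
        if os.path.exists(path):
            os.utime(path)          # touch: mark as recently used
            with open(path, 'rb') as f:
                return f.read()
        data = fetch()              # download + resample to e.g. 1024x1024
        with open(path, 'wb') as f:
            f.write(data)
        self._evict()
        return data

    def _evict(self):
        files = [os.path.join(self.cache_dir, n)
                 for n in os.listdir(self.cache_dir)]
        # Least recently used first (approximated by file mtime).
        files.sort(key=os.path.getmtime)
        total = sum(os.path.getsize(f) for f in files)
        while files and total > self.max_bytes:
            victim = files.pop(0)
            total -= os.path.getsize(victim)
            os.remove(victim)
```

The real thing would plug the “download from the card site, resample to 1024x1024” step into fetch(), and a mtime-based LRU like this survives app restarts for free.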

I also want to cater to the fact that the cards are available in 11 languages (Korean has the fewest, with 8962). The fact that I don’t speak Korean doesn’t matter to a Korean…

Tomorrow I’m gonna try “dealing” the deck - that’s gonna be fun…

I added CastleLog to the uses clause and an InitializeLog in FormCreate, but there’s no log file.

I’m trying to get it working on my Mac so logging would be handy (I get a white screen)

I’ll give Linux a go as well…

Never mind

Found it (the API docs say it’s somewhere different than the manual does)

Indeed, thanks for noticing. I checked that manual is right :slight_smile: and updated API docs on to link to the manual.

I’ve made some findings that will affect some users - all Lazarus related… The problem is that if someone wants to try CGE and Lazarus screws up, they’ll just give up (I nearly did)

I came across an issue on my main dev PC which is running Win10 regarding fpcupdeluxe. Basically fpcupdeluxe simply refused to build me a Lazarus even though I’ve done it before.

I tried fpcupdeluxe on a PC with a fresh Windows install and it worked without problems. This machine is ‘clean’ - the only dev tool on it is Lazarus

My system is extremely ‘dirty’ - it’s got a LOAD of dev tools on it, including Delphi and Visual Studio Community, for example.

These environments definitely interfered with fpcupdeluxe. This means that if you’re a Delphi user you’re most likely gonna have problems with CGE.

After a load of testing I’ve now got a foolproof way to install Lazarus on a dirty system.

It’s extremely simple… (stupidly so, but hard to spot unless you’re expecting it)

Just open up a command prompt (Windows -> Run -> cmd). Note: don’t use PowerShell, it must be cmd.

Download fpcupdeluxe + stick it in a temp dir, then:
cd c:<temp dir>
set path=c:\windows

Install Lazarus then CGE + everything works perfectly

Setting the path to c:\windows basically makes the fpcupdeluxe you run from the command line unaware of all the crap Embarcadero, M$ and others have installed on your PC. I have multiple makes all over the place, and nothing was telling fpcupdeluxe to use the one it just built…

I’d add a note to your Lazarus installation docs if I were you - a troubleshooting section maybe. While the number of people it affects will probably be small those who get the problem will love CGE all the more for having the solution in the instructions.

This all seems to be a problem where fpcupdeluxe picks up make from $PATH, and many companies (like Embarcadero) put their own make version, which isn’t compatible with GNU make, on $PATH.

I would advise to submit a bugreport to . It could query make --version to search harder for the right make.
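The detection could be as simple as parsing the first line of make --version output – something like this sketch (the non-GNU sample string below is approximate, from memory; only the GNU one is a real format):

```python
def is_gnu_make(version_output):
    """Heuristic: GNU make identifies itself on the first line of
    `make --version`; vendor make tools print something else."""
    first_line = version_output.splitlines()[0] if version_output else ''
    return first_line.startswith('GNU Make')

# GNU make prints e.g. "GNU Make 4.3\nBuilt for x86_64-pc-linux-gnu..."
print(is_gnu_make('GNU Make 4.3\nBuilt for x86_64-pc-linux-gnu'))  # True

# A vendor make (e.g. Embarcadero's, sample string approximate) prints
# something else, so the tool could skip it and keep searching $PATH.
print(is_gnu_make('MAKE Version 5.43  Copyright (c) Embarcadero'))  # False
```

In fpcupdeluxe’s case the same check, run against each make found on $PATH, would let it skip the incompatible ones automatically.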

Note that fpcupdeluxe isn’t the only way to install Lazarus. I expect that people new to FPC/Lazarus would just go to and install a precompiled FPC/Lazarus version. Although fpcupdeluxe is cool if you want cross-compilers, so indeed I would like fpcupdeluxe to work as well as possible too.