Multiple viewports and sound

I am currently experimenting with a rally game for 2 local players. Input is via keyboard: WASD for player 1 and arrow keys for player 2.

First problem: I use an effect from CastleEffekseer and I am not able to make the effect visible in Viewport2. I want to see the effect of car 1 in MainViewport and the effect of car 2 in Viewport2, but I see both in MainViewport. It’s hard to see, but each effect is added to the correct viewport.

In the Start procedure:

  EffectDust := TCastleEffekseer.Create(Self);
  EffectDust.Translation := SceneAvatarVisuell.Translation;
  EffectDust.Scale := Vector3(2, 2, 2);
  EffectDust.TimePlayingSpeed := 0.6;
  EffectDust.Loop := true;
  EffectDust.ReleaseWhenDone := false;
  EffectDust.URL := 'castle-data:/effect/dust.efk';
  MainViewport.Items.Add(EffectDust);

  // Both viewports share the same world
  Viewport2.Items := MainViewport.Items;

  EffectDust2 := TCastleEffekseer.Create(Self);
  EffectDust2.Translation := SceneCarVisuell2.Translation;
  EffectDust2.Scale := Vector3(2, 2, 2);
  EffectDust2.TimePlayingSpeed := 0.6;
  EffectDust2.Loop := true;
  EffectDust2.ReleaseWhenDone := false;
  EffectDust2.URL := 'castle-data:/effect/dust.efk';
  Viewport2.Items.Add(EffectDust2);

Second problem: playing the engine sound of both cars mixes into something undefined. Is it possible to play the sound of car 1 on the left speaker and car 2 on the right? Maybe that would be better.

Hm, two hard questions :slight_smile:

I don’t know how, admittedly. Maybe @kagamma, developer of the Effekseer integration with CGE, can help? You can also submit an issue for him on GitHub, in the Kagamma/cge-effekseer repository (Effekseer integration for Castle Game Engine).

I’m afraid this is not possible with the current sound API in CGE.

And it is not possible even with the current default backend (OpenAL), though maybe our alternative sound backend (FMOD, see the FMOD page in the Castle Game Engine manual) allows it.

Details:

  • From CGE, we don’t control what goes into each speaker separately.
  • We set up the 3D environment, with 3D sound positions, and 3D listener.
  • Then we let the sound backend, like OpenAL or FMOD, do the job of mixing sounds and “spatialization” – “spatialization” means calculating how loud the output of each sound is, and from which speaker it comes out.
  • This means we (from the CGE perspective) do not even care how many speakers you have. Any non-trivial speaker setup, even with more than 2 speakers (see e.g. the Speaker Placement Guide for Best Sound (1 to 11 Speakers)), is also possible – we delegate the problem to our sound backend, like OpenAL or FMOD, to handle all these speakers.

The only exception to the above is that OpenAL has special treatment for stereo sound data (if you load WAV or OggVorbis files that are stereo): they are then assumed to be stereo music, and never spatialized at all, which means the left track plays at full volume in the left speaker, and the right track plays at full volume in the right speaker. But this feature is not available in OpenAL in other contexts; in particular, we cannot say “put in the left speaker the mix of audio from viewport 1, and put in the right speaker the mix of audio from viewport 2” – because there’s only one “listener” at a time in OpenAL. (I’m not sure whether we could overcome this by opening multiple output devices at the same time, but even if we could – I never saw separate OpenAL devices for the left and right speakers.)

But maybe there’s a different, simpler problem to solve here. I see that our sound output is essentially undefined in case of > 1 viewport, because both viewports will try to change the listener on every frame. I just added the TCastleViewport.UpdateSoundListener property to be able to control it (commit “Add UpdateSoundListener to set which viewport controls the 3D listener”, castle-engine/castle-engine@c4c6f91 on GitHub). So you should set UpdateSoundListener to true on one viewport and false on the other, and the sound should then come from the camera of the 1st viewport. At least it will be reliable – it will reliably reflect one car.
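A minimal sketch of using this property, assuming the MainViewport and Viewport2 names from the code above:

```pascal
{ Let only MainViewport drive the 3D sound listener.
  UpdateSoundListener is the TCastleViewport property added in the commit above. }
MainViewport.UpdateSoundListener := true;  // listener follows this viewport's camera
Viewport2.UpdateSoundListener := false;    // this viewport leaves the listener alone
```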

Going forward, it would be possible to introduce a “sound listener” behavior that you could attach to any TCastleTransform, to make sounds heard from that position/rotation. Then you could set UpdateSoundListener on both viewports to false, and add a TCastleSoundListener to some transform placed anywhere you want – maybe somewhere between/above the 2 cars?
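If such a behavior existed, usage might look like this. This is a purely hypothetical sketch – TCastleSoundListener is only the idea above, not an existing CGE class, and Car1/Car2 are assumed names for the car transforms:

```pascal
// Neither visible viewport controls the listener anymore
MainViewport.UpdateSoundListener := false;
Viewport2.UpdateSoundListener := false;

// Hypothetical: a transform halfway between the cars carries the listener
ListenerTransform := TCastleTransform.Create(Self);
ListenerTransform.Translation :=
  (Car1.Translation + Car2.Translation) * 0.5;
ListenerTransform.AddBehavior(TCastleSoundListener.Create(Self)); // does not exist yet
MainViewport.Items.Add(ListenerTransform);
```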

  • Not sure if this is reasonable.

  • If TCastleSoundListener mentioned above seems useful, let me know.

  • I actually really don’t know what other games do in this situation.

    And I happened to play a car racing game, on Xbox, on split-screen, with my daughter, just last week. The sound was coming out from TV. I admit I didn’t pay attention – where was the 3D listener? The sound was somehow sensible to both of us, yet it was coming from TV on which we saw both our cars at once. (The game supports up to 4 players even, so 2 speakers would not be enough anyway). But I don’t know how they did it. I would advise to check what other games do, and if you have conclusions let me know, because this is interesting!

I haven’t tested it myself, but the effect is rendered whenever LocalRender() is called. My wild guess: because CGE changed LocalRender() some time ago from “render shape directly” to “collect shape vertices for batching and render them later”, maybe it is called to collect shapes before the next viewport is enabled?

Edit: I just checked it via NVIDIA Nsight Graphics and it looks like my speculation is true. The effect in the 2nd viewport is rendered before the viewport is enabled.

Edit2: The code doesn’t deal with multiple viewports correctly and needs fixes on the Effekseer side.

I made a few changes, so now Effekseer should work properly with multiple viewports. Can you check out the latest code and try again?
Note that these changes come with some performance degradation, since each transform node now has its own Effekseer manager instance and Effekseer renderer instance, which will affect Effekseer’s own batching algorithm.

Thank you very much for the changes. The effect is now visible in both viewports.
The performance in my case is about as good as before.

The UpdateSoundListener property is available and I set it for both viewports:
MainViewport to true and Viewport2 to false. I need more time before I can make a definitive statement here.

Update to my sound problem:

I have now tested the new UpdateSoundListener property. I’m not really sure, but I cannot hear a difference between setting both viewports to true compared to Viewport1 = true and Viewport2 = false.

The best result for me so far is to create, from the original engine sound (a WAV file), one WAV file with the left channel muted and one with the right channel muted.
I used the tool Audacity for this.
Maybe it would be possible to read the sample data from the WAV file and mute one channel in code, so no third-party program is needed.
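As a sketch of that idea: for a plain 16-bit stereo PCM WAV with a canonical 44-byte header (an assumption – real files can have extra RIFF chunks, which a robust version would have to parse), muting one channel just means zeroing every other 16-bit sample:

```pascal
{$mode objfpc}{$H+}{$pointermath on}
program MuteChannel;
uses Classes;

{ Mute one channel of a 16-bit stereo PCM WAV.
  Assumes a canonical 44-byte header; parse the RIFF chunks
  properly for production use. }
procedure MuteWavChannel(const InFile, OutFile: string; const MuteLeft: Boolean);
var
  S: TMemoryStream;
  Samples: PSmallInt;
  I, SampleCount: Integer;
begin
  S := TMemoryStream.Create;
  try
    S.LoadFromFile(InFile);
    { Data after the header is interleaved: left, right, left, right, ... }
    Samples := PSmallInt(PByte(S.Memory) + 44);
    SampleCount := (S.Size - 44) div 2;
    if MuteLeft then I := 0 else I := 1; // odd indices are the right channel
    while I < SampleCount do
    begin
      Samples[I] := 0;
      Inc(I, 2);
    end;
    S.SaveToFile(OutFile);
  finally
    S.Free;
  end;
end;

begin
  MuteWavChannel('engine.wav', 'engine-left.wav', { MuteLeft } false);
  MuteWavChannel('engine.wav', 'engine-right.wav', { MuteLeft } true);
end.
```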

The solution that came to my mind yesterday: to realize this all using the existing CGE playback (so the sound is mixed in real-time, you don’t need to prepare WAV files with one channel deliberately muted, and it also works for setups with more than 2 speakers), you can use an “alternative 3D world” for the sound setup. The idea is to:

  • Show one 3D world (2 viewports, for 2 cars, sharing Items – see the manual page Multiple viewports to display one world – both having UpdateSoundListener = false).

  • Generate audio from another (3rd) viewport. This would be invisible and have UpdateSoundListener = true (I think I would need to introduce something like an UpdateSoundListenerEvenWhenNotExists property, but that’s not a big thing to add in CGE). In this viewport, the left car doesn’t move and stays at position (-1,0,0); the right car also doesn’t move and stays at (1,0,0). You place road sounds related to the left car at X < 0, road sounds related to the right car at X > 0, and make sure you synchronize what’s happening in the “audio viewport” with the 2 visible viewports.

This is a significant additional effort, I know. But, depending on what you need, it may be a smaller effort than generating stereo data with 1 channel muted. It will allow real-time sound mixing in 3D.
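A hedged sketch of the “audio world” setup, in the spirit of the idea above. AudioViewport, SoundCarLeft, SoundSourceLeft and EngineSound are assumed names, and UpdateSoundListenerEvenWhenNotExists is only the proposed property – it does not exist in CGE yet. TCastleSoundSource is the existing CGE behavior for attaching spatial sound to a transform:

```pascal
{ The invisible viewport that only exists to position sounds and the listener. }
AudioViewport := TCastleViewport.Create(Self);
AudioViewport.FullSize := true;
AudioViewport.Exists := false; // never rendered
// AudioViewport.UpdateSoundListenerEvenWhenNotExists := true; // proposed, not in CGE yet
InsertFront(AudioViewport);

// The left car's sounds live at X < 0 and never move
SoundCarLeft := TCastleTransform.Create(Self);
SoundCarLeft.Translation := Vector3(-1, 0, 0);
SoundSourceLeft := TCastleSoundSource.Create(Self);
SoundSourceLeft.Sound := EngineSound; // a TCastleSound with the engine loop
SoundCarLeft.AddBehavior(SoundSourceLeft);
AudioViewport.Items.Add(SoundCarLeft);

// ... mirror this for the right car at Vector3(1, 0, 0) ...
// Each frame, synchronize sound parameters (e.g. volume) with the real cars.
```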

Let me know if this is something you want to consider. I can experiment on my side with how to “make audio from a viewport that is invisible”, possibly adding a property like UpdateSoundListenerEvenWhenNotExists or something with a similar effect.