We recently spent some time setting up the camera system for Monster House Party.
Whilst creating the greybox and input systems for the game and testing on the PlayStation 5 with PS VR2, we discovered something strange.
When the PS VR2 headset is used on the PlayStation 5, we can render an image inside the headset that is also mirrored onto the connected TV. This works with a standard Camera component in Unity.
However, because Monster House Party is designed to allow a player in VR to also play with 1-2 other players who use DualSense controllers, we need to render additional cameras that are output only to the TV.
Four cameras are rendered in total: the VR left and right eyes (SinglePass) for Player 1, one camera for Player 2 on half of the TV screen, and one for Player 3 on the other half.
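For context, the straightforward way to set this up in Unity is to point each extra camera at its own viewport rect on the TV output, roughly like the sketch below (the field names are illustrative, not our actual components):

```csharp
using UnityEngine;

// Illustrative sketch of the straightforward multi-camera setup:
// the VR camera renders to the headset as usual, while two extra
// cameras each take half of the TV via their viewport rects.
public class NaiveSplitScreenSetup : MonoBehaviour
{
    [SerializeField] private Camera player2Camera; // left half of the TV (hypothetical field)
    [SerializeField] private Camera player3Camera; // right half of the TV (hypothetical field)

    private void Awake()
    {
        // Flat cameras should ignore the headset and only target the display.
        player2Camera.stereoTargetEye = StereoTargetEyeMask.None;
        player3Camera.stereoTargetEye = StereoTargetEyeMask.None;

        // Viewport rects are normalised (x, y, width, height).
        player2Camera.rect = new Rect(0f, 0f, 0.5f, 1f);   // left half
        player3Camera.rect = new Rect(0.5f, 0f, 0.5f, 1f); // right half
    }
}
```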
We found that with a single camera rendering we could run at over 120fps, but as soon as we added a second camera the frame rate dropped to 60fps. This occurs regardless of what content is being rendered (e.g. even an empty scene drops the frame rate!).
After some trial and error, we discovered that if the first camera renders as normal for the PS VR2 headset, and the subsequent cameras render to render textures that are then displayed on screen as UI elements, we can render all three cameras at 90fps 😃
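A rough sketch of the workaround (field names and texture sizes are assumptions for illustration, not our exact implementation): each flat camera renders into a RenderTexture, which is then shown on the TV through a RawImage on a screen-space canvas.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch of the workaround: flat cameras render into RenderTextures,
// which are displayed on the TV through UI RawImages, so only the
// VR camera renders directly to a display target.
public class RenderTextureScreenSetup : MonoBehaviour
{
    [SerializeField] private Camera player2Camera;   // hypothetical references
    [SerializeField] private Camera player3Camera;
    [SerializeField] private RawImage player2Image;  // UI elements on a screen-space canvas
    [SerializeField] private RawImage player3Image;

    private void Awake()
    {
        SetupCamera(player2Camera, player2Image);
        SetupCamera(player3Camera, player3Image);
    }

    private void SetupCamera(Camera cam, RawImage target)
    {
        // Size is illustrative: half of a 1080p TV output per player.
        var texture = new RenderTexture(960, 1080, 24);
        cam.stereoTargetEye = StereoTargetEyeMask.None;
        cam.targetTexture = texture;   // camera no longer renders to the display
        target.texture = texture;      // the UI element shows the camera's output instead
    }
}
```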
With a solution found, we've created a nifty 'screen manager' that takes care of setting up the cameras and screen rendering based on which players are present (e.g. just the VR player, VR with one or both controller players, or one or both controller players with no VR player).
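We haven't shared the actual manager, but a minimal sketch of the idea might look like this (all names and the layout rules are assumptions): enable the cameras for the active players and stretch their RawImages across the TV accordingly.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Minimal sketch of a 'screen manager': given which players are active,
// enable the right cameras and lay their RawImages out on the TV.
public class ScreenManager : MonoBehaviour
{
    [SerializeField] private Camera vrCamera;        // renders to the PS VR2 headset
    [SerializeField] private Camera[] flatCameras;   // Player 2 and Player 3 (render to textures)
    [SerializeField] private RawImage[] flatImages;  // matching UI elements on the TV canvas

    public void Configure(bool vrPlayerActive, bool player2Active, bool player3Active)
    {
        vrCamera.gameObject.SetActive(vrPlayerActive);

        bool[] active = { player2Active, player3Active };
        int activeCount = (player2Active ? 1 : 0) + (player3Active ? 1 : 0);

        int slot = 0;
        for (int i = 0; i < flatCameras.Length; i++)
        {
            flatCameras[i].gameObject.SetActive(active[i]);
            flatImages[i].gameObject.SetActive(active[i]);
            if (!active[i]) continue;

            // One active flat player fills the TV; two split it in half.
            var rect = flatImages[i].rectTransform;
            rect.anchorMin = new Vector2(activeCount == 1 ? 0f : slot * 0.5f, 0f);
            rect.anchorMax = new Vector2(activeCount == 1 ? 1f : (slot + 1) * 0.5f, 1f);
            rect.offsetMin = Vector2.zero;
            rect.offsetMax = Vector2.zero;
            slot++;
        }
    }
}
```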
This is an example of one of the many technical issues developers come across during development that aren't always obvious at the planning stage. Luckily, finding and implementing the solution for this one didn't impact the schedule too much!