That’s a promising degree of cross-platform compatibility already, and the hardware is still months out from release.
Whiting gave me a more detailed example of how supporting different motion controllers—the SteamVR controllers, PlayStation Move, and Oculus Touch—works in UE4.
“[For] the control input, the actual buttons on the device, we have an abstraction called the motion controller abstraction. It’s like with a gamepad: you have the left thumbstick, right thumbstick, a set of buttons. We did the same thing with motion controllers. They all have some sort of touchpad or joystick on top, a grip button, a trigger button. We have an abstraction that says ‘when the left motion controller trigger is pulled, when the right motion controller trigger is pulled…’ so that it doesn’t matter what you have hooked up to it; it’ll work out of the box. For the actual motion controller tracking, there’s a little component that you attach to your actor that tracks around; you can say ‘I want to track the left hand, I want to track the right hand.’ If you have a Touch or Vive or Move in there, it all works the same.”
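In code, that abstraction boils down to a component and a set of device-agnostic input keys. Here's a rough sketch of what a pawn using it might look like. The class and handler names are mine; the engine types (UMotionControllerComponent, EControllerHand, the EKeys::MotionController_* trigger keys) are UE4's, though the exact property names have shifted a bit between engine versions:

```cpp
// VRPawn.h -- hypothetical pawn illustrating UE4's motion controller abstraction.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Pawn.h"
#include "Components/InputComponent.h"
#include "MotionControllerComponent.h"
#include "VRPawn.generated.h"

UCLASS()
class AVRPawn : public APawn
{
    GENERATED_BODY()

public:
    AVRPawn()
    {
        RootComponent = CreateDefaultSubobject<USceneComponent>(TEXT("VRRoot"));

        // One tracked component per hand; the engine maps each to whatever
        // device is connected (Vive wand, Touch, Move) behind the abstraction.
        LeftHand  = CreateDefaultSubobject<UMotionControllerComponent>(TEXT("LeftHand"));
        RightHand = CreateDefaultSubobject<UMotionControllerComponent>(TEXT("RightHand"));
        LeftHand->Hand  = EControllerHand::Left;   // "I want to track the left hand"
        RightHand->Hand = EControllerHand::Right;
        LeftHand->SetupAttachment(RootComponent);
        RightHand->SetupAttachment(RootComponent);
    }

    virtual void SetupPlayerInputComponent(UInputComponent* Input) override
    {
        Super::SetupPlayerInputComponent(Input);

        // Device-agnostic trigger keys: the same binding fires no matter
        // which supported motion controller is hooked up.
        Input->BindKey(EKeys::MotionController_Left_Trigger,  IE_Pressed, this, &AVRPawn::OnLeftTrigger);
        Input->BindKey(EKeys::MotionController_Right_Trigger, IE_Pressed, this, &AVRPawn::OnRightTrigger);
    }

private:
    void OnLeftTrigger()  { /* gameplay reaction, e.g. grab or fire */ }
    void OnRightTrigger() { /* gameplay reaction */ }

    UPROPERTY(VisibleAnywhere) UMotionControllerComponent* LeftHand;
    UPROPERTY(VisibleAnywhere) UMotionControllerComponent* RightHand;
};
```

Swap a Vive wand for a Touch controller and nothing in this class changes; the hand selection and trigger bindings are resolved against whatever hardware the runtime reports.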
But what about accounting for the differences in performance between those controllers? If one tracks more quickly and accurately than another, does that affect its implementation in the engine? Is that something developers will have to account for? According to Whiting, prediction solves that problem.
“[The latency of the tracking] makes a huge difference, but fortunately from the SDK side of it, we can handle prediction mostly equivalently on all the different things, so they behave the same,” he said. “We do what we call a late update. When we read the input, we read it twice per frame: once at the very beginning of the frame, when we do all the gameplay stuff, like when you pull the trigger, what direction it’s looking, and then right before we render everything, we also update the rendering position of all the stuff. So if you have a gun in your hand we’ll update it twice a frame, once before the gameplay interaction and once before the rendering. So it’s moving much more smoothly in the visual field but we’re simulating it.
“There’s more latency on the interaction, but the visual stuff is really what causes the feeling of presence. As long as the visual updating is really really crisp, that makes you feel like you really have something in your hand and presence in the world. Everyone’s a little bit different, so we want to handle the technical details for that so they don’t have to worry about it. They just have this motion controller component, and we’ve handled the latency, adding the late updates and doing all the rendering updates on the backend so it just works out of the box.”
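Stripped of engine specifics, the late-update pattern Whiting describes looks something like the sketch below. This is a conceptual illustration, not Epic's code; every name in it is a stand-in:

```cpp
// Conceptual sketch of a VR "late update": the controller pose is sampled
// twice per frame, once for gameplay and once right before rendering.
#include <cstdio>

struct Pose { float Yaw = 0.f; };  // stand-in for full position + orientation

// Stand-in for the tracking SDK: returns the pose predicted to the display
// time. Sampled later in the frame, the prediction window is shorter and
// the pose is fresher.
static Pose SamplePredictedPose() { static float Yaw = 0.f; return { Yaw += 0.5f }; }

static void RunGameplay(const Pose& P)           { std::printf("gameplay pose  %.1f\n", P.Yaw); }
static void UpdateRenderTransform(const Pose& P) { std::printf("render pose    %.1f\n", P.Yaw); }

static void TickFrame()
{
    // Read #1, at the start of the frame: the simulation (trigger pulls,
    // aim direction, physics) runs against this pose.
    RunGameplay(SamplePredictedPose());

    // ...gameplay, animation, and other per-frame work runs here while the
    // player's hand keeps moving...

    // Read #2, right before rendering: re-sample and patch only the render
    // transform, so the visible hand is as current as possible even though
    // gameplay already consumed the earlier pose.
    UpdateRenderTransform(SamplePredictedPose());
}

int main() { for (int Frame = 0; Frame < 3; ++Frame) TickFrame(); }
```

The gameplay read is a few milliseconds staler than the render read, which is exactly the trade-off Whiting describes: a little extra latency on interaction in exchange for hand visuals that track as tightly as the display allows.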