It’s 3 AM. You wake, sleep broken by the water monster pounding its liquid fists against the sides of your bladder. You pull off the covers and trudge towards the restroom. You know where the toilet is. You know your hallway. But you still stub your toe and fumble like a drunk for the doorknob. Your ability to perfectly interpret your surroundings is broken by darkness.
This is why software developers, according to Anton Mikhailov, a software engineer in research & development at SCE, program sweeping gestures into their Wii games instead of precise movements. The Wii remote hardware can’t interpret space that well.
But the PlayStation Move can.
In Boston this afternoon, Mikhailov tossed me a man-versus-darkness analogy to help explain what the PS Move does differently and better than the Wii Motion Plus. According to him, it comes down to a variety of factors: the remote’s 1:1 precision, its own accelerometers and gyroscopes, and, most importantly, the PlayStation Eye camera’s spatial tracking.
“The way the system works — the lit sphere is being tracked by the PlayStation Eye camera. Internally, there are accelerometers and gyroscopes. That part is very similar to the Wii Motion Plus. But the camera is the big differentiating factor because that actually lets you have a position in space.
“On the Wii Motion Plus you sort of have a gestural input. It sort of knows how you’re moving but it doesn’t really know where you end up.”
Mikhailov and I are standing in front of a decent-sized LCD TV in the middle of a hotel suite in the city tea and the Red Sox built. I’m watching something similar to the E3 tech demo — Mikhailov is holding two slim PS Move controllers. On the screen, though, he’s holding two goofy looking swords.
He’s swinging and twisting the controllers quickly, each movement recreated on the screen as close to perfect as my naked eye can discern. He lets me try. I giggle and feel awkward as I see the two toys transferred to my grip.
“An analogy for that — [the Wii Motion’s general input] — is if you close your eyes and try to walk across the room. You know you’re going forward but you’re really not sure where you are in space. You’re kind of stumbling.
“That’s why they end up doing a lot of gestures where you swing forward and swing left. We’re more of a spatial device. We can do quick gestures. But at the same time, we also have the smooth positionings. So we can do another level of gameplay where you can fake left and then go under and low. You can do complex motions that don’t just trigger gestural inputs, but move how you move.”
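Mikhailov’s blind-walk analogy maps onto a classic sensor-fusion problem: integrating accelerometer readings twice to get position lets noise pile up without bound, while an occasional absolute fix from a camera keeps the error tiny. A toy simulation (illustrative numbers only; none of this comes from Sony’s actual tracking code) makes the gap visible:

```python
import random

random.seed(42)
DT = 1 / 100   # sensor updates at 100 Hz
STEPS = 500    # five seconds of tracking a controller held still

def track(camera_fix_every=None):
    """Dead-reckon position from noisy accelerometer samples; optionally
    snap back to an absolute camera fix every `camera_fix_every` steps."""
    pos, vel = 0.0, 0.0
    for step in range(1, STEPS + 1):
        accel = random.gauss(0.0, 0.05)  # the true acceleration is zero
        vel += accel * DT                # integrate once: velocity
        pos += vel * DT                  # integrate twice: position
        if camera_fix_every and step % camera_fix_every == 0:
            pos, vel = 0.0, 0.0          # camera reports the true position
    return abs(pos)                      # how far we've drifted, in metres

def avg_drift(camera_fix_every=None, trials=200):
    return sum(track(camera_fix_every) for _ in range(trials)) / trials

imu_only = avg_drift()                        # "eyes closed": drift accumulates
with_camera = avg_drift(camera_fix_every=30)  # periodic fixes bound the error
```

With these made-up noise figures, inertia-only tracking drifts by centimetres within seconds, while even infrequent camera fixes hold the error to fractions of a millimetre. That is the difference between knowing how you moved and knowing where you are.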
The Eye does all the depth tracking based on its view of the controller.
“The camera does 3D tracking. The Wii has a camera looking at the dots. But the reason it can’t do 3D is because, as [people] turn away, [the hardware] loses sight of the dots — and the dots move around in unpredictable ways.
“Because our camera is looking at the controller, going back to the blind analogy, it’s like those are our eyes that are watching the room for us. That’s how we can tell position.”
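The geometry behind that is simple pinhole-camera math: the glowing ball has a known physical size, so its apparent radius in the camera image fixes its depth, and its position in the frame fixes the rest. A sketch with made-up numbers (the focal length and sphere radius here are assumptions, not Sony’s calibration values):

```python
FOCAL_PX = 800.0         # assumed camera focal length, in pixels
SPHERE_RADIUS_M = 0.022  # assumed physical radius of the glowing ball

def controller_position(cx_px, cy_px, radius_px):
    """Back-project the tracked sphere (image-centre offset from the
    optical axis plus apparent radius, all in pixels) into a 3D
    position in camera space, in metres."""
    z = FOCAL_PX * SPHERE_RADIUS_M / radius_px  # smaller image => farther away
    x = cx_px * z / FOCAL_PX                    # image offsets scale with depth
    y = cy_px * z / FOCAL_PX
    return x, y, z

# A sphere imaged 10 px in radius, dead centre, sits 1.76 m from the camera:
# controller_position(0, 0, 10) -> (0.0, 0.0, 1.76)
```

It also shows why covering the ball breaks tracking: with no visible sphere there is no apparent radius to invert, and only the drifting inertial sensors remain.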
He switches the swords to models of a PS Move motion controller. The on-screen device perfectly matches the controller in my hand, right down to the buttons, as I move it around to face the PS Eye.
“This is the shape of the controller, overlaid on the video. You can actually see how precise this device is. If there was any error, you wouldn’t see the controller where it’s supposed to be. It’s exactly where we are.”
The sub-controller, an optional attachment, can’t be tracked like this because it doesn’t have a glowing sphere. As dorky as that ball looks, it’s vital to the experience. To illustrate, Mikhailov places a free hand over it. The hardware loses sight of the controller, and as his hands move, the 1:1 on-screen recreation of the remote hovers feet from where it should be, able only to snake toward the movement.
“That’s why on the Wii, you kind of get some spatial stuff, but it doesn’t always work and it’s not always reliable.”
What’s the skinny on PS Move versus Microsoft’s Project Natal? Mikhailov boots up the “puppet” demo. The video feed turns him into a wireframe monster with long, slim fingers and a funny-shaped, featureless face. The monster moves as he does, recreating his head and hand movements.
“This is tracking your head and your hands. Natal is tracking your full body, so it’s doing the legs too. When we did EyeToy, we found that actually these are the most important parts because they define your body. Most of the time people aren’t doing kicks. It’s more important to know precisely where your hand is rather than roughly know where your body is.
“I have these fine finger controls. I can squeeze the trigger. With Natal what you get is more like position. You don’t actually get angles of your arms very well. We think that’s more important. We’re tracking less, but I think we’re tracking in a more comprehensive way.”
Mikhailov appears to be on the money with all his points: Move is a sharp collection of hardware that recreates movement with sometimes-unnerving, near-perfect precision. The Wii and Project Natal are missing that boat.
But I still have questions. Will the Move end up trumping either of these two motion technologies? (Nintendo has had a hell of a head start.) Will software creators actually harness the hardware and create interesting, compelling games that have us exploring what Move can do? We’ll have to wait a long time for those answers.