How the next generation of depth cameras will transform our definitions of image and interface.
New technologies of sight have always changed the way we see the world: the microscope, the telescope, the camera. Most recently, the ubiquity of smartphone cameras has democratized photography and popularized video calling. But when depth cameras, the next evolution of camera technology, come into the mainstream, the changes will be revolutionary.
How depth cameras work
Intel’s RealSense, Microsoft’s Kinect, and the LeapMotion controller are all depth cameras you may have heard of. Each of them captures 3D movement and translates it into information that can be used for things like 3D scanning or gestural interfaces. They do this by projecting a constellation of infrared dots across their field of view, then using the principle of parallax to triangulate the location of individual dot clusters in 3D space.
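The parallax idea boils down to one formula: a projected dot appears shifted in the camera's image, and the nearer the surface, the bigger the shift. Here's a minimal sketch of that triangulation; the focal length, baseline, and disparity values are illustrative, not taken from any real device.

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Triangulate distance from parallax: a dot seen by the camera is
    shifted (the disparity) relative to where the projector placed it,
    and nearer surfaces produce larger shifts."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Example: with a 600 px focal length and a 7.5 cm projector-camera
# baseline, a dot displaced by 15 px maps to a surface 3 m away.
print(depth_from_disparity(600, 0.075, 15))  # 3.0
```

Run this over thousands of dots at once and you get a full depth map of the scene, thirty times a second.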
Bye Bye Mouse
Using a mouse isn’t intuitive; we have to learn how to use it. But as the LeapMotion is already demonstrating, depth sensors mean that moving and selecting digital objects will one day feel more like manipulating physical ones. A computer that tracks your gaze and head movements could map them to the position of the video camera in a conference call, so that turning to look at the speaker is as intuitive and immediate as turning your head in person.
Measurements in a Snap
Imagine looking at apartments online and being able to precisely calculate square footage just by analyzing the pictures. Or sending a picture of yourself to a clothing company and receiving custom-tailored garments in return. Because depth cameras work by triangulating distance, they automatically generate measurement data that can be used for a host of applications from providing exact data in crime scene photography to creating detailed 3D scans of objects.
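Because every pixel in a depth image carries a distance, two pixels can be turned back into two points in real space and the gap between them measured directly. The sketch below shows the idea; the camera intrinsics (`fx`, `fy`, `cx`, `cy`) and pixel coordinates are invented example values, not from a real camera.

```python
import math

def back_project(u, v, depth, fx, fy, cx, cy):
    """Convert a pixel (u, v) with a known depth into 3D camera
    coordinates using the pinhole camera model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def measure(p1, p2):
    """Real-world distance between two 3D points, in the depth units."""
    return math.dist(p1, p2)

# Two pixels on opposite walls of a room, both 4 m from the camera:
a = back_project(100, 240, 4.0, fx=600, fy=600, cx=320, cy=240)
b = back_project(540, 240, 4.0, fx=600, fy=600, cx=320, cy=240)
print(round(measure(a, b), 2), "metres")  # 2.93 metres
```

This is the same arithmetic that would let an apartment listing report square footage straight from its photos.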
Images in Layers
Depth cameras capture images and video in successive layers, so that each precise layer can be refocused or edited in real time. Video calls will finally convey the dimensionality of real life, like looking at another person through a pane of glass. We’ll also be able to filter and augment live video, incorporating animation, masking the background, or editing one’s face without obscuring the rest of the image. The perspective of the camera can be shifted so that video callers will finally appear to look each other in the eye.
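Background masking is the simplest of these layer tricks: keep every pixel nearer than a cutoff, discard the rest, no green screen required. A toy sketch, using an invented 3×3 "frame" and made-up depth values:

```python
def mask_background(color, depth, max_depth, fill=0):
    """Keep pixels whose depth is under max_depth (the foreground layer);
    replace everything farther away with a fill value."""
    return [
        [c if d < max_depth else fill for c, d in zip(crow, drow)]
        for crow, drow in zip(color, depth)
    ]

color = [[255, 200, 180],
         [255, 210,  90],
         [250, 220,  80]]
depth = [[1.0, 1.1, 3.5],   # metres; the right column is far background
         [1.0, 1.2, 3.6],
         [1.1, 1.2, 3.4]]

print(mask_background(color, depth, max_depth=2.0))
# [[255, 200, 0], [255, 210, 0], [250, 220, 0]]
```

Swap the fill value for pixels from another image and you have live background replacement; shift the cutoff and you're refocusing layer by layer.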
But most importantly, photographs and video will break away from the two-dimensional paradigm like never before, without the need for 3D glasses or VR headsets. They’ll provide new ways for us to notice, to interact, and to experience.
Got you hooked? Here’s more:
How the Kinect Depth Sensor Works
An easy-to-comprehend video on the tech behind the Kinect
Depth-sensing cameras will enable us to track and respond to emotion
Could depth cameras open the door for animal-controlled interfaces?