Google Shows Off Its Brand New (And Amazing) AR Features

Google has unveiled the ARCore Depth API – a new milestone for Augmented Reality (AR).

At first glance, the name sounds highly technical and uninteresting. However, once you understand what the API does, you’ll see how it will fundamentally change your augmented reality experiences.

You’ll also see how it will open up tons of new possibilities for AR in the worlds of productivity, shopping, and even gaming.

So what is the ARCore Depth API? Here’s Google’s official explanation:

Depth API allows developers to use our depth-from-motion algorithms to create a depth map using a single RGB camera. The depth map is created by taking multiple images from different angles and comparing them as you move your phone to estimate the distance to every pixel.

If you’re confused, don’t worry – you’re not the only one. The examples below should make the new features clearer.

The upgrades will soon allow developers to perform what’s known as occlusion: letting a virtual object be blocked from view by real-world objects in the scene. Place a virtual cat in your living room, for instance, and it will disappear from view when you angle your camera so that a bed, a table, or some other object sits between you and it.
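To make that concrete, here’s a minimal sketch of the per-pixel comparison occlusion boils down to, assuming you already have a depth map of the real scene and a rendered virtual object with its own depth buffer. The function name, arrays, and values are all illustrative placeholders, not ARCore’s actual API.

# A minimal sketch of per-pixel occlusion. All names and values here
# are illustrative placeholders, not ARCore's real data structures.
import numpy as np

def composite_with_occlusion(camera_rgb, scene_depth, virtual_rgb, virtual_depth):
    """Draw the virtual object only where it is nearer than the real scene.

    camera_rgb:    (H, W, 3) camera frame
    scene_depth:   (H, W) estimated distance to each real-world pixel, in metres
    virtual_rgb:   (H, W, 3) rendered virtual object
    virtual_depth: (H, W) depth of the virtual object, np.inf where empty
    """
    # The virtual object wins only at pixels where it sits in front of
    # the real geometry; everywhere else the camera frame shows through.
    visible = virtual_depth < scene_depth
    out = camera_rgb.copy()
    out[visible] = virtual_rgb[visible]
    return out

# Toy example: the virtual cat (1 m away) is hidden behind a real table
# at 0.5 m in the top row, and visible against a far wall in the bottom row.
camera = np.zeros((2, 2, 3), dtype=np.uint8)
scene = np.array([[0.5, 0.5], [3.0, 3.0]])
cat = np.full((2, 2, 3), 255, dtype=np.uint8)
cat_depth = np.full((2, 2), 1.0)
print(composite_with_occlusion(camera, scene, cat, cat_depth))

The whole trick is that single comparison: wherever the real world is closer than the virtual object, the real world keeps showing.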

Rather than requiring stereo data from two cameras or a dedicated time-of-flight sensor, the new Depth API automatically snaps multiple images as you move a single camera around, then compares those images to estimate the distance to each pixel.
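Google hasn’t published the algorithm itself, but the two-view geometry underneath is classic computer vision: features that shift a lot between frames are close, and features that barely move are far. Here’s a rough sketch using OpenCV’s stereo block matching as a stand-in, where the file names, focal length, and baseline are made-up placeholders.

# A rough stand-in for depth-from-motion: match two frames taken a small,
# known distance apart and triangulate. Google's actual algorithm is not
# public; this uses OpenCV's classic block matcher instead.
import cv2
import numpy as np

# Two hypothetical frames of the same scene from slightly different positions.
left = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

# The horizontal shift (disparity) of each matched block is larger for
# nearby objects and smaller for distant ones.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Triangulate: depth = focal length * baseline / disparity.
focal_length_px = 700.0  # placeholder focal length, in pixels
baseline_m = 0.02        # placeholder distance the camera moved, in metres

valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_length_px * baseline_m / disparity[valid]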

Estimating depth is typically done with specialised time-of-flight sensors, which emit artificial light signals and measure how long they take to bounce back from the subject. But Google said the Depth API achieves the effect with an ordinary smartphone camera – although adding depth sensors will improve the quality of the result.
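For contrast, the time-of-flight principle those dedicated sensors rely on fits in a few lines: time a light pulse’s round trip and halve it. The numbers below are purely illustrative.

# Back-of-the-envelope time-of-flight: distance = speed of light * time / 2.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_seconds):
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A pulse returning after ~6.67 nanoseconds bounced off something
# roughly one metre away.
print(f"{tof_distance(6.67e-9):.3f} m")  # ~1.000 m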

The update creates a real-time understanding of scene depth that could also prove useful for virtual reality. Current virtual reality headsets keep users aware of their real-world surroundings by having them draw a playspace boundary during setup. When the wearer moves close to that boundary, it appears in view, warning them that they are about to step outside the playspace. Depth-based occlusion could make this automatic and fully three-dimensional: when a user walks too close to a real object, that object could simply appear in the headset.
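As a toy illustration, a headset with a live depth map could trigger that warning with a single check, assuming it exposes per-pixel distances. The threshold and data here are invented for the sketch.

# A minimal sketch of depth-based proximity warnings, assuming the headset
# exposes a per-pixel depth image. Threshold and data are invented.
import numpy as np

SAFE_DISTANCE_M = 0.75  # warn when any real object is nearer than this

def should_show_passthrough(depth_map):
    """Return True when the wearer is about to collide with real geometry."""
    return np.nanmin(depth_map) < SAFE_DISTANCE_M

depth_map = np.array([[2.1, 1.8], [0.6, 2.4]])  # e.g. a desk edge 0.6 m away
print(should_show_passthrough(depth_map))  # True -> fade the real desk into view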

Google is hoping that developers will be excited to try out this new feature and integrate it into their AR-powered applications. It shouldn’t be too long before you start seeing better depth experiences in your current Augmented Reality apps.
