Behold, the Next-Gen VR: Physical World Mapping


So, what is this physical world mapping all about? Even if you are a true VR evangelist with an Oculus or Vive logo tattooed on your shoulder, you have to admit that current-gen headsets will never achieve mass adoption, because for now, Virtual Reality means being tethered to an expensive PC inside the prison cell of a Guardian system.

It’s just not the virtual freedom we were all dreaming about. However, there is one crucial thing that we forgot. It’s called “progress.” And it smashes everything in its path.

In less than two years of the consumer VR era, headsets became completely wireless, the field of view doubled, and the resolution tripled.

In fact, VR technology is evolving much faster than consumer headsets can be released.

In this series of weekly articles, called “Behold The Next Generation VR Technology,” I will guide you through the world of the latest and most promising tech that will finally make VR the next computing platform.

Most of it is at an early stage of development, but all of this tech will find its way into consumer headsets over the next 10 years. Some sooner, some later.

I divided this series into parts — one part for every vital aspect of VR technology. This one is about:

Physical world mapping

Imagine you are wearing a VR headset, playing a game. The game character offers you a seat. What would you do?

You have a couple of options. The first is to find the chair by touch and move it to the approximate location of the virtual seat (because obviously, you can’t see it while wearing the headset). Sounds complicated already.

The second is to take off the headset and move the chair into the playable area. That is, to say the least, not very immersive.

And the least attractive option is to play seated experiences only.

Poor game design, you say? No. It’s poor tech. Not to mention the cases where people try to lean on a virtual table and fall.

While making a third-person character interact with environmental objects in traditional PC games takes a push of a button, in VR it turns into a quest, where the one forced to run around in search of a chair is you.

The reason is clear: these two worlds — virtual and physical — are not yet connected.

Most VR owners play in a living room with a TV, a couch, a table, and many other objects. On the HTC Vive, for example, you can activate the camera to navigate your play space, but while that solves the problem, the player’s immersion decreases significantly.

The solution comes from the field of computer vision.

 

Real-time 3D object reconstruction — Full video / Source: Imperial College London

The Robot Vision Group at the Department of Computing of Imperial College London has demonstrated a technology that not only does fairly accurate 3D space mapping but also reconstructs physical objects in the virtual environment.

That means it can recognize the shape of, let’s say, a chair and place a similarly shaped chair model in the virtual space at the exact position of the physical one.

They use a depth camera that first measures the depth of the scene. Like the Kinect (or the iPhone X Face ID sensor), it projects thousands of light dots in the infrared spectrum, which is invisible to the human eye. Depending on how small or large each dot appears, it can tell how far away an object’s surface is.

This carpet of dots forms a 3D depth pattern. Next, the application tries to guess what objects it “sees” in this pattern, finds a similar object in its database, and places it in the game.
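To make that matching step concrete, here is a minimal sketch (my own illustration, not the actual Imperial College pipeline) that compares a scanned point cloud against a small database of object templates by bounding-box size. The template names and dimensions are made up for the example:

```python
import numpy as np

# Hypothetical template database: each entry stores an object's
# typical bounding-box size (width, height, depth) in meters.
TEMPLATE_DB = {
    "chair": np.array([0.45, 0.90, 0.50]),
    "table": np.array([1.20, 0.75, 0.80]),
    "couch": np.array([2.00, 0.85, 0.95]),
}

def bounding_box_size(points: np.ndarray) -> np.ndarray:
    """Axis-aligned bounding-box size of an (N, 3) point cloud."""
    return points.max(axis=0) - points.min(axis=0)

def match_object(points: np.ndarray) -> str:
    """Return the template whose bounding box best matches the scan."""
    size = bounding_box_size(points)
    return min(TEMPLATE_DB, key=lambda name: np.linalg.norm(TEMPLATE_DB[name] - size))

# A fake "scan": random points filling a chair-sized volume.
scan = np.random.rand(1000, 3) * np.array([0.44, 0.88, 0.52])
print(match_object(scan))  # -> "chair"
```

A real system matches full 3D shape, not just box dimensions, but the principle is the same: reduce the scan to features and look up the closest known object.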

Imagine chairs, tables, and any other objects being paired with virtual ones so that you, or even virtual characters, can interact with them.

Oculus is already experimenting with these technologies thanks to its acquisition of Surreal Vision, “one of the top computer vision teams in the world focused on real-time 3D scene reconstruction”, who are… Tadaaam! also from Imperial College London.

 

Website scaling compared to VR environment scaling — Full video / Source: Oculus

Another exciting concept, meant to blur the boundaries between physical and virtual, was presented at the latest Oculus Connect.

It’s the concept of scaling: just as a website scales to different screen resolutions, the virtual space can scale to the playable room.

As a result, a virtual environment template can be resized by rearranging the player’s gameplay objects. If you have a small playable area, it can make a virtual spaceship cockpit a bit smaller or, for example, move the spots where enemies spawn.
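For illustration, here is a toy sketch of how such “responsive” layout could work (an assumption of mine, not Oculus’ actual implementation): gameplay anchors are stored in normalized room coordinates and resolved against whatever play area the player actually has, much like percentage-based positions in a web layout:

```python
from dataclasses import dataclass

@dataclass
class Anchor:
    name: str
    x: float  # normalized 0..1 across the room's width
    z: float  # normalized 0..1 across the room's depth

# Hypothetical level template: positions defined independently of room size.
TEMPLATE = [
    Anchor("pilot_seat", 0.5, 0.5),
    Anchor("enemy_spawn", 0.9, 0.1),
]

def resolve(template, room_width_m, room_depth_m):
    """Map normalized anchors onto the player's actual play area (meters)."""
    return {a.name: (a.x * room_width_m, a.z * room_depth_m) for a in template}

print(resolve(TEMPLATE, 2.0, 1.5))  # small living room
print(resolve(TEMPLATE, 4.0, 3.5))  # large play space
```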

 

Oasis’ technology procedurally generates a virtual environment based on the player’s physical space — Full video / Source: www.media.mit.edu

MIT Media Lab researchers went even further and presented a technology called Oasis.

Based on Google’s Project Tango (very similar to the Kinect-like tech described above), they made it possible to scan the physical environment and generate a walkable virtual world with the exact same contour and shape.

Here is how they describe it:

The system captures indoor scenes in 3D, detects obstacles like furniture and walls, and maps walkable areas to enable real-walking in the generated virtual environment. Depth data is additionally used for recognizing and tracking objects during the VR experience. The detected objects are paired with virtual counterparts to leverage the physicality of the real world for a tactile experience. Our system allows a casual user to easily create and experience VR in any indoor space of arbitrary size and shape without requiring specialized equipment or training.
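As a rough illustration of the walkable-area step (my own sketch, not MIT’s code), a depth scan can be reduced to a floor-height occupancy grid, where only the cells close to floor level count as walkable space for the generated world:

```python
import numpy as np

# Toy occupancy grid derived from a depth scan: each cell is 10x10 cm,
# and its value is the estimated obstacle height above the floor in meters.
heights = np.array([
    [0.0, 0.0, 0.0, 0.8],   # 0.8 m: a table edge
    [0.0, 0.0, 0.0, 0.8],
    [0.0, 0.45, 0.0, 0.0],  # 0.45 m: a chair seat
    [0.0, 0.0, 0.0, 0.0],
])

# Cells under 15 cm are treated as walkable floor; everything else
# becomes an obstacle the generated environment must route around.
walkable = heights < 0.15
print(walkable)
```

The detected obstacles (the chair and table cells above) are exactly the objects that get paired with virtual counterparts for the tactile experience the authors describe.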

About Author

IT and video games have been Bryan's topics of interest since a very early age. Video games, the Internet, game consoles, and computers were his everyday toys; as a result, he writes about the infancy of the Web, Virtual Reality/Augmented Reality/Mixed Reality, the games industry, and hardware in general. Writing sits alongside his other interests: programming, hardware, photography, and traveling. Technology, in general, makes him tick.