Shared Mixed Reality experiences at high precision

Mixed Reality has been around for a while now, and there are already a lot of Mixed Reality applications available. However, surprisingly few of these apps make use of a capability that many consider to be pivotal for the future success of Mixed Reality: Sharing. The ability for several people to see the same virtual content at the same real-world location enables entirely new ways of collaborating. You can see this feature being used in countless films, and it almost feels natural by now.

With Microsoft HoloLens, Mixed Reality Sharing is already possible, and Microsoft provides the tools to develop shared applications. All it takes is two HoloLenses in the same room connected to the same Wi-Fi network. Using the Mixed Reality Toolkit, you can establish a connection between the instances of your app running on the two Lenses and have them negotiate a Shared World Anchor. This is essentially a coordinate system whose origin and orientation in space are the same on both HoloLenses. All you need to do then is make sure the content inside this coordinate system is the same on all Lenses – which can also be easily achieved using the tools provided by the Mixed Reality Toolkit. Microsoft’s Holograms 240 Tutorial explains how to implement sharing this way.
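
To make this concrete, here is a minimal sketch of the anchor exchange using Unity’s `WorldAnchorTransferBatch` API (`UnityEngine.XR.WSA.Sharing`). How the serialized bytes travel between the devices – the Sharing Service, UNET, or your own sockets – is up to you; `SendToPeers` is a hypothetical placeholder for that transport.

```csharp
using UnityEngine;
using UnityEngine.XR.WSA;
using UnityEngine.XR.WSA.Sharing;

// Sketch: export a World Anchor on one HoloLens and import it on another.
public class AnchorSharing : MonoBehaviour
{
    public GameObject sharedOrigin; // root object all shared content is parented to

    public void ExportAnchor()
    {
        var anchor = sharedOrigin.AddComponent<WorldAnchor>();
        var batch = new WorldAnchorTransferBatch();
        batch.AddWorldAnchor("sharedOrigin", anchor);

        var buffer = new System.Collections.Generic.List<byte>();
        WorldAnchorTransferBatch.ExportAsync(
            batch,
            data => buffer.AddRange(data),   // collect serialized chunks as they arrive
            reason =>
            {
                if (reason == SerializationCompletionReason.Succeeded)
                    SendToPeers(buffer.ToArray()); // transport is app-specific
            });
    }

    public void ImportAnchor(byte[] serializedAnchor)
    {
        WorldAnchorTransferBatch.ImportAsync(serializedAnchor, (reason, batch) =>
        {
            if (reason == SerializationCompletionReason.Succeeded)
                batch.LockObject("sharedOrigin", sharedOrigin); // attach the anchor here
        });
    }

    void SendToPeers(byte[] data) { /* hypothetical: your network layer */ }
}
```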

Deviation of up to two centimeters

Up until this point, the spatial coordination between the two HoloLenses works smoothly without having to put in a lot of effort. But one problem remains: The shared coordinate systems usually aren’t at the exact same location in space on all HoloLenses in the sharing session. There can be a deviation of up to one or two centimeters. With holograms freely floating in space, this is hardly noticeable. But it may become a problem when the virtual content is meant to align with real-world objects such as a tracked object or image marker.


We need precision!

So how do you go about improving the precision of the alignment of the shared spaces? One approach is to use an image marker that is tracked by all HoloLenses in the sharing session. There are several frameworks that can be used for this purpose, such as Vuforia or OpenCV. As soon as each HoloLens has acquired the marker’s location and orientation in space, the resulting coordinate system can be used instead of the shared World Anchor. Well, almost.
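
In Unity terms, this means driving a shared-origin transform from the detected marker pose instead of the negotiated anchor. A minimal sketch, assuming your tracking framework (e.g. Vuforia) hands you the marker’s world-space pose:

```csharp
using UnityEngine;

// Sketch: use a detected image marker as the shared coordinate system.
// markerPosition/markerRotation are assumed to come from your marker
// tracking framework in world space.
public class MarkerOrigin : MonoBehaviour
{
    public Transform sharedOrigin; // all shared holograms are children of this

    public void OnMarkerDetected(Vector3 markerPosition, Quaternion markerRotation)
    {
        // Every device places the same origin at the same physical marker,
        // so content parented under sharedOrigin lines up across devices.
        sharedOrigin.SetPositionAndRotation(markerPosition, markerRotation);
    }
}
```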

The issue with image-based markers is that their position as perceived by the camera is usually very precise in the directions perpendicular to the viewing direction but can vary widely along the viewing direction (i.e. in depth). So when the HoloLenses detect the marker from different viewpoints, we basically end up with the same problem as before. To compensate, we look at the marker from a different direction with each HoloLens and combine the detection rays from both viewing perspectives: since the depth error lies along each ray, the point where the rays (nearly) intersect pins down the marker’s position far more precisely. In practice the two rays won’t intersect exactly, so we take the midpoint of the shortest segment between them.
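
Here is a sketch of that triangulation step in plain Unity C#. It uses the standard closest-point-between-two-lines construction and returns the midpoint of the shortest connecting segment as the position estimate:

```csharp
using UnityEngine;

// Sketch: two HoloLenses each cast a detection ray toward the marker from
// their own viewpoint. The rays rarely intersect exactly, so we take the
// midpoint of the shortest segment between them as the marker position.
public static class RayIntersection
{
    public static Vector3 ClosestPointBetween(Ray r1, Ray r2)
    {
        Vector3 w0 = r1.origin - r2.origin;
        float a = Vector3.Dot(r1.direction, r1.direction);
        float b = Vector3.Dot(r1.direction, r2.direction);
        float c = Vector3.Dot(r2.direction, r2.direction);
        float d = Vector3.Dot(r1.direction, w0);
        float e = Vector3.Dot(r2.direction, w0);

        float denom = a * c - b * b;      // ~0 when rays are (near-)parallel
        if (Mathf.Abs(denom) < 1e-6f)
            return r1.origin;             // degenerate case: re-measure instead

        float t1 = (b * e - c * d) / denom;
        float t2 = (a * e - b * d) / denom;

        Vector3 p1 = r1.GetPoint(t1);     // closest point on ray 1
        Vector3 p2 = r2.GetPoint(t2);     // closest point on ray 2
        return (p1 + p2) * 0.5f;          // midpoint = best position estimate
    }
}
```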


More precision with three markers

But what about orientation (i.e. rotation)? Marker detection frameworks usually do a fairly good job of detecting an image marker’s orientation in space as long as the image is large enough (ideally roughly letter-sized for HoloLens). But “fairly good” might not be good enough: with a deviation of just 1°, an object placed 2 meters from the anchor will be offset by about 3.5 cm (2 m × tan 1° ≈ 0.035 m).

To align the anchor’s orientation exactly, we can simply use three markers. These need to be placed sufficiently far apart that they form a triangle with an angle close to 90° at the central marker. Using the vectors from the central marker to the other two, we can compute a far more accurate rotation for the coordinate system, because the long baselines between the markers shrink the angular error well below what a single marker’s orientation estimate gives us.
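
The following sketch shows one way to build that rotation in Unity. The marker layout (`center`, `right`, `forward`) is an assumption about how you arrange the three markers; the key step is orthogonalizing the two marker vectors via cross products before constructing the quaternion:

```csharp
using UnityEngine;

// Sketch: derive the shared coordinate system's rotation from three marker
// positions. "center" is the marker at the (near-)90° corner of the triangle;
// "right" and "forward" are the other two markers (assumed layout).
public static class MarkerFrame
{
    public static Quaternion RotationFromMarkers(Vector3 center, Vector3 right, Vector3 forward)
    {
        Vector3 xAxis = (right - center).normalized;
        Vector3 zAxis = (forward - center).normalized;

        // The two marker vectors are only approximately perpendicular, so
        // orthogonalize: take the plane normal, then rebuild a clean forward.
        Vector3 yAxis = Vector3.Cross(zAxis, xAxis).normalized; // plane normal (up)
        zAxis = Vector3.Cross(xAxis, yAxis);                    // exact right angle

        return Quaternion.LookRotation(zAxis, yAxis);
    }
}
```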


Now we have a coordinate system that is exactly aligned across all participating HoloLenses. Any content that has the same position relative to this coordinate system will appear at the exact same position in space for all session participants.

A few more things…

To make sure your shared experience runs stably and works for everyone, there are a few more things you’ll need to take care of.

The HoloLens has pretty solid head-tracking capabilities, but they depend heavily on your surroundings. The device’s Inertial Measurement Unit can follow your head’s motion to some extent, but like any gyroscope or accelerometer, it drifts over time. You therefore need a sufficient number of visual features in the room (pictures on the wall, wooden surfaces, etc.) that can be picked up by the Environment Understanding Cameras, so the device can reliably track its position in space. With nothing but white walls in your office, the spatial tracking of the HoloLens will work poorly.

To get the most out of the spatial hologram stabilization, attach World Anchors at the locations of all markers once they’ve been detected and finalized. This helps the HoloLens compensate for drift of the holograms in space and ensures everything stays exactly where it’s supposed to be.
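
A minimal sketch of this pinning step, using Unity’s `WorldAnchor` component (`UnityEngine.XR.WSA`). Note that an anchored object can no longer be moved, so any existing anchor has to be removed before repositioning and only re-added once the pose is final:

```csharp
using UnityEngine;
using UnityEngine.XR.WSA;

// Sketch: once a marker's position has been finalized, pin it with a
// World Anchor so HoloLens actively stabilizes it against tracking drift.
public static class AnchorUtil
{
    public static void PinMarker(GameObject markerObject)
    {
        // Remove any previous anchor first; an anchored transform is locked.
        var existing = markerObject.GetComponent<WorldAnchor>();
        if (existing != null)
            Object.DestroyImmediate(existing);

        markerObject.AddComponent<WorldAnchor>(); // pose is now device-stabilized
    }
}
```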

Make people believe in magic!

And last but not least, make sure the interpupillary distance (IPD) is set correctly on the HoloLens so that holograms are perceived at the right distance from the viewer. You can do this either by running the calibration app or by measuring the IPD and setting it through the HoloLens Device Portal.

With all these measures combined, you can perfectly augment real world objects with virtual features in your shared Mixed Reality experience – and make people believe in magic!

Michael Sattler, Lead Engineer
