iOptics Lab researchers combat VR eye strain with new display method

7/3/2017 Julia Sullivan, ECE ILLINOIS

Assistant Professor Liang Gao and graduate student Wei Cui are using subpanels to overcome the vergence-accommodation conflict.

Virtual reality (VR) and augmented reality (AR) developers promise that the technology is only limited by imagination, but wearing VR goggles for even a short period of time can be challenging. Eye strain, motion sickness, and fatigue are frequent physical complaints that limit the time that can be spent in a VR environment.

However, a new breakthrough at the iOptics Lab (Intelligent Optics Lab) at Illinois is poised to change that. ECE ILLINOIS Assistant Professor Liang Gao and graduate student Wei Cui, both affiliated with the Beckman Institute, have created a new optical mapping 3D display that makes VR viewing more comfortable.  

According to the pair’s report published in Optics Letters, most current 3D VR/AR displays present two images that the viewer’s brain combines into an impression of a 3D scene. This stereoscopic display method can cause eye fatigue and discomfort because of an eye-focusing problem called the vergence-accommodation conflict.

When you look at an object, your eyes rotate to point toward it and your lenses focus on it. Depending on the object’s distance, your eyes converge or diverge (vergence), and your lenses then adjust their focus (accommodation). The two responses normally work together automatically, but a rendered 3D scene breaks that coupling, and the conflict arises.

The two images that make up a stereoscopic 3D scene are displayed on a single surface at a fixed distance from your eyes, but they are slightly offset to create the 3D effect. Your eyes must therefore work differently than usual: they converge on an object that appears farther away, while your lenses stay focused on a screen that is centimeters from your face. (Learn more about the vergence-accommodation conflict in the Journal of Vision, March 2008, Vol. 8, No. 3, Article 33. doi:10.1167/8.3.33.)
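To make the mismatch concrete, here is a minimal numerical sketch in Python. The interpupillary distance and viewing distances are assumed round numbers for illustration, not values from the paper; it compares the vergence the eyes adopt for a virtual object with the accommodation demand set by the physical screen.

```python
import math

# Assumed, illustrative numbers -- not taken from the Optics Letters paper.
IPD = 0.064          # interpupillary distance in meters (~64 mm is typical)
screen_dist = 0.05   # physical distance from eyes to the headset display (m)
virtual_dist = 2.0   # distance of the rendered object the eyes converge on (m)

def vergence_angle_deg(d):
    """Angle the two eyes rotate inward to fixate a point at distance d."""
    return math.degrees(2 * math.atan(IPD / (2 * d)))

def accommodation_diopters(d):
    """Accommodation demand: the lens focuses at 1/d diopters."""
    return 1.0 / d

# Vergence is driven by where the rendered object appears to be...
print(f"Vergence angle for the virtual object:  {vergence_angle_deg(virtual_dist):.2f} deg")
# ...while accommodation stays locked to the physical screen surface.
print(f"Accommodation demand at the screen:     {accommodation_diopters(screen_dist):.1f} D")
print(f"Accommodation demand the scene implies: {accommodation_diopters(virtual_dist):.1f} D")
```

In a real headset the eyepiece optics push the screen’s focal plane out to a meter or two rather than a few centimeters, but that focal distance is still fixed while vergence varies with the scene, so the mismatch remains.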

The operating principle of the OMNI three-dimensional display. Illustration by Prof. Liang Gao, ECE ILLINOIS.
To overcome these stereoscopic limitations, Cui and Gao created an optical mapping near-eye (OMNI) three-dimensional display method. Their method divides the digital display into subpanels. A spatial multiplexing unit (SMU) shifts these subpanel images to different depths, providing correct focus cues for depth perception. Unlike the offset images of the stereoscopic method, the SMU also aligns the centers of the subpanel images with the optical axis. An algorithm then blends the images together into a seamless 3D scene.
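The article doesn’t detail the blending algorithm, but a common approach in multiplane displays is linear depth-weighted blending: each pixel’s intensity is split between the two depth planes that bracket its true depth, which fuses the discrete planes into a continuous-looking volume. The sketch below illustrates that general idea; the plane count and positions are illustrative assumptions, not values from the OMNI paper.

```python
import numpy as np

# Illustrative depth planes (in diopters) that subpanels might be mapped to.
# The count and spacing here are assumptions, not values from the paper.
planes = np.array([0.5, 1.0, 2.0, 3.0])

def blend_weights(pixel_depth_diopters):
    """Linear depth-weighted blending: split a pixel's intensity between
    the two planes that bracket its depth, in proportion to its dioptric
    distance from each; depths outside the range snap to the nearest plane."""
    w = np.zeros(len(planes))
    d = float(np.clip(pixel_depth_diopters, planes[0], planes[-1]))
    i = int(np.searchsorted(planes, d))
    if planes[i] == d:
        w[i] = 1.0           # exactly on a plane: draw it there alone
    else:
        lo, hi = i - 1, i    # planes bracketing the pixel's depth
        t = (d - planes[lo]) / (planes[hi] - planes[lo])
        w[lo], w[hi] = 1.0 - t, t
    return w

# A pixel at 1.5 D sits halfway between the 1.0 D and 2.0 D planes.
print(blend_weights(1.5))   # -> [0.  0.5 0.5 0. ]
```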

“People have tried methods similar to ours to create multiple plane depths, but instead of creating multiple depth images simultaneously, they changed the images very quickly,” Gao said in an OSA news release. “However, this approach comes with a trade-off in dynamic range, or level of contrast, because the duration each image is shown is very short.”
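The trade-off Gao describes can be put in rough numbers. If a time-multiplexed display cycles through N depth images within each frame, every depth plane is lit only 1/N of the time, which caps the light and contrast each plane can deliver; spatial multiplexing keeps all planes lit continuously, instead trading pixel resolution, since the subpanels divide the display. A tiny illustrative calculation follows (the plane count is an assumption, not a figure from the paper):

```python
# Assumed plane count for illustration; not a figure from the paper.
n_planes = 4

# Time multiplexing: depth images shown one after another within a frame,
# so each plane is only lit for a fraction of the frame time.
time_mux_duty = 1.0 / n_planes

# Spatial multiplexing (as in the OMNI subpanel approach): every depth
# plane is displayed simultaneously, on its own region of the panel.
spatial_mux_duty = 1.0

print(f"Time-multiplexed duty cycle per plane:      {time_mux_duty:.0%}")     # 25%
print(f"Spatially multiplexed duty cycle per plane: {spatial_mux_duty:.0%}")  # 100%
# Shorter duty cycle -> less light per plane -> reduced dynamic range,
# which is the trade-off described in the quote above.
```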

The researchers are continuing work on the display, increasing power efficiency and reducing weight and size. “In the future, we want to replace the spatial light modulators with another optical component such as a volume holography grating,” said Gao. “In addition to being smaller, these gratings don’t actively consume power, which would make our device even more compact and increase its suitability for VR headsets or AR glasses.”

Read the original report in Optics Letters and coverage from New Atlas and Electronics 360.


This story was published July 3, 2017.