New Technology Makes Real-Time 3D Holograms a Reality

We’re closer than ever to Star Trek holograms.

Brad Bergan
An abstract hologram landscape. Source: koto_feja / iStock

Advanced holographic technology is tremendously close to reality.

In the last decade, hype for VR and AR headsets has sprawled across our timelines, but the devices have yet to displace TVs or computer screens as the conventional interface for digital media. Besides cost, a major reason is simply the disorienting nature of wearing a device that simulates a 3D environment, which makes many people sick. But the tides of technology are rapidly revamping a 60-year-old technology for the screaming 2020s: holograms.

Holograms you can touch and feel

Most recently, MIT researchers devised a new way of generating holograms in near real time, using an ultra-efficient, learning-based method. Efficiency is key to this advance, because the new neural-network system allows holograms to run on a laptop, and possibly even a newer smartphone.

Researchers have worked toward viable computer-generated holograms for a long time, but most approaches called for a supercomputer to slog through the physics simulations, which takes a lot of time and typically produces holograms of underwhelming fidelity. The MIT researchers’ work focused on overcoming these obstacles. “People previously thought that with existing consumer-grade hardware, it was impossible to do real-time 3D holography computations,” said the study’s lead author Liang Shi, a doctoral student in MIT’s department of electrical engineering and computer science (EECS), in an MIT blog post. “It’s often been said that commercially available holographic displays will be around in 10 years, yet this statement has been around for decades.”

Shi thinks the new method, called “tensor holography,” will make the near-future promise of holograms finally bear fruit. If the researchers’ new approach works, the advance might create a technological revolution in fields like 3D printing and VR. And it’s been a long time coming. In 2019, scientists created a “tactile hologram” that humans can see, hear, and feel. The system, called a Multimodal Acoustic Trap Display (MATD), employs an LED projector, a foam bead, and a speaker array. The speakers emit ultrasound waves that hold the bead in the air and move it fast enough that it appears to form a moving image reflecting light from the projector. Humans can’t hear the ultrasound, but the mechanical motion of the bead can be captured and focused to stimulate the human ears for audio, “or stimulate your skin to feel content,” explained Martinez Plasencia, co-creator of the MATD and a researcher of 3D user interfaces at the University of Sussex, in a University of Sussex blog post.

In conventional, lens-based photography, only the brightness of each light wave is encoded, so a photo can faithfully reproduce the colors of a scene but yields only a flat, 2D image. By contrast, a hologram encodes both the brightness and the phase of every light wave, which provides a more faithful depiction of a scene’s depth and parallax. For example, a hologram could render Monet’s “Water Lilies” as a singular 3D texture, capturing each plush brushstroke instead of merely the artwork’s color palette. Impressive as that sounds, holograms are tremendously difficult to create and share.
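The difference between a photo and a hologram can be sketched in a few lines. Treating a single light wave as a complex number is a standard simplification; the amplitude and phase values below are hypothetical, chosen only to show what each medium keeps and what it throws away.

```python
import numpy as np

# Model one light wave as a complex number: amplitude * exp(i * phase).
amplitude, phase = 0.8, np.pi / 3        # hypothetical values for one pixel
wave = amplitude * np.exp(1j * phase)

# A camera sensor records intensity only; the phase is lost forever.
photo_pixel = np.abs(wave) ** 2

# A hologram keeps both quantities, so depth and parallax cues survive.
holo_amplitude = np.abs(wave)
holo_phase = np.angle(wave)

print(round(float(photo_pixel), 4))      # intensity alone (2D photo)
print(round(float(holo_phase), 4))       # phase, recoverable only from the hologram
```

Because the photo discards phase, no amount of post-processing can recover the scene's depth from it; the hologram retains exactly the information the photo loses.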

Holograms can remove living beings from dangerous roles

To sidestep the time-consuming physics simulations, Shi’s team decided to let the computer teach itself the physics. They drastically accelerated computer-generated holography with deep learning, designing their own convolutional neural network, a chain of trainable tensors that loosely mimics how humans process visual information. Training such a network typically calls for a large, high-quality dataset, so the researchers built their own database of 4,000 pairs of computer-generated images, each pairing a picture (with color and depth information for every pixel) with its corresponding hologram. The images used varied, complex shapes and colors, with pixels distributed evenly between foreground and background, and physics-based calculations handled occlusion. With all of this, the algorithm saw great success, creating holograms orders of magnitude faster than physics-based calculations.
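The mapping the network learns can be caricatured in miniature: an RGB-D input (three color channels plus depth) goes in, and two output channels standing in for the hologram's amplitude and phase come out. The real tensor holography network is far deeper; this single hand-rolled convolutional layer, with made-up sizes and random weights, only illustrates the shape of the computation.

```python
import numpy as np

rng = np.random.default_rng(0)

H = W = 8                                        # toy image resolution
rgbd = rng.random((4, H, W))                     # hypothetical RGB-D input
kernels = rng.standard_normal((2, 4, 3, 3)) * 0.1  # 2 output channels, 3x3 kernels

def conv2d_same(x, k):
    """Naive 'same'-padded 2D convolution over channel-first arrays."""
    out_c, in_c, kh, kw = k.shape
    pad = kh // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros((out_c, x.shape[1], x.shape[2]))
    for o in range(out_c):
        for i in range(in_c):
            for dy in range(kh):
                for dx in range(kw):
                    out[o] += k[o, i, dy, dx] * xp[i, dy:dy + H, dx:dx + W]
    return out

field = conv2d_same(rgbd, kernels)
amplitude_map, phase_map = field[0], field[1]    # stand-ins for a hologram's two maps
print(field.shape)                               # (2, 8, 8)
```

In training, thousands of such input/output pairs (like the researchers' 4,000-image database) would be used to tune the kernel weights so the predicted field matches the physics-simulated hologram.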

“We are amazed at how well it performs,” said study co-author Wojciech Matusik in the blog post. In mere milliseconds, tensor holography generated holograms from images carrying per-pixel depth information, which can come from conventional computer-generated imagery or be captured with a multicamera setup or a LiDAR sensor (newer smartphones already include these). This is an incredible development, not least because the new 3D holographic system needs less than 1 MB of memory to run its compact tensor network. “It’s negligible, considering the tens and hundreds of gigabytes available on the latest cell phone.”
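A back-of-envelope check makes the sub-megabyte claim plausible: a convolutional network's weight storage is roughly the parameter count times the bytes per weight. The layer sizes below are hypothetical, not taken from the MIT paper; they only show why a compact network fits in so little memory.

```python
# Hypothetical small convolutional network: (in_channels, out_channels) per layer.
layers = [(4, 24), (24, 24), (24, 24), (24, 2)]
kernel = 3 * 3                                   # 3x3 convolution kernels

# Parameters per layer: weights (cin * cout * kernel) plus one bias per output channel.
params = sum(cin * cout * kernel + cout for cin, cout in layers)
megabytes = params * 4 / 1e6                     # float32: 4 bytes per weight

print(params, round(megabytes, 3))               # well under 1 MB
```

Even with several times more channels, such a network stays in the low single-digit megabytes, which is negligible next to a modern phone's storage.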

In other words, we’re tremendously close to putting high-fidelity holograms, rendered in what feels to human eyes like real time, into ordinary consumer products. VR and 3D printing are in for a major upgrade, and the applications could be boundless. In February, a Germany-based circus troupe called Circus Roncalli announced it would use holographic technology to replace its animals, removing the possibility of animal abuse. Eventually, holograms might stand in not only for entertainment, but even for “no strings attached” relationships between humans and holograms. The future is strange, and holograms are likely to take center stage in it.