An AI tool called Volume (volume.gl) is "reimagining" 2D images in 3D space. It analyzes a flat image and predicts the depth of each pixel in order to produce a realistic three-dimensional augmented reality (AR) reconstruction. Volume is the result of an artistic collaboration between Or Fleisher and Shirin Anlen, who train a convolutional neural network (a deep learning architecture well suited to image data) to carry out this reconstruction.
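The article does not publish Volume's model, but the core idea of per-pixel depth prediction can be illustrated with a toy sketch. The convolution below is the real building block of such networks; the `toy_depth_map` function and its hand-picked edge-detecting kernel are purely hypothetical stand-ins for the many learned layers a trained depth network would actually use.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a kernel over a single-channel image with zero padding,
    returning a same-sized feature map -- the core CNN operation."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    out = np.zeros(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def toy_depth_map(image):
    """Hypothetical stand-in for a trained depth network: one
    edge-detecting convolution plus a sigmoid, yielding a 'depth'
    value per pixel in (0, 1). A real model stacks many layers
    whose kernels are learned from data, not hand-picked."""
    laplacian = np.array([[0.0,  1.0, 0.0],
                          [1.0, -4.0, 1.0],
                          [0.0,  1.0, 0.0]])
    features = conv2d(image.astype(float), laplacian)
    return 1.0 / (1.0 + np.exp(-features))  # squash to (0, 1)

image = np.random.rand(8, 8)   # stand-in for a grayscale frame
depth = toy_depth_map(image)
print(depth.shape)             # one depth estimate per input pixel
```

The key property the sketch preserves is that the output is a dense map the same size as the input, which is what lets each pixel of a flat frame be placed at its own distance in an AR scene.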
"Our experiment with Pulp Fiction allows users to step inside one of the film's scenes in Augmented Reality, using Apple's ARKit framework (an AR framework for iPhones and iPads) on an iPad. This experiment is one of a few we are conducting at the moment, which illustrate the power of being able to reconstruct 3D scenes from 2D images. The possibilities of being able to reconstruct archival and static footage into 3D environments are one of the main motivations behind the development of the tool used to create these experiments, called Volume," Fleisher tells The Next Web (thenextweb.com).
Fleisher and Anlen hope to make the tech easily accessible for the general public so that anyone with a tablet, laptop or smartphone can convert their own videos and images into AR form.
While the results are not yet seamless, the tool is an inspired and interesting step in the evolution of virtual reality (VR) and AR, and may offer a window into how these technologies will integrate into our everyday lives and environments.