Researchers from the Division of Sustainable Energy and Environmental Engineering at Osaka University have used deep learning to improve the generation of mixed reality on mobile devices. The research shows that a video game engine can dynamically remove occluding objects recognized by the algorithm.
Mixed Reality (MR)
Mixed reality (MR) is a form of visual augmentation that digitally alters real-time images of existing objects or landscapes. Looking at a smartphone screen may feel like magic when characters appear alongside natural landmarks. The same technique can be applied to more practical projects, such as visualizing how a new building would look after the existing structure is removed and trees are added. However, this type of digital erasure was previously considered too computationally intensive to generate in real time on a mobile device.
Osaka University researchers demonstrate a new system that constructs MR landscape visualizations faster using deep learning. They trained the algorithm on thousands of labeled images so it can swiftly identify occlusions, such as walls and fences, enabling automatic semantic segmentation of the view into elements to be kept and elements to be masked. Additionally, the program quantitatively measures the Green View Index (GVI), the fraction of a person's visual field occupied by greenery such as plants and trees.
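The GVI described above can be illustrated with a minimal sketch: given a per-pixel semantic segmentation mask, count the fraction of pixels belonging to greenery classes. The class IDs and the toy mask here are illustrative assumptions, not the authors' actual model output.

```python
import numpy as np

# Hypothetical class IDs for greenery; a real segmentation model
# defines its own label set (e.g. tree, grass, shrub classes).
GREENERY_CLASSES = {1, 2}

def green_view_index(mask: np.ndarray) -> float:
    """Fraction of pixels in the view classified as greenery."""
    greenery = np.isin(mask, list(GREENERY_CLASSES))
    return float(greenery.sum()) / mask.size

# Toy 4x4 mask: 0 = building, 1 = tree, 2 = grass
mask = np.array([
    [0, 0, 1, 1],
    [0, 2, 1, 1],
    [2, 2, 0, 0],
    [2, 0, 0, 0],
])
print(green_view_index(mask))  # 0.5 (8 of 16 pixels are greenery)
```

In practice the mask would come from the trained segmentation network rather than being hand-written, but the index itself reduces to this simple pixel ratio.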
Live video is then sent to a semantic segmentation server, and the results are used to render the final view with a game engine on the mobile device. Proposed structures and greenery can still be displayed even as the viewing angle changes.
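The compositing step at the end of this pipeline can be sketched as follows: wherever the segmentation server flags occluder pixels (e.g. walls or fences), the live camera frame is replaced by the game engine's rendered content. The function and variable names below are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def composite(camera_frame: np.ndarray,
              rendered_frame: np.ndarray,
              occluder_mask: np.ndarray) -> np.ndarray:
    """Overlay engine-rendered pixels wherever occluders were detected.

    camera_frame, rendered_frame: HxWx3 uint8 images.
    occluder_mask: HxW boolean mask from the segmentation server.
    """
    out = camera_frame.copy()
    out[occluder_mask] = rendered_frame[occluder_mask]
    return out

# Toy 2x2 frames: black camera image, white rendered content
camera = np.zeros((2, 2, 3), dtype=np.uint8)
render = np.full((2, 2, 3), 255, dtype=np.uint8)
mask = np.array([[True, False],
                 [False, True]])
result = composite(camera, render, mask)
```

A real system would run this per frame on the GPU inside the game engine, but the logic is the same per-pixel selection.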
The team believes the research could transform green architecture and city revitalization, and hopes it will help stakeholders understand the importance of GVI in urban planning.