Google Launches rǝ: A Browser-Based Toolset To Reconstruct The 3D Structure Of Cities Using Deep Learning and Crowdsourcing


Google has launched a browser-based toolset: rǝ (pronounced "re-turn"). rǝ is an open-source, scalable system running on Google Cloud and Kubernetes that reconstructs cities from old maps and photos using deep learning.

rǝ consists of three components:


• A crowdsourcing platform: To allow users to upload historical maps of cities and match them to real-world coordinates, thereby vectorizing them.

• A temporal map server: To show how the maps of cities change over time.

• A 3D experience platform (runs on top of the rǝ map server): To create the 3D experience by reconstructing buildings in 3D with deep learning.

With rǝ, the company aims to provide a platform that enables history enthusiasts to virtually experience historical cities around the world, and to support researchers, policymakers, and educators.

Crowdsourcing Data from Historical Maps

The rǝ maps module is a package of open-source tools that together create a map server with a time dimension, letting users jump back and forth between time periods with a slider.

The tool allows users to upload scanned pictures of historical maps, match them with the real-world coordinates, and finally convert them into the vector format. After being served on a tile server, the vectorized maps are rendered as slippy maps, letting the users zoom in/out and pan around.
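The time-slider idea can be illustrated with a minimal sketch. The class, field names, and the start/end-year convention below are assumptions modeled on OSM-style `start_date`/`end_date` tags, not the actual rǝ schema:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Feature:
    """A vectorized map feature with a lifespan (hypothetical schema)."""
    name: str
    start_year: int            # year the feature first appears on a map
    end_year: Optional[int]    # None means it still exists today

def features_at(features: List[Feature], year: int) -> List[Feature]:
    """Return the features that existed in the given year, i.e. what the
    map server would render for one position of the time slider."""
    return [f for f in features
            if f.start_year <= year and (f.end_year is None or year < f.end_year)]

features = [
    Feature("Old Town Hall", 1850, 1932),
    Feature("New Town Hall", 1932, None),
]

print([f.name for f in features_at(features, 1900)])  # ['Old Town Hall']
print([f.name for f in features_at(features, 1950)])  # ['New Town Hall']
```

Filtering by year on the server side is what makes the slider cheap: the vector data is stored once with temporal attributes rather than duplicated per time period.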

Warper, the entry point of the rǝ maps module, is a web app that lets users upload scans of historical maps and geo-rectify them. The next app, Editor, lets users load these geo-rectified scans as a background layer and trace their geographic features. The resulting geographic data is stored in the OpenStreetMap (OSM) vector format, converted to vector tiles, and served from a vector tile server called the Server app.
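Geo-rectification pairs points clicked on the scanned map with known real-world coordinates and fits a transform between the two. The sketch below fits a simple least-squares affine transform; Warper's actual warping is more sophisticated (e.g. polynomial or spline-based), so this only illustrates the principle:

```python
import numpy as np

def fit_affine(pixel_pts: np.ndarray, world_pts: np.ndarray) -> np.ndarray:
    """Fit a 2x3 affine matrix A so that world ≈ A @ [px, py, 1],
    by least squares over the ground control points."""
    P = np.hstack([pixel_pts, np.ones((len(pixel_pts), 1))])  # (n, 3)
    A, *_ = np.linalg.lstsq(P, world_pts, rcond=None)         # (3, 2)
    return A.T                                                # (2, 3)

def to_world(A: np.ndarray, pixel_pt) -> np.ndarray:
    """Map one pixel coordinate on the scan to world coordinates."""
    return A @ np.append(np.asarray(pixel_pt, dtype=float), 1.0)

# Three control points clicked on the scan, with known lon/lat (made up here).
pixels = np.array([[0, 0], [1000, 0], [0, 1000]], dtype=float)
world  = np.array([[-74.01, 40.75], [-73.99, 40.75], [-74.01, 40.73]])

A = fit_affine(pixels, world)
print(to_world(A, [500, 500]))  # ~[-74.0, 40.74], the scan's midpoint
```

With more than three control points the least-squares fit averages out small clicking errors, which matters for hand-drawn historical maps.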

Completing the pipeline, Kartta, the map renderer, visualizes the spatiotemporal vector tiles, making it possible for users to navigate through space and time on historical maps.

Street-level view of 3D-reconstructed Chelsea, Manhattan.

3D Experience

The 3D Models module reconstructs the detailed 3D structures of historic buildings from the crowdsourced data, organizes these models in a single repository, and renders them on the historical maps with a time dimension.

In many cases only one historical image of a building is available, which makes 3D reconstruction an extremely challenging problem. To tackle this single-image setting, a coarse-to-fine reconstruction-by-recognition algorithm was developed.

First, a building's footprint is extruded to generate its coarse 3D structure. The extrusion height is derived from the number of floors recorded in the maps database metadata.
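The coarse step above can be sketched in a few lines. The 3-metre storey height is an assumed constant for illustration, not a value from the rǝ pipeline:

```python
from typing import List, Tuple

FLOOR_HEIGHT_M = 3.0  # assumed average storey height (illustrative only)

def extrude_footprint(footprint: List[Tuple[float, float]],
                      num_floors: int,
                      floor_height: float = FLOOR_HEIGHT_M):
    """Turn a 2D footprint (list of (x, y) vertices) into a coarse 3D prism.
    Returns the bottom ring, the top ring, and the extrusion height."""
    height = num_floors * floor_height
    bottom = [(x, y, 0.0) for x, y in footprint]
    top = [(x, y, height) for x, y in footprint]
    return bottom, top, height

# A 10 m x 6 m rectangular footprint with 4 floors from the map metadata.
footprint = [(0, 0), (10, 0), (10, 6), (0, 6)]
bottom, top, height = extrude_footprint(footprint, num_floors=4)
print(height)  # 12.0
```

A real mesh would also triangulate the walls and roof between the two rings, but the prism already gives the coarse volume that the finer, recognized components are later attached to.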

In parallel, the pipeline recognizes the building's individual constituent components, such as windows and stairs, and reconstructs their 3D structures separately according to their categories.

Finally, these detailed structures are combined with the coarse one to produce the final 3D mesh. The resulting meshes are stored in a 3D repository and sent on for rendering.

rǝ's tools crowdsource data to tackle the immense difficulty of recreating virtual cities from scarce historical records. With future updates, the company aims to improve the 3D experience, which is still a work in progress. Google hopes rǝ will act as a nexus for a community of enthusiasts and casual users.



