Traveling in time has always been a dream of mankind. Imagine experiencing ancient Rome firsthand, or going back to the days of our ancestors to see how people lived at that time. Obviously, such time travel goes well beyond today's physical limitations.
Still, despite the impossibility of actual time travel, a consortium of more than 225 European research institutions from 32 countries is planning to build a virtual Time Machine. This Time Machine is meant to be a large database that can store, interpret, and connect various kinds of historical information, ranging from text and images through maps and 3D models to music and other sensory data. The role of the Time Machine is then to link all this information and to reconstruct plausible views into the past. Finally, it should allow us to navigate these data, moving through time and space as easily as we browse the Internet today.
To achieve this ambitious goal, numerous breakthroughs must be made. The researchers have therefore formed a consortium to propose a European Large-Scale Research Initiative (LSRI). Such projects have been funded by the European Commission in the past and are supported by significant resources. One example of a previously funded initiative is the Human Brain Project, which aims to build an electronic replica of the human brain.
The main challenges the researchers face can be organized into three categories: data and digitization; knowledge extraction and modeling; and the limitations and opportunities of digital epistemology. There are, of course, many more challenges on the path toward such a Time Machine, such as licensing and legal issues, but these would go far beyond the scope of this article, so we will focus on the three main challenges here.
For modern data and information, we have the huge advantage that almost all of it is already available electronically. The further we move into the past, however, the less information exists in an electronic format suitable as input for the Time Machine. Even for cultural heritage, that is, the information we regard as most important for our cultural identity, only 15% is presently available in digital format. For archives and libraries, the percentage is even lower. One initial goal is therefore massive digitization. In contrast to traditional scanners, which require turning pages, this process could run at dramatically higher rates using volumetric acquisition technologies such as computed tomography. Mobile scanners such as the scan tent will also play an important role in high-quality digitization in the field. Furthermore, massive scanning of 3D objects on an assembly line is already within today's technological reach.
Yet these massive amounts of data also require long-term storage methods that can house the information over periods of millennia. Researchers at Twist Bioscience are developing technologies for storing digital information in strands of DNA, the most compact representation of information known to mankind: the molecules themselves carry the information, enabling storage that is orders of magnitude denser than any digital memory in use today. This type of storage is also suited for long-term preservation, as we know of DNA finds that have survived 10,000 years and more without loss of their data.
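To get an intuition for why DNA can carry digital data at all, consider the textbook mapping of two bits per nucleotide (00→A, 01→C, 10→G, 11→T). The sketch below is purely illustrative and is not Twist Bioscience's actual scheme; real systems add error correction, avoid long homopolymer runs, and index the strands.

```python
# Illustrative 2-bits-per-base DNA encoding (00→A, 01→C, 10→G, 11→T).
# This is a toy sketch of the density argument, not a production scheme.

BASES = "ACGT"

def encode(data: bytes) -> str:
    """Map every byte to four DNA bases (2 bits per base)."""
    seq = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            seq.append(BASES[(byte >> shift) & 0b11])
    return "".join(seq)

def decode(seq: str) -> bytes:
    """Invert the mapping: four bases back to one byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

message = b"Roma"
strand = encode(message)   # 4 bases per byte, e.g. "CCAG..." for b"R"
assert decode(strand) == message
```

Since one byte becomes only four molecular letters, and the letters are a few atoms wide, the information density dwarfs that of any electronic medium.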
Even if we manage to digitize and store all of the data that can be recovered from more than 2,000 years of European history, we immediately encounter two additional problems: we must be able to process the data, and large amounts of data simply did not survive over such a long period. For the processing challenge, we have to unify the handling of text, images, audio, maps, 3D objects, and their interpretations. Today, most systems built for this purpose rely on graphs and symbolic representations, yet deep learning has been shown to outperform symbolic systems in many applications, as evidenced recently in language translation. Hence, one aim of the project is to create a universal representation space that allows all of these modalities to be converted into one another. Still, such a system has the major disadvantage that it does not support linking observations into a chain of inference, as symbolic deduction does. Another important goal is therefore to enable the fusion of symbolic, graph-based approaches with fuzzy, neural-network-based ones.

Building on these advances, we still need to generate historical reconstructions. Traditional methods employ computer graphics for this purpose, but machine and deep learning are on the rise in this discipline as well. We will therefore need methods that can generate complex scenes from rather simple descriptions. Subsequent interpretation and analysis will still be done by humans interacting with the Time Machine. In this way, users ranging from historians to citizen scientists and lay users will be able to operate the Time Machine for their own purposes.
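The idea of a universal representation space can be sketched very simply: each modality gets its own projection into one shared vector space, where items become directly comparable. The code below is a minimal illustration with made-up dimensions and random projection matrices; in a real system these projections would be learned jointly (for example with a contrastive objective, as in CLIP-style models), and the Time Machine Project has not published a concrete architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical modality-specific feature sizes and a shared space.
# The projection matrices are random here purely for illustration;
# in practice they would be trained on aligned multimodal data.
DIMS = {"text": 300, "image": 512, "map": 128}
SHARED = 64
projections = {m: rng.standard_normal((d, SHARED)) / np.sqrt(d)
               for m, d in DIMS.items()}

def embed(modality: str, features: np.ndarray) -> np.ndarray:
    """Project a modality-specific vector into the shared space and
    L2-normalize it, so cosine similarity is a plain dot product."""
    z = features @ projections[modality]
    return z / np.linalg.norm(z)

# A text description and an image feature become directly comparable:
text_vec = embed("text", rng.standard_normal(300))
image_vec = embed("image", rng.standard_normal(512))
similarity = float(text_vec @ image_vec)  # cosine similarity in [-1, 1]
```

Once everything lives in one space, cross-modal retrieval (find the map fragment that matches this archival description) reduces to a nearest-neighbor search, which is what makes the unified representation so attractive.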
A third important aspect of the Time Machine is how it can be used to generate new insights. As with all observations, the information content and its trustworthiness must be determined. This requires extended approaches to epistemology, a digital epistemology, that can handle different versions of historical truth at the same time. In fact, the ability to easily create different reconstructions of the past demands close reflection, as results might be generated and used to push a present-day political agenda. Such attempts are well known throughout history and are a major reason why interpreting historical knowledge is so difficult. Furthermore, all reconstructions of the past, including those built with traditional methods, typically serve a certain purpose, so the original intention must be kept in mind when looking at them. For example, an archeologist might want to indicate clearly that the original color of a temple is unknown by using muted, grey tones. A tourist office, in contrast, would prefer a plausible, detail-enriched reconstruction of the same temple to create a more immersive experience for its audience. A new historical insight is also an interpretation of data and must therefore be linked to the original data and the chain of observations that led to it. In the humanities and philosophy, this has been done for centuries by means of text and language; now we must build a digital workflow that does the same, to allow a higher degree of collaboration and faster scientific progress. General artificial intelligence will also be an important factor in the success of the Time Machine, as it will allow the creation of virtual agents that can inhabit our virtual image of the past. Furthermore, labor-intensive tasks such as database queries will be tackled by modern AI methods for automatic question answering and language interpretation.
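Linking an insight to its chain of observations is, at heart, a provenance data structure. The following sketch shows one minimal way such a chain could be represented; the class, field names, and example records are entirely hypothetical and are not part of the Time Machine's actual design.

```python
from dataclasses import dataclass

# Hypothetical minimal provenance record: every derived insight keeps
# references to its sources, so any claim can be traced back to the
# digitized originals. All names here are illustrative only.

@dataclass(frozen=True)
class Record:
    claim: str
    sources: tuple = ()  # nested Records or raw archival identifiers

def lineage(record, depth=0):
    """Walk the chain of observations behind an insight,
    returning one indented line per step."""
    lines = ["  " * depth + record.claim]
    for src in record.sources:
        if isinstance(src, Record):
            lines.extend(lineage(src, depth + 1))
        else:
            lines.append("  " * (depth + 1) + f"[archive: {src}]")
    return lines

# A toy three-step chain from a scanned page to a historical claim:
scan = Record("page 12 of ledger B-1742", ("ARCH-0042",))
reading = Record("entry mentions grain shipment", (scan,))
insight = Record("port active in winter 1742", (reading,))
```

Calling `lineage(insight)` unwinds the claim back to its archival source, which is exactly the kind of traceability a digital epistemology needs when competing reconstructions of the past must be weighed against each other.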
The project aims to generate reconstructions of the past at a level of detail unknown today. It is thus no coincidence that major industry players such as Ubisoft, well known for its Assassin's Creed series, have joined the consortium. Ubisoft's fictional time machine, the Animus, already comes very close to the goal of the Time Machine Project, as it enables an immersive experience of the past by traveling back into the lives of one's own ancestors. While this goal is still far from being achieved today, researchers in the Time Machine Project believe that such a Time Machine will be revolutionary for research and will also drive commercial applications in the field of history.
The article and content are published under a Creative Commons Attribution 4.0 License.
Note: This is a guest post, and opinion in this article is of the guest writer. If you have any issues with any of the articles posted at www.marktechpost.com please contact at email@example.com
Prof. Dr.-Ing. habil. Andreas Maier
Head of the Pattern Recognition Lab of the Friedrich-Alexander-Universität Erlangen-Nürnberg