A digital exhibition on Beijing’s central axis and urban planning. Photo: Yang Lanlan/CSST
Wuhan University’s Intellectual Computing Laboratory for Cultural Heritage has unveiled a cultural heritage digital theater built on interdisciplinary collaborative research. The theater leverages cutting-edge image modeling, large-screen data visualization, 3D immersive projection, XR virtual storytelling, and other digital technologies. By integrating multidisciplinary approaches and technologies from history, literature and art, information management, and artificial intelligence, it has established a complete innovation chain comprising digital incubation, virtual-real integration, synergistic exhibition, immersive experience, and smart services.
Technological support
Inside the theater, with the aid of large-screen data visualization, 3D models of renowned collections from museums at home and abroad can be enlarged and rotated. Certain artifacts even allow viewers to open their lids and examine the inscriptions inside. By putting on VR glasses, one is instantly immersed in the virtual space of the Dunhuang Caves, where Buddha sculptures and murals are vividly brought to life.
Cultural heritage serves as both the carrier and embodiment of human civilization. In the digital era, digital collection and recording technologies can process various forms of tangible and intangible cultural heritage into diverse forms of digital documentation. Through extraction, classification, and indexing, these digital resources are transformed into structured master data and metadata. Furthermore, by applying methods such as semantic modeling, associative annotation, and integration, fine-grained encoding and cross-source linking of the conceptual entities, knowledge units, and cultural genes contained within cultural heritage digital resources can be achieved. With the help of technologies such as knowledge graphs and blockchain, credible, cross-modal, semantically rich, traceable, and reusable collections of smart data resources are constructed, ultimately completing the data-driven transformation of cultural heritage.
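To make the structuring step concrete, the minimal Python sketch below shows how a digitized artifact record might be reduced to metadata fields and simple subject-predicate-object triples of the kind a knowledge graph links together. The record fields, identifiers, and predicate names are illustrative assumptions, not the laboratory’s actual schema.

```python
# Illustrative sketch only: turning one digitized artifact record into structured
# metadata and simple (subject, predicate, object) triples for cross-linking.
# All field names and the sample record are hypothetical, not the lab's real schema.
from dataclasses import dataclass, field

@dataclass
class ArtifactRecord:
    """Minimal metadata for one digitized cultural heritage item."""
    identifier: str                                   # stable ID assigned at digitization
    title: str
    holding_institution: str
    media: dict = field(default_factory=dict)         # modality -> file path (image, mesh, ...)
    annotations: list = field(default_factory=list)   # tagged observations about the item

def to_triples(record: ArtifactRecord) -> list[tuple[str, str, str]]:
    """Encode the record as triples so entities can be linked across collections."""
    s = record.identifier
    triples = [
        (s, "hasTitle", record.title),
        (s, "heldBy", record.holding_institution),
    ]
    triples += [(s, f"hasMedia:{kind}", path) for kind, path in record.media.items()]
    triples += [(s, "annotatedWith", note) for note in record.annotations]
    return triples

# Usage: one hypothetical record linked across two modalities.
bronze = ArtifactRecord(
    identifier="artifact:0001",
    title="Bronze ritual vessel",
    holding_institution="Example Museum",
    media={"image": "scans/0001.tif", "mesh": "models/0001.obj"},
    annotations=["inscription inside lid"],
)
for triple in to_triples(bronze):
    print(triple)
```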
Converting physical artifacts into intelligent data requires support from a wide range of equipment and technologies. The laboratory’s equipment is remarkably diverse, encompassing not only ordinary cameras and camcorders but also specialized devices such as OCR equipment for ancient books, depth cameras, laser scanners, multispectral scanners, six-axis mechanical arms, low-altitude drones, and drones equipped with infrared cameras.
In addition to utilizing existing technology and equipment, the research team is dedicated to technological innovation. Huang Xianfeng, the lab’s deputy director, has independently developed “GET3D Cluster,” a software suite for super-large-scale 3D real-scene modeling. The software effectively addresses the technical challenges of handling large data volumes and slow rendering in the construction of 3D real-scene models. It offers high-quality, efficient, and intelligent semantic 3D reconstruction, breaking the long-standing monopoly of foreign software, such as ContextCapture, in the field of real-scene modeling. With the launch of the Luojia 3-01 satellite, the software can rapidly and accurately model large cityscapes and cultural heritage sites worldwide from satellite data.
Revitalizing cultural heritage
The GET3D Cluster has already been successfully used in major national events such as the Beijing Winter Olympics and the Chengdu Universiade, as well as in building 3D digital bases for Great Wall preservation and in more than a hundred cities. Take the Jiankou Great Wall in suburban Beijing, which sits on perilous cliffs more than 1,000 meters above sea level. Owing to erosion and damage, certain sections of its ramparts, beacon towers, and staircases are in urgent need of protection and restoration. The team used self-developed cameras and photogrammetric technology to capture more than 15,000 multi-angle images and carried out real-time 3D reconstruction of the wall section. Supplementary ground-level photography achieved close-ups with millimeter precision, capturing minute details of missing bricks, wall cracks, and other forms of damage.
Once the 3D modeling was completed, the model allowed the surface damage to be magnified and examined in detail. By feeding the multi-form data through a cascade of deep learning networks, the team was able to detect damaged sections and structural defects of the Jiankou Great Wall, monitor and identify cracks and missing bricks, and generate the corresponding 3D shapes of the bricks and walls, enabling faster, more efficient, and more accurate restoration work.
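For readers unfamiliar with the idea of a cascade, the sketch below shows what a two-stage pipeline can look like in PyTorch: one network proposes candidate damage regions on an image tile, and a second classifies each candidate. It is a generic illustration using toy, untrained architectures and random input, not the team’s actual networks, data, or results.

```python
# Illustrative sketch only (not the team's models): a two-stage "cascade" where
# stage 1 produces a coarse damage probability map and stage 2 classifies a
# candidate region (e.g. crack vs. missing brick vs. intact).
import torch
import torch.nn as nn

class RegionProposer(nn.Module):
    """Stage 1: per-pixel damage probability map for an RGB tile."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

class DamageClassifier(nn.Module):
    """Stage 2: classify a cropped candidate region into damage types."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, num_classes),
        )
    def forward(self, x):
        return self.net(x)

proposer, classifier = RegionProposer(), DamageClassifier()
tile = torch.rand(1, 3, 256, 256)      # stand-in for one image tile from the survey
damage_map = proposer(tile)            # stage 1: where might damage be?
mask = damage_map > 0.5                # threshold into candidate pixels
if mask.any():
    # In practice candidate pixels would be grouped into regions and cropped;
    # here the whole tile stands in for a single candidate crop.
    logits = classifier(tile)          # stage 2: what kind of damage?
    print("predicted damage class:", logits.argmax(dim=1).item())
```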
To address issues such as misalignment between textures and models, Huang’s team has developed “Model Painter,” professional software that provides an all-in-one solution for texture mapping of 3D models. The software achieves precise mapping from two-dimensional images to three-dimensional geometric models. The technology has already been applied in digital projects at numerous museums and cultural institutions, including the National Museum of China and the Palace Museum.
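The geometric core of mapping a photograph onto a 3D model can be illustrated with the standard pinhole camera model: each mesh vertex is projected into a calibrated image and a color is sampled at the resulting pixel. The sketch below uses made-up intrinsics, pose, vertices, and a random image in place of real data; it does not represent Model Painter’s proprietary algorithms.

```python
# Illustrative sketch only: projecting 3D mesh vertices into a calibrated photo
# and sampling a per-vertex color, the basic step behind image-based texturing.
import numpy as np

K = np.array([[1000.0, 0.0, 320.0],   # camera intrinsics (focal lengths, principal point)
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                          # camera rotation (world -> camera)
t = np.array([0.0, 0.0, 5.0])          # camera translation

vertices = np.array([[0.0, 0.0, 0.0],  # a few mesh vertices in world coordinates
                     [0.1, 0.0, 0.0],
                     [0.0, 0.1, 0.0]])

image = np.random.rand(480, 640, 3)    # stand-in for a real photograph (H x W x RGB)

def project(vertex):
    """Project one 3D point into pixel coordinates using the pinhole model."""
    cam = R @ vertex + t               # world -> camera coordinates
    uvw = K @ cam                      # camera -> homogeneous pixel coordinates
    return uvw[:2] / uvw[2]            # divide by depth to get (u, v)

for v in vertices:
    u, v_pix = project(v)
    if 0 <= u < image.shape[1] and 0 <= v_pix < image.shape[0]:
        color = image[int(v_pix), int(u)]  # nearest-pixel color sample for this vertex
        print(f"vertex {v} -> pixel ({u:.1f}, {v_pix:.1f}), color {color.round(2)}")
```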
Zhang Fan, a professor at the lab, has led the development of technology for enhancing the content and quality of digital cultural heritage. The technology can triple the resolution of low-quality two-dimensional images, improve the accuracy of three-dimensional model reconstruction by 20%, and triple the resolution of texture information in low-quality three-dimensional models. This helps continuously improve the data quality of digital cultural heritage and sharpens the digital spatiotemporal perception of cultural heritage entities. Zhang’s technology has already been deployed at the Yungang Grottoes.
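To make “tripling the resolution” concrete, the sketch below shows a trivial 3x nearest-neighbor upscale as a baseline: it enlarges the pixel grid but adds no detail. The article does not describe Zhang’s method, which is a learned enhancement technique that, unlike simple pixel repetition, recovers genuine detail from low-quality inputs.

```python
# Illustrative sketch only: a naive 3x upscale baseline, not the lab's method.
import numpy as np

def upscale_3x_nearest(image: np.ndarray) -> np.ndarray:
    """Repeat each pixel 3x along both axes: (H, W, C) -> (3H, 3W, C)."""
    return np.repeat(np.repeat(image, 3, axis=0), 3, axis=1)

low_res = np.random.rand(100, 100, 3)       # stand-in for a low-quality 2D image
high_res = upscale_3x_nearest(low_res)
print(low_res.shape, "->", high_res.shape)  # (100, 100, 3) -> (300, 300, 3)
```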
Edited by YANG LANLAN