USING VIRTUAL OR AUGMENTED REALITY FOR THE TIME-BASED STUDY OF COMPLEX UNDERWATER ARCHAEOLOGICAL EXCAVATIONS

ABSTRACT: Cultural Heritage (CH) resources are partial, heterogeneous, discontinuous, and subject to ongoing updates and revisions. The use of semantic web technologies associated with 3D graphical tools is proposed to improve access, exploration, exploitation and enrichment of these CH data in a standardized and more structured form. This article presents the monitoring work carried out for more than ten years on the excavation of the Xlendi site. Around an exceptional shipwreck, the oldest from the Archaic period in the Western Mediterranean, we have set up a unique excavation at a depth of 110 m assisted by a rigorous and continuous photogrammetry campaign. All the collected results are modelled by an ontology and visualized with virtual and augmented reality tools that provide a bidirectional link between the proposed graphical representations and the non-graphical archaeological data. It is also important to highlight the development of an innovative 3D mobile app that lets users study and understand the site as well as experience sensations close to those of a diver visiting it.


INTRODUCTION
Archaeological sites are complex and evolving systems, where heterogeneous components coexist in a delicate balance that is constantly being questioned by the excavation activities themselves. As the excavation progresses, new information is acquired and enriches and sometimes revises the knowledge base under construction. This complex dynamic requires an appropriate knowledge and information management system, which must meet a number of requirements: (i) deal with heterogeneous data, (ii) be flexible to integrate newly acquired information and update or revise previous knowledge accordingly, (iii) be intelligible, exploitable and shareable by and between those involved as well as interested researchers. The link between knowledge base and visualisation tools is very promising and this is definitely the direction we have taken and present in this article (Dris et al., 2018), (Kim et al., 2016).
Building Information Modeling (BIM) and especially Heritage BIM (HBIM) can partially meet some of these criteria. For example, these approaches have the considerable advantage of being based on an ontological model which is well suited to our problem (Cheng et al., 2021).
However, a critical step in HBIM is the geometric modelling of architectural features, which requires substantial geometric simplification through parametric modelling (Scianna et al., 2020), (Kıvılcım and Duran, 2021). Such a simplification is often prohibitive for the in-depth analysis of artefacts in an underwater context, where the objects, after decades under water, are heavily eroded or covered by the local fauna. Indeed, in underwater or naval archaeology, the aim of geometric modelling is often to report on the progress of the excavation and to propose a model supporting the surveyed geometry, in order to evaluate the divergences of the observed artefacts from the theoretical models. The aim of this geometric modelling is to foster the development of new archaeological hypotheses to better understand the site.
These considerations led us to choose a representation and knowledge management system developed ad hoc for the complex application of interest: a very complex archaeological excavation, an underwater wreck at a depth of 110 m.
The developed system is based on two independent back-end blocks and a single front-end that integrates them. Specifically, the first back-end is a knowledge base constrained by a domain ontology covering photogrammetry and the archaeological concepts involved in the study of the wreck, as detailed in Section 4. The second back-end is used to visualize the geometry of the archaeological site, which can be done using virtual or augmented reality applications, as discussed in Sections 5 and 6. The interactive 3D visualization tool based on virtual and augmented reality techniques allows, with a limited budget, a comparison of the surveys over time, a visualization of the modifications of the site, and access to archaeological information related to the surveyed and modelled artifacts. The adoption of immersive visualisation techniques is consistent with the new holistic paradigm for cultural heritage (CH) management (Gustafsson, 2019), (Aliprantis and Caridakis, 2019), where CH assets and sites are recognised as precious resources. This new paradigm requires an interdisciplinary and integrated approach in order to properly understand and exploit the value of the heritage asset on the one hand, and to embed the knowledge and its cultural value on the other.
Virtualization is the process of producing a digital replica of the asset of interest, aggregating data from different sources of information and knowledge, providing a unique representation accessible to all the possible interested actors. Communication tools based on a multimedia approach, i.e. the use of new and combined communication and dissemination media, have proved to foster the diffusion and exploitation of CH (Bekele et al., 2018). Today, virtual (VR), augmented (AR), and mixed-reality (MR) technologies can be found in many different applications including education, exhibition enhancement, exploration, reconstruction, and virtual museums (Bekele et al., 2018). VR technologies have been widely investigated as a means to improve public awareness about underwater CH (Chapman et al., 2010), (Bruno et al., 2017), (Bruno et al., 2019), (Cejka et al., 2020), (Cejka et al., 2021). Here we explore the use of advanced visualization methods for a three-fold aim: (i) to communicate and share the virtualized site with the project's partners and interested parties; (ii) to collect all the meaningful information (knowledge) about the site and its assets; (iii) to monitor the excavation process maintaining an updated knowledge system on the basis of surveys carried out over time.
This paper starts by presenting the archaeological site and the surveys carried out on it over a span of more than ten years. This part is instrumental in understanding the complexity of the site, its archaeological interest, the evolution of the surveying techniques, and the understanding of the site itself (see Sections 2 and 3). All of these are important parts that benefit from the knowledge, visualization, and sharing system developed and presented here.

XLENDI WRECK, THE FIRST SURVEY
This work is based on the excavation of the oldest shipwreck discovered from the Archaic period, a mixed cargo from Phoenicia named Xlendi after the small town on the coast of Gozo in Malta where it was found. The wreck was discovered by Aurora Trust, a company specializing in the inspection of offshore installations, during surveys conducted in 2009 (Gambin, 2015).
The archaeological site is located near a coastline famous for its limestone cliffs plunging into the sea, the bases of which rest on a continental shelf at a depth of approximately 100 m. The first layer of amphorae shows a mixed cargo of western Phoenician and Tyrrhenian vessels, both well suited to the period between the end of the 8th century and the first half of the 7th century BC.
Two aspects of this exceptional wreck, the purely archaeological point of view as well as its state of conservation, have led the University of Malta to push its research further to test and develop new approaches to 3D survey and archaeological excavation at much greater depths. The first survey campaign took place in 2014 with the cooperation of COMEX and CNRS. This work was funded by the French Agence Nationale de la Recherche (ANR) as part of the GROPLAN project led by CNRS. The results obtained were presented in (Drap et al., 2015).
The 2014 photogrammetric survey was performed with the prototype of the current COMEX ORUS3D photogrammetry system operated from the Remora 2000 submarine.
The photogrammetric surveys were of particularly good quality, and the use of the submarine allowed us to manoeuvre over the whole area and to survey the few isolated amphorae lying a few tens of meters from the wreck. The use of the COMEX trifocal system allowed us to obtain a scaled result without requiring any contact with the wreck. Nevertheless, while the results were excellent, deploying a submarine makes such a mission costly. Likewise, to work at this depth, the use of an ROV would have required the presence of a large surface vessel equipped with dynamic positioning as well as a team specialized in handling the system. Finally, the main obstacle to the use of robotic systems for the 3D survey was the clear desire of Prof. Timmy Gambin, University of Malta, to carry out a real archaeological excavation in order to learn more about the ship and its cargo (see Figure 3). Such an excavation, with artifact removal and sediment clearing, is not possible at this depth from remotely operated vehicles. The University of Malta therefore assembled a team of highly qualified professional divers to excavate the site, with particular emphasis on 3D photogrammetric documentation at each stage of the archaeological excavation. Since then, excavations and surveys have been carried out every year by a team of exceptional divers who perform a unique task: the excavation of the site at a depth of more than 100 meters and a daily, exhaustive photogrammetric survey documenting in detail the evolution of the work. The following section presents in more detail the evolution of the surveys and the archaeological excavation over time.

EXCAVATION AND SURVEYS OVER TIME
It is thanks to the work and determination of the University of Malta that we have, since 2009, more than thirty photogrammetric surveys representing the site and its evolution over more than ten years. This constitutes unique documentation of an exceptional site. The surveys of 2009 and 2014 are distinctive: the first because it is a partial photographic coverage made from the Aurora Trust ROV during the discovery of the site, and the second because it was an operation dedicated solely to the photogrammetric survey. The other photogrammetric surveys, from 2017 to 2020, were carried out by divers, accompanying and documenting the archaeological excavation. A daily survey was carried out before the excavation operations on site. In the framework of this work, we have used only one survey per year, the one carried out on the last day of the excavation and showing the work done by the team during that annual mission. Since the first photogrammetric surveys in 2014 using COMEX's (then prototype) ORUS3D underwater photogrammetry solution, we have used the Agisoft Metashape photogrammetric processing software, benefiting from its improvements over the years. Flexible and sufficiently efficient for our application, Metashape is highly customizable and can easily be automated thanks to the availability of Python and Java APIs. At the same time, it is easy to use, a crucial factor in a multidisciplinary project such as this one, where non-experts in topography must be able to manipulate and understand the results of the photogrammetric process. In practical terms, the University of Malta team is currently completely self-sufficient in terms of photogrammetric surveys. Photogrammetry has become one of the many tools that this team masters.
The solutions implemented by the University of Malta solved several problems identified during the first campaign. First, the underwater divers drastically reduced the cost of the mission and the simultaneous presence of a team of four professional divers allowed the acquisition of high-quality images with optimal light management (see Figure 1). Cement blocks with coded targets were placed around the excavation site to provide a stable local reference system; a stable tripod with a spirit level and coded targets was also installed as a vertical reference (see Figure 2).
The final survey was conducted in the summer of 2020: a true archaeological excavation at a depth of 110 meters with a water dredge (with the pump submerged at 20 meters), 2x2 meter quadrants, photogrammetric tracking, artifact removal, and access to the lower layers (see Figure 3). Using this method, we were able to obtain consistent surveys and excavations over the years (2009-2020) (Gambin et al., 2018). The photogrammetric campaigns have led to the creation of 2D and 3D models expressed in a single reference system and currently visible on the project's website.
The alignment of the photogrammetric surveys over time required the development of an ad-hoc procedure. In an ever-changing, uncertain, and challenging environment such as the ocean floor, one cannot rely solely on conventional survey approaches. An approximate registration was therefore carried out using the reference system of concrete blocks whenever they were visible. For the other campaigns, and to improve the quality of the recorded data, a model-based approach was preferred: the geometry of recognizable artifacts (amphorae and a grinding stone) was used to define significant points, i.e. their centers of gravity, on which a rigid transformation was adjusted. The next step of the project was the formalization of an ontology modeling the multi-temporal surveys in 2D and 3D. This ontology considers the manufactured objects studied, as well as the method used to measure them, in this case photogrammetry (photogrammetric data in the form of oriented photographs, cameras, 3D points and their projections, as well as camera distortion and precision estimators). The surveyed features are thus represented from the point of view of the measurement and are linked to all the photogrammetric data that contributed to their measurement in space. A 2D and 3D web interface, accessible on the original project website, is available to access all these data and to perform semantic queries. Moreover, in this paper, we also present a multi-user augmented reality mobile app that lets users access these geometric and qualitative data from an Android device.
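The adjustment of a rigid transformation on corresponding artifact centers of gravity can be sketched as a standard least-squares (Kabsch) alignment. The following is a generic illustration under that assumption, not the project's actual implementation; the centroid values are hypothetical.

```python
import numpy as np

def fit_rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q.

    P, Q: (n, 3) arrays of corresponding points, e.g. artifact
    centers of gravity measured in two annual surveys.
    """
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Hypothetical centroids of amphorae identified in two campaigns
survey_a = np.array([[0, 0, 0], [2, 0, 0], [0, 3, 0], [1, 1, 1]], float)
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0,              0,             1]])
survey_b = survey_a @ R_true.T + np.array([10.0, -2.0, 0.5])

R, t = fit_rigid_transform(survey_a, survey_b)
aligned = survey_a @ R.T + t
print(np.allclose(aligned, survey_b))  # True
```

With noiseless correspondences the recovered transform is exact; with measurement noise the same procedure returns the best rigid fit in the least-squares sense.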

LINK WITH KNOWLEDGE
Cultural heritage data is inherently heterogeneous, incomplete, and subject to revision, and due to the presence of actors from different disciplines it may have different and ambiguous descriptions and definitions. Providing a common conceptualization to all actors will probably be the most difficult task that metadata developers must face in the context of cultural heritage. This shared conceptual model can be used to provide a knowledge representation on which data mining systems can interact by aggregating or inferring new knowledge. This requires a conceptualization intelligible to experts from different domains; in other words, an ontology. As reported by Nigam Shah and Mark Musen: "The challenge then is to bridge the conceptual framework and the ontology to create the formal representation." (Shah and Musen, 2009).
An ontology is a formal specification of the data elements within a domain, linked together to denote their types, properties, and the relationships between them. Ontologies can be used to cover different terminologies and to represent a clear specification of their different meanings. Hence, having an associated ontology in which each term has a corresponding construct in the conceptual framework allows this distinction to be made in the conceptual model as well (Shah and Musen, 2009). This type of conceptual framework, along with the associated ontology, is the optimal way to create a formal representation fitting different abstraction levels.
We have developed an ontology to manage photogrammetry and an aligned domain ontology to manage heritage data related to the Xlendi wreck. A fine-grained description of these ontologies has been published by Ben Ellefi et al. The ontology dedicated to the archaeological aspects used on Xlendi is aligned with the photogrammetry model and with the 'Arpenteur' ontology developed at the CNRS. Arpenteur is itself aligned with the well-known CIDOC-CRM ontology often used in the CH context (Niccolucci, 2017), (Niccolucci and Hermon, 2017) and (Gaitanou et al., 2016). The Xlendi artifact dataset is made available as open data on the datahub under the name Xlendi Amphorae (XlendiDataHub, 2020).
Since an ontology enables the unambiguous representation of the entities and relationships among cultural heritage resources, it can guide the design of the knowledge bases that store the various experimental data, as well as the measurement process, in a knowledge-driven manner. Furthermore, the use of ontologies helps in maintaining a strict distinction between observable data and the interpretations based on those data.
The presented knowledge base is in the form of a Linked Open Data (LOD) dataset, also known as a knowledge graph (Hogan et al., 2020). Dedicated to the excavation of the Xlendi shipwreck, this dataset contains morphological data of the artifacts individualized at the site as well as all the geometric data that led to the restitution of the site over the years. The artifacts are classified into two main types of morphological categories: either the object has been seen and recorded on the site by photogrammetry, or it has been only partially seen and is defined by a set of geometric attributes consisting of measurements made by photogrammetry as well as others that are deduced based on previous observations. The unobservable attributes are deduced from the objective measurements made by photogrammetry and from the hypothesis made by the archaeologists on the typology of the object. Deductions are based on numerous previous works done by the CNRS team (Drap et al., 2003), (Drap, 2012). For the artifacts that have been brought to the surface, an exhaustive survey is carried out by photogrammetry and structured light scanning in order to deduce all the observable geometric attributes.
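The deduction of unobservable attributes from objective measurements plus a typology hypothesis can be illustrated as follows. The typology names, ratios and shape factor below are purely hypothetical placeholders, not the values actually used by the archaeologists.

```python
import math

# Hypothetical typology table: assumed height / maximum-diameter ratio.
# These figures are placeholders, not actual archaeological values.
TYPOLOGIES = {
    "western_phoenician": 2.1,
    "tyrrhenian": 1.8,
}

def deduce_height(measured_diameter_m, typology):
    """Deduce an amphora's full height from a measured diameter
    and the typology hypothesized by the archaeologists."""
    return TYPOLOGIES[typology] * measured_diameter_m

def deduce_volume(diameter_m, height_m, shape_factor=0.6):
    """Very rough capacity estimate: a cylinder scaled by a shape factor."""
    return shape_factor * math.pi * (diameter_m / 2.0) ** 2 * height_m

h = deduce_height(0.32, "western_phoenician")   # only the diameter was observed
v = deduce_volume(0.32, h)
print(round(h, 3))  # 0.672
```

In the knowledge base, such deduced attributes are kept distinct from the photogrammetric measurements they derive from, preserving the separation between observation and interpretation.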
Successive dives on the Xlendi wreck have resulted in seven temporal datasets corresponding to the survey dates listed in (Ben-Ellefi et al., 2018b). For each annual survey, a LOD dataset is generated containing all the geometric data involved in the calculation of the 3D model of the site: photographs, camera calibration, 2D points measured on the photographs, 3D points calculated from the 2D points, quality estimator of the 3D points. This represents approximately 20 million triples per survey.
The LOD dataset is published following the best practices of the semantic web (Rudolph et al., 2013), (Loscio et al., 2017) and the principles of Linked Data formulated by Tim Berners-Lee: (i) use URIs to identify "things" in your data, (ii) use HTTP URIs so that people (and machines) can look them up on the web, (iii) when a URI is looked up, return a description of the "thing" in a W3C semantic web format (typically RDF, RDF-Schema, OWL), (iv) include links to related things. We used Apache Jena Fuseki as an open-source storage system for the different Xlendi LOD datasets. This storage system also offers an accessible SPARQL endpoint (the URI at which a SPARQL Protocol service listens for requests from SPARQL Protocol clients). A YASGUI SPARQL client (Rietveld and Hoekstra, 2016) is made available online, allowing the Xlendi artifact dataset to be queried via a user interface. The LOD datasets are accessible from the Virtual Reality and Augmented Reality applications using the SPARQL protocol. The Arpenteur ontology is located at http://www.arpenteur.org/ontology/Arpenteur.owl and can be visualized at http://www.visualdataweb.de/webvowl/#iri=http://www.arpenteur.org/ontology/Arpenteur.owl
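As a sketch of the SPARQL Protocol mentioned above, the following shows how a client can build a plain HTTP GET request carrying a URL-encoded query, using only the Python standard library. The endpoint URL and the query are illustrative, not the project's actual endpoint.

```python
from urllib.parse import urlencode
from urllib.request import Request

# Illustrative endpoint; the real Xlendi endpoint is served by Apache Jena Fuseki.
ENDPOINT = "http://example.org/xlendi/sparql"

query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?artifact ?label
WHERE { ?artifact rdfs:label ?label . }
LIMIT 10
"""

# SPARQL Protocol, "query via GET": the query travels in the 'query' parameter.
url = ENDPOINT + "?" + urlencode({"query": query})
request = Request(url, headers={"Accept": "application/sparql-results+json"})
print(request.get_method())  # GET
```

Sending this request (e.g. with urllib.request.urlopen) returns a SPARQL results document in the JSON serialization requested by the Accept header.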

VIRTUAL REALITY
Previous work on Xlendi VR was carried out in the framework of the Imareculture project (imareculture, 2018), funded by the EU. It is possible to visit the Xlendi shipwreck while staying dry via the project's website.
The Virtual Reality tour of the Xlendi site, carried out by the team of Prof. Fabio Bruno (Bruno et al., 2017), was based on the photogrammetric surveys conducted in 2014 by COMEX and CNRS. These surveys were chosen because they cover the largest area of the site, as the overflight was done with the Remora 2000 submarine. On the other hand, this tour does not show the reconstructed artifacts, does not allow interaction with the ontology servers, nor does it allow seeing the site's evolution over time. Furthermore, in previous work (Ben-Ellefi et al., 2019) we proposed a web interface that visualizes the site either as a 3D model or as an orthophoto; a bidirectional interaction between the system and the user is possible. The graphical interface reacts to textual requests as well as to mouse picking and selection. Interaction with the ontology describing the site is also possible thanks to the YASGUI client (XlendiKBAccess, 2021).
However, we have thought about other types of interaction in order to reach a wider audience. Indeed, the studied site presents a remarkably high archaeological interest and it is essential to propose innovative and more attractive virtual exploration tools. At the same time, the use of immersive tools, for example, must not in any way overshadow the knowledge component related to such a site. We have therefore developed two tools allowing a virtual exploration of the site using VR and AR techniques, both related to the archaeological data.
The VR tool is based on the most recent version of Epic Games' Unreal Engine (Unreal Engine, 2021). This choice was justified by Epic's acquisition of CapturingReality and its very good photogrammetry software, Reality Capture (Reality Capture, 2021). Indeed, Unreal Engine is expected to integrate, from 2021, support for very large 3D point clouds generated by photogrammetry. Until now, the visualization of a scene in Virtual Reality required a significant reduction in the number of points obtained, a meshing phase and a good texturing phase in order to obtain a sensation close to reality. This was done at the drastic expense of the geometry, which was extremely reduced and simplified. The new approach is to visualize colored point clouds without any notion of surface or texture. If the cloud is dense enough, the impression of continuity works well, and this without loss of geometry (see Figure 5). Close examples exist, such as the Potree library (Adimoolam et al., 2019), but the real advantage of using Unreal is the performance of its VR engine. The official version supporting these photogrammetric point clouds was not yet available at the time of writing this article, but a plugin already allows these point clouds to be imported into the current version. We think that this approach is very promising, and we have already used it to visualize the various photogrammetric surveys carried out on the Xlendi shipwreck: managing several tens of millions of 3D points is not a problem. The tests we have done with more than a billion points have remained acceptable in terms of performance.
Listing 1. A SPARQL query to retrieve the name and height (in meters) of arp:Amphorae1029128976. This query can be performed on the UI interface http://www.arpenteur.org/ontology/sparql.html. Its response (Listing 2) contains a single binding: Name = "Amphore A77", Height = 0.41375673170004745.

We rely on the Unreal Engine platform to deploy our VR solution, using an HTC Vive headset. Users can interact with the artifacts through the virtual laser pointer's selection feature: pointing the laser at an object (here, amphorae and grinding stones) displays the related archaeological information. The archaeological data are retrieved from the LOD temporal datasets via the SPARQL protocol (an HTTP-based protocol for performing SPARQL operations against data via SPARQL endpoints). Each artifact in Xlendi is identified by its unique name, which allows mapping the artifact in the VR to its correspondent in the LOD dataset. For example, Amphore A77 in the VR is the name of the OWL instance identified by the URI arp:Amphorae1029128976, whose information can be retrieved via the SPARQL query in Listing 1, with the response shown in Listing 2. Mapping the VR to the LOD datasets via the SPARQL protocol is the master key of the presented knowledge-based VR system: the resources mapped in the VR system can be queried in the knowledge base. A realistic representation of diving at this depth has been made possible using multiple features of the Unreal Engine. Nevertheless, users can modify the level of underwater visibility effects to display the site in its entirety, something that divers can only dream about doing. This is indeed an aspect that we wanted to highlight: it is a tool to study and understand the site, but also to experience sensations close to those experienced by a diver visiting the site. The three-dimensional VR navigation technique was designed to reproduce as closely as possible a diver's movements at the Xlendi site.
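Responses such as the one in Listing 2 follow the W3C SPARQL 1.1 Query Results JSON format, which the client only has to parse. A minimal sketch; the JSON below mirrors the Listing 2 values for arp:Amphorae1029128976, while the exact variable names returned by the real endpoint are an assumption.

```python
import json

# A response in the SPARQL 1.1 JSON results format, mirroring Listing 2.
response = json.loads("""
{
  "head": {"vars": ["name", "height"]},
  "results": {"bindings": [
    {"name":   {"type": "literal", "value": "Amphore A77"},
     "height": {"type": "literal", "value": "0.41375673170004745"}}
  ]}
}
""")

# Extract the typed values from the first (and only) binding.
binding = response["results"]["bindings"][0]
name = binding["name"]["value"]
height = float(binding["height"]["value"])
print(name, round(height, 4))  # Amphore A77 0.4138
```

The extracted name is exactly the key used to map the OWL instance to the corresponding selectable artifact in the VR scene.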
The vertical movement due to the lung-ballast effect is rendered by a single action on one controller while the horizontal movement, in X, Y is activated by the second controller.
As teleportation is not available in the VR environment and the site is relatively small (30 m long), the user explores the shipwreck site like a deep-sea diver. In addition to displaying information about each annual photogrammetric survey, a model of the amphorae and other artefacts present is available in order to obtain a complete 3D model (see Figure 5). This model is based on the typology of the amphora, determined by the archaeologists' findings, and on the partial measurements made on the photogrammetric survey. Several projects have been carried out over the years to obtain these 3D models of artifacts, in which physical measurements and archaeological knowledge coexist (Drap et al., 2003), (Drap et al., 2015), (Pasquet et al., 2017). This modeling of the artefacts present on the site and studied by the archaeologists makes it possible to create a dynamic and bidirectional link between the 3D representations of the artefacts in the scene and the archaeological and photogrammetric data modeled in the ontology. Predefined SPARQL queries are accessible from the VR interface through simple interactions, using virtual pointing techniques such as raycasting from a controller onto a tablet present in the scene.

AUGMENTED REALITY
"In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast Map was Useless, and not without some Pitilessness was it, that they delivered it up to the Inclemencies of Sun and Winters. In the Deserts of the West, still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars; in all the Land there is no other Relic of the Disciplines of Geography." (Borges, 1944).
Often commented on by geographers, this short text entitled "On Exactitude in Science", which Borges attributes to Suarez Miranda, a fictional author from the 17th century, underlines the fact that maps are never the mimetic representation of a territory. Maps use symbols and are an interpretation of the world represented. Even the remnants of 1:1 scale maps that exist in our daily lives are symbols. For example, the milestones along the Roman roads marking the distances are relative and symbolic traces of a 1:1 scale map; but there are also absolute traces, such as the Great Wall of China running for thousands of kilometres to mark a border, and still today too many walls physically represent borders between states at 1:1 scale. Here we use AR not to make a pre-calculated model appear in a representation of reality, positioned correctly thanks to coded targets, a recognition mechanism or another approach; instead, we completely superimpose the calculated model on the model observed by the device. The calculated model is a complete substitute for the reality observed by the camera: the device, a tablet or smartphone, shows the calculated model in place of the observed reality. The tablet behaves like a window onto Borges's map, showing a virtual space that overlaps point by point with the real space in which the operator is moving.
In order to be able to move around as if one were on the site, a capture of the real space and the tracking of the user's camera are essential. The movement in the virtual world must be perfectly aligned with the real world. It is therefore necessary to accurately estimate the position and orientation of the digital camera. The purpose of the tracking system is to determine the position of the camera in real time: each time the user moves the camera, the tracking system recalculates the new position, and the virtual content must therefore remain consistent with the movement in the real world. The camera pose is calculated with six degrees of freedom: three translation parameters x, y and z, and three orientation parameters yaw, pitch and roll. Vision-based tracking is a widely used AR method for camera tracking. This method calculates the camera pose from the information read from the camera images. Some methods are based on the detection of coded targets previously positioned in the scene, but here we use the detection of natural features in the scene, coupled with several sensors present on the device. The camera and the various sensors in the device are used for SLAM (simultaneous localization and mapping), which makes it possible to estimate the user's movements and then adjust the point of view in the 3D model of the scene accordingly (Yeh and Lin, 2018).
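The six-degree-of-freedom pose described above can be written as a 4x4 homogeneous matrix combining the three translations with a rotation built from yaw, pitch and roll. The sketch below uses the common Z-Y-X Euler convention; the convention actually used by the tracking library may differ.

```python
import math

def pose_matrix(x, y, z, yaw, pitch, roll):
    """4x4 homogeneous camera pose from 3 translations and 3 rotations.
    Rotation convention (an assumption): R = Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    R = [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]
    return [R[0] + [x], R[1] + [y], R[2] + [z], [0.0, 0.0, 0.0, 1.0]]

def apply(T, p):
    """Transform point p = (px, py, pz) by pose matrix T."""
    px, py, pz = p
    return tuple(T[i][0] * px + T[i][1] * py + T[i][2] * pz + T[i][3]
                 for i in range(3))

# A 90-degree yaw turns the x axis onto the y axis.
T = pose_matrix(0, 0, 0, math.pi / 2, 0, 0)
print([round(v, 6) for v in apply(T, (1, 0, 0))])  # [0.0, 1.0, 0.0]
```

Each tracking update amounts to recomputing such a matrix and re-rendering the point cloud from the new viewpoint.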
Since the visualization tool is at the same time an entertainment tool and a collaborative work tool, it was important to propose a non-immersive approach so that users in the same room, for example, can exchange information in real time about the observed scene. We have therefore developed an application running on Android (tablet or mid-range cell phone) (Blanco-Pons et al., 2019). There is a wide range of libraries available for the development of AR applications using natural features (Amin and Govilkar, 2015), (Lotfi et al., 2019). We have chosen the ARCore SDK (Google Play Services for AR) in order to detect the ground plane as a reference and then use SLAM for tracking movements. A client-server plugin was developed and inserted into the application in order to allow parallel communication between the connected users. This aspect will be developed in the next section.

Interaction with the virtual world
Once a user is logged in, they can interact with the Xlendi models. First of all, a pseudo-realistic representation (at the scale of the graphic capacities of the device) reproduces the poor light that a diver would have at a depth of 100 m, and thus the low visibility of the site. This is adjustable with a simple slider in order to observe the site in its entirety. A drop-down list then allows the user to display the state of the site according to the year in which the surveys were carried out. Seven years are available, and the excavated areas can be seen. We have chosen to leave all the artefacts in place during the excavation; only the terrain evolves graphically. However, once an amphora has been brought up during an excavation, it will appear completely textured when a more recent year is selected. Having been restored and scanned, we now have a more complete model, which is substituted for the original theoretical model (see Figure 6).
Figure 6. Timeline visualization. Site evolution between 2014 and 2018: four amphorae were recovered from the wreck before 2018, scanned in the laboratory, and are now represented fully textured. The other amphorae, still partially covered with sediment, are not textured.
Interaction with the SPARQL server
A virtual reality or augmented reality application must interact with the user and seek to go further than simply naming or reading labels or metadata related to the objects represented. In recent years, an increasing number of applications have linked the 3D models used in AR to Artificial Intelligence, Deep Learning or Semantic Network approaches (Lampropoulos et al., 2020). This is mainly the case when the application behaves like a dashboard, an instrument to support decision-making and steering.
The amphorae represented in the scene are selectable: either as a plain object, if they have not yet been recovered during the excavation and their geometry is therefore based on the archaeologists' typological hypotheses, or as a textured object once they have been recovered, analyzed and measured. (An Internet connection may be required to obtain additional data from the server.) Moreover, the user can launch predefined parameterized requests (a small pop-up screen may be displayed to enter certain values):
1. "Show me the amphorae of the same typology in the scene"
2. "Select similar artifacts having a Hausdorff distance to the source artifact = x"
3. "Select similar artifacts having a <height, length, width, volume> difference to the source artifact = x"
4. "Similar concepts in external linked open datasets"
Linked Open Data (LOD) projects are expanding around the world and are spreading to the field of cultural heritage, gradually changing the way we access and share our knowledge of this heritage (Marden et al., 2013), (Simou et al., 2017).
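The parameterized requests above are ultimately sent to the server as SPARQL queries. The sketch below shows how Queries 1-3 might be assembled as query strings; the `xlendi:` prefix and the property names are hypothetical placeholders, not the project's actual ontology terms:

```python
def typology_query(amphora_uri):
    """Query 1: amphorae sharing the source artifact's typology.
    The vocabulary (xlendi: prefix, hasTypology) is assumed for
    illustration; the real ontology terms may differ."""
    return f"""
PREFIX xlendi: <http://example.org/xlendi#>
SELECT ?other WHERE {{
  <{amphora_uri}> xlendi:hasTypology ?t .
  ?other xlendi:hasTypology ?t .
  FILTER (?other != <{amphora_uri}>)
}}"""

def measure_query(amphora_uri, prop, max_diff):
    """Queries 2-3: artifacts whose stored measure (height, length,
    width, volume, or a precomputed Hausdorff distance) differs from
    the source artifact's by at most max_diff, via a SPARQL FILTER."""
    return f"""
PREFIX xlendi: <http://example.org/xlendi#>
SELECT ?other WHERE {{
  <{amphora_uri}> xlendi:{prop} ?v0 .
  ?other xlendi:{prop} ?v .
  FILTER (?other != <{amphora_uri}> && ABS(?v - ?v0) <= {max_diff})
}}"""
```

The pop-up mentioned above would simply supply `prop` and `max_diff` before the string is sent to the SPARQL endpoint.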
Even if LOD is currently mainly used by libraries, museums and archives, it tends to broaden the way we access cultural heritage. We are part of this dynamic (Ben-Ellefi et al., 2018a), even if the particular case study of this work does not easily lend itself to exchanges and parallels with other sites or museums: the exceptional character of this wreck means that the site alone probably contains more amphorae from the Archaic period than all the museums in the world combined. This explains why Query 4, concerning the external LOD, will almost always return an empty answer.

Interaction between users
Originally designed to help the dive team during the daily debriefing, the tool makes it easy to review the work done during the day and to plan the activities for the next day. It is possible to compute the new survey during the day and publish it locally for the team in the evening. We eventually decided to open the application to the general public as well; it is also an easy way to share data between experts. The team itself is very international, and this approach makes it possible to continue working together after the mission. The application is therefore connected to two servers: the first one manages the SPARQL queries and the second one manages user access. It is possible to create an account and join certain predefined chat rooms depending on the login but also on the device's unique identifier. Once in a room, messages can be sent to other users to share information, or to meet at a given location to observe the same area or discuss the same artifact.
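As a minimal, in-memory sketch of the room logic on the user-access side (illustrative only; the real server is networked and its API is not described in the article), access to a room is gated by both the login and the device identifier, and a message is delivered to every other member of the room:

```python
class ChatServer:
    """In-memory stand-in for the user-access server (all names are
    hypothetical). room_acl maps a room name to the set of
    (login, device_id) pairs allowed to join it."""

    def __init__(self, room_acl):
        self.room_acl = room_acl
        self.members = {room: set() for room in room_acl}
        self.inbox = {}  # (login, device_id) -> list of received messages

    def join(self, room, login, device_id):
        # Admission depends on both the login and the device identifier.
        user = (login, device_id)
        if user not in self.room_acl.get(room, set()):
            return False
        self.members[room].add(user)
        self.inbox.setdefault(user, [])
        return True

    def send(self, room, sender, text):
        # Deliver to every other member currently in the room.
        for user in self.members.get(room, set()):
            if user != sender:
                self.inbox[user].append((room, sender[0], text))
```

A "meet at a given location" message is then just an ordinary room message whose text (or payload) carries the coordinates or artifact identifier.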
These last aspects are recent; only after a few months of use will we decide which improvements to implement in the application.

CONCLUSIONS
This work highlights the recent advances in our interdisciplinary approach dedicated to the monitoring of a complex underwater excavation, unique in the world both for the site's depth and for the accumulation of photogrammetric surveys over the last ten years. The work is in constant evolution: the next campaign of underwater excavations is already being organized, and the resulting surveys will be integrated. We are working on several aspects: the user interface for managing and writing SPARQL queries on the ontologies, the management of user accounts and the way users communicate with each other, and the evolution of the Unreal Engine platform for managing the huge 3D point clouds that the photogrammetric surveys will provide. As this work is based on the development of a domain ontology modeling the photogrammetric process, we decided to support the OpenCV library so as to be able to include future application resources such as mobile robotics. Moreover, by extending the ontology to this library, it will be easier to migrate from one platform to another.