The quiz game was implemented across several scenes.
The questions are based on the informative text provided to the player during the virtual tour.
In the case of a wrong answer, a new window with the right answer is loaded and the button is momentarily colored red [13].
The User Interface tool was used to create the quiz game, and the UI buttons were controlled with a simple C# script.
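The button behaviour described above can be sketched as follows. This is a minimal illustration assuming Unity's UI system; the scene names and the half-second flash duration are hypothetical, not taken from the original implementation.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.UI;
using UnityEngine.SceneManagement;

// Sketch of a quiz answer button: a wrong answer briefly turns the
// button red, then loads a scene showing the right answer.
public class AnswerButton : MonoBehaviour
{
    public bool isCorrect;                          // set per button in the Inspector
    public string wrongAnswerScene = "WrongAnswer"; // hypothetical scene name
    public string nextQuestionScene = "NextQuestion"; // hypothetical scene name

    // Attached to the button's OnClick event.
    public void OnAnswerClicked()
    {
        if (isCorrect)
            SceneManager.LoadScene(nextQuestionScene);
        else
            StartCoroutine(FlashRedThenLoad());
    }

    IEnumerator FlashRedThenLoad()
    {
        GetComponent<Image>().color = Color.red;   // momentary red feedback
        yield return new WaitForSeconds(0.5f);     // assumed flash duration
        SceneManager.LoadScene(wrongAnswerScene);  // window with the right answer
    }
}
```

The coroutine keeps the red feedback visible for a moment before the scene change, matching the "momentarily colored" behaviour described in the text.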
The second option is a virtual museum depicting the southern part of the Stoa, in which the user is able to move freely and interact with the exhibits, which are displayed in 3D; the user can rotate them and read the informative text that appears when they are clicked.
A specialized methodology was implemented in order to obtain the data necessary for creating the 3D models of the exhibits.
Image-based methods were used for this purpose, along with simple instrumentation [14]. For the data acquisition, a Nikon D3200 with a 23.2×15.4 mm, 24 Mpixel CMOS sensor was used, equipped with a NIKKOR AF-S DX 18-55 mm zoom lens.
The number of images for each exhibit varied according to its size and complexity.
In total, 1208 images were taken over 11 hours, with 60-90 images per exhibit on average.
A steel ruler was used for scaling the model of each exhibit.
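The scaling step amounts to a simple computation: the scale factor is the ruler's known length divided by the distance between the same two points measured in the model's arbitrary coordinates. The following sketch uses hypothetical values, not measurements from the actual exhibits.

```csharp
using System;

// Illustration of scaling a photogrammetric model with a reference ruler.
// All numbers below are hypothetical examples.
class ScaleExample
{
    static void Main()
    {
        double rulerLengthM = 0.30;       // known ruler length: 30 cm
        double measuredModelUnits = 1.72; // same span measured in model units

        // Scale factor in metres per model unit; every model coordinate
        // is multiplied by this value to bring the model to true scale.
        double scale = rulerLengthM / measuredModelUnits;
        Console.WriteLine($"Scale factor: {scale:F4} m per model unit");
    }
}
```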
Agisoft's PhotoScan Professional 1.1 image-based modelling software was used for processing the images and producing the 3D models.
This software applies computer vision and photogrammetric algorithms in order to automate the creation of the 3D models.
Initially, it applies a point operator, in this case SIFT (Scale Invariant Feature Transform), in order to detect interest points in each image [15].
Using these points and their counterparts in the adjacent images, a relative orientation is computed, thus determining the relative positions of the images in space while also creating a sparse set of 3D points.
Subsequently, the dense point cloud is produced by applying dense image matching for practically every pixel.
For the above procedures the user has rather limited possibilities of intervention, which makes the process risky for users unfamiliar with the underlying algorithms.
The 3D models produced were sufficiently accurate and realistic, with a low polygon count, and thus easily importable into the programming environment for the development of the virtual museum.
The dense point cloud was checked and edited in order to delete unnecessary points and reduce noise.
Then, the software reconstructs a 3D polygonal mesh representing the exhibit's surface based on the dense point cloud (Fig. 4). After the geometry was reconstructed and checked, the mesh was textured. At each step the appropriate parameters were selected in order to achieve the best results.
Table I presents the quality chosen for every step of the 3D modelling process of each exhibit, the number of points and faces of the dense cloud and mesh respectively, and the file size of each 3D model. For the development of the virtual environment, a corresponding material was added for every imported .obj (3D geometry) and .tiff (texture) file in order to describe each 3D model.
As far as the Main Camera is concerned, components were added to adjust the ambience and the depth of field in order to provide a clearer and more realistic view of the exhibits.
Furthermore, the proper C# scripts were added as components to every exhibit in order to enable its manipulation and rotation around a central axis.
A script was written so that the visitor can rotate each exhibit by simply pressing a button, while a panel appears on the right of the screen with all the information about the selected exhibit.
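The rotation behaviour can be sketched as a small Unity component. This is a hedged illustration, not the original script: the field names, rotation speed, and the panel wiring are assumptions, and the component presumes the button's pointer events are hooked to the two public methods.

```csharp
using UnityEngine;

// Sketch of an exhibit rotator: while the button is held, the exhibit
// spins around its vertical (central) axis, and an information panel
// is shown on the right of the screen.
public class ExhibitRotator : MonoBehaviour
{
    public float degreesPerSecond = 45f; // rotation speed, adjustable in the Inspector
    public GameObject infoPanel;         // panel holding the exhibit's text (assumed reference)

    bool rotating;

    // Hooked to the UI button's pointer-down / pointer-up events.
    public void StartRotation() { rotating = true; infoPanel.SetActive(true); }
    public void StopRotation()  { rotating = false; }

    void Update()
    {
        if (rotating)
            transform.Rotate(Vector3.up, degreesPerSecond * Time.deltaTime, Space.World);
    }
}
```

Scaling the rotation by `Time.deltaTime` keeps the speed frame-rate independent, which matters when the same scene runs on hardware of varying performance.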
Particular attention was given to the layout and design of the panel that displays the instructions of the virtual museum at the beginning of the virtual experience, so as to guide the visitor in moving around and exploring the environment (Fig. 5).