**Augmented Reality for Restoration/Reconstruction of Artefacts with Artistic or Historical Value**

Giovanni Saggio1 and Davide Borra2 *1University of Rome "Tor Vergata" 2No Real, Virtuality & New Media Applications Italy* 

## **1. Introduction**

The artistic or historical value of a structure, such as a monument, a mosaic, a painting or, generally speaking, an artefact, arises from the novelty and the development it represents in a certain field and at a certain time of human activity. The more faithfully the structure preserves its original status, the greater its artistic and historical value. For this reason it is fundamental to preserve its original condition, keeping it as genuine as possible over time. Nevertheless, the preservation of a structure is not always possible (traumatic events such as wars can occur), or has simply not been carried out, through negligence, incompetence, or even culpable unwillingness. So, unfortunately, the present status of a far-from-irrelevant number of such structures ranges from bad to catastrophic.

Within this frame, current technology provides fundamental help for reconstruction/restoration purposes, so as to bring a structure back to its original historical value and condition. Among the modern facilities, new possibilities arise from Augmented Reality (AR) tools, which combine Virtual Reality (VR) settings with real physical materials and instruments.

The idea is to carry out a virtual reconstruction/restoration before materially acting on the structure itself. The main advantages obtained in this way include: manpower and machine power are employed only in the last phase of the reconstruction; potential damage or abrasion of parts of the structure is avoided during the cataloguing phase; the forms and dimensions of any missing pieces can be precisely defined; and so on. The virtual reconstruction/restoration can be further improved by taking advantage of AR, which furnishes many additional informative parameters that can even be fundamental under specific circumstances. Here, therefore, we detail the application of AR to the restoration and reconstruction of structures with artistic and/or historical value.

## **2. Reality vs. Virtuality**

The *Virtuality-Reality Continuum* denotes a scale ranging from a completely real world to a completely virtual world, passing through intermediate positions (Fig. 1, Milgram & Kishino, 1994). So we can refer to *Reality* or *Real Environment* (*RE*), *Augmented Reality* (*AR*), *Augmented Virtuality* (*AV*), and *Virtual Environment* (*VE*), also called *Virtuality* or *Virtual Reality* (*VR*). Intuitively, *RE* is defined as the world as perceived by our senses, while *VE* denotes a scenario totally constructed or reconstructed with computers. The intermediate values of the scale are generally referred to as *Mixed Reality* (*MR*), which can be made with different "percentages" of reality vs. virtuality.

Fig. 1. Virtuality-Reality Continuum.
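As a minimal illustration (our own sketch, not part of the original taxonomy: the class names follow the continuum above, but modelling them as a numeric ordering is our assumption), the scale can be represented as an ordered enumeration whose two extremes fall outside Mixed Reality:

```python
# Hypothetical sketch: the Virtuality-Reality Continuum as an ordered scale.
# Only the ordering RE < AR < AV < VE is meaningful here.
from enum import IntEnum

class Continuum(IntEnum):
    RE = 0   # Real Environment: the world as perceived by our senses
    AR = 1   # Augmented Reality: real scene with virtual overlays
    AV = 2   # Augmented Virtuality: virtual scene with real objects integrated
    VE = 3   # Virtual Environment: fully computer-generated scenario

def is_mixed_reality(stage: Continuum) -> bool:
    """Everything strictly between the two extremes is Mixed Reality (MR)."""
    return Continuum.RE < stage < Continuum.VE

print([s.name for s in Continuum if is_mixed_reality(s)])  # ['AR', 'AV']
```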

So, *AV* refers to scenarios where the virtual part is predominant, but where physical parts (real objects, real subjects) are integrated too, with the possibility for them to interact dynamically with the virtual world (preferably in real time), so that the scenarios can be considered "immersive", as, for instance, a "Cave Automatic Virtual Environment" can be (Cruz-Neira et al., 1992).

On the other hand, the term *AR* refers to scenarios where the real part is predominant, and artificial information about the environment and its objects is overlaid on the real world through a medium such as a computer, a smartphone, or simply a TV screen, so that additional information directly related to what we are seeing is easily obtained (see Fig. 2 as an example).

Fig. 2. Virtual information overlaid on a real image.

#### **2.1 The reconstruction/restoration cycle**


Generally speaking, *VR* can be "self-consistent", in the sense that a virtual scenario remains within its own boundaries and is used as such; think, for instance, of a PlayStation game. On the contrary, when we deal with the restoration and/or reconstruction of architectural heritage or historical artefacts, a criterion is commonly adopted by which the Virtuality-Reality Continuum is crossed (see Fig. 3). That is, we start from the real state of affairs (*RE* step), perform the analysis of the current status (*AR* step), run a virtual restoration/reconstruction of artefacts and materials (*AV* step) and produce a complete virtual representation of the reconstructed scenario (*VE* step). All of these steps are finalized to accurately describe, analyze and indicate the exact passages to be executed in reality so as to obtain the best possible operational results.

Fig. 3. V-R Continuum, and the flow diagram for architectural heritages reconstruction/restoration.
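The crossing of the continuum just described (RE, then AR, then AV, then VE, and back to reality) can be sketched as a simple pipeline. This is purely illustrative: the function and field names are our own invention, and a plain dictionary stands in for the artefact record.

```python
# Hypothetical sketch of the RE -> AR -> AV -> VE -> RE workflow of Fig. 3.
# Each step only annotates a dict standing in for the artefact record.

def survey_reality(artefact):            # RE: acquire the real status
    artefact["scan"] = "3D survey of surviving parts"
    return artefact

def analyse_with_ar(artefact):           # AR: overlay analysis on real data
    artefact["analysis"] = "damaged/missing regions identified"
    return artefact

def restore_virtually(artefact):         # AV: virtual restoration of pieces
    artefact["virtual_restoration"] = "missing pieces modelled"
    return artefact

def build_virtual_model(artefact):       # VE: complete virtual scenario
    artefact["ve_model"] = "full reconstructed scenario"
    return artefact

def plan_real_intervention(artefact):    # back to RE: operational plan
    artefact["plan"] = "exact passages to execute in reality"
    return artefact

PIPELINE = [survey_reality, analyse_with_ar, restore_virtually,
            build_virtual_model, plan_real_intervention]

record = {"name": "mosaic fragment"}
for step in PIPELINE:
    record = step(record)

print(sorted(record))  # the record now carries one entry per step
```

The point of the sketch is only that the physical intervention comes last, after every earlier stage has enriched the record.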

At first glance, the idea of operating on reality by passing through virtuality may not seem very practical: after all, we leave our domain (reality) for another (virtuality) only to return, in the end, to the one we started from. On the contrary, crossing the Virtuality-Reality Continuum offers several advantages, discussed shortly.

We can report a similar practice in the electronics field, where circuit analysis, required in the *time* domain, is carried out in the *frequency* domain before returning to *time* as the independent variable. This way of proceeding is adopted because of its advantages (less time-consuming procedures, algorithms of lower complexity, filters that can be implemented more easily).
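The electronics analogy can be made concrete with a minimal numerical sketch: a noisy signal is carried into the frequency domain, filtered there (where the operation is cheap), and carried back, mirroring the "cross and come back" logic of the continuum. The signal and the cutoff frequency are arbitrary illustrative choices.

```python
import numpy as np

# Leave the time domain, operate in the frequency domain, return to time.
t = np.linspace(0.0, 1.0, 256, endpoint=False)
clean = np.sin(2 * np.pi * 3 * t)                 # 3 Hz component we care about
noisy = clean + 0.5 * np.sin(2 * np.pi * 60 * t)  # 60 Hz disturbance

spectrum = np.fft.rfft(noisy)                     # into the frequency domain
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
spectrum[freqs > 10.0] = 0.0                      # trivial low-pass filter
filtered = np.fft.irfft(spectrum, n=t.size)       # back to the time domain

# The residual versus the clean signal is now at floating-point level.
print(round(float(np.max(np.abs(filtered - clean))), 3))  # → 0.0
```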

The idea of exploiting the potential of the VR and AR domains in archaeology is not new. Examples come from the "Geist" (Ursula et al., 2001) and "Archeoguide" projects (Vassilios et al., 2001; Vlahakis et al., 2002). The first allows users to see the history of places while walking through a city, and was tested at Heidelberg Castle. The second aims to create a system behaving like an electronic guide during visitors' tours of cultural sites, and was used at the archaeological site of Olympia in Greece. After these early examples, many other projects have appeared in recent years, such as one concerning the virtual exploration of underwater archaeological sites (Haydar et al., 2010).


But we want to point out here that AR can be successfully used for restoration and/or reconstruction purposes, thus playing an *active role*, rather than being employed for merely tutorial reasons and so confined to a *passive part*. To this aim, it is really useful to start from the purely real side of the problem, cross the Virtuality-Reality Continuum, passing through *AR* and *AV*, up to the purely virtual side, and then come back to the origin, as already stressed. This is for several reasons:


- the costs of restoration and/or reconstruction can be reduced: manpower and machinery are employed only at the final, real step, so even energy consumption is saved;
- potential breakage or destruction of the archaeological artifacts to be restored and/or reconstructed, which are often fragile but valuable, can be avoided;
- potential abrasions or changes in the colors of the artifacts can be avoided;
- restoration and/or reconstruction time can be reduced;
- it is possible to establish the forms and dimensions of any incomplete parts, so as to rebuild the artifact in an exact manner;
- it is possible to assemble the artifacts without damaging their remains and without causing damage to the excavation site where the artifact was found;
- it is possible to preview the assembly possibilities more easily, reducing errors and the time spent on those tasks;
- the 3D scanning procedure is also useful for creating a database, for cataloguing purposes, for tourism promotion, for comparative studies, etc.;
- in cases where the structural stability of a monument is not in danger, non-intrusive visual reconstructions should be preferred to physical reconstruction;
- and so on.

VR played an *active role* in a project concerning the recovery of artifacts buried in the Museum of the Terra Cotta Warriors and Horses, Lin Tong, Xi'an, China (Zheng & Li, 1999). Another example comes from a project for assembling a monument, the Parthenon at the Acropolis of Athens (Georgios et al., 2001), where one of the motivations for using VR was the size and height of the blocks and the distance between a block and its possible match: VR helps archaeologists reconstruct monuments or artifacts while avoiding the manual test of verifying whether one fragment matches another.

Archaeologists can spend several weeks drawing plans and maps and taking notes and pictures of archaeological findings. VR, however, offers systems that create a 3D reconstruction simply from several pictures, from which a 3D model of the artifacts can be obtained (Pollefeys et al., 2003).

Among all these possibilities, we want to point out that VR and, especially, AR can furnish further meaningful help when joined with Human-Computer Interaction (HCI) capabilities. To this end, we will later detail new acquisition systems capable of measuring human movements and translating them into actions, allowing the user to interact virtually with an AR scenario in which archaeological artifacts are visualized (see paragraphs 4.3 and 4.4).

#### **2.2 The models**

In addition to the flow diagram of the restoration cycle (Fig. 3), it makes sense to define the evolving condition of the artefacts to be restored during that cycle. We can thus distinguish four models: *original*, *state*, *restoration*, and *reconstruction*.

The *original* model concerns the parts of the monument, mosaic, painting, ancient structure or, generally speaking, artefact that survive intact today, without having been subjected to tampering, just as they were in the past.

The *state* model regards the current situation of the artefact: its *original* model after being integrated with later "additions".

The *restoration* model consists of the *original* model with manual interventions adding what has been destroyed over time, so as to bring the artefact back to its native status.

The *reconstruction* model applies when we cannot limit ourselves to "simply" adding missing parts, because so little remains that even an *original* model is difficult to define. The interventions are then addressed to building something almost from the beginning, taking account only of the few "skinny" original parts of the artefact.

So, a *restoration* model can be visualized for the *Colosseum* (also known as the *Coliseum*), originally the *Flavian Amphitheatre*, in the centre of Rome (Italy), while a *reconstruction* model applies to the Jewish Second Temple, practically destroyed by the Roman legions under Titus.

The *restoration* and *reconstruction* models can be realized by means of mixed reality along the V-R continuum.

## **3. AR applications**

We know well that the term AR refers to the fact that a viewer observes a view of the real world upon which computer-generated graphics are superimposed. The viewer can have a direct view of the real world, can experience a mediated observation of reality via video coupling or, finally, can observe post-processed real images. We will refer to this latter case. Being neither completely virtual nor fully real, AR has quite demanding requirements to be suitably adopted, but its great potential makes it both an interesting and a challenging subject from scientific and business perspectives.

On one side we have reality, or its digital representation (which we understand as the real data); on the other we have a system for representing a sort of informative multimedia database (the digital information). The connection between these two worlds is geo-referencing, understood in a broad sense, that is, the need to place the information coherently upon the data element in three-dimensional space.

The information can be represented in various shapes: written text, floating windows with images, graphics, videos or other multimedia items, or renderings of the 3D virtual reconstruction, mono or stereoscopic, generated in real time from a virtual model.

The contributions can come from specific files prepared "*ad hoc*", structured databases, search engines or user-generated blogs. The databases can be off-line or on-line. AR applications can either restrict the user to exploring the informative setting through eye movement (directly or through a device) or offer different interactions, some of which we later detail according to our experience, that allow choosing among the available informative sets.

Geo-referencing can be accomplished by GPS detectors (outdoor), position/motion tracking systems (indoor), down to the simplest shape-recognition systems based on graphic markers framed by a webcam (desktop AR).
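Whatever the tracking source, placing information "coherently upon the data element" ultimately amounts to projecting the 3D anchor point of the information into camera image coordinates. A minimal numerical sketch follows, assuming an ideal pinhole camera with made-up intrinsics and a known pose (all values here are illustrative, not from any real calibration):

```python
import numpy as np

# Toy pinhole projection: where should a virtual label anchored at a 3D
# world point appear on the camera image?
K = np.array([[800.0, 0.0, 320.0],     # focal lengths and principal point
              [0.0, 800.0, 240.0],     # (for a nominal 640x480 image)
              [0.0,   0.0,   1.0]])
R = np.eye(3)                          # camera aligned with the world axes
t = np.array([0.0, 0.0, 0.0])          # camera at the world origin

anchor = np.array([0.1, -0.05, 2.0])   # 3D anchor point, 2 m in front

p = K @ (R @ anchor + t)               # homogeneous image coordinates
u, v = p[:2] / p[2]                    # perspective division

print(round(u, 1), round(v, 1))        # → 360.0 220.0 (overlay pixel)
```

Marker-based desktop AR estimates `R` and `t` each frame from the detected marker corners; GPS and indoor tracking provide them from position and orientation sensors instead, but the final projection step is the same.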

Although AR usually denotes a strongly image-oriented technology, in its general sense it is also the benchmark for describing audio or tactile/haptic AR experiences. The generation of virtual audio panoramas, geo-placed audio notices and the virtual strengthening of touch must all be included in the general AR discipline; these aspects play an important and innovative role in the improvement of Cultural Heritage (CH). The overall added value of this technology is therefore the contextualization of real data and virtual information, and that value increases because of two main factors: real-time interaction and multi-dimensionality.

By real-time interaction we mean both the need for visual rendering at 25 fps, to ensure a good visual exploration of the real/virtual space, and the ability to make queries and perform actions on the informative database that determine changes in the status of the system (human in the loop).

For example, if an architect could choose the maximum and minimum values of a static deformation map of the facade of a historic building under restoration (derived from a real-time simulation), and if the results of this query were visible superimposed on the facade itself, the variables to be controlled in order to make decisions would certainly be better understood.

Within this frame, multi-dimensionality furnishes an improvement, leading to the possibility of using stereoscopic images (or stereophony/holophony, in any case the third dimension) and of scrolling the displayed information along time (the fourth dimension). In paragraph 4.2 we will present some new elements that we have experimented with in our labs.

These technological extensions become strategic in every operative field dealing with space and matter, such as architecture, cultural heritage, engineering, design, etc. In fact, thanks to stereoscopic vision, it is possible to create an effective spatial representation of depth, which is fundamental in an AR application that allows wandering among the ruins of an archaeological site while visualising the 3D reconstruction hypothesis upon the real volumes, as already highlighted in this chapter with the "Archeoguide" project.

If we could choose among a series of virtual *anastylosis* from different historical periods, we would obtain the best available representation of a time machine, in a very natural way.

This type of experience allows coming into contact with the information linked to a place, a piece of architecture, a ruin or an object in a way completely different from the past, thanks to the possibility of exploring the real-virtual space at real scale (immersion) and with the high level of *embodiment* that is produced, i.e. being in the space actively, offering the moving body to the cognitive process (in fact "embodied" and "embodiment" are the concepts that identify the body and immersion in a subject's perceptual and experiential information).

This is the paradigm of *enaction*, or the *enactive*, i.e. *enactive knowledge, enactive interfaces, enactive didactics*: terms coined by Bruner and later by H. Maturana and F. Varela to identify the interactive processes established between the subject and the meaning of the subject, in which action is fundamental to the process of learning (Maturana & Varela, 1980; Varela et al., 1991; Bruner, 1996). *Enaction* can be proposed as a theoretical model to understand how knowledge develops starting from the perceptual-motor interaction with the environment. This neuro-physiological approach is based on the idea that cognitive activity is embodied, that it is not separable from corporeal perception, and that it can emerge only in a well-defined context through the direct action of the user with the context and with the other users. Interactivity, immersion, embodiment and enactivity are the key words of the new VR paradigm that, in the case of AR, shows all its power.

#### **3.1 Criticalness**

The technology presented is, however, not immune from critical issues:

- the need for computational power and data-transmission speed, to guarantee effective performance in the interaction;
- the need for rendering software able to achieve a high level of photorealism in real time;
- the need to create virtual light, audio and haptic conditions as close as possible to reality, all calculated in real time; the trend is to measure the lighting conditions by building an HDRI (High Dynamic Range Image) map from web-cam images, immediately applied to the virtual set;
- the need to increase the effectiveness of the tracking under different environmental conditions;
- the adoption of video, audio and haptic hardware that can be worn easily, even by people with physical or cognitive limitations;
- the need to propose usable interfaces when the state of the application changes (language, level of detail of the elaboration, historical period, etc.).

It is certainly not an exhaustive list, but it is meant to be a significant analysis of the importance of the current state of the art when designing an AR application.

#### **3.2 Evolution of the AR techniques in computer graphics**

Apart from the well-known computer graphics techniques that belong to the AR family, such as two-dimensional superimposition, the green/blue screen used in cinema (augmented reality), the television virtual set (augmented virtuality) or the sophisticated technologies used by James Cameron for the well-known "Avatar" (mixed reality), we will try to offer a point of view on AR used for general purposes or in the specific VCH (Virtual Cultural Heritage) field (meaning the discipline that studies and proposes the application of digital technologies to the virtual cultural experience understood in a broad sense, including photos, architectural walkthroughs, video anastylosis, virtual interactive objects, 3D websites, mobile applications, virtual reality and, of course, mixed reality), detailing:

1. AR desktop marker-based
2. AR desktop marker-less
3. AR freehand marker-less
4. AR by mobile
5. AR by projection
6. Collaborative AR
7. Stereo & Auto-Stereoscopic AR
8. Physics AR
9. Robotic AR

If we could choose among a series of virtual *anastylosis* from different historical periods, we would obtain the best representation of a time machine available in a very natural way.

This type of experience allow to come in contact with the information linked to a place, architecture, ruin or object in a way completely different than in the past, thanks to the possibility to explore the real-virtual space in a real scale (immersion) and in a high level of *embodiment* that is produced, i.e. in being in the space, actively, offering the moving body to the cognitive process (in fact "embodied" or "embodiment" is the concepts that identify the

It's the paradigm of the *enaction*, or *enactive*, i.e. the *knowledge, enactive interfaces, enactive didactic*, which are terms coined by Bruner and later by H. Maturana and F. Varela to identify interactive processes put in place between the subject and significance of the subject in which the action is fundamental to the process of learning (Maturana & Varela 1980; Varela et al., 1991; Bruner, 1996). *Enaction* can be proposed as a theoretical model to understand the way of development of knowledge starting from the perceptual-motion interaction with the environment. This neuro-physiological approach is based on the idea

paragraph 4.2 we will furnish some new elements that we experienced in our labs.

already highlighted in this article with the "Archeoguide" project.

body and immersion in a subject's perceptual and experiential information).

experiences.

dimensionality.

of the system (human in the loop).

The represented technology is not immune by criticalness anyway:


It is certainly not an exhaustive list, but he wants to be significant analysis of the importance of the current state of the art, when you design an application of AR.

#### **3.2 Evolution of the AR techniques in the computer graphic**

Apart from the well-known computer graphics techniques that fall into the AR family, like bi-dimensional superimposition, the green/blue screen used in cinema (augmented reality), the television virtual set (augmented virtuality) or the sophisticated technologies used by James Cameron for the well-known "Avatar" (mixed reality), we will try to offer a point of view on AR used for general purposes or in the specific VCH (Virtual Cultural Heritage) field (meaning the discipline that studies and proposes the application of digital technologies to the virtual cultural experience understood in the broad sense, including photos, architectural walkthroughs, video anastylosis, virtual interactive objects, 3D websites, mobile applications, virtual reality and, of course, mixed reality), detailing:



## **3.2.1 AR desktop marker-based**

This is the best known AR application, as it is the predecessor of the later evolutions. Basically, it is based upon tracking the spatial movements of a graphical marker, usually a black & white geometric symbol called a "fiducial marker", onto which a two- or three-dimensional content can be hooked in real time. The tracking is possible using a common web-cam with simple shape-recognition software based on colour contrast. The marker can be printed on a rigid support to allow easy handling, but more flexible supports are already available.

The most common applications published so far allow the user to explore a 3D model of an object, experiencing a 360-degree vision, as if he held it in his hand (see Fig. 4). The model can be static, animated or interactive. The application can have many markers associated with different objects, displayed one by one or all at the same time.

In the VCH area, the best known applications have been realized by flanking the archaeological piece shown in the display case both with the marker (printed on the catalogue, on the invitation card or on a special rigid support) and with the multimedia totem that hosts the AR application. In this way the user can simulate the extraction of the object from the case and see it from every point of view. He can also use many markers at the same time, to compare the archaeological evolution of a building over time.

The NoReal.it company we are dealing with showed many such experiences during scientific exhibits like the "XXI Rassegna del Cinema Archeologico" of Rovereto (TN, Italy) and the "Archeovirtual" stand at the "Borsa Mediterranea del Turismo Archeologico" of Paestum (SA, Italy).

Fig. 4. An example of a marker-based AR desktop session with a simplified 3D model of an ancient woman's head. The model was created using data derived from a 3D laser scanner.

The technology is useful to solve a very important problem of the new archaeological museums: the will to carry on the emotion even after the visit has finished. Thanks to marker-based AR, the user can connect to the museum's website, switch on the web-cam, print the graphic markers and start interacting with the AR applications, realizing a very effective home edutainment session, together with the customer loyalty that the latest museum marketing techniques require. Some examples come from the Paul Getty Museum in Los Angeles or are cited in the "Augmented Reality Encyclopaedia".

Another advantage of marker-based AR is the possibility to highlight, in a clear way, the contributions in AR by printing the graphic symbol everywhere. This freedom is very useful in technical papers, interactive catalogues, virtual pop-up books and interactive books that link the traditional information to the new one. "ARSights" is a software application that allows associating a 3D SketchUp model with an AR marker, so that you can visualize almost all the Google Earth buildings, working with a big worldwide CH database.
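The identification step behind such black & white markers can be sketched in a few lines. The following is purely illustrative (the bit patterns and function names are made up, not ARToolKit's API): once the web-cam frame has been thresholded by colour contrast and a candidate square resampled to a 4x4 bit grid, the grid is matched against the registered markers under the four possible rotations.

```python
# Hypothetical 4x4 bit patterns registered with the application;
# each ID would be hooked to a different 2D/3D content.
MARKER_DICTIONARY = {
    0: ((1, 0, 0, 1),
        (0, 1, 1, 0),
        (0, 1, 0, 0),
        (1, 0, 0, 1)),
    1: ((1, 1, 0, 0),
        (0, 0, 1, 1),
        (1, 0, 1, 0),
        (0, 1, 0, 1)),
}

def rotate90(grid):
    """Rotate a square bit grid 90 degrees clockwise."""
    return tuple(zip(*grid[::-1]))

def identify_marker(grid):
    """Return (marker_id, rotation_in_degrees) or None.

    The four rotations are tried because the printed marker can be
    held in any orientation; the matched rotation is what lets the
    renderer orient the virtual object consistently.
    """
    for marker_id, pattern in MARKER_DICTIONARY.items():
        candidate = tuple(tuple(row) for row in grid)
        for rotation in (0, 90, 180, 270):
            if candidate == pattern:
                return marker_id, rotation
            candidate = rotate90(candidate)
    return None

# A camera would deliver this grid after thresholding; here we feed
# in marker 0 rotated by 90 degrees.
observed = rotate90(MARKER_DICTIONARY[0])
print(identify_marker(observed))  # -> (0, 270)
```

A real toolkit adds contour detection to find the candidate square and a pose-estimation step on top of this identification; the dictionary must also avoid patterns that are rotations of one another.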

#### **3.2.2 AR desktop marker-less**


You can realize AR applications even without a geometric marker, through a process that graphically recognizes a generic image, or a portion of it, after which the tracking system is able to recognize its identity and orientation. In this way a newspaper page, or part of it, becomes the marker, and the AR contribution is able to integrate the virtual content directly in the graphic context of the paper, without resorting to a specific symbol.

On the other hand, you need to make people understand that the page has been created to receive an AR contribution, since this is not evident as it is in a symbol. You use the same interaction as in the marker-based applications, with one or multiple AR objects linked to one or more parts of the printed image. There are many applications for the VCH: see, for example, the "AR-Museum" realized in 2007 by a small Norwegian company (bought a year later by Metaio), where you can visualize virtual characters interacting with the real furniture in the real space of the museum room.

Fig. 5. A frame of the AR-Museum application that superimposes a character animation on the real museum's rooms. The characters, in real scale, offer an immediate sense of presence.
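At the core of such marker-less recognition there is usually a keypoint-descriptor matching step. The sketch below is purely illustrative (toy 8-bit descriptors with invented values; real systems use 256-bit ORB/SURF-like descriptors and then estimate the page's pose from the matched coordinates): features extracted from the reference image, e.g. the newspaper page, are compared against features of the live camera frame by Hamming distance, keeping only unambiguous matches via a ratio test.

```python
def hamming(a: int, b: int) -> int:
    """Number of differing bits between two binary descriptors."""
    return bin(a ^ b).count("1")

def match_features(reference, frame, ratio=0.7):
    """Return (ref_index, frame_index) pairs passing the ratio test.

    Assumes the frame contains at least two descriptors, so that a
    runner-up always exists for the ambiguity check.
    """
    matches = []
    for i, desc in enumerate(reference):
        ranked = sorted(range(len(frame)),
                        key=lambda j: hamming(desc, frame[j]))
        best, second = ranked[0], ranked[1]
        # Accept only if the best match is clearly better than the
        # runner-up; ambiguous features are discarded.
        if hamming(desc, frame[best]) < ratio * hamming(desc, frame[second]):
            matches.append((i, best))
    return matches

# Toy descriptors: the first two reference features reappear in the
# camera frame slightly corrupted by noise; the third does not.
reference = [0b11001010, 0b00111100, 0b10101010]
frame     = [0b11001011, 0b00111000, 0b01010101]
print(match_features(reference, frame))  # -> [(0, 0), (1, 1)]
```

The surviving correspondences are what the tracker feeds into a homography/pose estimation, which finally anchors the virtual content to the printed image.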


#### **3.2.3 AR freehand marker-less**

We can include in the "AR freehand marker-less" category:

- applications that trace the position, orientation and direction of the user's gaze, using tracking systems and various movement sensors, associated with gesture-recognition software;
- applications that recognize the spatial orientation of the visual device.
The main difference between the previous two categories is the user's role: in the first case the user interacts with his own body; in the second he activates a bound visualizing device that is spatially tracked. In both cases, experiences of application control through natural interaction systems are becoming very common thanks to the Microsoft "Kinect" or the "Wavi Xtion" by PrimeSense with Asus, or even the Nintendo "Wii-mote" and the Sony "PlayStation Move", which still require a pointing device.

But the most extraordinary applications, even in the VCH, are the ones where the user can walk in an archaeological site, visualizing the reconstruction hypothesis superimposed on the real ruins. The first experience came from a European project called "Archeoguide" (http://archeoguide.intranet.gr), where a rudimentary system made of a laptop and a stereoscopic viewer allowed the user to alternate between the real view and the virtual one. Recently, an Italian company named "Altair4" realized a 3D reconstruction of the "Tempio di Marte Ultore", a temple in the "Foro di Augusto" in Rome, although the hardware is still cumbersome and the need for real-time calculation remains fundamental.

A third field of application could be checking the structural integrity of architectural buildings with a sort of wearable AR device that permits an x-ray view of the inside of the surrounding object. This kind of application was presented at IEEE VR 2009 by Avery et al. (2009) (IEEE VR is the event devoted to Virtual Reality, organized by the IEEE Computer Society, a professional organization, founded in the mid-twentieth century, with the aim of enhancing the advancement of new technologies).

#### **3.2.4 AR by mobile**

The latest frontier of AR technology involves personal mobile devices, like mobile phones. Using the GPS, the gyroscope and the standard camera hosted in the mobile devices, you can create the optimal conditions for the geo-referencing of the information, comparing it with the maps and the satellite orthophotos available in the most common geo-browsers (Google Maps, Microsoft Bing, etc.).

The possibility to be connected from every place in the world permits choosing the type of information to superimpose. In the last five years the first tests appeared for Apple mobile devices (iPhone, iPad, iPod), Android and Windows CE devices. The applications available today tend to exploit the Google Maps geo-referencing to suggest informative tags located in the real space, following the mobile camera. See, for example, "NearestWiki" for iPhone, or "Wikitude" for iPhone, Android and Symbian OS, but many other applications are coming out. A particular VCH application is "Tuscany+", which publishes geo-referenced informative tags in AR, specific for a virtual tour of the Tuscan cities, with various types of 2D and 3D contributions.
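The geo-referencing logic such AR browsers rely on can be sketched with a little spherical trigonometry. A minimal illustrative sketch (the coordinates, function names and the 60-degree field of view are our own assumptions, not taken from any of the applications above): compute distance and compass bearing from the phone's GPS fix to a point of interest, then check whether the tag falls inside the camera's horizontal field of view.

```python
import math

EARTH_RADIUS_M = 6371000.0

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two GPS fixes, metres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing, 0..360 clockwise from north."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def tag_visible(heading_deg, tag_bearing, fov_deg=60.0):
    """True if the tag's bearing lies within the camera's horizontal FOV."""
    offset = (tag_bearing - heading_deg + 180.0) % 360.0 - 180.0
    return abs(offset) <= fov_deg / 2

# Phone in Piazza della Signoria, POI at the Duomo (rough coordinates).
d = distance_m(43.7696, 11.2558, 43.7731, 11.2560)
b = bearing_deg(43.7696, 11.2558, 43.7731, 11.2560)
print(round(d), round(b))  # roughly 390 m, bearing a few degrees east of north
```

Given the bearing and the gyroscope/compass heading, the application then projects the tag at the corresponding horizontal screen position, which is exactly the "tags following the mobile camera" behaviour described above.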

During Siggraph 2008, the IGD Fraunhofer, with ZGDV, presented the first version of "InstantReality", a framework for VR and AR that used historical and archaeological images as markers on which to apply the 3D model of the reconstruction hypothesis. Another example of a VCH AR application, made for interactive didactics, has been carried out in Austria (see www.youtube.com/watch?v=denVteXjHlc).

#### **3.2.5 AR by projection**


The 3D video-mapping technique has to be included among the AR applications. It is known for its application to entertainment, projected on the facades of historical and recent buildings, but it is also used to augment physical prototypes, used in the same way as three-dimensional displays.

The geo-referencing is not obtained by a tracking technique, as in the previous cases, but through the perfect superimposition of the light projection on the building's wall with the virtual projection on the same wall, virtually duplicated. The video is not planar but is distorted in compliance with the 3D volumes of the wall. The "degrees of freedom" of the visitor are total, but in some cases the projection can be of lower quality because of the unique direction of the light beam.

Today we do not know of many 3D video-mapping applications for edutainment, but many entertainment experiences can be reported. 3D video-mapping can be used to show different hypothetical solutions for a building facade in different historical ages, or to simulate a virtual restoration, as was done for the original colour simulation of the famous "Ara Pacis" in Rome (Fig. 6).

Fig. 6. A frame of the Ara Pacis work. On the main facade of the monument, a high-definition projection presents the hypothesis of the ancient colors.
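Pre-distorting the video "in compliance with the 3D volumes of the wall" amounts, for each planar patch of the facade, to a projective (homography) warp. A minimal sketch under our own assumptions (illustrative corner coordinates; the classic square-to-quad coefficients from texture mapping, not any specific projection-mapping tool): build the function that maps the unit square of the video frame onto an arbitrary quadrilateral measured on the facade.

```python
def square_to_quad(quad):
    """Return f(u, v) mapping the unit square onto the given quad.

    quad: corner coordinates for (0,0), (1,0), (1,1), (0,1), in order.
    Uses the standard projective-map coefficients; a parallelogram
    degenerates to a plain affine map (g = h = 0).
    """
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = quad
    sx, sy = x0 - x1 + x2 - x3, y0 - y1 + y2 - y3
    if sx == 0 and sy == 0:          # parallelogram: affine case
        g = h = 0.0
    else:
        dx1, dy1 = x1 - x2, y1 - y2
        dx2, dy2 = x3 - x2, y3 - y2
        den = dx1 * dy2 - dy1 * dx2
        g = (sx * dy2 - sy * dx2) / den
        h = (dx1 * sy - dy1 * sx) / den
    a, b, c = x1 - x0 + g * x1, x3 - x0 + h * x3, x0
    d, e, f = y1 - y0 + g * y1, y3 - y0 + h * y3, y0

    def transform(u, v):
        w = g * u + h * v + 1.0     # projective divisor
        return (a * u + b * v + c) / w, (d * u + e * v + f) / w

    return transform

# Illustrative facade patch seen by the projector as a skewed quad.
warp = square_to_quad([(0, 0), (4, 0), (3, 3), (1, 2)])
print(warp(0.0, 0.0), warp(1.0, 1.0))  # corners land on the quad's corners
```

Warping every pixel (or, in practice, the vertices of a textured mesh) through this function is what makes the flat video appear undistorted on the non-frontal wall.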

#### **3.2.6 Collaborative AR**

The "Second Life" explosion, during the years 2008-2009, brought to light the metaverses as 3D collaborative on-line engines, with many usable capabilities and a high level of production of new 3D interactive contents and of avatar customization. The aesthetic results, and the character animation, reached a very high technical level.


Other useful aspects of the generic MMORPG (*Massively Multiplayer Online Role-Playing Game*, a type of computer role-playing game set within a virtual world) allow developing and gathering together big content-making communities and content generators, in particular script generators that extend the applicative possibilities and the usable devices. "Second Life" and "OpenSim", for instance, have been used by T. Lang & B. MacIntyre, two researchers of the Georgia Institute of Technology in Atlanta and of the Ludwig-Maximilians Universität in Munich, as a rendering engine able to publish avatars and objects in AR (see http://arsecondlife.gvu.gatech.edu). This approach makes the AR experience not only interactive, but also shareable by the users, pursuant to the Web 2.0 paradigm. We can state that it is the first low-cost home tele-presence experience.

Fig. 7. The SL avatar in AR, in real scale, and controlled by a brain interface.

An effective use of the system was achieved in the movie "Machinima Futurista" by J. Vandagriff, who reinvented the Italian movie "Vita Futurista" in AR. Another recent experiment, by D. Carnovale, links Second Life, AR and a Brain Control Interface to interact at home with one's own avatar, in real scale, using nothing but one's own thoughts (Fig. 7).

We do not yet know of a specific use in the VCH area, but we can surely think of an avatar in AR for school experiences, a museum guide, an experimental archaeology co-worker, a cultural entertainer for kids, a reproduction of ancient characters, and so on.

#### **3.2.7 Stereo & auto-stereoscopic AR**

The Stereoscopic/Auto-Stereoscopic visualization is used in VCH as an instrument for improving the spatial exploration of objects and environments, addressed both to entertainment and to experimental research, and it draws on the industrial research, applied over the last 10 years, that considers the stereoscopic use of AR as fundamental.

The ASTOR project was presented at Siggraph 2004 by some teaching fellows of the Royal Institute of Technology of Stockholm, Sweden (Olwal et al., 2005), but thanks to the new auto-stereoscopic devices its possibilities will increase in the near future. Currently, the only suite that includes Stereoscopy and Auto-Stereoscopy in marker-based/marker-free AR applications is Linceo VR by Seac02, which manages the real-time binocular rendering using a prismatic lens placed over ordinary monitors.

Mobile devices like the "Nintendo 3DS" and the "LG Optimus 3D" will positively influence its use for museum interactive didactics, in both indoor and outdoor experiences.

#### **3.2.8 Physics AR**

Recently, AR techniques have been applied to interactive simulation processes, driven by the user's actions or reactions, as happens, for instance, with the simulation of the physical behaviour of objects (such as DNA helices). The AR simulation of physical phenomena can be used in the VCH to visualise tissues, elastic elements, ancient earthquakes or tsunamis, fleeing crowds, and so on.

#### **3.2.9 Robotic AR**

Robotics and AR have so far been matched in some experiments that put a web-cam on different drones and then create AR sessions with and without markers. There are examples of such experiments realized with hexacopters and land vehicles. "AR Drone" is the first UAV (Unmanned Aerial Vehicle) controlled via Wi-Fi through an iPhone or iPad, which transmits in real time the images of the flight together with augmented information (see www.parrot.com). "Linceo VR" includes an SDK to control the WowWee Rovio. With the "Lego Mindstorm NXT Robot" and the "Flex HR" software, you can build domestic robotic experiences.

These open new ways for an AR virtual visit from a high vantage point, to places dangerous for a real visit, or underwater. The greatest difficulties are the limited Wi-Fi connection, the public security laws (because of the flying objects), and the flight autonomy and/or robot movement.

Fig. 8. A frame of an AR Drone in action. On the iPhone screen the camera frames are shown in real time with superimposed information. Currently it is used as a game device; in the future it could be used for VCH aerial explorations.
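The "superimposed information" shown in Fig. 8 boils down to compositing an overlay onto each decoded video frame. A minimal sketch of that step, using synthetic arrays in place of a real Wi-Fi video stream (the telemetry panel and all sizes are hypothetical):

```python
import numpy as np

# Sketch: alpha-blending an "augmented" overlay (e.g. a flight-telemetry
# panel) onto a camera frame, as an AR drone client might do. The frame and
# overlay here are synthetic arrays; a real client would decode the drone's
# Wi-Fi video stream instead.

def blend_overlay(frame, overlay, alpha=0.6):
    """Blend overlay onto frame with constant opacity alpha."""
    f = frame.astype(np.float32)
    o = overlay.astype(np.float32)
    return (alpha * o + (1.0 - alpha) * f).astype(np.uint8)

frame = np.full((480, 640, 3), 100, dtype=np.uint8)  # fake grey camera frame
panel = np.zeros_like(frame)
panel[10:60, 10:200] = 255                           # fake white telemetry box
out = blend_overlay(frame, panel, alpha=0.5)
```

In a real pipeline this runs once per frame, so the blend must stay well under the ~33 ms budget of a 30 fps stream.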

#### **3.3 Commercial software and open-source**

We can list the most popular AR software currently in use. The commercial companies are the German Metaio (Munich, San Francisco), the Italian SEAC02 (Turin), the Italian Inglobe Technologies (Ceccano, FR), and the French Total Immersion (Los Angeles, Paris, London, Hong Kong). Every software house proposes different solutions in terms of rendering performance, 3D editing tools, format import from external 3D modelling software, type and quantity of control devices, etc. The commercial offer is based on different user licences. Some open/free suites are available under different GNU licences; the best known is "ARToolKit", usable in its original form, Flash-included ("FLARToolKit"), or with the more user-friendly interface of "FLARManager". But more and more other solutions are coming.

The natural mission of VCH is spatial exploration, whether of an open environment (the ground, the archaeological site, the ancient city), a closed environment (the museum or a building), or a single object (the ruin, the ancient artefact). The user can live this experience in real time, perceiving himself as a unit and exploiting all his/her senses and movement to complete and improve the comprehension of the formal meaning he/she is surrounded by. Mobile technologies allow acquiring information consistent with the spatial position, while information technologies make the 3D virtual stereoscopic rendering of the reconstruction very realistic. VCH will pass through MR technologies, intended both as an informative cloud that involves, completes and deepens knowledge, and as the possibility to share a virtual visit with people online.

#### **4. New materials and methods**

For the aim of the restoration/reconstruction of structures with artistic or historical value, architectural heritage, cultural artefacts and archaeological materials, three steps have been fundamental until now: *acquiring* the real images, *representing* the real (or even virtualized) images, and *superimposing* the virtual information on the represented real images while keeping the virtual scene in sync with reality. But with the latest enhanced technologies, each of these steps can find interesting improvements, and even a fourth step can be implemented. We refer to the possibility that the images can be acquired and represented with auto-stereoscopic technologies, that a user can see his/her gestures mapped onto the represented images, and that these gestures can be used to virtually interact with the real or modelled objects. So after a brief description of the standard techniques of image acquisition (paragraph 4.1), we will discuss, on the basis of our experiences, the new auto-stereoscopy possibilities (paragraph 4.2) and the new low-cost systems to record human gestures (paragraph 4.3) and to convert them into actions useful to modify the represented scenario (paragraph 4.4), for an immersive human-machine interaction.
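The four steps above (acquire, represent, superimpose, interact) form a pipeline. A schematic sketch, with stub stages whose names and data are purely illustrative, not an actual framework API:

```python
# Schematic sketch of the four-step AR pipeline described above. Each stage
# is a stub standing in for real acquisition, rendering, registration and
# gesture-handling code; all names and fields are illustrative.

def acquire():
    """Step 1: capture the real scene."""
    return {"frame": "raw image data"}

def represent(scene):
    """Step 2: build the (auto)stereoscopic representation."""
    scene["views"] = 8           # e.g. eight viewpoints for autostereoscopy
    return scene

def superimpose(scene):
    """Step 3: register virtual information on the represented scene."""
    scene["annotations"] = ["dimensions", "hidden parts"]
    return scene

def interact(scene, gesture):
    """Step 4: map a recorded user gesture to a change in the scene."""
    scene["last_gesture"] = gesture
    return scene

state = interact(superimpose(represent(acquire())), "rotate")
```

In a real system each stage would run per frame, with step 3 keeping the virtual layer in sync with the tracked camera pose.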

#### **4.1 3D Image acquisition**

Nowadays the possibility to obtain 3D data images of an object's appearance and to convert them into useful data comes mainly from manual measuring (Stojakovic and Tepavcevica, 2009), stereo-photogrammetry surveying, 3D laser-scanner apparatuses, or a mixture of them (Mancera-Taboada et al., 2010).

Stereo-photogrammetry allows the determination of the geometric properties of objects from photographic images. Image-based measurements have been carried out especially for huge architectural heritages (Jun et al., 2008) and with the application of spatial information technology (Feng et al., 2008), for instance by means of balloon images assisted with terrestrial laser scanning (Tsingas et al., 2008) or ad-hoc payload model helicopters (Scaioni et al., 2009), the so-called Unmanned Aerial Vehicles or UAVs (van Blyenburg, 1999), capable of high-resolution image acquisition.
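The geometric core of stereo-photogrammetry is triangulation: for a calibrated, rectified image pair, depth follows from disparity as Z = f·B/d, with f the focal length in pixels, B the camera baseline and d the disparity in pixels. A back-of-the-envelope sketch with illustrative values:

```python
# Depth from disparity for a rectified stereo pair: Z = f * B / d.
# f: focal length in pixels, B: baseline in metres, d: disparity in pixels.
# The numbers below are illustrative, not from a real survey.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A point 60 px of disparity away, seen by cameras 0.5 m apart:
z = depth_from_disparity(focal_px=1200.0, baseline_m=0.5, disparity_px=60.0)
# 1200 * 0.5 / 60 = 10.0 m
```

The same relation shows why wider baselines (balloon or UAV imagery) improve depth resolution for distant architectural surfaces.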

With a laser scanner, the spatial coordinates of the surface points of the objects under investigation can be obtained (Mancera-Taboada et al., 2010; Costantino et al., 2010), as we did, for example, for the pieces of an ancient column of the archaeological site of Pompeii (see Fig. 9). However, according to our experience, the data acquired with a laser scanner are not so useful when it is necessary to share them on the web, given the huge amount of data generated, especially for large objects with many parts to be detailed. In addition, laser-scanning measurements can often be affected by errors of different natures, so analytical models must be applied to estimate the differential terms necessary to compute the object's curvature measures, and statistical analyses are generally adopted to overcome the problem (Crosilla et al., 2009).
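One common way to tame the "huge amount of data" before web sharing is voxel-grid decimation: partition space into small cubes and keep one representative point (here the centroid) per occupied cube. A sketch with a synthetic cloud and an illustrative voxel size, not the actual processing we applied to the Pompeii scans:

```python
import numpy as np

# Sketch: voxel-grid decimation of a laser-scan point cloud. Each occupied
# voxel keeps the centroid of its points. The cloud and voxel size are
# illustrative.

def voxel_downsample(points, voxel=0.05):
    """points: (N, 3) array -> (M, 3) array of per-voxel centroids, M <= N."""
    keys = np.floor(points / voxel).astype(np.int64)       # voxel index per point
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    n = inverse.max() + 1
    sums = np.zeros((n, 3))
    counts = np.zeros(n)
    np.add.at(sums, inverse, points)                       # accumulate per voxel
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]                          # centroids

rng = np.random.default_rng(0)
cloud = rng.random((10000, 3))            # fake scan filling a 1 m cube
small = voxel_downsample(cloud, voxel=0.1)
```

With a 10 cm voxel over a 1 m cube, at most 1000 centroids survive regardless of how dense the original scan was.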

Fig. 9. (a) A picture from the Pompeii archaeological site (Italy) with parts of an ancient column on the right, (b) pieces of a laser-scanned column, (c) the column after the virtual reconstruction.



In any case, the 3D data images can be usefully adopted both for VR and for AR. In the first case the data are used to build virtual environments, more or less detailed and even linked to a cartographic model (Arriaga & Lozano, 2009); in the second, the data are used to superimpose useful information (dimensions, distances, virtual "ghost" representations of hidden parts, ...) over the real scene, going beyond mere presentation purposes towards being a tool for analytical work. The superimposed information must be deduced from analysis of the building materials, structural engineering criteria and architectural aspects.

The latest technological possibilities allow online image acquisition for auto-stereoscopic effects. These present the fundamental advantage that 3D vision can be realized without the need for the user to wear special glasses, as is currently done. In particular we refer to a system we have adopted and improved, detailed in the following paragraph.

#### **4.2 Auto-stereoscopy**

The "feeling of presence" in an AR scene is a fundamental requirement. Movements, virtual interaction with the represented environment and the use of interfaces are possible only if the user "feels the space" and understands where all the virtual objects are located. But the level of immersion in AR highly depends on the display devices used. Strictly regarding the criteria of representation, the general approach of scenario visualization helps to understand the dynamic behaviour of a system better as well as faster. But the real boost in representation has come, in recent years, from a 3D approach, which also helps in communicating and discussing decisions with non-experts. The creation of 3D visual information, or the representation of an "illusion" of depth in a real or virtual image, is generally referred to as Stereoscopy. One strategy to obtain this is through eyeglasses, worn by the viewer, used to combine separate images from two offset sources or to filter offset images from a single source, one for each eye. But eyeglass-based systems can suffer from uncomfortable eyewear, control wires, cross-talk levels up to 10% (Bos, 1993), image flickering and reduction in brightness. On the other hand, AutoStereoscopy is the technique of displaying stereoscopic images without the use of special headgear or glasses on the part of the viewer. Viewing freedom can be enhanced by presenting a large number of views so that, as the observer moves, a different pair of views is seen from each new position, or by tracking the position of the observer and updating the display optics so that the observer is maintained in the AutoStereoscopic condition (Woodgate et al., 1998).
Since AutoStereoscopic displays require no viewing aids, they seem to be a more natural long-term route to 3D display products, even if they can present loss of image quality (typically caused by inadequate display bandwidth) and cross-talk between image channels (due to scattering and aberrations of the optical system). In any case we want here to focus on AutoStereoscopy for realizing what we believe to be, at the moment, the most interesting 3D representations for AR.

Current AutoStereoscopic systems are based on different technologies, which include lenticular lenses (arrays of magnifying lenses), parallax barriers (alternating points of view), volumetric displays (via the emission, scattering, or relaying of illumination from well-defined regions in space), electro-holographic displays (holographic optical images are projected for the two eyes and reflected by a convex mirror onto a screen), and light field displays (consisting of two layered parallax barriers).


Our efforts are currently devoted to four main aspects: user comfort, the amount of data to process, image realism, and dealing with both real objects and graphical models. In this view, our collaboration involves the Alioscopy company (www.alioscopy.com) and their 3D AutoStereoscopy visualization system which, despite not completely satisfying all the requirements, remains one of the most affordable systems in terms of cost and time effort. The 3D monitor is based on a standard Full HD LCD, and its capability of rendering 8 points of view is called "multiscope". Each pixel of the panel combines three colour sub-pixels (red, green and blue), and the arrays of lenticular lenses cast different images onto each eye, since they magnify a different point of view for each eye, viewed from slightly different angles (see Fig. 10).

Fig. 10. (a) LCD panel with lenticular lenses, (b) Eight points of view of the same scene from eight cameras.

This results in a state-of-the-art visual stereo effect rendered with typical 3D software such as 3ds Max, Maya, Lightwave, and XSI. The display uses 8 interleaved images to produce the AutoStereoscopic 3D effect with multiple viewpoints. We realized 3D images and videos, adopting two different approaches for graphical and real models. The graphical model is easily managed thanks to the 3D Studio Max Alioscopy plug-in, which is however not usable for real images, for which a multi-camera set is necessary to recover the 8 viewpoints.

Fig. 11. The eight cameras with (a) more or (b) less spacing between them, focusing the object at different distances.
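The "8 interleaved images" idea can be illustrated with a deliberately simplified interleaving scheme: each output pixel column takes its colour from one of the eight views, cycling across columns. Real lenticular layouts (including Alioscopy's) interleave at the sub-pixel level and depend on the panel's lens geometry, so this is only the principle, not the actual mapping:

```python
import numpy as np

# Simplified sketch of multi-view interleaving for a lenticular display:
# output column c takes its pixels from view (c mod 8). Real panels
# interleave per sub-pixel according to the lens pitch; this only
# illustrates the principle.

def interleave_views(views):
    """views: (8, H, W, 3) array -> interleaved (H, W, 3) composite."""
    n, h, w, c = views.shape
    out = np.empty((h, w, c), dtype=views.dtype)
    for col in range(w):
        out[:, col, :] = views[col % n, :, col, :]
    return out

# Eight tiny synthetic views, each filled with its own index 0..7:
views = np.stack([np.full((4, 16, 3), i, dtype=np.uint8) for i in range(8)])
mixed = interleave_views(views)
```

The lenticular lenses then route each interleaved column bundle to a different eye position, reconstructing the eight viewpoints in space.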



Fig. 12. (a) Schematization of the positions of the cameras relative to each other and to the scene, (b) the layout we adopted for them.

The virtual or real captured images are then mixed, by means of OpenGL tools, in groups of eight to realize AutoStereoscopic 3D scenes. We paid special attention to positioning the cameras to obtain a correct capture of a model or a real image: in particular, the cameras must be equally spaced (6.5 cm is the optimal distance) and each camera must "see" the same scene but from a different angle.
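The rig geometry just described can be computed directly: eight cameras 6.5 cm apart, each "toed in" so that it sees the same convergence point. A sketch returning each camera's lateral offset and toe-in angle (the 2 m target distance is illustrative):

```python
import math

# Sketch of the eight-camera rig geometry: cameras spaced 6.5 cm apart,
# each rotated ("toed in") toward a common convergence point at distance D.
# The target distance is illustrative.

def rig_layout(n_cams=8, spacing=0.065, distance=2.0):
    """Return (lateral offset in m, toe-in angle in degrees) per camera."""
    half = (n_cams - 1) / 2.0
    layout = []
    for i in range(n_cams):
        x = (i - half) * spacing                        # offset from rig centre
        angle = math.degrees(math.atan2(x, distance))   # aim at the target
        layout.append((round(x, 4), round(angle, 3)))
    return layout

cams = rig_layout()   # symmetric offsets from -0.2275 m to +0.2275 m
```

Larger spacing or a closer target increases the toe-in angles, which is what Fig. 11 shows when the object is focused at different distances.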

Fig. 13 reports screen captures of three realized videos. The images show blur effects when reproduced in a non-AutoStereoscopic way. In particular, the image in Fig. 13c reproduces eight numbers (from 1 to 8), and the user sees only one number at a time depending on his/her angular position with respect to the monitor.

Fig. 13. Some of the realized images for 3D Full HD LCD monitors based on lenticular lenses. Image (c) represents 8 numbers, seen one at a time by the user depending on his/her angle of view.

The great advantage of AutoStereoscopic systems consists of an "immersive" experience for the user, with unmatched 3D pop-out and depth effects on video screens, and this without the discomfort of any kind of glasses. On the other hand, a large amount of data must be processed for every frame, since each frame is formed by eight images at the same time. Current personal computers are, in any case, capable of dealing with this amount of data thanks to the powerful graphics cards available today.
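The rough arithmetic behind that "large amount of data" is easy to make explicit: eight uncompressed Full HD views per composite frame, at 24-bit colour and an assumed 30 frames per second:

```python
# Back-of-the-envelope data rate for eight-view Full HD autostereoscopy:
# 8 views per frame, 1920x1080 pixels, 3 bytes per pixel, 30 fps assumed.

views, width, height, bytes_per_px, fps = 8, 1920, 1080, 3, 30

per_frame = views * width * height * bytes_per_px   # bytes per composite frame
per_second = per_frame * fps                        # raw bytes per second

# per_frame is about 47.5 MiB; per_second is about 1.4 GiB/s uncompressed,
# which is why a capable graphics card is needed for real-time playback.
```

Compression and GPU-side interleaving reduce what actually crosses the bus, but the raw figure explains the processing load mentioned above.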

#### **4.3 Input devices**


The human-machine interaction has historically been realized by means of conventional input devices, namely keyboard, mouse, touch-screen panel, graphic tablet, trackball and pen-based input in 2D environments, and three-dimensional mouse, joystick and joypad in 3D space. But new advanced user interfaces can be much more user-friendly, can ensure higher user mobility and allow new possibilities of interaction. The new input devices for advanced interactions take advantage of the possibility of measuring human static postures and body motions, translating them into actions in an AR scenario. But there are so many different human static and dynamic posture measurement systems that a classification can be helpful. For this, a suggestion comes from one of our works (Saggio & Sbernini, 2011), completing a previous proposal (Wang, 2005), which refers to a schematization based on the position of the sensors and the sources (see Fig. 14). Specifically:

- *Inside-In Systems*: the sensors and sources are on the user's body.
- *Inside-Out Systems*: the sensors are positioned on the body, the sources are somewhere else in the world.
- *Outside-In Systems*: the sensors are somewhere in the world, the sources are attached to the body.
- *Outside-Out Systems*: both sensors and sources are not (directly) placed on the user's body.

The *Outside-In Systems* typically involve optical techniques with markers, which are the sources, strategically placed on the wearer's body parts which are to be tracked. Cameras, which are the sensors, capture the wearer's movement, and the motion of those markers can be tracked and analyzed. An example of application can be found in the "Lord of the Rings" movie productions, to track movements for the CGI "Gollum" character. This kind of system is widely adopted (Gavrila, 1999), since it is probably the oldest and most perfected one, but the accuracy and robustness of the AR overlay process can be greatly influenced by the quality of the calibration obtained between camera and camera-mounted tracking markers (Bianchi et al., 2005).

Fig. 14. Classifications of systems for measuring postures and kinematics of the human body.
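The taxonomy of Fig. 14 follows mechanically from where sensors and sources sit, so it can be encoded directly. A small sketch (the enum and function names are our own, for illustration):

```python
from enum import Enum

# Sketch: encoding the sensor/source taxonomy of Fig. 14. The class name
# follows directly from the two placements: first word = sensor placement,
# second word = source placement.

class Placement(Enum):
    ON_BODY = 1
    IN_WORLD = 2

def classify(sensors, sources):
    """Return the system class for given sensor and source placements."""
    first = "Inside" if sensors is Placement.ON_BODY else "Outside"
    second = "In" if sources is Placement.ON_BODY else "Out"
    return f"{first}-{second}"

# e.g. optical motion capture: cameras (sensors) in the world,
# markers (sources) on the body -> "Outside-In"
mocap = classify(Placement.IN_WORLD, Placement.ON_BODY)
```

The four combinations exhaust the taxonomy, which is why the classification is complete.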


In recent years our research group realized systems capable of measuring human postures and movements and of converting them into actions or commands for PC-based applications. This can be considered an example of Human-Computer Interaction (HCI) which, more specifically, can be defined as the study, planning and design of the interaction between users and computers. We designed and realized different systems which can be framed into both the *Inside-Out* and the *Inside-In* ones. An example comes from our "data glove", named the *Hiteg glove* after our acronym (*Health Involved Technical Engineering Group*). The glove is capable of measuring all the degrees of freedom of the human hand, and the recorded movements can then be simply represented on a screen (Fig. 15a) or adopted to virtually interact with, manipulate and handle objects in a VR/AR scenario (Fig. 15b).

Fig. 15. The Hiteg data glove measures human hand movements, (a) simply reproduced on a computer screen or (b) utilized to handle a virtual robotic arm.

Fig. 16. The restoration/reconstruction of an ancient column. From (a) to (d), the passages to integrate a last piece.
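A data glove of this kind ultimately needs a mapping from raw sensor readings to joint angles. A sketch of the usual per-sensor linear calibration (flat hand vs. closed fist); the readings and ranges below are illustrative, not the Hiteg glove's actual calibration:

```python
# Sketch: calibrating a flex-sensor reading to a finger-joint angle via
# per-sensor min/max values recorded with the hand flat and in a fist.
# All raw values are illustrative, not the Hiteg glove's real calibration.

def raw_to_angle(raw, raw_flat, raw_fist, max_angle=90.0):
    """Linearly map a raw reading onto [0, max_angle] degrees, clamped."""
    t = (raw - raw_flat) / (raw_fist - raw_flat)
    t = min(max(t, 0.0), 1.0)        # clamp outside the calibrated range
    return t * max_angle

angle = raw_to_angle(raw=600, raw_flat=400, raw_fist=800)   # halfway flexed
```

Per-user calibration (recapturing `raw_flat`/`raw_fist` for each wearer) is what makes the same glove usable across different hand sizes.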

The *Inside-Out Systems* deal with sensors attached to the body while the sources are located somewhere else in the world. Examples are the systems based on accelerometers (Fiorentino et al., 2011; Mostarac et al., 2011; Silva et al., 2011), MEMS (Bifulco et al., 2011), ensembles of inertial sensors such as accelerometers, gyroscopes and magnetometers (Benedetti, Manca et al., 2011), RFID, or IMUs, which we applied to successfully measure movements of the human trunk (Saggio & Sbernini, 2011). Within this frame, some research groups and commercial companies have developed sensorized garments for all parts of the body over the past 10-15 years, obtaining interesting results (Giorgino et al., 2009; Lorussi, Tognetti et al., 2005; Post et al., 2000).
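IMU-based posture measurement of the kind just cited typically fuses a gyroscope (accurate short-term, but drifting) with an accelerometer (noisy, but drift-free) to track a tilt angle. A sketch of the standard complementary filter; the sample data and the 0.98 blend factor are illustrative, not our trunk-measurement setup:

```python
# Sketch: complementary filter fusing gyroscope and accelerometer data to
# estimate a tilt angle, the usual trick behind IMU posture tracking.
# The blend factor and the synthetic data are illustrative.

def complementary_filter(angle, gyro_rate, accel_angle, dt, k=0.98):
    """Blend integrated gyro rate (short-term) with accel angle (long-term)."""
    return k * (angle + gyro_rate * dt) + (1.0 - k) * accel_angle

# Simulate a stationary trunk held at 10 degrees: the gyro reads zero rate,
# the accelerometer keeps reporting 10 degrees, and the estimate converges.
angle = 0.0
for _ in range(200):
    angle = complementary_filter(angle, gyro_rate=0.0, accel_angle=10.0, dt=0.01)
```

The factor k trades gyro responsiveness against accelerometer noise rejection; values near 0.98 are a common starting point.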

The *Inside-In Systems* are particularly used to track body part movements and/or relative movements between specific parts of the body, having no knowledge of the 3D world the user is in. In such systems, sensors and sources are for the most part realized within the same device and are placed directly on the body segment to be measured, or even sewn inside the user's garment. The design and implementation of sensors that are minimally obtrusive, have low power consumption, and can be attached to the body or be part of clothes, combined with wireless technology, allows obtaining data over an extended period of time and without significant discomfort. Examples of *Inside-In Systems* come from the application of strain gauges for stress measurements (Ming et al., 2009), conductive-ink based materials (Koehly et al., 2006) by which it is possible to realize bend/touch/force/pressure sensors, and piezoelectric materials or PEDOT:PSS basic elements for realizing bend sensors (Latessa et al., 2008), and so on.

The *Outside-Out Systems* have both sensors and sources not directly placed on the user's body but in the surrounding world. Consider, for instance, the new Wireless Embedded Sensor Networks, which consist of sensors embedded in objects such as an armchair. The sensors detect the human posture and, on the basis of the recorded measures, furnish information to modify the shape of the armchair to best fit the user's body (even taking into account environmental changes). Another application is the tracking of the hand's motions, utilized as a pointing device in a 3D environment (Colombo et al., 2003).

In the near future it is probable that the winning role will be played by a technology which takes advantage of mixed systems, i.e. one including only the most relevant advantages of *Outside-In* and/or *Inside-Out* and/or *Inside-In* and/or *Outside-Out Systems*. In this sense an interesting application comes from the Fraunhofer IPMS, where researchers have developed a bidirectional micro-display which could be used in Head-Mounted Displays (HMDs) for gaze-triggered AR applications. The chips contain both an active OLED matrix and integrated photo-detectors, with a front brightness higher than 1500 cd/m². The combination of both matrices in one chip is an essential possibility for system integrators to design smaller, lightweight and portable systems with both functionalities.

#### **4.4 Human-computer interaction**

As stated at the beginning of paragraph 4, we want here to point out on the utilization of the measurements of the human postures and kinematics in order to convert them into actions useful to somehow virtually interact with represented real or virtual scenario. We are representing here the concept of Human-Computer Interaction (HCI), with functionality and usability as its major issues (Te'eni et al., 2007). In literature there are several examples of HMI (Karray et al., 2008).

78 Augmented Reality – Some Emerging Application Areas

The *Inside-Out Systems* deal with sensors attached to the body while sources are located somewhere else in the world. Examples are the systems based on accelerometers (Fiorentino et al., 2011; Mostarac et al., 2011; Silva et al., 2011), MEMS (Bifulco et al., 2011), ensemble of inertial sensors such as accelerometers, gyroscopes and magnetometers (Benedetti, Manca et al., 2011), RFID, or IMUs which we applied to successfully measure movements of the human trunk (Saggio & Sbernini, 2011). Within this frame, same research groups and commercial companies have developed sensorized garments for all the parts of the body, over the past 10-15 years, obtaining interesting results (Giorgino et al., 2009; Lorussi,

The *Inside-In Systems* are particularly used to track body part movements and/or relative movements between specific parts of the body, having no knowledge of the 3D world the user is in. Such systems are for sensors and sources which are for the most part realized within the same device and are placed directly on the body segment to be measured or even sewed inside the user's garment. The design and implementation of sensors that are minimally obtrusive, have low-power consumption, and that can be attached to the body or can be part of clothes, with the employ of wireless technology, allows to obtain data over an extended period of time and without significant discomfort. Examples of the *Inside-In Systems* come from application of strain gauges for stress measurements (Ming et al., 2009), conductive ink based materials (Koehly et al., 2006) by which it is possible to realize bend / touch / force / pressure sensors, piezoelectric materials or PEDOT:PSS basic elements for

The *Outside-Out Systems* consider both sensors and sources not directly placed on the user's body but in the surrounding world. Let's consider, for instance, the new Wireless Embedded Sensor Networks which consist of sensors embedded in object such as an armchair. The sensors detect the human postures and, on the basis of the recorded measures, furnish information to modify the shape of the armchair to best fit the user body (even taking into account the environment changes). Another application is the tracking of the hand's motions utilized as a pointing device in a 3D environment (Colombo et al., 2003). In a next future it is probable that the winning rule will be played by a technology which will take advantages from mixed systems, i.e. including only the most relevant advantages of *Outside-In* and/or *Inside-Out* and/or *Inside-In* and/or *Outside-Out Systems*. In this sense an interesting application comes from the Fraunhofer IPMS, where the researchers have developed a bidirectional micro-display, which could be used in Head-Mounted Displays (HMD) for gaze triggered AR applications. The chips contain both an active OLED matrix and therein integrated photo-detectors, with a Front brightness higher than 1500 cd/m². The combination of both matrixes in one chip is an essential possibility for system integrators to

As stated at the beginning of paragraph 4, we want here to point out on the utilization of the measurements of the human postures and kinematics in order to convert them into actions useful to somehow virtually interact with represented real or virtual scenario. We are representing here the concept of Human-Computer Interaction (HCI), with functionality and usability as its major issues (Te'eni et al., 2007). In literature there are several examples

design smaller, lightweight and portable systems with both functionalities.

Tognetti et al., 2005; Post et al., 2000).

**4.4 Human-computer interaction** 

of HMI (Karray et al., 2008).

realizing bend sensors (Latessa et al., 2008) and so on.
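Several of the wearable systems described above rely on resistive bend sensors read through a voltage divider. As a minimal illustrative sketch (the supply voltage, divider resistor and linear calibration constants below are hypothetical, not values from the chapter), the divider output can be converted back to a joint angle as follows:

```python
def bend_sensor_angle(v_out, v_in=3.3, r_fixed=10_000.0,
                      r_flat=25_000.0, r_per_degree=150.0):
    """Convert the voltage-divider output of a resistive bend sensor
    to an approximate bend angle in degrees.

    Divider: v_out = v_in * r_fixed / (r_fixed + r_sensor), hence
    r_sensor = r_fixed * (v_in - v_out) / v_out. An (illustrative)
    linear calibration r_sensor = r_flat + r_per_degree * angle
    then yields the angle.
    """
    if not 0.0 < v_out < v_in:
        raise ValueError("v_out must lie strictly between 0 and v_in")
    r_sensor = r_fixed * (v_in - v_out) / v_out
    return (r_sensor - r_flat) / r_per_degree
```

With these assumed constants, a flat sensor (25 kΩ) yields about 0.94 V and an angle near 0°; larger resistances map linearly to larger bend angles.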

In recent years our research group has realized systems capable of measuring human postures and movements and of converting them into actions or commands for PC-based applications. This can be regarded as an example of Human-Computer Interaction (HCI) which, more specifically, can be defined as the study, planning and design of the interaction between users and computers. We designed and realized different systems, which can be framed into both the *Inside-Out* and the *Inside-In* categories. An example comes from our "data glove", named the *Hiteg glove* after the acronym of our group (*Health Involved Technical Engineering Group*). The glove is capable of measuring all the degrees of freedom of the human hand, and the recorded movements can then be simply represented on a screen (Fig. 15a) or used to virtually interact with, manipulate and handle objects in a VR/AR scenario (Fig. 15b).

Fig. 15. The Hiteg data glove measures human hand movements, (a) simply reproduced on a computer screen or (b) used to handle a virtual robotic arm.
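Converting glove readings into commands can be as simple as matching the vector of finger-bend angles against stored gesture templates. The sketch below is a hypothetical nearest-centroid classifier (the gesture names and template values are illustrative, not the chapter's actual software):

```python
import math

# Hypothetical gesture templates: mean bend (degrees) of the five fingers,
# ordered thumb -> little finger.
GESTURES = {
    "open_hand": [5, 5, 5, 5, 5],
    "fist":      [80, 85, 85, 85, 80],
    "point":     [70, 5, 85, 85, 80],   # only the index finger extended
}

def classify_gesture(bend_angles):
    """Return the template gesture closest (Euclidean distance) to the
    measured finger-bend vector."""
    return min(GESTURES, key=lambda name: math.dist(GESTURES[name], bend_angles))
```

Each recognized gesture can then be bound to an application command (grab, release, select), which is the essence of using the glove as an HCI device.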

Fig. 16. The restoration/reconstruction of an ancient column. From (a) to (d), the steps to integrate the last piece.


Fig. 17. (a) The pieces of an ancient amphora are (b) virtually reproduced in a VR scenario, and the hand gestures measured with the Hiteg glove are used (c) to virtually interact or (d) to retrieve additional information about the selected piece.

On the basis of the *Hiteg glove* and of home-made software, we realized a project for the virtual restoration/reconstruction of historical artifacts. It starts by acquiring the spatial coordinates of each part of the artifact to be restored/reconstructed by means of laser scanner facilities. Each of these pieces is then virtually represented in a VR scenario, and the user can manipulate them to virtually explore the possible restoration/reconstruction steps, detailing each single maneuver (see Fig. 16 and Fig. 17).
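Virtually fitting a scanned fragment into place amounts to rigidly aligning its point cloud to a target pose. The chapter does not detail its software, so the sketch below shows one standard way this can be done, the Kabsch method, which recovers the best-fit rotation and translation between two sets of corresponding points:

```python
import numpy as np

def rigid_align(source, target):
    """Best-fit rotation R and translation t mapping the Nx3 point set
    `source` onto the corresponding Nx3 point set `target`, via the
    Kabsch algorithm (SVD of the cross-covariance matrix)."""
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t
```

Given a few corresponding points picked on the fragment and on the column (or amphora), `R` and `t` snap the laser-scanned piece into its candidate position in the VR scene.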

We also developed noninvasive systems to measure static and dynamic postures of the human trunk. The results are illustrated in Fig. 18a,b, where sensors are applied to a home-made dummy, capable of reproducing the real trunk movements of a person, and the measured positions are replicated by an avatar on a computer screen.
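For static postures, the trunk inclination can be estimated directly from the gravity components measured by a body-worn accelerometer. The sketch below uses the standard tilt formulas (the axis convention, x forward, y lateral, z vertical, is an assumption for illustration):

```python
import math

def trunk_tilt_deg(ax, ay, az):
    """Static tilt angles (degrees) from accelerometer gravity components:
    pitch = forward/backward lean, roll = lateral lean.
    Assumes x points forward, y laterally, z along the spine (upward)."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll
```

An upright sensor reads (0, 0, 1) g and yields zero tilt; leaning the dummy forward shifts gravity onto the x axis and the pitch grows accordingly, which is what the avatar replays on screen.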


Fig. 18. (a) Lateral and (b) frontal measured trunk movements are replicated on a PC screen.

Thanks to the measured movements, users can see themselves directly immersed in an AR scenario, and their gestures can manipulate the virtually represented objects. But mixed reality can be pushed toward even more sophisticated possibilities. In fact, by adopting systems that furnish feedback to the user, it is possible to virtually touch a real object seen on a PC screen while having the sensation that the touch is real. So, the usual one-way communication between human and computer can become a bidirectional information transfer providing a user-interface return channel (see Fig. 19).

Fig. 19. Information exchange between user and computer
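The return channel of Fig. 19 can be sketched as a loop that reads the tracked hand, updates the virtual scene, and sends a stimulus back to the user. The example below uses a simple penetration-depth spring model (Hooke's law); the sensor, renderer and actuator callables are hypothetical placeholders, not the chapter's actual hardware interfaces:

```python
def contact_force(finger_pos, surface_height=0.0, stiffness=200.0):
    """Spring-model force feedback: proportional to how far the tracked
    fingertip has penetrated a virtual surface (F = k * depth)."""
    penetration = surface_height - finger_pos
    return stiffness * penetration if penetration > 0 else 0.0

def interaction_step(read_sensor, render, send_feedback):
    """One cycle of the bidirectional user<->computer exchange:
    measure the posture -> update the scene -> return a haptic stimulus."""
    pos = read_sensor()               # e.g. fingertip height from the glove
    render(pos)                       # update the VR/AR scene
    send_feedback(contact_force(pos)) # drive the haptic actuator
```

When the fingertip stays above the virtual surface the returned force is zero; as soon as it "penetrates" the object, the actuator command grows with depth, giving the sensation of a real contact.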

We can receive sensations to the skin and muscles through the touch, weight and relative rigidity of non-existing objects. Generally speaking, we can speak of *force* or *haptic* feedback, properties which can be integrated into VR and AR scenarios. *Force* feedback consists of equipment that furnishes the physical sensation of resistance, triggering kinetic stimuli in the user, while *haptic* feedback consists of apparatus that interfaces with the user through the sense of touch (for its calibration, registration, and synchronization problems see Harders et al., 2009). In particular, so-called *affective haptics* involves the study and design of devices and systems that can elicit, enhance, or influence the emotional state of a human solely by means of the sense of touch. Four basic haptic (tactile) channels governing our emotions can be distinguished: *physiological changes* (e.g., heart beat rate, body temperature, etc.), *physical stimulation* (e.g., tickling), *social touch* (e.g., hug, handshake), and *emotional haptic design* (e.g., shape of device, material, texture) (Wikipedia, 2011). In our context we refer only to the *physical stimulation* channel, since it is the least emotional but the only one that makes sense for our purposes. Within it, the *tactile sensation* is the most relevant and includes pressure, texture, puncture, thermal properties, softness, wetness, friction-induced phenomena such as slip, adhesion, and micro failures, as well as local features of objects such as shape, edges, embossing and recessed features (Hayward et al., 2004). Vibro-tactile sensations, in the sense of the perception of oscillating objects in contact with the skin, can also be relevant for HCI. One possibility for simulating the grasping of virtual objects is to use small pneumatic pistons in a hand-worn solution, which makes it possible to achieve a low-weight and hence portable device (an example is the force-feedback glove from the HMI Laboratory at Rutgers University, Burdea et al., 1992).

## **5. Conclusion**

Given the importance of the restoration and/or reconstruction of structures with artistic or historical value, architectural heritage, cultural artefacts and archaeological materials, this chapter discussed the meaning and importance of AR applications within this frame. After a brief overview, we focused on the *restoration cycle*, underlining the cross-relations between Reality and Virtuality and how they support AR scenarios. An entire section was devoted to AR applications, their critical aspects, developments, and related software. Particular attention was paid to new materials and methods, so as to explore (more or less distant) future possibilities which can considerably improve the restoration/reconstruction processes of artefacts in terms of time, effort and cost.

#### **6. References**

Arriaga & Lozano (2009) "Space throughout time - Application of 3D virtual reconstruction and light projection techniques in the analysis and reconstruction of cultural heritage," *Proceedings of the 3rd ISPRS Int. Workshop 3D-ARCH 2009, 3D Virtual Reconstruction and Visualization of Complex Architectures*, Trento, Italy, Feb. 2009

Avery B., Sandor C. & Thomas B.H. (2009) *IEEE VR 2009*, Available from www.youtube.com/watch?v=BTPBNggldTw&feature=related

Benedetti M.G., Manca M., Sicari M., Ferraresi G., Casadio G., Buganè F. & Leardini A. (2011) "Gait measures in patients with and without AFO for equinus varus/drop foot," *Proceedings of IEEE International Symposium on Medical Measurements and Applications*, Bari, Italy, May 2011

Bianchi G., Wengert C., Harders M., Cattin P. & Szekely G. (2005) "Camera-Marker Alignment Framework and Comparison with Hand-Eye Calibration for Augmented Reality Applications," in *ISMAR*, 2005

Bifulco P., Cesarelli M., Fratini A., Ruffo M., Pasquariello G. & Gargiulo G. (2011) "A wearable device for recording of biopotentials and body movements," *Proceedings of IEEE Int. Symposium on Medical Measurements and Applications*, Bari, Italy, May 2011

Bos P.J. (1993) "Liquid-crystal shutter systems for time multiplexed stereoscopic displays," in: D.F. McAllister (Ed.), *Stereo Computer Graphics and Other True 3D Technologies*, Princeton University Press, Princeton, pp. 90-118

Bruner J. (1996) "Toward a theory of instruction," Harvard University Press: Cambridge, MA, 1996

Burdea G.C., Zhuang J.A., Rosko E., Silver D. & Langrama N. (1992) "A portable dextrous master with force feedback," *Presence: Teleoperators and Virtual Environments*, vol. 1, pp. 18-28

Colombo C., Del Bimbo A. & Valli A. (2003) "Visual Capture and Understanding of Hand Pointing Actions in a 3-D Environment," *IEEE Transactions on Systems, Man, and Cybernetics—Part B: Cybernetics*, vol. 33, no. 4, August 2003

Costantino D., Angelini M.G. & Caprino G. (2010) "Laser Scanner Survey of an Archaeological Site: Scala di Furno (Lecce, Italy)," *International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences*, Vol. XXXVIII, Part 5, Commission V Symposium, Newcastle upon Tyne, UK, 2010

Crosilla F., Visintini D. & Sepic F. (2009) "Automatic Modeling of Laser Point Clouds by Statistical Analysis of Surface Curvature Values," *Proceedings of the 3rd ISPRS International Workshop 3D-ARCH 2009, 3D Virtual Reconstruction and Visualization of Complex Architectures*, Trento, Italy

Cruz-Neira C., Sandin D.J., DeFanti T.A., Kenyon R.V. & Hart J.C. (1992) "The CAVE: Audio Visual Experience Automatic Virtual Environment," *Communications of the ACM*, vol. 35(6), 1992, pp. 64-72. DOI:10.1145/129888.129892

Feng M., Ze L., Wensheng Z., Jianxi H. & Qiang L. (2008) "The research and application of spatial information technology in cultural heritage conservation - case study on Grand Canal of China," *The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences*, Vol. XXXVII-B5, Beijing 2008, pp. 999-1005

Fiorentino M., Uva A.E. & Foglia M.M. (2011) "Wearable rumble device for active asymmetry measurement and corrections in lower limb mobility," *Proceedings of IEEE Int. Symposium on Medical Measurements and Applications*, Bari, Italy, May 2011

Gavrila D.M. (1999) "The visual analysis of human movement: a survey," *Computer Vision and Image Understanding*, 73(1), pp. 82-98, 1999

Georgios P., Evaggelia-Aggeliki K. & Theoharis T. (2001) "Virtual Archaeologist: Assembling the Past," *IEEE Comput. Graph. Appl.*, vol. 21, pp. 53-59, 2001

Giorgino T., Tormene P., Maggioni G., Capozzi D., Quaglini S. & Pistarini C. (2009) "Assessment of sensorized garments as a flexible support to self-administered post-stroke physical rehabilitation," *Eur. J. Phys. Rehabil. Med.*, 2009;45:75-84

Harders M., Bianchi G., Knoerlein B. & Székely G. (2009) "Calibration, Registration, and Synchronization for High Precision Augmented Reality Haptics," *IEEE Transactions on Visualization and Computer Graphics*, vol. 15, no. 1, Jan/Feb 2009, pp. 138-149

Haydar M., Roussel D., Maidi M., Otmane S. & Mallem M. (2010) "Virtual and augmented reality for cultural computing and heritage: a case study of virtual exploration of underwater archaeological sites," *Virtual Reality*, DOI 10.1007/s10055-010-0176-4


Hayward V., Astley O.R., Cruz-Hernandez M., Grant D. & Robles-De-La-Torre G. (2004) "Haptic interfaces and devices," *Sensor Review*, vol. 24, no. 1, 2004, pp. 16-29

Jun C., Yousong Z., Anping L., Shuping J. & Hongwei Z. (2008) "Image-Based Measurement of The Ming Great Wall," *The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences*, Vol. XXXVII-B5, Beijing 2008, pp. 969-973

Karray F., Alemzadeh M., Saleh J.A. & Arab M.N. (2008) "Human-Computer Interaction: Overview on State of the Art," *International Journal on Smart Sensing and Intelligent Systems*, vol. 1, no. 1, March 2008, pp. 137-159

Koehly R., Curtil D. & Wanderley M.M. (2006) "Paper FSRs and Latex/Fabric Traction Sensors: Methods for the Development of Home-Made Touch Sensors," *Proceedings of the 2006 International Conference on New Interfaces for Musical Expression (NIME06)*, Paris, France, 2006

Latessa G., Brunetti F., Reale A., Saggio G. & Di Carlo A. (2008) "Electrochemical synthesis and characterization of flexible PEDOT:PSS based sensors," *Sensors and Actuators B: Chemical* (2008), doi:10.1016/j.snb.2009.03.063

Lorussi F., Tognetti A., Tescioni M., Zupone G., Bartalesi R. & De Rossi D. (2005) "Electroactive Fabrics for Distributed, Comfortable and Interactive Systems," in *Techn. and Informatics*, vol. 117, Personalized Health Management Systems, Ed. Chris D. Nugent et al., IOS Press, 2005

Mancera-Taboada J., Rodríguez-Gonzálvez P., González-Aguilera D., Muñoz-Nieto Á., Gómez-Lahoz J., Herrero-Pascual J. & Picón-Cabrera I. (2010) "On the use of laser scanner and photogrammetry for the global digitization of the medieval walls of Avila," in: Paparoditis N., Pierrot-Deseilligny M., Mallet C., Tournaire O. (Eds), *IAPRS*, Vol. XXXVIII, Part 3A, Saint-Mandé, France, September 1-3, 2010

Maturana H. & Varela F. (1985) "Autopoiesis and Cognition: the Realization of the Living," I ed. 1980, tr. it. *Autopoiesi e cognizione. La realizzazione del vivente*, Editore Marsilio, Venezia, Italy, 1985

Milgram P. & Kishino F. (1994) "A Taxonomy of Mixed Reality Visual Displays," *IEICE Transactions on Information Systems*, vol. E77-D, no. 12, December 1994

Ming D., Liu X., Dai Y. & Wan B. (2009) "Indirect biomechanics measurement on shoulder joint moments of walker-assisted gait," *Proc. of IEEE Int. Conf. on Virtual Environments, Human-Computer Interfaces, and Measurement Systems*, Hong Kong, China, May 11-13, 2009

Mostarac P., Malaric R., Jurčević M., Hegeduš H., Lay-Ekuakille A. & Vergallo P. (2011) "System for monitoring and fall detection of patients using mobile 3-axis accelerometer sensors," *Proceedings of IEEE International Symposium on Medical Measurements and Applications*, Bari, Italy, May 2011

Olwal A., Lindfors C., Gustafsson J., Kjellberg T. & Mattson L. (2005) "ASTOR: An Autostereoscopic Optical See-through Augmented Reality System," *ISMAR 2005, Proceedings of IEEE and ACM International Symposium on Mixed and Augmented Reality*, Vienna, Austria, Oct 5-8, 2005

Pollefeys M., Van Gool L., Vergauwen M., Cornelis K., Verbiest F. & Tops J. (2003) "3D recording for archaeological fieldwork," *Computer Graphics and Applications*, IEEE, vol. 23, pp. 20-27, 2003

Post E.R., Orth, Russo P.R. & Gershenfeld N. (2000) "E-broidery: design and fabrication of textile-based computing," *IBM Systems Journal*, Vol. 39, Issue 3-4, July 2000

Saggio G., Bocchetti S., Pinto C.A., Orengo G. & Giannini F. (2009) "A novel application method for wearable bend sensors," ISABEL2009, *Proceedings of the 2nd International Symposium on Applied Sciences in Biomedical and Communication Technologies*, Bratislava, Slovak Republic, November 24-27, 2009, pp. 1-3

Saggio G., Bocchetti S., Pinto C.A. & Orengo G. (2010) "Wireless Data Glove System developed for HMI," ISABEL2010, *Proceedings of the 3rd International Symposium on Applied Sciences in Biomedical and Communication Technologies*, Rome, Italy, November 7-10, 2010

Saggio G. & Sbernini L. (2011) "New scenarios in human trunk posture measurements for clinical applications," *IEEE Int. Symposium on Medical Measurements and Applications*, 30-31 May 2011, Bari, Italy

Scaioni M., Barazzetti L., Brumana R., Cuca B., Fassi F. & Prandi F. (2009) "RC-Heli and Structure & Motion techniques for the 3-D reconstruction of a Milan Dome spire," *Proceedings of the 3rd ISPRS International Workshop 3D-ARCH 2009, 3D Virtual Reconstruction and Visualization of Complex Architectures*, Trento, Italy, Feb. 2009

Silva H., Lourenco A., Tomas R., Lee V. & Going S. (2011) "Accelerometry-based Study of Body Vibration Dampening During Whole-Body Vibration Training," *Proceedings of IEEE International Symposium on Medical Measurements and Applications*, Bari, Italy, May 2011

Stojakovic V. & Tepavcevica B. (2009) "Optimal Methods for 3D Modeling of Devastated Architectural Objects," *Proceedings of the 3rd ISPRS International Workshop 3D-ARCH 2009, 3D Virtual Reconstruction and Visualization of Complex Architectures*, Trento, Italy, 25-28 February 2009

Te'eni D., Carey J. & Zhang P. (2007) "Human Computer Interaction: Developing Effective Organizational Information Systems," John Wiley & Sons, Hoboken, 2007

Tsingas V., Liapakis C., Xylia V., Mavromati D., Moulou D., Grammatikopoulos L. & Stentoumis C. (2008) "3D modelling of the acropolis of Athens using balloon images and terrestrial laser scanning," *The Int. Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences*, vol. XXXVII-B5, Beijing 2008, pp. 1101-1105

Ursula K., Volker C., Ulrike S., Dieter G., Kerstin S., Isabel R. & Rainer M. (2001) "Meeting the spirit of history," *Proceedings of the 2001 conference on Virtual reality, archeology, and cultural heritage*, Glyfada, Greece: ACM, 2001

Varela F.J., Thompson E. & Rosch E. (1991) "The embodied mind: Cognitive science and human experience," MIT Press: Cambridge, MA, 1991

Vassilios V., John K., Manolis T., Michael G., Luis A., Didier S., Tim G., Ioannis T.C., Renzo C. & Nikos I. (2001) "Archeoguide: first results of an augmented reality, mobile computing system in cultural heritage sites," *Proceedings of the 2001 conference on Virtual reality, archeology, and cultural heritage*, Glyfada, Greece: ACM, 2001

Vlahakis V., Ioannidis M., Karigiannis J., Tsotros M., Gounaris M., Stricker D., Gleue T., Daehne P. & Almeida L. (2002) "Archeoguide: an AR guide for archaeological sites," *Computer Graphics and Applications*, IEEE, vol. 22, pp. 52-60, 2002


**Part 2**

**AR in Biological, Medical and Human Modeling and Applications**

