#### **3.6 Animation**

Once we had opted for a purely virtual staging, the major challenge became developing and animating the avatars that would represent the characters in the story. Several issues affected this process. As mentioned earlier, we were experimenting with the idea of applying a user's face to the avatars, and so, at least in the preliminary version of the opera, we used our own faces. There are three main characters in the opera, and we applied an image of our faces to each of them—Kiss's face was applied to Relliana, Falcon's face to Mandra, the leader of the bullies, and Edwards' face to Aki, Relliana's friend. Aki is a bi-gendered character—neither female nor male. We therefore chose a neutral morphology for the avatar, used Edwards' masculine face, and chose dress-like clothing, which resulted in an avatar that is successfully ambiguous in terms of gender. In follow-up work on the opera, we propose to modify the user interface so that the process of assigning a face to an avatar can be carried out by any member of the audience for their private interface. The calibration process that allows this has already been developed, tested and programmed (see **Figure 11**).

**Figure 12.** *Matching the motion-capture recorded movements to the avatar movements.*

The Unity game engine underwent several improvements over the course of the project, which allowed us to improve the animations. Initially, the work was quite onerous. In general, gestures, postures and movements were chosen from publicly available banks of animation data in appropriate formats, and these were looped or integrated with other animations to generate movements that would appear (we hoped) as natural as possible. Originally, all the timing of the animations had to be implemented manually. However, after our first public performance, Unity introduced a new module called Timeline that enabled more organic and systematic control of the timing between different events, greatly facilitating the subsequent animation efforts. A second major challenge was the development of appropriate lip-synching. Finding a reliable and robust method for lip-synching the avatars with the singing proved to be a substantial challenge, and was never solved to our complete satisfaction.
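The manual approach to timing we started with can be illustrated with a minimal cue scheduler. This is a sketch in Python rather than our Unity C# code, with invented names: cues are fired as the playback clock passes their timestamps.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass(order=True)
class Cue:
    """A timed animation event: at `time` seconds, fire `action`."""
    time: float
    action: Callable[[], None] = field(compare=False)

class CueSheet:
    """Minimal manual sequencer: cues fire as playback time advances past them."""
    def __init__(self, cues: List[Cue]):
        self.cues = sorted(cues)  # ordered by time
        self.next_index = 0

    def advance(self, now: float) -> List[float]:
        """Fire every not-yet-fired cue whose time has passed; return the times fired."""
        fired = []
        while self.next_index < len(self.cues) and self.cues[self.next_index].time <= now:
            cue = self.cues[self.next_index]
            cue.action()
            fired.append(cue.time)
            self.next_index += 1
        return fired

# Example: loop a 2-second "idle" clip and trigger a gesture at t = 3 s.
log = []
sheet = CueSheet([
    Cue(0.0, lambda: log.append("start idle loop")),
    Cue(3.0, lambda: log.append("play gesture")),
    Cue(5.0, lambda: log.append("resume idle loop")),
])
for t in (0.0, 1.0, 3.5, 6.0):  # simulated frame times
    sheet.advance(t)
```

Unity's Timeline replaced this kind of hand-maintained bookkeeping with track-based sequencing of animation and audio events.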

Another problem with the use of avatars and a virtual reality environment, of course, is that the sense of embodiment, of embodied actors, is largely lost. Since the themes incorporated into the opera were at least partially resonant with issues of embodiment, this was a significant disappointment for the production team, and we worked hard to re-introduce elements of embodiment to compensate for this loss.

The fourth element that needed to be mastered to create a compelling operatic experience, beyond the appearance of the avatars, their programmed movements, and the lip-synching, was the development of camera zooming and panning sequences that could also be used to enhance the drama. Because the opera was presented in a "streaming" Unity environment, in principle the means to control the camera could have been left in the hands of the user/observer. However, since camera movements can significantly enhance or detract from dramatic tension, we retained control of these elements in our first version of the opera.
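As an illustration of the camera work involved (a Python sketch with invented names, not our production code), a pan or zoom between two keyframes can be eased so that it starts and stops gently rather than snapping:

```python
def smoothstep(u: float) -> float:
    """Ease-in-out curve: 0 -> 0, 1 -> 1, with zero slope at both ends."""
    u = max(0.0, min(1.0, u))
    return u * u * (3.0 - 2.0 * u)

def interpolate_camera(start, end, t0, t1, now):
    """Blend camera position (x, y, z) and field of view between two
    keyframes, easing so that pans and zooms begin and end gently."""
    u = smoothstep((now - t0) / (t1 - t0))
    (p0, fov0), (p1, fov1) = start, end
    pos = tuple(a + (b - a) * u for a, b in zip(p0, p1))
    fov = fov0 + (fov1 - fov0) * u
    return pos, fov

# Example: a 4-second push-in from a wide shot to a close-up.
wide = ((0.0, 2.0, -10.0), 60.0)   # position, field of view in degrees
close = ((0.0, 1.6, -2.0), 35.0)
pos, fov = interpolate_camera(wide, close, t0=0.0, t1=4.0, now=2.0)
```

Narrowing the field of view while moving the camera forward is what reads as a dramatic "zoom" to the audience.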

#### **3.7 Bringing the components together**

Additional elements that needed to be managed included the opening sequence, the closing sequence, and the introductory remarks and organization. Indeed, because the opera presented the audience with a number of new elements, explanations needed to be provided along with some basic "training." We organized the material into several formats for different presentation protocols. In most of our presentations, we performed the singing live in addition to the prerecorded music.

Hence, during our first presentation, we used belts that tracked the singer's breathing and used the breath as a way to "operate" the interactive plants, so that they opened wide under deeper and more expansive breathing and spread more of the buoyant particles into the air. The belts were designed using "flex bands," that is, sensors that convert elastic stretch into a capacitance measurement that can be read by a computer chip. However, although we achieved one working prototype, the second failed to operate correctly. Given the difficulty of ensuring robust measurements from the belts, we developed a second interaction method, which used a centrally placed microphone to capture the audience singing and used the intensity of the voice to open the nemos and create more buoyant particles. This solution was much more robust and repeatable, and so we eventually adopted it within our final production.

*Designing a Participatory and Interactive Opera. DOI: http://dx.doi.org/10.5772/intechopen.82811*

*Interactive Multimedia - Multimedia Production and Digital Storytelling*

#### **3.8 Public performances**

The initial staging was done using the Unity game engine in interactive mode, projected onto a shared screen, following a predetermined script that determined avatar movements, virtual camera movements and music soundtracks, and that tracked voice production on the part of the audience. In addition, the two main live performers, Kiss and Edwards, sang (in Edwards' case, declaimed) the lyrics along with the pre-recorded lyrical tracks. The second and third times we presented the opera, we used a video recording of the scripted Unity staging along with the live performers and the voice-interaction module. This allowed us to incorporate subtitles in English or French translation to assist audiences in understanding the sung or spoken text, as well as helping us manage the complexity of the staging. In the final public presentation, we used a different real-time streaming version designed using the 360-degree view capabilities made available in the more recent version of Unity, with a view to giving the user greater flexibility to change view directions using VR glasses. For this fourth presentation, we had refined the avatar and camera movements and had done more work on the lip-synching. This fourth staging could, in principle, accommodate changes to the scripting and potentially allows for multiple endings. In the second, third and fourth stagings, we trained the audience to sing a small melodic line that could be used to guide the protagonist towards her more hopeful outcome.

#### **3.9 Post-performance discussions**

Each time we presented the opera to a public, we engaged in a post-performance discussion, both to assess audience members' reactions concerning content and staging and to discuss technical aspects of the production that might interest them. One of the goals of the development of this opera was to engage audiences around content issues: the need to be more tolerant towards others who are different from ourselves, especially in the light of peer pressure towards greater conformity, but also the need to recognize that even tolerant communities may include individuals or groups who are less tolerant, and that communities that at first appear intolerant may include elements who are more tolerant, if one only pays closer attention.

Overall, audiences reacted favorably to the opera as staged, and during the discussions it became clear that the conformity-tolerance themes were well received, and interesting discussions ensued. These were mostly scientific audiences, although they included practitioners from a broad variety of public health contexts, including disability studies, mental health, social justice, and so on.

#### **3.10 The co-creation environment**

Although the co-creation environment was originally conceived as a tool to enable public engagement in the process of producing the opera (and this is still our long-term goal), it was necessary to simplify its design to some extent to fit the budget and time constraints of the project. Essentially, we had funding for 3 years, and the process of developing the opera itself took most of the first 2 years, leaving a little over a year to develop the co-creation environment (talesfromthehumanitat.com) with, by that point, greatly reduced funding.

We used the 3d design developed for the opera as the substratum for the co-creation environment—although we also substantially embellished the environment to provide more virtual experiences in much greater detail, drawing on both information provided by the original novel and extended notes by the author (Edwards). While the setting for the opera itself was a platform on the side of the floating city, we used the top surface of the city to define four areas for the co-creation environment. Each of these areas showcases a different set of co-creation tools. **Table 2** presents the main features of these four sites.

In addition to the sites themselves, we implemented a transport system, which is part of the fictional world on which the opera was based. Transport vehicles are called "threaders": elongated spindle-like structures that move through closed tubes. The user of the co-creation environment may move on foot between the four sites or may use a transport.

Furthermore, we framed the experience of the online co-creation environment as a game. Upon entry, the user is presented with the task of collecting five medallions representing the five different factions (EngFax, DeoFax, UmaFax, EcoFax and IdoFax), each of which is located at one of the co-creation stations. The images used for the five medallions were designs developed by Morales at the request of Edwards in support of the novel cycle. The first medallion is provided at the outset, and the user must visit each of the co-creation stations to find the others, activating each station's co-creation mechanism to gain access to its medallion. Once a station's particular co-creation modality is engaged, the location of the medallion is indicated by a rising stream of red particles. Once all five medallions are collected, the user must find the final stream of particles to bring them together, which unlocks a final surprise interaction: access to an oracular service.
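The collection logic just described can be sketched as a small state machine. The faction names come from the opera's fictional world; which medallion is hidden at which station, and which one is granted at the start, are hypothetical choices for the example, since the chapter does not specify them:

```python
class MedallionQuest:
    """Sketch of the medallion-collection game: one medallion is given at
    the start, four more are unlocked by activating the co-creation
    mechanism at each station, and the oracle opens only once all five
    are held."""

    FACTIONS = ("EngFax", "DeoFax", "UmaFax", "EcoFax", "IdoFax")

    def __init__(self, station_rewards):
        # station_rewards maps a station name to the faction medallion it hides
        self.station_rewards = dict(station_rewards)
        # Assumption for the example: the EngFax medallion is the one given up front.
        self.medallions = {self.FACTIONS[0]}

    def activate_station(self, station: str) -> str:
        """Engaging a station's co-creation modality yields its medallion
        (in the environment, a rising stream of red particles marks it)."""
        medallion = self.station_rewards[station]
        self.medallions.add(medallion)
        return medallion

    def oracle_unlocked(self) -> bool:
        """The final surprise interaction opens only with all five medallions."""
        return set(self.FACTIONS) <= self.medallions

# Hypothetical assignment of the remaining medallions to the four stations:
quest = MedallionQuest({
    "dance zone": "DeoFax",
    "Concourse": "UmaFax",
    "Agora": "EcoFax",
    "Portals": "IdoFax",
})
for station in ("Concourse", "Agora", "dance zone", "Portals"):  # any order works
    quest.activate_station(station)
assert quest.oracle_unlocked()
```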

Hence, in a typical gaming encounter, a user would visit the dance zone and create a choreography with as many dancers as they chose (**Figure 13a**), visit the Concourse and capture sung or whistled melodies that would be reproduced by the nemos when avatars passed nearby (**Figure 13b**), visit the Agora and interact with the jonahs through their breathing (**Figure 13c**), and visit the Portals and either view/read or post a document of their choosing (**Figure 13d**); these activities could be done in any order, moving between the sites either on foot or via the transport system. Having visited and activated each of the four sites, the user could then access the oracular system, which is based on another of the 15 volumes that make up the *Ido Chronicles* (**Figure 14**). The interactive music components incorporated design principles developed by Kiss and her students [24] for the VR staging of music.
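The voice-driven interactions above (like the microphone-driven nemos of the performances) rest on simple audio features. As an illustrative sketch, not our actual implementation, the root-mean-square intensity of each microphone frame can be mapped to a 0-1 "openness" for the virtual plants; the `quiet` and `loud` thresholds are invented values for the example:

```python
import math

def rms(frame):
    """Root-mean-square intensity of one frame of audio samples in [-1, 1]."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def opening_amount(frame, quiet=0.02, loud=0.5):
    """Map frame intensity to a 0..1 openness for the virtual plants:
    near-silence leaves them closed, strong singing opens them fully."""
    level = rms(frame)
    u = (level - quiet) / (loud - quiet)
    return max(0.0, min(1.0, u))

# A silent frame versus a frame of a sung tone (220 Hz at an 8 kHz sample rate).
silence = [0.0] * 256
singing = [0.6 * math.sin(2 * math.pi * 220 * n / 8000) for n in range(256)]
```

Smoothing the openness value over successive frames would keep the plants from flickering with every syllable.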

**Figure 13.** *The four stations of the online co-creation environment. (a) The dance zone; (b) the concourse; (c) the Agora; and (d) the portals.*

**Figure 14.** *Accessing the oracular function of the online co-creation environment.*

In addition to ensuring the esthetics of the 3d design of the co-creation environment, our visual designer, Jonathan Proulx Guimond, worked on optimizing the rendering to make the complex 3d structure digitally more compact, so as to ensure faster loading times and better dynamics. For example, objects were sorted in terms of visibility for each scene, and objects not visible in any of the principal scenes were suppressed. Furthermore, details of objects far from the camera positions were also suppressed. Objects were grouped together where possible to diminish the rendering time for complex structures. The colors were processed to incorporate atmospheric absorption, and the effects of distance were also included in the quality of the rendering.
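This pruning pass can be sketched as follows; the object model and the far-distance threshold are hypothetical, but the steps (discard objects never visible in a principal scene, discard those beyond a far limit, and group the survivors by material for batched drawing) mirror the ones listed above:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SceneObject:
    name: str
    material: str
    distance: float          # distance from the nearest principal camera
    visible_in_any_scene: bool

def optimize(objects: List[SceneObject], far_limit: float = 200.0):
    """Drop objects that are never on camera or beyond the far limit,
    then group the survivors by material so each group can be drawn
    in a single batch."""
    kept = [o for o in objects if o.visible_in_any_scene and o.distance <= far_limit]
    batches = {}
    for obj in kept:
        batches.setdefault(obj.material, []).append(obj.name)
    return batches

scene = [
    SceneObject("tower", "stone", 40.0, True),
    SceneObject("walkway", "stone", 60.0, True),
    SceneObject("far spire", "stone", 500.0, True),    # beyond the far limit
    SceneObject("hidden duct", "metal", 10.0, False),  # never on camera
    SceneObject("threader tube", "metal", 30.0, True),
]
batches = optimize(scene)
```

A full level-of-detail scheme would swap in simplified meshes by distance rather than discarding objects outright, but the bookkeeping is the same.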


**Table 2.** *Co-creation stations.*

| Site name | Plant/animal | Site building | Co-creation skills |
| --- | --- | --- | --- |
| Dance zone | Phramae | Monument | Dance |
| Concourse | Nemos | Concourse | Singing |
| Agora | Jonahs | Agora | Video |
| Portals | Spiners | Portal | Text |

