**5. 3D virtual elements**

As illustrated in the excerpts given above, the scenes depicted are highly visual in nature. Proulx Guimond is a visual artist with whom Edwards had worked previously; indeed, he developed the cover art for the novel, Plenum [1]. He created 3D scenes for each of the four situations, designed so that they would integrate with the visual appearance of each belt. Hence the lunar scene was anchored to the belt which presents the surface of the moonlet itself (**Figure 7**), with the artificial city constructed around the moonlet shown as extrusions (in yellow). For each scene, we planned three distinct elements: first, a static 2D visualization; second, a static 3D visualization; and finally, a dynamically changing 3D visualization. The dynamic elements added to the depiction of the moonlet and its artificial extrusions were arriving starships which would dock with the city.

In the second scene, the heliocentric orbital station is modeled (in exaggerated scale, since it would normally be too small to see at the scale of the sun) along with the surface of the solar photosphere and its sunspots. The dynamic elements included the jonahs, and a solar flare provided a dramatic and flamboyant event within the whole sequence. To lend the fashion representation more drama, the orbital station was situated on a diagonal with respect to the solar surface.

*Plenum a la Mode - Augmented Reality Fashions DOI: http://dx.doi.org/10.5772/intechopen.99042*

**Figure 7.** *3D image constructed for the lunar scene.*

**Figure 8.** *3D image constructed for the space platform scene.*

In the third scene, the main focus was the spaceborne platform itself, again oriented obliquely with respect to the model's body and extruding from it (**Figure 8**). The platform includes a dock and two rotating cylinders used to provide pseudo-gravity to its occupants. Here, again, jonahs are seen moving past the structure, along with an octopus who is also one of the main characters in the story. The platform is viewed against a colorful emission nebula in the background, and here the dramatic events are the swirling movement of the nebula under time acceleration, the spinning of the cylinders, and the movements of the jonahs and the octopus.

The 3D graphics were created in 3DS Max™ and eventually transferred to the Unity environment. Efforts were undertaken to make rendering as lightweight as possible to ensure real-time updating of the scene elements. This was achieved by reducing the complexity of the scene to its bare minimum, that is, by incorporating only a small number of simple elements and limiting the number of different materials and textures used. The animated sequences were likewise kept to minimal complexity.

The visual integration of the virtual scene elements with the physical belt and, indeed, the body of the models required further adjustments once all the elements were in place and fully integrated. The integration, as outlined below, posed significant challenges since the majority of the work was carried out individually under lockdown conditions over the course of 2020 and the first half of 2021, that is, during the midst of the COVID-19 pandemic. Indeed, at one point we considered staging the fashion show as a purely virtual event. However, the initial impetus for the project was a show with a live audience, and we remained convinced that the full impact of the AR technology depends on its use in real time in the presence of a physical audience. We therefore decided to delay the event until it could be performed live. As a consequence, however, this paper is being completed before the final presentation event.

## **6. Garment tracking**

Testing of our efforts to integrate the tracking software with the belts revealed a succession of challenges. We found that the software was temperamental in its ability to recognize the belt patterns, even though initial efforts had been successful. Good lighting turned out to be an important element in ensuring this recognition. Once recognition was achieved, and the virtual elements reliably overlaid on top of the optical image, the software would retain the lock through a certain set of manipulations, but eventually the lock would be lost when viewing conditions became less than ideal (for example, if the person wearing the belt moved too far away from the camera). The model would then need to move in close, or the image viewing parameters be otherwise manipulated, until a recognition lock could once more be obtained. At first, the virtual image was stable only for short intervals. We introduced some persistence into the visualization, so that even when the recognition failed, the virtual objects would persist for a second or so. This sometimes resulted, however, in jerky movement of the virtual elements relative to the model's movement.
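
The persistence mechanism can be sketched as a simple hold-over timer: while the tracker reports a pose, the overlay follows it, and when the lock drops, the last known pose is reused for a short window before the overlay is hidden. This is an illustrative sketch, not the project's actual implementation; the class name and the `hold_seconds` parameter are assumptions (the chapter specifies only "a second or so").

```python
import time

class TrackingPersistence:
    """Keep the virtual overlay alive briefly after recognition is lost.

    Hypothetical sketch of the persistence described in the text;
    `hold_seconds` is an assumed parameter name.
    """

    def __init__(self, hold_seconds=1.0):
        self.hold_seconds = hold_seconds
        self.last_pose = None
        self.last_lock_time = None

    def update(self, pose, now=None):
        """`pose` is the tracker's estimate, or None when the lock fails.

        Returns the pose to render with, or None to hide the overlay.
        """
        now = time.monotonic() if now is None else now
        if pose is not None:
            # Fresh lock: remember it and render normally.
            self.last_pose = pose
            self.last_lock_time = now
            return pose
        if (self.last_lock_time is not None
                and now - self.last_lock_time <= self.hold_seconds):
            # Lock lost recently: reuse the stale pose. Because the model
            # keeps moving while the pose is frozen, this is the source of
            # the jerky motion noted above.
            return self.last_pose
        return None  # persistence window expired
```

Note that the jerkiness follows directly from this design: the frozen pose no longer tracks the moving model, so the overlay jumps when a new lock is acquired.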

Effective integration also required introducing occlusion effects. Hence, we used the belt's cylindrical shape to create an occlusion model and used this to hide virtual elements as they ostensibly moved behind the body. Without intentionally occluding the virtual elements in this way, they were perceived as always being in front of the model, breaking the illusion.
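
The geometric idea behind the cylindrical occlusion model can be illustrated with a top-down ray test: a virtual point is hidden when the ray from the camera to the point passes through the body cylinder before reaching the point. In practice this would be done with an invisible depth-writing proxy mesh in the engine; the stand-alone function below is only a geometric sketch under assumed names and a simplified 2D (horizontal-plane) view.

```python
import math

def occluded_by_cylinder(point, center, radius):
    """Top-down test: is `point` hidden behind a vertical cylinder
    (the body proxy), as seen from a camera at the origin?

    `point` and `center` are 2D (x, z) coordinates in the horizontal
    plane. Illustrative sketch only; it mimics, not reproduces, the
    engine's depth-based occlusion.
    """
    px, pz = point
    cx, cz = center
    dist_p = math.hypot(px, pz)
    # Unit direction of the ray from the camera through the point.
    dx, dz = px / dist_p, pz / dist_p
    # Parameter of the cylinder center's projection onto the ray.
    t = cx * dx + cz * dz
    if t <= 0:
        return False  # cylinder lies behind the camera
    # Closest approach of the ray to the cylinder axis.
    closest = math.hypot(cx - t * dx, cz - t * dz)
    if closest >= radius:
        return False  # ray misses the cylinder entirely
    # Distance along the ray to the first cylinder-surface hit.
    t_hit = t - math.sqrt(radius ** 2 - closest ** 2)
    return 0 < t_hit < dist_p  # hit occurs before reaching the point
```

A point directly behind the cylinder is reported as occluded and should be hidden; a point in front of it, or off to the side, is reported visible, which is exactly the behavior needed to keep virtual elements from always floating in front of the model.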

As the model moved, the location of the virtual elements would often lag behind the model. We used a cube-like envelope to test these issues. We eventually discovered that we had been testing the performance of the visualization against "busy" backgrounds: much of the early testing was done in a living room or kitchen viewed over Zoom. When we met and did tests outside, the same problems persisted until we chose a uniform background, at which point many of the tracking problems diminished. After that, we chose uniform walls for testing. Feature-filled backgrounds clearly confused the recognition software, causing it to lose its lock more frequently. Once this problem was identified, lock persistence, although not perfect, became acceptable.
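
One way to quantify why cluttered backgrounds destabilize image-target tracking is to measure edge density: feature-rich scenes present the recognizer with many spurious candidate features. The heuristic below is entirely hypothetical (it was not part of the project's pipeline) and simply scores the fraction of neighboring-pixel intensity jumps in a grayscale frame.

```python
def clutter_score(gray, threshold=30):
    """Rough measure of background "busyness".

    `gray` is a row-major 2D list of grayscale intensities; the score is
    the fraction of pixels whose horizontal or vertical difference to a
    neighbor exceeds `threshold`. Hypothetical heuristic: a uniform wall
    scores near 0, a feature-filled room scores much higher.
    """
    rows, cols = len(gray), len(gray[0])
    edges = 0
    for r in range(rows - 1):
        for c in range(cols - 1):
            if (abs(gray[r][c + 1] - gray[r][c]) > threshold
                    or abs(gray[r + 1][c] - gray[r][c]) > threshold):
                edges += 1
    return edges / ((rows - 1) * (cols - 1))
```

Under this measure, a uniform background scores 0.0 while a high-contrast checkerboard scores 1.0, consistent with the observation that uniform walls gave markedly better lock persistence.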

At one point we also sought to use an accelerometer and inertial measurement unit (IMU), integrated into the garment, as another source of data to help stabilize the imaging. Our efforts in this direction failed to generate the necessary stability, however, and the approach was abandoned.

Another strategy we eventually developed to ensure longer lock persistence was to restructure the viewing geometry using a second camera feed. Essentially, we had been using the same camera image both as the source for recognition and as the support for the visualization. Our realization that Vuforia must degrade the image somewhat during processing, prompted by its insensitivity to webcams of different resolutions, led us to segment out the region of the image surrounding the belt and provide this image segment to Vuforia in a separate camera feed instead of the full image (**Figure 9**). This resulted in better recognition over a wider range of distances. For example, in our earlier attempts a distance of about a meter was required to ensure a recognition lock, whereas after the segmentation step was added, a reliable lock could be achieved at up to three meters.

**Figure 9.** *Image of the finished lunar belt in situ along with the image subset used for recognition.*
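
The segmentation step amounts to cropping the sub-image around the belt from each full frame before handing it to the recognizer. A minimal sketch, assuming the belt's bounding box is already known from the previous frame or a coarse detector (the function and parameter names are assumptions, not the project's actual code):

```python
def belt_region(frame, bbox):
    """Crop the sub-image around the belt from a full camera frame.

    `frame` is a row-major 2D array of pixels (any pixel type) and
    `bbox` is (left, top, width, height). Illustrative sketch of the
    segmentation step: the cropped region, rather than the full image,
    is supplied to the recognizer as a separate feed.
    """
    left, top, width, height = bbox
    # Clamp the box to the frame so a partially off-screen belt
    # still yields a valid crop.
    top = max(0, top)
    left = max(0, left)
    bottom = min(len(frame), top + height)
    right = min(len(frame[0]), left + width)
    return [row[left:right] for row in frame[top:bottom]]
```

Because the belt then fills most of the cropped feed regardless of how far the model stands from the camera, the recognizer sees the target at a more consistent effective resolution, which is consistent with the improved lock range reported above.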

Recourse to the more complex set of processing options described here meant, however, abandoning our commitment to allowing the AR scene to be viewed on a tablet or smartphone. In the final configuration, access to a full computer is needed, although a laptop is also acceptable.
