**4. Spatial tools for visually impaired children**

The acquisition of spatial competence is typically a good indicator of the future ability to navigate independently in the environment and to engage in positive social interactions with peers. While for sighted individuals visual feedback represents the most important incentive for action, and thus for the development of mobility and social skills, visually impaired individuals strongly rely on auditory and tactile landmarks to encode spatial and social information. The creation of technological devices to support visually impaired children in their spatial and social development is therefore a real need. Nonetheless, despite the huge recent advances in the technology industry, most of the devices developed so far to address the needs of the visually impaired population are not widely accepted by adults and are not easily adaptable to children [134].

As reported in the previous sections, visual impairments can lead to spatial and social deficits during development. Technological support for the blind should fulfill two different but complementary tasks: the first is to substitute the absent sensory information (vision) with other sensory signals (audition and touch) for daily activities, and the second is to support the rehabilitation of functions impaired by the sensory loss. This latter aspect is particularly important when the visual impairment occurs during the first year of life, because technological devices might give children an opportunity to develop perceptual and cognitive abilities by compensating for the sensory deprivation. Most of the technological supports developed to date have mainly fulfilled the first task, namely the substitution of vision with other modalities for everyday tasks such as object recognition.

Sensory substitution devices (SSDs) convert stimuli normally accessed through one sensory modality into stimuli accessible to another sensory modality. Specifically, sensory substitution devices for visually impaired individuals aim at supplying the missing visual information through visual-to-tactile or visual-to-auditory conversion systems [135]. Typically, substitution systems based on visual-to-tactile conversion transform images captured by a camera into tactile stimulation delivered to users. Since the first device developed in the mid-1960s by Bach-y-Rita (the Tactile-Visual Sensory Substitution device, or TVSS), which converted signals from a video camera into tactile stimulation applied to the subject's back and allowed for the recognition of lines and shapes [136], technological progress has enabled the development of much smaller, portable, and wearable devices: for instance, wristbands, vests, belts, and shoes that allow hands-free interaction [137], and devices that can be placed on various body surfaces (e.g., fingers, wrist, head, abdomen, and feet) [138, 139]. Conversely, systems based on visual-to-auditory conversion transform the images captured by a camera into sounds transmitted to users via headphones. One of the most famous visual-to-auditory devices is the vOICe, developed by Meijer [140], which associates height with pitch and brightness with loudness in a left-to-right scan of the visual image.
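The vOICe's mapping principle can be illustrated with a short sketch. This is not Meijer's implementation — the function name, frequency range, and scan timing below are our own illustrative choices — but it shows the idea: each image column becomes a time slice, and each row contributes a sinusoid whose frequency encodes height and whose amplitude encodes brightness:

```python
import numpy as np

def voice_like_scan(image, duration=1.0, sr=8000, f_min=200.0, f_max=2000.0):
    """Sonify a grayscale image column by column, vOICe-style:
    row -> pitch (top rows map to higher frequencies),
    pixel brightness -> loudness of that row's sinusoid,
    columns are played left to right over `duration` seconds."""
    n_rows, n_cols = image.shape
    # One frequency per row, log-spaced; reversed so the top row is highest.
    freqs = np.geomspace(f_min, f_max, n_rows)[::-1]
    samples_per_col = int(sr * duration / n_cols)
    t = np.arange(samples_per_col) / sr
    segments = []
    for col in range(n_cols):
        brightness = image[:, col].astype(float)  # per-row amplitudes
        # Sum one sinusoid per row, weighted by that row's brightness.
        tone = (brightness[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        segments.append(tone)
    out = np.concatenate(segments)
    peak = np.abs(out).max()
    return out / peak if peak > 0 else out
```

With this mapping, a bright pixel in the top-left corner produces a brief high-pitched tone at the start of the scan, while a bright pixel in the bottom-right produces a low tone at the end.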

In our recent review, we listed the SSDs designed for visually impaired individuals, highlighting their main features and their limitations for daily use [134]. In particular, we identified six main limitations that might determine the low acceptance rate in adults and the low adaptability to children:

• Invasiveness: SSDs can be physically invasive, in the sense that in order to be used they must either be positioned on crucial body parts (e.g., ears or mouth), thus limiting perceptual functions in users, or be transported (e.g., in backpacks), thus limiting users' navigation because of their weight and size;

• Extensive training: SSDs typically require long periods of training, because users need to learn how to interpret the output of the device, which is typically not immediate (e.g., sound loudness corresponds to pixel brightness in the vOICe [141]);

• High cognitive load: SSDs usually require high attentional resources, which makes it difficult for users to focus on the main task they are performing when using the device;

• No clinical validation: SSDs frequently remain prototypes and do not reach the blind users' market, principally because they are not validated on large samples of patients through standardized clinical trials;

• Artificiality: SSDs are generally based on the idea that users can understand the properties of a visual stimulus by listening to (in the case of visual-to-auditory SSDs) or feeling (in the case of visual-to-tactile SSDs) a stimulus resulting from an artificial transformation code, missing an important aspect of the learning process, which is the association of action and perception.

Therefore, while sensory substitution devices have been shown to provide support for specific perceptual tasks in adults [142], they have never been tested in children, principally because their use might be too overwhelming for them. Nonetheless, technological development should especially address the needs of visually impaired children, because cortical plasticity is maximal during the first year of life, so the benefit deriving from early interventions should be higher. Moreover, technological development should aim at multimodal stimulation, whose benefits over unimodal stimulation have been repeatedly reported [143–145], while most of the SSDs developed so far substitute the visual function with either the auditory or the tactile modality alone.

*The Role of Vision on Spatial Competence DOI: http://dx.doi.org/10.5772/intechopen.89273*

*Visual Impairment and Blindness - What We Know and What We Have to Know*

With this in mind, we developed a new device for visually impaired children, the Audio Bracelet for Blind Interaction (ABBI) [146]: an audio bracelet that produces auditory feedback about body movements when positioned on a main effector, such as the wrist, in order to provide a sensorimotor signal similar to the one used by sighted children to construct a sense of space. Indeed, several reports indicate that sighted children typically acquire spatial competence by experiencing visuomotor correspondences [72]. In this sense, our device could be used to align the spatial understanding between one's own body and the external space by coupling auditory feedback with intentional motor actions. The audio movement created by the bracelet conveys spatial information and allows the blind user to build a representation of the movement in space in an intuitive and direct manner.

We validated the ABBI device with a clinical trial on an Italian sample of 44 visually impaired children aged 6–17 years, assigned to either an experimental (ABBI training) or a control (classical training) rehabilitation condition. The experimental group followed an intensive but entertaining rehabilitation program for 12 weeks, during which children performed ad hoc audio-spatial exercises with the ABBI device. The clinical trial consisted of three sessions: pre-evaluation, training, and post-evaluation. Pre- and post-evaluation sessions lasted 60 min, during which a battery of spatial and motor tests was performed [147].

The BSP (Blind Spatial Perception) battery comprised six tests: (1) auditory localization: the child listens to the sound produced by a set of loudspeakers positioned horizontally in front of him/her and localizes the sound source by pointing to it with a white cane; (2) auditory bisection: the child listens to a sequence of three sounds presented successively by a set of loudspeakers positioned horizontally in front of him/her and verbally reports whether the second sound is closer in space to the first or to the third one presented; (3) auditory distance: the child listens to two consecutive sounds produced by a set of loudspeakers positioned in front of him/her at different distances in depth and verbally reports which of the two stimuli is closer to his/her own body; (4) auditory reaching: the child listens to a static sound positioned in far space and reaches the position of the sound by walking toward it; (5) proprioceptive reaching: the child repeats a movement trajectory after being presented with it by an external operator; (6) general mobility: the child walks straight on for three meters and then back to the starting position at his/her own pace.

The training session lasted 12 weeks: children were assigned either to the experimental training condition, based on activities with the ABBI device, or to the classical training condition, based on psychomotor lessons not necessarily involving sound localization activities. All children enrolled in the ABBI training group performed weekly training exercises with a trained rehabilitator for 45 min (9 h over 12 weeks) and weekly training sessions with a relative at home for 5 h (60 h over 12 weeks), for a total training period of 69 h. All training exercises were developed to train children's ability to recognize and localize sounds in space according to different levels of difficulty: (a) recognize and localize simple sound movements, such as a straight motion flow performed along the horizontal or sagittal plane in the front peri-personal space (first level); (b) recognize and localize complex sound movements, such as a motion flow performed randomly in the front peri-personal space, e.g., composite geometrical and nongeometrical figures (second level); (c) recognize and localize simple and complex sound movements in the back peri-personal space (third level); (d) recognize and localize simple and complex sound movements in the front and back extra-personal space (fourth level).

The comparison of overall spatial performance before and after the training with a dedicated assessment battery indicated that the ABBI device is effective in improving spatial skills in an intuitive manner (see **Table 1** for a summary of results), confirming that in the case of blindness perceptual development can be enhanced with auditory feedback naturally associated with body movements [148]. Moreover, the validation of the ABBI device demonstrated that the early introduction of a tailored audio-motor training could potentially prevent spatial developmental delays in visually impaired children [149].

#### **Table 1.**

*Score difference (Δ) after 12 weeks of training (T1-T0) and one-year follow-up of the ABBI group (T2-T0). In order to evaluate the effects within groups, two-tailed t-tests assuming equal variances were performed between groups at baseline (T0) and post-training period (T1). Changes in the outcome measures were then calculated between baseline (T0) and post-training period (T1) in the ABBI training and classical training groups (ΔA and ΔC), and between baseline (T0) and follow-up period (T2) in the ABBI training group (ΔA2). Data are presented as mean and standard deviation. The stars indicate the statistical significance of the corresponding t-test of the score difference (\*p < 0.05; \*\*p < 0.01; \*\*\*p < 0.001). Table readapted from [148].*
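The statistical comparison described in the table note — per-child change scores Δ = T1 − T0 compared across groups with two-tailed, equal-variance t-tests — can be outlined as follows. The numbers below are synthetic placeholders, not the trial data; the group sizes and score scale are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic placeholder scores (e.g., localization error in degrees),
# NOT the trial data: 22 children per group, tested at baseline (T0)
# and after the 12-week training (T1).
abbi_t0 = rng.normal(20.0, 4.0, 22)
abbi_t1 = abbi_t0 - rng.normal(5.0, 2.0, 22)  # larger simulated improvement
ctrl_t0 = rng.normal(20.0, 4.0, 22)
ctrl_t1 = ctrl_t0 - rng.normal(1.0, 2.0, 22)  # smaller simulated improvement

# Per-child change scores between baseline and post-training.
delta_a = abbi_t1 - abbi_t0  # ΔA: ABBI training group
delta_c = ctrl_t1 - ctrl_t0  # ΔC: classical training group

# Two-tailed t-tests assuming equal variances: groups compared at
# baseline (no difference expected) and on the change scores.
baseline_test = stats.ttest_ind(abbi_t0, ctrl_t0, equal_var=True)
delta_test = stats.ttest_ind(delta_a, delta_c, equal_var=True)

print(f"baseline: t = {baseline_test.statistic:.2f}, p = {baseline_test.pvalue:.3f}")
print(f"deltas:   t = {delta_test.statistic:.2f}, p = {delta_test.pvalue:.3g}")
print(f"mean ± SD of ΔA: {delta_a.mean():.1f} ± {delta_a.std(ddof=1):.1f}")
```

With real scores in place of the synthetic arrays, the same two calls reproduce the between-group comparisons at baseline and on the Δ scores; the follow-up change ΔA2 is computed analogously from T2 − T0 within the ABBI group.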
