*The Graphical Access Challenge for People with Visual Impairments: Positions and Pathways… DOI: http://dx.doi.org/10.5772/intechopen.82289*

*Interactive Multimedia - Multimedia Production and Digital Storytelling*

success of these tasks (e.g., tablets versus smaller mobile platforms), and we have shown that performance on a pattern-matching task is equivalent across small and large screen sizes [63]. Even though this is a low-resolution output mode, these data show that vibrotactile graphics can still be used effectively and accurately when rendered on the smaller form factor of phone-sized smart devices. This is a positive finding, as the majority of BVI users of smart devices are using mobile phones. A recent review by Grussenmeyer and colleagues provides a thorough survey of how touchscreen-based technologies have been used to support information access by people who are BVI and reiterates the prevalent challenges that remain to bringing full inclusion to this population [64]. In short, many of these projects suggest promising pathways forward for vibrotactile touchscreens, supported by empirical evidence and positive qualitative feedback on their capacity to convey multimodal information for the interpretation of visual graphics. Moreover, these platforms offer several significant advantages over one-off information access hardware, the primary benefits being portability, multi-functional use, relative affordability, and widespread adoption and support by the BVI demographic. Indeed, vibrotactile touchscreens provide a robust multimodal framework which, if continually developed in conjunction with advances in touchscreen-based smart devices, has the potential to become the de facto, universal means for accessing graphics in a multimodal, digital form (for example, see **Figure 3**). A universal, multimodal platform that is widely available is not only beneficial for the BVI population but extends to many others who benefit from multimodal learning platforms and the brain's capacity to process both redundant and complementary information from different senses.

**2.5 Positions and pathways forward**

While there are promising pathways forward, the graphical access challenge for BVI individuals remains a vexing and largely unsolved problem. We argue that the solution requires advancements on several fronts: ideological, technological, and perceptual. While there has been significant research advancing our understanding of the technological and perceptual pieces (as illustrated in the vibrotactile touchscreen use case presented here), we also call on the community to consider new ideological perspectives that will advance the field as a whole. Specifically, we present four positions that our group views as necessary for moving closer to addressing the graphical access challenge and that we see as being best addressed by vibrotactile touchscreen technology:

1. A shift from thinking of assistive technologies as single-purpose, specialized hardware solutions to considering mainstream technologies (and simple adaptations to them) as the first choice for a development platform.

2. A shift from the traditional approach of retrofitting existing technologies for accessibility to embedding universal design in technologies from the onset.

3. A shift from using unimodal feedback as a primary mode of interaction to leveraging all modalities available for primary interactions.

4. A shift from designing based on features and capabilities to a principled design approach driven by end-user needs and scoped by practical guidelines supporting efficient and effective usage/implementation.

We briefly elaborate on these positions below.

**2.6 Ideological requirements**

#### *2.6.1 A shift from using single-purpose, specialized hardware solutions to considering mainstream, multi-use technologies*

To truly advance this class of technology, we need a shift from thinking of assistive technologies as specialized, single-purpose hardware/software supporting a single (niche) user group to thinking of them as capabilities incorporated into commercial platforms that support multiple functions and can be used by a broad range of people. Of course, specialized equipment is necessary in certain instances: if you want a hardcopy page of braille or an embossed physical tactile map, you will need a specialized braille/graphics embosser. In many instances, however, nonvisual access to information can be delivered using standard commercial devices, which has the advantage of vastly decreasing development costs and purchase price, thereby increasing actual adoption by BVI users. One example is text-to-speech engines, which provide access to visually based textual information on the screen via speech output. While an intervening software layer is needed to efficiently analyze the video model and represent this information in an intuitive manner for auditory output, the requisite hardware, a sound card and speaker output, is already available on almost all commercial devices. Adding speech input requires a microphone, which is also present on all smart devices, as is embedded speech-to-text software. In the spirit of this chapter, this idea can be extended to include tactile feedback. Many current touchscreen displays have vibration capabilities in some form. Using the standard vibration motor can open pathways to a whole new universe of haptic information that can augment, complement, or completely replace other modes of feedback.
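The core idea of repurposing a standard vibration motor as a graphics channel can be sketched in a few lines. The following is an illustrative sketch only, not code from the chapter: it assumes the graphic has been rasterized into a 2D array of pixel intensities, and it uses a hypothetical `vibrate(amplitude, duration_ms)` callback as a stand-in for whatever vibration API a real platform exposes.

```python
# Illustrative sketch: rendering a vibrotactile graphic by mapping the pixel
# under the user's finger to the strength of a vibration pulse. The `vibrate`
# callback is hypothetical; on a real device it would wrap a platform
# vibration API.

def sample_amplitude(image, x, y):
    """Return a vibration amplitude in 0.0-1.0 for touch point (x, y).

    `image` is a 2D list of ints in 0-255, where nonzero pixels mark the
    lines of the graphic: touching a line yields strong vibration, empty
    space yields none.
    """
    if 0 <= y < len(image) and 0 <= x < len(image[0]):
        return image[y][x] / 255.0
    return 0.0  # off-screen: no feedback


def on_touch_move(image, x, y, vibrate):
    """Drive the vibration motor as the finger traverses the graphic."""
    amp = sample_amplitude(image, x, y)
    if amp > 0:
        vibrate(amplitude=amp, duration_ms=50)  # short pulse while on a line
    return amp
```

In a real implementation the touch handler would also debounce pulses and vary duration or rhythm to encode line type, but the principle is the same: the existing motor, driven by a thin software layer, becomes the display.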

As such, the traditional notion of developing highly specialized assistive technology for specific groups of users (e.g., BVI users) as a process completely separate from mainstream technology needs to be reconsidered. This shift is more about mindset than about the technology itself. That is, designers of assistive technology should start with the goal of using commercial hardware and existing software platforms when possible. They should first consider how to creatively use the built-in components of the system and the existing feature set of the interface to solve the problem before resorting to specialized one-off hardware or software development. Using existing hardware, computational platforms, sensors, and other components when possible, and implementing as much of the access layer in software as possible, improves the overall commercial product while also reducing the cost of developing accessible technologies at large.

#### *2.6.2 A shift from retrofitting existing technologies to embedding universal design from the onset*

We posit that mass market companies (and researchers) developing mainstream products should embrace the notion of universal and inclusive design in their R&D process, as this not only results in products that benefit the greatest number of users (thereby increasing the pool of potential customers) but also has many unintended positive results that better support core users. Consider Apple, which in 2007 introduced a completely inaccessible product: the iPhone. Although touchscreen technology had been around for a long time, the iPhone's 2007 introduction brought it to the mass market. Initially, this was considered a huge setback to accessibility for blind consumers, as this new disruptive technology was based around a flat, featureless glass surface with no screen reader to provide text-to-speech. As such, blind users were completely unable to access the native input or output functions of these devices. However, in 2009, Apple released the iPhone 3GS, which included the VoiceOver screen reader and a host of associated interactive gestures as part of the native operating system (iOS 3.0). Overnight, this release propelled Apple from a company that had ostensibly abandoned its long history of supporting BVI users to the leader in mobile accessibility. TalkBack, the Android analog to VoiceOver, was also released in 2009, though it has been slower to gain momentum in the BVI community compared with iOS-based devices. Almost immediately, the iPhone became one of the most accessible pieces of assistive technology, even though it was not designed as assistive technology in and of itself. For example, VoiceOver was designed to assist BVI users on the iPhone, but it was built into the native OS rather than requiring an expensive, separate, stand-alone software package, as is the traditional model for selling screen-reader software. In addition to this universal design aspect, VoiceOver's inclusion had many unintended benefits for other markets that would not have been realized had it not been included. For instance, self-voicing benefits people using English as a second language, helps those with learning disabilities, and is used regularly by individuals for proofreading. This revealed further pathways, as app developers leveraged features like the Siri personal assistant and other built-in sensors to develop apps that support accessibility in a wide variety of applications. Examples include apps that read barcodes, describe your surroundings, describe a picture, read money to you, and so on [65]. The exponential growth and broad-based proliferation of touchscreen-based devices has been an amazing boon for access technology.
For the first time, it is now possible to incorporate most of the expensive, stand-alone devices that were previously required for information access as fully accessible apps on the phone. The rapid development of apps harnessing this power, mobile flexibility, and diversity of usage scenarios and user groups means that all roads (at least from a computing standpoint) lead to incorporating some aspect of these technologies, and this has broad-based benefits that extend across demographics. Further, the incorporation of multimodal feedback (visual, aural, and touch) expands the possibilities and capabilities that can be achieved through these new developments. To maximize the broader impacts possible when incorporating inclusive/universal design, we strongly encourage developers to leverage all communication channels available from the onset of the design and implementation process.

#### *2.6.3 A shift from relying on unimodal feedback to leveraging all modalities available for primary interactions*

Many hardware platforms today rely heavily on unimodal feedback. Even when they have multimodal capabilities, many of these multimodal interactions are significantly underutilized and sparsely implemented. Additionally, many are implemented only as a means of input or output, but not both, with additional modalities used only for secondary or tertiary cueing. For example, touchscreens can currently provide visual, auditory, and vibrotactile information, yet they are generally thought of only as visual input/output interfaces. Despite built-in vibration capabilities, vibrotactile cues are usually used only for conveying alerts or confirming an operation, not as a primary mode of extracting key information during user interactions or as input to the system. Acknowledging and enabling multimodal information as a primary means of input and output interaction is an important design consideration moving forward. This chapter provides several examples of research illustrating the benefits of leveraging all modalities available on touchscreens, with a specific focus on their potential to address the graphical access problem for BVI individuals. We note that there are likely several other unintended positive outcomes that would result should such an approach be adopted with touchscreens and other technologies if multimodal capabilities were leveraged equally in the user experience.
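Treating every available modality as a co-primary output channel can be modeled as a thin dispatch layer. The sketch below is illustrative only; the channel names and renderer callbacks are hypothetical, not part of any real platform API. The point it demonstrates is architectural: each event is rendered on every registered channel, rather than vibration being reserved for secondary alerts.

```python
# Illustrative sketch (not a real platform API): a dispatch layer that treats
# visual, auditory, and haptic output as co-primary channels. Every UI event
# is rendered on every registered channel, instead of reserving vibration for
# alerts or confirmations.

class MultimodalRenderer:
    def __init__(self):
        self._channels = {}  # channel name -> callback taking an event dict

    def register(self, name, callback):
        """Register an output channel (e.g., 'visual', 'audio', 'haptic')."""
        self._channels[name] = callback

    def render(self, event):
        """Dispatch one event to all registered channels; return their names."""
        delivered = []
        for name, callback in self._channels.items():
            callback(event)  # each modality renders the same event
            delivered.append(name)
        return delivered
```

For example, a "line-crossed" event while a user explores a chart could simultaneously be drawn on screen, sonified, and pulsed on the vibration motor, so that no single modality is the gatekeeper for the underlying information.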

#### *2.6.4 A shift from designing based on interface features to designing based on end-user needs*

A critical first step here is overcoming the engineering trap, i.e., designing based on maximizing features and developer interests. The better approach is adopting a principled, user-based design philosophy from the onset that considers the most relevant features ensuring the greatest functional utility for the end user. The context of the technology implementation, how it will be deployed and used, how it compares to current tools, and where it falls short or excels are all worthy investigations that need to be explored. Most importantly, adhering to standards and guidelines that scope when and where a given technology is (or is not) appropriate is necessary. Success here often requires interdisciplinary research that cuts across several domains, involves multiple stakeholders in the process, and incorporates iterative end-user assessment and participation. While advancements in technology will certainly open up new pathways, we, as designers, must also be open to, and cognizant of, the reality that more advanced technology does not necessarily mean an immediately better solution. New technologies and advancements should be probed from multiple perspectives and should be situated and contextualized in practical use-case scenarios that consider known perceptual and cognitive capabilities. While this approach may not be the fastest or easiest path, it is certainly the one that will best inform when and how a new product will be most successful and when and where it will not work. Our group has come together to do this for vibrotactile touchscreens, and we are encouraged by the growing number of teams who are also adopting this design approach. We acknowledge that this user-centered, needs-based, principled design model takes a great deal of time and resources, and that all technology developments begin with feasibility studies.
We hope to encourage communities of researchers and technology developers to come together to extend these inquiries and tackle this challenge from multiple perspectives, with the shared goal of driving this technology to its full potential. We further encourage researchers to disseminate and share their work and, when possible, to open their SDKs, APIs, and hardware platforms for community access, contribution, and growth.

**3. Conclusions and future research**

We believe that a principled solution to graphical access, designed from the onset to maximize the perceptual and cognitive characteristics of nonvisual and multimodal information processing, while also meeting the most pressing information access needs of the target demographic, could have broad and immediate societal impact. In this chapter, we highlight both the challenges and the vast potential of touchscreen-based smart devices as a platform for alleviating the graphics accessibility gap. We review the state of the art in this line of research and present positions and pathways forward for addressing the graphical access challenge from multiple perspectives. We do this specifically from an ideological standpoint, which complements the technological and perceptual advancements rapidly being uncovered by a growing research community in this domain. Despite the need for more research, we see vibrotactile touchscreen platforms as a promising springboard for bringing multimodal, nonvisual graphical access into the hands of individuals everywhere. Because of their portability, availability, capabilities, and wide adoption among the BVI community, multimodal touchscreen interfaces are poised to serve as a model for universally designed consumer technologies that are also effective assistive technologies. These multimodal interfaces can close the accessibility gap while serving as a model for how we think about accessibility in a new technological era.
