**Wheelchair and Virtual Environment Trainer by Intelligent Control**

Pedro Ponce, Arturo Molina and Rafael Mendoza

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/48410

## **1. Introduction**


There are many kinds of diseases and injuries that produce mobility problems, and the people affected must adapt to a new lifestyle; this is especially true of people with tetraplegia. According to the ICF [1], people with tetraplegia have impairments associated with the power of the muscles of all limbs, the tone of the muscles of all limbs, and the resistance and endurance of all the muscles of the body. The main objective of this project was to help disabled people who cannot move any member of their body. Although this wheelchair can be used by anyone with mobility problems, doctors should not recommend it to all patients, because it reduces muscle movement, which could lead to muscular dystrophy.

Currently, there is no efficient system that covers the different needs that a person with quadriplegia could have. Their mobility is reduced by physical injury and, depending on the extent of the damage, nursing and family assistance is required. Even though many platforms have been developed to address this problem, there is no integrated system that allows the patient to move autonomously from one place to another, thus limiting the patient to remain at rest all the time. Previous research projects completed in Canada and in the United States produced wheelchairs controlled with the tongue [2] and with head and shoulder movements [3]; those systems provide mobility for people with injuries affecting muscle strength. This work offers a different alternative for the patient and aims to build an autonomous wheelchair with enough motion capacity to transport a person with quadriplegia. Different kinds of controls are provided, so the trajectories required by the patient can be commanded using ocular movements or voice commands, among others.

An existing brand of electric wheelchair was used (the commercial Quickie wheelchair model P222 [4] with a Qtronix controller).

© 2012 Ponce et al., licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## **2. Eye movement control system**

## **2.1. General description**

The eye control is based on the magnetic dipole generated by eye movement, which produces a voltage signal that can be sensed with clinical electrodes. These signals, in the microvolt range, are noisy. A biomedical differential amplifier was used to sense the desired signal in the first electronic stage, with simple amplification in the second stage. The signals are digitized and acquired into the computer in the range of volts via data acquisition hardware for further manipulation. Once the signal is filtered and normalized, the main program, based on artificial neural networks, learns the signals for each eye movement. This allows the system to classify the signal so it can be compared against subsequently acquired signals. In this manner, the system can detect which kind of movement was made and assign a direction command to the wheelchair.
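As a rough illustration of this pipeline (filter, frame, normalize, learn, compare), the sketch below uses a moving-average filter and template matching in place of the chapter's LabVIEW filter and neural networks; all names and parameters here are assumptions, not the chapter's code.

```python
import numpy as np

def moving_average(signal, window=5):
    """Crude low-pass filter standing in for the LabVIEW filter stage."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

def normalize(signal, length=400):
    """Resample to a fixed frame length and scale amplitude to [-1, 1]."""
    resampled = np.interp(np.linspace(0, 1, length),
                          np.linspace(0, 1, len(signal)), signal)
    peak = np.max(np.abs(resampled))
    return resampled / peak if peak > 0 else resampled

def classify(signal, templates):
    """Return the name of the stored pattern with the lowest error."""
    errors = {name: np.mean((normalize(signal) - w) ** 2)
              for name, w in templates.items()}
    return min(errors, key=errors.get)

# "Train" on two synthetic eye-movement shapes, then recognize one.
t = np.linspace(0, 1, 500)
templates = {"up": normalize(moving_average(np.sin(np.pi * t))),
             "down": normalize(moving_average(-np.sin(np.pi * t)))}
print(classify(moving_average(0.8 * np.sin(np.pi * t)), templates))  # → up
```

The real system replaces the template comparison with trained neural networks, but the staging (filter, frame, normalize, compare) is the same.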

## **2.2. Physiological facts**

There is a magnetic dipole between the retina and cornea that generates voltage differences around the eye. This voltage ranges from 15 to 200 microvolts depending on the person. The voltage signals also contain noise, with a fundamental frequency between 3 and 6 hertz. This voltage can be plotted over time to obtain an electro-oculogram (EOG) [7], which describes the eye movement. Fig. 1 shows a person wearing the electrodes.

**Figure 1.** Patient using the EOG and training the signal recognition system.

Prior to digitizing the signals, an analog amplifier stage, divided in two basic parts, was used. The first part is an AD620 differential amplifier for biomedical applications; a gain of 1000x was set using equation (1):

$$R\_G = \frac{49.4k\Omega}{G-1} \tag{1}$$


Where G is the gain of the component and $R_G$ is the gain-setting resistance. The digitization of the amplified signal is carried out with National Instruments DAQ [8] data acquisition hardware.
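For example, the 1000x gain used in the first stage implies a gain resistor of roughly 49.4 Ω; a quick check of equation (1):

```python
# Check of equation (1): gain-setting resistor for the AD620 at G = 1000x.
G = 1000
R_G = 49.4e3 / (G - 1)   # R_G = 49.4 kΩ / (G - 1)
print(round(R_G, 1))     # → 49.4 (ohms)
```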

## **2.3. EOG LabVIEW Code**


This section presents an overview of the program implemented for the EOG signal acquisition and filtering stages. Fig. 2 shows three icons: the first (data) represents a local variable that receives the values from the LabVIEW utility used to connect the computer to the DAQ; the second represents the filter, configured as described in part B; the third is an output that displays a chart on the front panel where the user can see the filtered signal in real time.

**Figure 2.** Signal filter

After the signal was filtered, it was divided into frames of 400 samples each. This stage is very important because the acquired signals are not all the same length; the framing process gives all the signals the same length.

Once the length of our data arrays was normalized, the amplitude of the signal had to be normalized as well. After all the aforementioned stages, there will be six normalized arrays, which are then connected to their Neural Network, as shown in Figure 3.

**Figure 3.** Training System LabVIEW Code

In Figure 3, it can be seen that this section receives the variables and outputs the calculated error. For signal recognition, two kinds of neural networks, trigonometric and Hebbian, were tested; both gave good results. Figure 3 shows the configuration for Hebbian learning; the diagram of this kind of network is shown inside the block diagram of Figure 3.

This network takes each point from the incoming signal, approximates its value and saves it into W. If train is false, the network only returns the values already saved in W; the "Hebbian Comparison" block then calculates the error between the incoming signal and W.
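A minimal sketch of this train/compare behaviour, under the assumption that W accumulates a running average of the trained signal; the class name and update rule are illustrative, not the chapter's exact LabVIEW implementation:

```python
import numpy as np

class HebbianUnit:
    """Toy stand-in for the Hebbian block: train mode updates W,
    comparison mode returns the error between the input and W."""

    def __init__(self, size):
        self.W = np.zeros(size)
        self.count = 0

    def step(self, signal, train):
        if train:
            # Weights move toward the presented signal (running average).
            self.count += 1
            self.W += (signal - self.W) / self.count
            return 0.0
        # Comparison mode: mean squared error between the input and W.
        return float(np.mean((signal - self.W) ** 2))

unit = HebbianUnit(4)
unit.step(np.array([1.0, 0.0, -1.0, 0.0]), train=True)
print(unit.step(np.array([1.0, 0.0, -1.0, 0.0]), train=False))  # → 0.0
```

A low error in comparison mode means the incoming signal matches the trained movement; the recognizer simply picks the unit with the lowest error.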

The LabVIEW front panel shown in Figure 4 is the screen the user sees when training the wheelchair. On the upper half of the screen are the EOG charts (Figure 5): the upper two charts show the filtered signals of the vertical and horizontal channels, the lower two show the length-normalized signals, and the right-hand chart shows the detected signal. On the lower half of the screen (Figure 6), the user finds all the controls needed to train the system. A main training button activates and sets all the systems in training mode. The user then selects which signal will be trained; doing so opens the connection to let only that signal through for training. Once recognized, the signal appears in the biggest chart and the user pushes the corresponding switch to train the neural network. When the user has trained all the movements, the program must be set in comparison mode: the user deactivates the main training button and puts the selector on Blink (Parpadeo). The system is then ready to receive signals.

To avoid problems with natural eye movements, the chair was programmed to be commanded with codes. To put the program into motion mode, the user must blink twice, which is why the selector is put on Parpadeo, so that the first signals through the system can only be Blink signals. After the system recognizes two Blink signals, the chair is ready to receive any other signal: looking up moves the chair forward, and looking left or right makes the chair turn accordingly. The system stays in motion mode until the user looks down; this command stops the chair and resets the program to wait for the two blinks. Embedding this code into a higher level of the program allows the EOG system to communicate with the control program, which receives a Boolean variable for each eye movement direction. Depending on which Boolean variable is received as true, the control program commands the chair to move in that direction.
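The two-blink arming protocol described above can be sketched as a small state machine; the event strings and command names below are assumptions for illustration, not the chapter's LabVIEW variables:

```python
# Two blinks arm the chair, gaze direction drives it, looking down
# stops it and re-arms the blink code.

DRIVE = {"up": "forward", "left": "turn_left", "right": "turn_right"}

class ChairController:
    def __init__(self):
        self.blinks = 0
        self.motion_mode = False

    def on_eye_event(self, event):
        if not self.motion_mode:
            if event == "blink":
                self.blinks += 1
                if self.blinks == 2:      # two blinks: enter motion mode
                    self.motion_mode = True
            else:
                self.blinks = 0           # stray movements reset the code
            return "idle"
        if event == "down":               # looking down stops the chair
            self.motion_mode = False
            self.blinks = 0
            return "stop"
        return DRIVE.get(event, "hold")

ctrl = ChairController()
for e in ["blink", "blink", "up", "right", "down"]:
    print(ctrl.on_eye_event(e))
# idle, idle, forward, turn_right, stop
```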


**Figure 6.** Eye movement


**Figure 4.** Front panel for training the signal

**Figure 5.** Filtered signals for the vertical and horizontal channels


**Figure 7.** Analog received input from eye movements

**Figure 8.** Signal generated when the eye moves up

## **3. Voice control**

Since the wheelchair in this study is intended to be used by quadriplegic patients, a voice message system was added to the chair. This system also allows the user to send pre-recorded voice messages by means of the EOG system, as another way to assist the patient. Figure 9 shows the main LabVIEW code.

## **3.1. EOG and voice message system coupling**

In this section of the project, the EOG system and programming remained the same as in the direction control system above. Instead of being coupled to the motor control program, however, the EOG system was coupled to a very simple program that allows the computer to play pre-recorded messages, such as "I am hungry" or "I am tired". The messages can be recorded to meet the patients' needs and aid their communication with their environment. The EOG program returns a Boolean variable, which selects the message corresponding to the eye movement chosen by the user; the selection searches for the path of the saved pre-recorded message and plays it.
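The selection step can be sketched as a lookup from the EOG Booleans to a message path; the paths and phrases below are placeholders, not the chapter's actual files:

```python
# Map each eye-movement Boolean to the path of a pre-recorded message.
MESSAGES = {
    "up": "messages/i_am_hungry.wav",
    "down": "messages/i_am_tired.wav",
    "left": "messages/please_help.wav",
}

def select_message(eog_booleans):
    """Return the path of the message whose Boolean is true."""
    for direction, active in eog_booleans.items():
        if active:
            return MESSAGES.get(direction)
    return None

print(select_message({"up": False, "down": True, "left": False}))
# → messages/i_am_tired.wav
```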



**Figure 9.** (a) The activated Boolean received into a case structure that selects the path used to open each file. (b) Sound playback.

The second stage of the structure, shown in Figure 9b, opens a \*.wav file, checks for errors and prepares the file for playback; a while structure plays the message until the end of the file is reached, then the file is closed and the sequence ends.
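A sketch of this playback loop using Python's standard `wave` module (the chapter implements it in LabVIEW; actual audio output is omitted here, since a real player would hand each chunk to a sound API):

```python
import wave

def play_wav(path, chunk_frames=1024):
    """Open a *.wav file, read it chunk by chunk until the end of the
    file is reached, then close it. Returns the number of frames read."""
    frames_read = 0
    with wave.open(path, "rb") as wav:        # open and validate the file
        bytes_per_frame = wav.getsampwidth() * wav.getnchannels()
        while True:                           # the "while structure"
            chunk = wav.readframes(chunk_frames)
            if not chunk:                     # end of file reached
                break
            frames_read += len(chunk) // bytes_per_frame
    return frames_read                        # file is closed on exit
```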

## **3.2. Voice commands**

For patients with less severe motion problems, it was decided to implement a Voice Command system. This allows the user to tell the chair in which direction they want to move.

## **3.3. Basic program**

For this section two separate programs were used, Windows Speech Recognition and Speech Test 8.5 by Leo Cordaro (NI DOC-4477). The Speech Test program allows us to modify the phrases that Windows Speech Recognition will recognize. By doing so and coupling Speech Test to our control system, it is possible to control the chair with voice commands.

The input phrases can be modified by accessing the Speech Test 8.5 VI. Selecting speech (selection box) activates the connection between both programs; Speech Test 8.5 then receives the variable from Speech Recognition, and by connecting it to our control system it is possible to receive the same variable and control the chair.

At first, the user must train Windows Speech Recognition. This is strongly recommended because, although the system can differentiate different people's voices, it fails continuously if many people use the same trained configuration. On the other hand, the tests performed in a closed space without any source of noise were 100% satisfactory.

This system runs in the same way as the EOG: saying *derecho* to the chair makes it start moving, *derecha* or *izquierda* turn the chair right or left, and *atrás* stops it. The Boolean variable is received into our control system in the same way as in the EOG.
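The Spanish command mapping can be sketched as a simple lookup; the recognizer output strings below stand in for the Boolean variables received from Speech Test 8.5:

```python
# Map recognized Spanish phrases to chair commands.
VOICE_COMMANDS = {
    "derecho": "forward",      # straight ahead: start moving
    "derecha": "turn_right",
    "izquierda": "turn_left",
    "atrás": "stop",
}

def command_for(phrase):
    """Return the chair command for a recognized phrase, or hold."""
    return VOICE_COMMANDS.get(phrase.lower(), "hold")

print(command_for("Derecho"))  # → forward
```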

## **4. Electric wheelchair navigation system**


After the user sends the reference command by voice or eye movement, the electric wheelchair uses fuzzy logic and neural networks to take over the complete navigation system.

To transfer the vague form of human reasoning to mathematical systems, a fuzzy logic system is applied.

The use of IF-THEN rules in fuzzy systems makes the information modeled by the system easy to understand. In most fuzzy systems, the knowledge is obtained from human experts.

Artificial neural networks can learn from experience, but most topologies do not let us interpret the information learned by the network. ANNs are incorporated into fuzzy systems to form neuro-fuzzy systems, which can acquire knowledge automatically through the learning algorithms of neural networks. Neuro-fuzzy systems have the advantage over fuzzy systems that the acquired knowledge is easy to understand, i.e. more meaningful to humans. Another technique used with neuro-fuzzy systems is clustering, which is usually employed to initialize unknown parameters such as the number of fuzzy rules or the number of membership functions for the premise part of the rules; it is also used to create dynamic systems and update the parameters of the system.

## **4.1. The neuro-fuzzy controller**

The position of the wheelchair is taken over by the neuro-fuzzy controller, so it avoids crashing into static and dynamic obstacles.

The controller takes information from three ultrasonic sensors which measure the distance from the chair to an obstacle located in different positions of the wheelchair, as shown in Figure 10.

The outputs of the neuro-fuzzy controller are the voltages sent to a system that generates a PWM to move the electric motors, together with the directions in which the wheels will turn. The controller is based on trigonometric neural networks and fuzzy c-means clustering. It follows a Takagi-Sugeno inference method, but instead of using polynomials in the defuzzification process it uses trigonometric neural networks (T-ANNs). The diagram of the neuro-fuzzy controller is shown in Figure 11.
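A toy sketch of this inference scheme, with Gaussian memberships on a single distance input and truncated trigonometric series as the rule consequents; the rule count, membership shapes and coefficients are invented for illustration and are not the tuned controller:

```python
import numpy as np

def gaussian(x, c, s):
    """Membership degree of input x in a Gaussian fuzzy set."""
    return np.exp(-((x - c) ** 2) / (2 * s ** 2))

def tann_output(x, coeffs):
    """Consequent: truncated trigonometric series a0/2 + sum a_n cos(nx)."""
    return coeffs[0] / 2 + sum(a * np.cos(n * x)
                               for n, a in enumerate(coeffs[1:], start=1))

def ts_infer(distance, rules):
    """Takagi-Sugeno inference: weighted average of rule consequents."""
    weights = np.array([gaussian(distance, c, s) for c, s, _ in rules])
    outputs = np.array([tann_output(distance, coeffs)
                        for _, _, coeffs in rules])
    return float(np.dot(weights, outputs) / np.sum(weights))

# Two toy rules on one ultrasonic distance input (meters):
rules = [(0.2, 0.2, [0.0, 0.1]),   # obstacle near: low drive voltage
         (1.5, 0.5, [2.0, 0.5])]   # obstacle far: higher drive voltage
print(ts_infer(0.2, rules) < ts_infer(1.5, rules))  # → True
```

The key point is that each rule's consequent is itself a small trigonometric series, in place of the linear polynomials of a standard Takagi-Sugeno controller.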

**Figure 10.** Connection diagram and the electric wheelchair

**Figure 11.** Basic diagram of the Neuro-Fuzzy controller

Theory of Trigonometric Neural Networks

If the function $f(x)$ is $2\pi$-periodic and Lebesgue-integrable (continuous and periodic on $[-\pi,\pi]$ or $[0,2\pi]$), we write $f \in C[-\pi,\pi]$, or simply $f \in C$. The deviation (error) of $f \in C$ from a trigonometric polynomial $\tau_n$ of order $n$ is

$$E_n\left(f\right) = \min_{\tau_n} \left\| f - \tau_n \right\| \tag{2}$$



$$\left\| f - \tau_n \right\| = \max_{0 \le x \le 2\pi} \left| f\left(x\right) - \tau_n\left(x\right) \right| \tag{3}$$

Favard sums, by their extremal property, give the best approximation by trigonometric polynomials for a class of periodic continuous functions satisfying:

$$\left\| f' \right\| = \max_{x} \left| f'\left(x\right) \right| \le 1 \tag{4}$$

Fourier series have been proven to be able to model any periodic signal [2]. A signal $f(x)$ is said to be periodic if $f(x) = f(x+T)$, where $T$ is the fundamental period of the signal. The signal can be modeled using the Fourier series:

$$f\left(x\right) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos\left(nx\right) + b_n \sin\left(nx\right) \right) = \frac{a_0}{2} + \sum_{n=1}^{\infty} A_n\left(x\right) \tag{5}$$

$$a\_0 = \frac{1}{T} \int\_0^T f\left(x\right) dx\tag{6}$$

$$a_n = \frac{1}{T} \int_0^T f\left(x\right) \cos\left(nx\right) dx \tag{7}$$

$$b_n = \frac{1}{T} \int_0^T f\left(x\right) \sin\left(nx\right) dx \tag{8}$$

The trigonometric Fourier series consists of the sum of functions multiplied by a coefficient plus a constant, so a neural network can be built based on the previous equations.

The advantage of these neural networks is that the weights of the network can be computed analytically as a linear equation system. The error of the solution decreases when the number of neurons is increased, which corresponds to adding more harmonics to the Fourier series.

To train the network we need to know the available inputs and outputs. The traditional way to train a network is to assign random values to the weights and then wait for convergence using the gradient descent method. With this topology, instead, the network is trained using the least-squares method, fixing a finite number of neurons and arranging the system in the matrix form Ax = B. Cosines are used to approximate even functions and sines to approximate odd functions.

#### Numerical example of T-ANN's
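As an independent sketch of the least-squares training just described (not the chapter's own example), the code below fixes $n$ cosine neurons, builds the design matrix A, and solves Ax = B analytically; an even target function is used, per the cosine/even pairing above:

```python
import numpy as np

def train_tann(x, y, n_neurons):
    """Fit y ≈ a0/2 + sum_{k=1..n} a_k cos(k x) by least squares."""
    A = np.column_stack([np.full_like(x, 0.5)] +
                        [np.cos(k * x) for k in range(1, n_neurons + 1)])
    weights, *_ = np.linalg.lstsq(A, y, rcond=None)
    return weights

def eval_tann(x, weights):
    """Evaluate the trained trigonometric network at the points x."""
    A = np.column_stack([np.full_like(x, 0.5)] +
                        [np.cos(k * x) for k in range(1, len(weights))])
    return A @ weights

# Even target function: the error shrinks as neurons (harmonics) are added.
x = np.linspace(0, 2 * np.pi, 200)
y = np.abs(np.cos(x))
for n in (2, 8):
    w = train_tann(x, y, n)
    print(n, float(np.max(np.abs(eval_tann(x, w) - y))))
```

Because the basis is linear in the weights, a single `lstsq` call replaces the iterative gradient-descent training of a conventional network.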



Figure 12 shows the ICTL and the trigonometric neural networks icons. The trigonometric neural networks inside the ICTL include examples. The front panel and block diagram of the example can be seen in fig. 13. In the block diagram the code is related to training and evaluation of the network. The signals from the eye or voice could be recognized using trigonometric neural networks, as shown in figure 14 in which the example presents a signal approximation.

**Figure 12.** The ICTL showing the Trigonometric Neural Networks icons

**Figure 13.** The front panel and block diagram

**Figure 14.** Trigonometric neural network example using 5 (left) and 20 neurons (right)

#### Fuzzy Cluster Means

Clustering methods split a set of $N$ elements $X = \{x_1, x_2, \ldots, x_N\}$ into $c$ groups denoted $\Omega_1, \Omega_2, \ldots, \Omega_c$. Traditional clustering set methods assume that each data vector can belong to one and only one class, though in practice clusters normally overlap, and some data vectors can belong partially to several clusters. Fuzzy set theory provides a natural way to describe this situation by FCM.

The fuzzy partition matrices $M$, for $c$ classes and $N$ data points, are defined by three conditions:

1. $\mu_{ik} \in [0, 1]$ for $1 \le i \le c$, $1 \le k \le N$
2. $\sum_{i=1}^{c} \mu_{ik} = 1$ for all $k$
3. $0 < \sum_{k=1}^{N} \mu_{ik} < N$ for all $i$


The FCM optimum criteria function has the following form:

$$J\_m\left(\mathcal{U}, V\right) = \sum\_{i=1}^c \sum\_{k=1}^N \mu\_{ik}^m d\_{ik}^2 \tag{9}$$

where $d_{ik}$ is an inner product norm defined as:


$$d_{ik}^2 = \left\| x_k - v_i \right\|_A^2 \tag{10}$$

where $A$ is a positive definite matrix and $m \in (1, \infty)$ is the weighting exponent. If the parameters $m$ and $c$ are fixed, then $(U, V)$ may be globally minimal for $J_m(U, V)$ only if:

$$u_{ik} = \left[ \sum_{j=1}^{c} \left( \frac{\left\| x_k - v_i \right\|_A}{\left\| x_k - v_j \right\|_A} \right)^{\frac{2}{m-1}} \right]^{-1}, \qquad 1 \le i \le c, \; 1 \le k \le N \tag{11}$$

$$v_i = \frac{\sum_{k=1}^{N} \left( \mu_{ik} \right)^m x_k}{\sum_{k=1}^{N} \left( \mu_{ik} \right)^m}, \qquad 1 \le i \le c \tag{12}$$

#### FCM Algorithm:

The fuzzy c-means solution can be described as:

1. Fix $c$ and $m$, set $p = 0$ and initialize the fuzzy partition matrix $U^{(0)}$.
2. Calculate the fuzzy centers $V^{(p)}$ for each cluster using (12).
3. Update the fuzzy partition matrix $U^{(p)}$ for the $p$-th iteration using (11).
4. If $\left\| U^{(p)} - U^{(p-1)} \right\| < \varepsilon$, stop; otherwise set $p = p + 1$ and return to the second step.


In this algorithm, the parameter $m$ determines the fuzziness of the clusters; the larger $m$ is, the fuzzier the clusters. As $m \to 1$ the FCM solution becomes the crisp one, and as $m \to \infty$ the solution is as fuzzy as possible. There is no theoretical reference for the selection of $m$, and usually $m = 2$ is chosen. After the shapes of the membership functions are fixed, the T-ANN's learn each one of them.
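The FCM iteration of equations (11) and (12) can be sketched in a few lines of NumPy. This is a minimal illustration assuming the Euclidean norm ($A = I$) and two synthetic Gaussian blobs; the function and variable names are our own:

```python
import numpy as np

def fcm(X, c, m=2.0, eps=1e-5, max_iter=100, seed=0):
    """Fuzzy c-means with A = I: alternate the center update (12) and
    the membership update (11) until the partition matrix settles."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0)                                   # memberships sum to 1 per point
    for _ in range(max_iter):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)     # eq (12): cluster centers
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2)
        d = np.fmax(d, 1e-12)                            # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=0)                    # eq (11): memberships
        if np.linalg.norm(U_new - U) < eps:              # step 4: convergence test
            return U_new, V
        U = U_new
    return U, V

# Two well-separated blobs: the fitted centers land near (0, 0) and (10, 10).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)), rng.normal(10.0, 0.5, (50, 2))])
U, V = fcm(X, c=2)
print(np.round(V[np.argsort(V[:, 0])], 1))
```

With $m = 2$ the membership update reduces to inverse-squared-distance weighting, which is why the centers stay essentially at the blob means here.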

#### *Predictive Method*

Sometimes the controller response can be improved by using predictors, which provide future information and allow it to respond in advance. One of the simplest yet most powerful predictors is based on exponential smoothing. A popular approach is Holt's method.

Exponential smoothing is computationally simple and fast; at the same time, it can perform well in comparison with other, more complex methods. The series used for prediction is considered a composition of more than one structural component (average and trend), each of which can be individually modeled. We will use series without seasonality in the predictor. This type of series can be expressed as:

$$y(\mathbf{x}) = y\_{av}(\mathbf{x}) + py\_{tr}(\mathbf{x}) + e(\mathbf{x}); p = 0 \tag{13}$$

where $y(x)$, $y_{av}(x)$, $y_{tr}(x)$ and $e(x)$ are the data, the average, the trend and the error components, each individually modeled using exponential smoothing. The $p$-step-ahead prediction [3] is given by:

$$y\*(x+p|k) = y\_{av}(x) + py\_{tr}(x) \tag{14}$$

The average and the trend components are modeled as:

$$y_{av}\left(x\right) = \left(1 - \alpha\right) y\left(x\right) + \alpha \left( y_{av}\left(x - 1\right) + y_{tr}\left(x - 1\right) \right) \tag{15}$$

$$y_{tr}\left(x\right) = \left(1 - \beta\right) y_{tr}\left(x - 1\right) + \beta \left( y_{av}\left(x\right) - y_{av}\left(x - 1\right) \right) \tag{16}$$

where $y_{av}(x)$ and $y_{tr}(x)$ are the average and the trend components of the signal, and $\alpha$ and $\beta$ are the smoothing coefficients, whose values range over $(0, 1)$. $y_{av}$ and $y_{tr}$ can be initialized as:

$$y\_{av}\left(1\right) = y\left(1\right)\tag{17}$$


$$y\_{tr}\left(1\right) = \frac{\left(y\left(1\right) - y\left(0\right)\right) + \left(y\left(2\right) - y\left(1\right)\right)}{2} \tag{18}$$
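Equations (13) to (18) translate directly into a short routine. A minimal sketch, following the chapter's convention in which $\alpha$ and $\beta$ weight the *previous* estimates; the function name is our own:

```python
def holt_predict(y, alpha, beta, p=1):
    """p-step-ahead prediction by Holt's exponential smoothing, eqs (13)-(18)."""
    av = y[1]                                          # eq (17): initial average
    tr = ((y[1] - y[0]) + (y[2] - y[1])) / 2.0         # eq (18): initial trend
    for x in range(2, len(y)):
        av_prev = av
        av = (1 - alpha) * y[x] + alpha * (av_prev + tr)   # eq (15): average update
        tr = (1 - beta) * tr + beta * (av - av_prev)       # eq (16): trend update
    return av + p * tr                                 # eq (14): y*(x + p | x)

# A noiseless linear trend is extrapolated exactly one step ahead.
series = [2.0 * k for k in range(10)]                  # 0, 2, 4, ..., 18
print(holt_predict(series, alpha=0.5, beta=0.5, p=1))  # -> 20.0
```

For a pure linear trend the average tracks the data and the trend estimate stays at the slope, so the predictor is exact; on noisy data the coefficients trade responsiveness against smoothing.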

**Figure 15.** Block diagram of the neuro-fuzzy controller with one input, one output

The execution of the controller depends on several VI's (more information in [4]), which are explained in the following steps:

**Step 1.** This is a predictor VI based on exponential smoothing; the coefficients alpha and beta must be fed as scalar values. The past and present information must be fed in a 1D array with the newest information in the last element of the array.

**Step 2.** This VI executes the FCM method; the information of the crisp inputs must be fed, as well as the stop conditions for the cycle; the program will return the coefficients of the trigonometric networks, the fundamental frequency and other useful information.

**Step 3.** These three VI's execute the evaluation of the premises. The first, on the top left, is a generator of the combinations of rules that depends on the number of inputs and membership functions. The second one, on the bottom left, combines and evaluates the input membership functions. The last one, on the right, uses the information on the combinations as well as the evaluated membership functions to obtain the premises of the IF-THEN rules.

**Step 4.** This VI creates a 1D array with the number of rules of the system, where n is the number of rules; it is used in the defuzzification process.

**Step 5.** This VI evaluates a T-ANN on each of the rules.

**Step 6.** This VI defuzzifies using the Takagi method with the obtained crisp outputs from the T-ANN.

This version of one input, one output of the controller was modified to have three inputs and four outputs; the block diagram is shown in figure 16.

**Figure 16.** Neuro-Fuzzy controller block diagram

Each input is fuzzified with four membership functions whose form is defined by the FCM algorithm. The crisp distances gathered by the distance sensors are clustered by FCM and then T-ANN's are trained. As can be seen in figure 17, the main shape of the clusters is learnt by the neural networks and no main information is lost.

**Figure 17.** Input membership functions

With three inputs and four membership functions, there is a total of sixty-four rules that can be evaluated. These rules are *IF-THEN* and have the following form: **IF** $x_1$ is $\mu_i$ & $x_2$ is $\mu_j$ & $x_3$ is $\mu_k$ **THEN** *PWM Left Engine*, *Direction Left Engine*, *PWM Right Engine*, *Direction Right Engine*.

The value of each rule is obtained through the min inference method, which consists of evaluating the $\mu$'s and returning the smallest one for each rule. The final system output is obtained by:

$$Output = \frac{\sum_{i=1}^{r} \left[ \min\left( \mu_{i1}, \mu_{i2}, \mu_{i3} \right) NN_i\left( x_1, x_2, x_3 \right) \right]}{\sum_{i=1}^{r} \min\left( \mu_{i1}, \mu_{i2}, \mu_{i3} \right)} \tag{19}$$
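Equation (19) is a weighted average of the per-rule consequents, with each rule's weight given by min-inference over its three memberships. A minimal sketch with hypothetical membership and consequent values (the real consequents are the T-ANN outputs):

```python
import numpy as np

def neuro_fuzzy_output(mu, nn_out):
    """Eq (19): min-inference per rule, then a weighted average of the
    per-rule consequents. mu: (r, 3) memberships; nn_out: (r,) consequents."""
    w = mu.min(axis=1)                      # min inference for each rule
    return float((w * nn_out).sum() / w.sum())

# Hypothetical two-rule case: the output is pulled toward the rule
# that fires more strongly (firing strengths w = [0.7, 0.2]).
mu = np.array([[0.8, 0.9, 0.7],
               [0.2, 0.3, 0.4]])
cons = np.array([100.0, 50.0])              # e.g., PWM consequent values
print(neuro_fuzzy_output(mu, cons))         # -> 88.888...
```

Here the result is $(0.7 \cdot 100 + 0.2 \cdot 50)/0.9 \approx 88.9$, between the two consequents but closer to the strongly firing rule.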


## **5. Results using the controller**

The wheelchair was set on a human-sized chessboard and the pieces were set in a maze, presented in Figure 18 with some of the trajectories described by the chair.

**Figure 18.** Wheelchair maze and trajectories

The wheelchair always managed to avoid obstacles, but failed to return to the desired direction. It also failed to recognize whether the obstacle was a human being or an object, and thus to behave differently when avoiding each.

#### Controller Enhancements

#### Direction Controller

As can be seen from the previous results, the wheelchair will effectively avoid obstacles, but the trajectories that it follows are always different: sometimes it may follow the directions we want, but other times it will not. A direction controller can solve this problem, so we need a sensor to obtain feedback on the direction of the wheelchair. A compass could be an option to sense the direction, either the 1490 (digital) or the 1525 (analog) from Images SI [8]. After the electric wheelchair controller avoids an obstacle, the compass sensor will give it information to return to the desired direction, as shown in Figure 19.

**Figure 19.** The wheelchair recovering the direction with the Direction Controller.

A fuzzy controller that controls the direction can be used in combination with the obstacle avoidance controller. The direction controller will have as input the difference between the desired and the current direction of the wheelchair. The direction magnitude describes how many degrees the chair will have to turn, and the sign indicates whether it has to be done in one direction or the other. The output is the PWM and the direction that each wheel has to take in order to compensate.

Three fuzzifying input membership functions will be used for the degrees and the turning direction, as shown in figure 20. The range for the degrees is [0, 360] degrees and the turning direction is [-180, 180], also in degrees.

**Figure 20.** Input membership functions for Degrees and Direction

The form of the rule is the following: **IF** *degree* is $A_i$ & *direction* is $B_j$ **THEN** *PWM Left Engine*, *Direction Left Engine*, *PWM Right Engine*, *Direction Right Engine*.

Figure 21 shows the rule base with the nine possible combinations of inputs and outputs. The outputs are obtained with the rule consequences using singletons.

The IF-THEN Rules (CW: clockwise; CCW: counterclockwise; NC: no change):

1. IF Degree is Small & Direction is Left THEN PWMR IS Very Few, PWML IS Very Few, DIRR is CCW, DIRL is CW.
2. IF Degree is Small & Direction is Center THEN PWMR IS Very Few, PWML IS Very Few, DIRR is NC, DIRL is NC.
3. IF Degree is Small & Direction is Right THEN PWMR IS Very Few, PWML IS Very Few, DIRR is CW, DIRL is CCW.
4. IF Degree is Medium & Direction is Left THEN PWMR IS Some, PWML IS Some, DIRR is CCW, DIRL is CW.
5. IF Degree is Medium & Direction is Center THEN PWMR IS Some, PWML IS Some, DIRR is NC, DIRL is NC.
6. IF Degree is Medium & Direction is Right THEN PWMR IS Some, PWML IS Some, DIRR is CW, DIRL is CCW.
7. IF Degree is Large & Direction is Left THEN PWMR IS Very Much, PWML IS Very Much, DIRR is CCW, DIRL is CW.
8. IF Degree is Large & Direction is Center THEN PWMR IS Very Much, PWML IS Very Much, DIRR is NC, DIRL is NC.
9. IF Degree is Large & Direction is Right THEN PWMR IS Very Much, PWML IS Very Much, DIRR is CW, DIRL is CCW.

For the direction of the wheel, three states are used: clockwise (1), counterclockwise (-1) and stopped (0). The fuzzy output is rounded to the nearest value and the direction is obtained.

**Figure 21.** Rule Base and output membership functions for the Direction controller

The surfaces for the PWM and the direction are shown in Figure 22. For both PWM outputs the surface is the same, while for the direction the surfaces change and completely invert from left to right.

**Figure 22.** Surfaces for PWM and Direction outputs

This controller will act when the distances recognized by the sensors are Very Far, because the system will have enough space to maneuver and recover the direction that it has to follow, otherwise the obstacle avoidance controller will have the control of the wheelchair.
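The nine-rule direction controller can be sketched as follows. The triangular breakpoints and the PWM singleton values here are illustrative assumptions (the chapter's Figures 20 and 21 define the real shapes), with CW = 1, CCW = -1, NC = 0, and negative direction values taken as left turns:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    rise = (x - a) / (b - a) if b > a else 1.0
    fall = (c - x) / (c - b) if c > b else 1.0
    return max(0.0, min(rise, fall))

# Assumed breakpoints and singletons; Figures 20-21 define the real ones.
DEGREE_MF = {"Small": (0, 0, 180), "Medium": (0, 180, 360), "Large": (180, 360, 360)}
PWM_SINGLETON = {"Small": 20.0, "Medium": 50.0, "Large": 90.0}  # Very Few / Some / Very Much

def direction_controller(degree, direction):
    """Nine-rule base collapsed: PWM depends only on the Degree error,
    while DIRR/DIRL depend only on the turn side."""
    num = den = 0.0
    for label, (a, b, c) in DEGREE_MF.items():
        w = tri(degree, a, b, c)              # firing strength of the rule row
        num += w * PWM_SINGLETON[label]       # singleton consequent (Figure 21)
        den += w
    pwm = num / den if den else 0.0
    dir_r = int(np.sign(direction))           # left turn (negative) -> DIRR is CCW
    return pwm, dir_r, -dir_r                 # PWM, DIRR, DIRL

print(direction_controller(90.0, -45.0))      # -> (35.0, -1, 1)
```

A 90-degree error fires Small and Medium equally, so the defuzzified PWM lands halfway between their singletons, and the left turn drives the wheels in opposite senses.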

#### Obstacle Avoidance Behavior


Cities are not designed for transit by disabled people; thus one of their main concerns is the paths and obstacles they have to cope with to get from one point to another. Big cities are becoming more and more crowded, so moving around on streets with a wheelchair is a big challenge.

If temperature and simple shape sensors are installed in the wheelchair (Figure 23 shows the proposal), then some kind of behavior can be programmed so the system can differentiate between a human being and a non-human obstacle. Additionally, the use of a speaker or a horn is needed to ask people to move out of the way of the chair.

**Figure 23.** Wheelchair with temperature sensors for obstacle avoidance

The proposed behavior is based on a fuzzy controller which has as input the temperature of the obstacle, in degrees centigrade, and as output the time in seconds the wheelchair will be stopped while a message or a horn is played. It has three triangular fuzzy input membership functions, as shown in Figure 24. The output membership functions are two singletons, as can be seen in Figure 25.

**Figure 24.** Input membership function for temperature

The IF-THEN Rules:


1. IF Temperature is Low THEN TIME IS Few.
2. IF Temperature is Human THEN TIME IS Much.
3. IF Temperature is Hot THEN TIME IS Few.

**Figure 25.** Singleton outputs for the temperature controller

The controller response is shown in Figure 26.

**Figure 26.** Time controller response

## **6. Structural design**

The full system was built on a Quickie Wheelchair. The full system diagram is shown in Figure 27. Using LabVIEW, three different kinds of controls were programmed:

1. Voice control
2. Eye-movements control
3. Keyboard control

**Figure 27.** Control structure. The user can select any of the controls depending on his/her needs and the surrounding environment.

The wheelchair is controlled using two coils to generate an electromagnetic field which can be detected by two sensors. Depending on the density of the magnetic field and its intensity, the motors can be controlled to move the wheelchair in any direction. However, this solution also requires that both coils be fixed in place so they cannot move with respect to the sensors; the sensed field will then always be the same for one given configuration. The use of fuzzy logic to design an obstacle-avoidance system and the Hebbian network used to determine the different kinds of eye movements were the tools that helped us to obtain efficient answers. Specific hardware is required to obtain the signals to be processed by these systems. Each control was programmed in LabVIEW in different files, and all of them were included in a LabVIEW project. Each one must be executed separately, so when using the voice controller, the eye controller cannot be used. In the future, the obstacle-avoidance system will have higher priority than the eye and voice controllers.

The hardware used is:

1. DAQ: for sensing the different voltage signals produced by the eyes and to set the directions using the manual control. Each one gets values from different ports of a NI USB-6210. In the case of the voltages generated by the eyes, the data acquisition was connected to the analog port and the manual control to the digital input port.
2. CompactRIO: This device was used to generate PWM to the coils, allowing us to have control for the different directions. The cRIO model 9014 had the following modules:
   a. Two H-bridges: for controlling the PWM.
   b. 5 V TTL Bidirectional digital I/O Module.
3. BasicStamp [12]: This device is used to acquire the signals detected from three ultrasonic distance sensors.


4. Three ultrasonic sensors that measure distance and are used to help the obstacle-avoidance system. Two of them are placed in front of the wheelchair and one at the back.

Wheelchair and Virtual Environment Trainer by Intelligent Control 291

(20)

is the kinetic friction

 cos *<sup>k</sup> a g sen* 

The 3D virtual trainer presents the dynamic obstacles: pedestrians, animals; and the static obstacles: furniture, walls, etc. that would most likely be seen in the locations in which the patient conducts his/her activities. The simulator offers online recommendations on how to

Within the virtual world, sounds and noises, both from dynamic objects and the environment itself, are presented. The user can listen to conversations when he/she is outside: barking, car engines, among others. These sounds vary depending on the patient's position in the virtual world and interaction with certain objects. For example, the volume of dynamic objects (such as people or animals) increases or decreases proportionally according

As a way to help users, they can see the virtual world from different perspectives. The first perspective, or first person, presents the objects to the patient as seen from the intelligent wheelchair. This is the most useful because it is how the user will see them in real life. The third-person perspective offers a view of the user's avatar and the objects closest to it, that is, the camera is placed above and behind, five meters in each direction, with a 45° tilt. Thus, the user can see more details of its virtual environment. Finally, a perspective that is independent from the user's movement is offered, making it possible to explore the entire

**Figure 29.** This image shows the virtual trainer along with the augmented reality interface. In the upper left corner, information about the performance of the user is shown: elapsed time, found targets. In the upper right corner there are more statistics such as number of collisions and the name of the nearest object. In the lower area data that can help the user to navigate more properly or about the closest

Where *a* is the acceleration in the inclined plane, *g* is gravity, *<sup>k</sup>*

is the inclination angle.

to their distance from the user in the virtual environment.

virtual world to get a look at the objects to interact with later.

coefficient and

3D Perspectives

Statistics: augmented reality

avoid these obstacles or interact with them.

Static and dynamic obstacles

 

5. A laptop to execute the programs and visualize the different commands introduced by the user.

## **7. Virtual trainning**

## **7.1. Augmented reality**

The simulator has a variety of modules that make up the augmented reality of the virtual environment. Augmented reality refers to those aspects that help to represent real physical situations and whose data is printed on screen to help users to understand them.

Distance to close objects

Taking the user's position in the virtual world as the center, a circle with a 5-meters radius is generated; then the distance to all objects within this area is computed with vector operations and the screen displays the distance to the nearest one and its name. For this, a file with specific information about all the objects at the simulator stage is consulted. This information allows the user to gain experience of how to move in small spaces, know the speed of the chair and the time between the moment when the command is indicated and the moment when it is executed (system response time).

Variation of the wheelchair's movement according to the terrain

Depending on the terrain where the patient moves in the virtual world, different physical phenomena are represented. If the terrain's surface is uneven, such as grass or pavement, vibrations are displayed on the screen and there is a change in the traction of the intelligent wheelchair. If the terrain is tilted, the perspective tilts and the corresponding acceleration changes are re-created as shown by the following formula obtained from the analysis of forces in Figure 28.

**Figure 28.** Forces that act upon the user and the intelligent wheelchair while moving on a tilted plane.


$$a = g\left(\sin\theta + \mu_k \cos\theta\right) \tag{20}$$

Where *a* is the acceleration along the inclined plane, *g* is gravity, *μ<sub>k</sub>* is the kinetic friction coefficient and *θ* is the inclination angle.
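As a quick numerical check of equation (20) — the friction coefficient and angle below are illustrative values only, not parameters from the chapter:

```python
import math

def incline_acceleration(theta_deg, mu_k, g=9.81):
    """Equation (20): a = g * (sin(theta) + mu_k * cos(theta))."""
    t = math.radians(theta_deg)
    return g * (math.sin(t) + mu_k * math.cos(t))

# e.g. a 10-degree ramp with a kinetic friction coefficient of 0.3
a = incline_acceleration(10.0, 0.3)
```

On a flat, frictionless surface the expression reduces to zero, and friction only adds to the magnitude on a tilted one, as the force analysis of Figure 28 requires.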

Static and dynamic obstacles


The 3D virtual trainer presents dynamic obstacles (pedestrians, animals) and static obstacles (furniture, walls, etc.) that would most likely be found in the locations where the patient conducts his/her activities. The simulator offers online recommendations on how to avoid these obstacles or interact with them.

Within the virtual world, sounds and noises, both from dynamic objects and from the environment itself, are presented. The user can hear outdoor conversations, barking, car engines, among others. These sounds vary depending on the patient's position in the virtual world and interaction with certain objects. For example, the volume of dynamic objects (such as people or animals) increases or decreases proportionally to their distance from the user in the virtual environment.
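The distance-based volume change can be modeled, for instance, with a simple linear attenuation. The linear falloff and the 20-meter audible range here are assumptions for illustration; the trainer may use a different curve:

```python
def volume_at(distance, max_volume=1.0, audible_range=20.0):
    """Volume of a sound source heard at `distance` meters: full volume
    at the source, falling off linearly, silent beyond `audible_range`."""
    if distance >= audible_range:
        return 0.0
    return max_volume * (1.0 - distance / audible_range)
```

A barking dog five meters away is thus louder than the same dog fifteen meters away, matching the proportional behavior described above.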

### 3D Perspectives

As a way to help users, they can see the virtual world from different perspectives. The first perspective, or first person, presents the objects to the patient as seen from the intelligent wheelchair. This is the most useful because it is how the user will see them in real life. The third-person perspective offers a view of the user's avatar and the objects closest to it, that is, the camera is placed above and behind, five meters in each direction, with a 45° tilt. Thus, the user can see more details of its virtual environment. Finally, a perspective that is independent from the user's movement is offered, making it possible to explore the entire virtual world to get a look at the objects to interact with later.
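The third-person placement described above ("above and behind, five meters in each direction, with a 45° tilt") can be sketched as follows. This is a simplified Python sketch; the coordinate convention (avatar facing +z, y up) is an assumption, not the simulator's actual XNA code:

```python
import math

def third_person_camera(avatar_pos, height=5.0, back=5.0):
    """Place the camera `back` meters behind and `height` meters above
    the avatar, looking down at it. With height == back the downward
    tilt is exactly 45 degrees."""
    x, y, z = avatar_pos
    cam_pos = (x, y + height, z - back)
    tilt_deg = math.degrees(math.atan2(height, back))
    return cam_pos, tilt_deg

pos, tilt = third_person_camera((0.0, 0.0, 0.0))
```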

Statistics: augmented reality

**Figure 29.** This image shows the virtual trainer along with the augmented reality interface. In the upper left corner, information about the performance of the user is shown: elapsed time and found targets. In the upper right corner there are more statistics, such as the number of collisions and the name of the nearest object. In the lower area, data is displayed that can help the user navigate more properly or that describes the closest object.

As mentioned in the description of some other modules of the virtual trainer, the user is shown at all times information about objects in the simulator: the distance to the nearest object, its name and the distance to the nearest target; as well as information about his/her performance: time spent at each level of the simulator and the number of collisions. Likewise, the screen's lower left corner provides information on the nearest object, such as extra precautions to consider, its material and dimensions, or the best instruction for driving the wheelchair through the user's current location. Also, if relevant, the user is informed about changes in terrain and sound as well as the change of perspective. These statistics are shown in Figure 29 above.



In this approach, the user is provided not only with a way to learn how to control the intelligent wheelchair, but also with a space where he/she can find cultural information about the objects that constitute the virtual world. Likewise, the augmented reality interface facilitates the user's training and makes it more pleasant.

## **7.2. Simulator performance**

Due to the intense computing conducted in real time to show the aforementioned statistics and the level of detail present in virtual environments, such as the interior of houses, the implementation of algorithms that provide greater game performance and visual quality is necessary.

In simulators, video games and other interactive media, 3-D models and animations are crucial components. Games like Gears of War and Half-Life 2 would not be as striking if not for the vivid, detailed models and animations. Games within the XNA Framework, which are able to take advantage of the GPU (Graphics Processing Unit) of the Xbox 360 or PC, are no exception. Many advanced rendering techniques can be exploited through the XNA Framework, such as hardware instancing.

Traditional implementations of 3-D models impose a large overhead on the CPU (Central Processing Unit) and are not efficient or completely instantiated. Many processes are performed on the CPU, and each part of the model requires its own call to the Draw method — sometimes multiple calls if the model has a large number of polygons. This means that, in the XNA framework, creating a large number of models on the CPU creates a bottleneck.

The aim of this work is to present an alternative to "traditional" techniques to instantiate 3-D models. This technique even makes it possible to render animated models and, depending on their complexity, to draw more than 45 models with a single call to Draw. Since the Xbox has a powerful graphics card, this is very desirable, and in any case the level of processing on the CPU is reduced.

Hardware instancing works by sending two streams of vertexes to the video card, in order to send information about the objects' vertexes (called a regular vertex buffer) and information for the instances (position and color) to the GPU simultaneously. The advantage of hardware instancing is that a flag can be used to indicate which data flow (stream) contains information about certain vertexes and instances. Thus, only the original mesh has to be introduced in the data sequence of vertexes, and XNA ensures that these data will be repeated for each instance. This saves memory and is generally easier to handle, since you can use the original mesh and the only thing needed is a call for each subset. This matters because the main bottleneck in the GPU occurs when transformations are made to the pixels, such as the application of textures. With hardware instancing, the common operations for these transformations are not made every time an object is going to be drawn; only a relationship between the data flows, through the flags, is established. The arrangement of data streams is shown below in Figure 30.
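The two-stream idea can be illustrated with a plain-Python analogy (not XNA code): the geometry is stored once, and only the small per-instance stream grows with the number of copies. The vertices, positions and colors below are hypothetical:

```python
# Shared geometry: stored once, regardless of how many copies are drawn.
mesh_vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]

# Second, much smaller stream: one (position, color) entry per instance.
instances = [
    ((0.0, 0.0, 0.0), "red"),
    ((5.0, 0.0, 2.0), "green"),
    ((1.0, 0.0, 7.0), "blue"),
]

def expand(mesh, instances):
    """What the GPU effectively does: combine the shared mesh with each
    instance's data, without the CPU ever duplicating the mesh."""
    return [
        ([(vx + px, vy + py, vz + pz) for vx, vy, vz in mesh], color)
        for (px, py, pz), color in instances
    ]

drawn = expand(mesh_vertices, instances)
```

Memory holds three vertices plus three instance records, yet three full models are drawn — the saving grows with the vertex count of the mesh.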

**Figure 30.** Array of the data sequences of the vertex buffers and their pointers


To accomplish hardware instancing, the following classes, processors and pipelines were used:

- InstancedModelProcessor. The processor that does the heavy work of converting the model's data into vertexes, giving an InstancedModelContent as output. Basically, a cycle goes through the vertexes and assigns each one a corresponding texture.
- InstanceSkinnedModelPart. A wrapper around the ModelMesh class, with additional functionality to support instancing.
The last point to consider in hardware instancing is that each row of the processed mesh transforms is encoded as a pixel, and because the matrix dimensions are 4x4, 4 pixels represent a transform. Then a texture is assigned to each transformation according to the vertexes that represent them. This form of encoding is depicted in Figure 31.
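The row-per-pixel encoding can be sketched as follows — an illustrative Python sketch of the layout only, not the shader-side code:

```python
def matrix_to_pixels(m):
    """Encode a 4x4 transform as 4 RGBA 'pixels', one matrix row per
    pixel, as in the hardware-instancing transform texture: since a
    pixel holds 4 channels (R, G, B, A), 4 pixels hold one transform."""
    assert len(m) == 4 and all(len(row) == 4 for row in m)
    return [tuple(row) for row in m]

identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
pixels = matrix_to_pixels(identity)
```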


**Figure 31.** Encoding of the mesh transforms as in the Hardware Instancing method

## **7.3. Labview Interface for eye and voice control**

The Labview interface that verifies the operation and control of Windows Speech Recognition is shown in Figure 20. This window appears in the background of the computer when the game starts, if the user wants to verify the voice instructions identified (these commands are recognized only once the Labview program has started). The window is launched by means of an object of type Process, which calls the Labview file (Virtual Instrument) from XNA.

To communicate the two applications (Labview and XNA) and control the virtual trainer through the data acquired by Labview, parallel access to a file containing the instructions executed by the user in string format was implemented; these commands are then encoded to generate an event in XNA. This part of the project could be improved to move the information between the two applications more efficiently and with fewer exception handlers.
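The file-based exchange can be sketched as follows. This is a Python stand-in for the Labview/XNA pair; the command names, event codes and file name are hypothetical, not those of the actual project:

```python
import os
import tempfile

# Hypothetical mapping from instruction strings to XNA event codes.
EVENTS = {"FORWARD": 1, "BACK": 2, "LEFT": 3, "RIGHT": 4, "STOP": 5}

def write_command(path, command):
    """Labview side: append the recognized instruction as a string."""
    with open(path, "a") as f:
        f.write(command + "\n")

def read_events(path):
    """XNA side: read the command strings and encode them as events."""
    with open(path) as f:
        return [EVENTS[line.strip()] for line in f if line.strip() in EVENTS]

path = os.path.join(tempfile.gettempdir(), "wheelchair_commands.txt")
open(path, "w").close()  # start with an empty exchange file
write_command(path, "FORWARD")
write_command(path, "LEFT")
events = read_events(path)
```

Unknown strings are simply skipped, which mirrors the need for defensive handling when two processes share one file.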

## **7.4. Web page and database**

In this section, the project's web page, where the simulator's new users can register to monitor their activity and performance, is explained. The patient's progress is measured through the mentioned statistics, such as the number of collisions, instructions used and elapsed time in each level, among other variables. Based on them, a score is assigned to the user according to the weights given to each statistic. This page is also available to the patients' doctors; the page was developed using Microsoft Web Developer 2008.
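The weighted scoring can be sketched as follows. The chapter states only that a score is computed from weighted statistics; the weight values and statistic names below are hypothetical:

```python
# Hypothetical weights: collisions and elapsed time penalize the score,
# found targets raise it.
WEIGHTS = {"collisions": -10.0, "targets_found": 25.0, "elapsed_time_s": -0.5}

def score(stats):
    """Weighted sum of the level statistics, clamped at zero."""
    s = sum(WEIGHTS[k] * v for k, v in stats.items() if k in WEIGHTS)
    return max(s, 0.0)

example = {"collisions": 2, "targets_found": 5, "elapsed_time_s": 90}
```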

Once registered, users can log in to access their information as well as that of the other players. Likewise, the user can also share information with other people and send them a message through the web page. This is also a quick means of communication for specialists who care for patients with disabilities and, at the same time, a space where users can share the information not only about the virtual trainer but about their life experiences which can be useful for other users. With this, a community can be created where patients can identify with and help each other.


The screen in Figure 32 shows the storage of users' data through SQL Express Edition. The general table that keeps a record of all the registered patients shows their best time (time in which they completed a level of the simulator), best score, number of games played, as well as their score average.

Similarly, more detailed statistics for each individual user are stored, which can be seen in Figure 33: scores, number of collisions, best time and most used instruction. In this way the patients can monitor their progress and work on the command that is most difficult for them. To help them, the targets that they have to collect in each level of the virtual trainer are placed in such a way as to let the users strengthen the instructions with which they collide the most and lower the percentage of the instruction they most widely use.
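A simplified model of this target-placement rule is sketched below: each command receives a placement weight proportional to its collision count, so targets land on routes that exercise the commands the user struggles with most. The command names are hypothetical:

```python
def placement_bias(collisions_per_command):
    """Normalize per-command collision counts into placement weights
    that sum to 1; commands with more collisions get more targets."""
    total = sum(collisions_per_command.values())
    if total == 0:
        n = len(collisions_per_command)
        return {c: 1.0 / n for c in collisions_per_command}
    return {c: v / total for c, v in collisions_per_command.items()}

bias = placement_bias({"FORWARD": 1, "LEFT": 6, "RIGHT": 3})
```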

**Figure 32.** Information kept in the SQL Express database about the statistics of the registered users


**Figure 33.** Statistics kept for every user, on which the virtual trainer bases the way it distributes the targets in the virtual world to help the patients strengthen their skills in using the intelligent wheelchair.

All the previous information is transmitted to the database using the libraries System.Data.SqlClient and System.Data.OleDb once a user has completed a level of the virtual trainer. The information consists of the statistics mentioned in Section IV.1 and shown in Figure 5, plus some mathematical calculations to obtain averages and keep counts.
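The storage step can be sketched with an in-memory database. The actual system uses SQL Server Express via System.Data.SqlClient; this Python/sqlite3 stand-in and its one-table schema are illustrative only:

```python
import sqlite3

# Hypothetical schema standing in for the SQL Express tables.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE results (
    user TEXT, score REAL, collisions INTEGER, time_s REAL)""")

def record_level(user, score, collisions, time_s):
    """Store one completed level, as done after each level of the trainer."""
    conn.execute("INSERT INTO results VALUES (?, ?, ?, ?)",
                 (user, score, collisions, time_s))

def summary(user):
    """Best time, games played and score average for the general table."""
    row = conn.execute(
        "SELECT MIN(time_s), COUNT(*), AVG(score) FROM results WHERE user=?",
        (user,)).fetchone()
    return {"best_time": row[0], "games": row[1], "avg_score": row[2]}

record_level("ana", 80.0, 2, 120.0)
record_level("ana", 95.0, 1, 100.0)
```

The aggregates (`MIN`, `COUNT`, `AVG`) correspond to the best time, games played and score average shown in the general table of Figure 32.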

## **8. Results**

The following results show the controller performance. The voice controller increases the accuracy; if the wheelchair needs to work in a noisy environment, a noise cancelation system has to be included. The results are quite good when the wheelchair works in a normal noise environment, ranging from 0 to 70 dB, where an average recognition value of around 90% is obtained. The precision of the EOG recognition system changes over time, because the user requires a certain length of time to learn to move the eyes in a way the system can recognize. At first, the user is not familiar with the whole system, so the signal is not well defined and the system shows medium performance. After six weeks, the precision increases to around 94%. Figure 34 shows the results.

**Figure 34.** a. Tests on the voice control system response. b. EOG system response. Experimental results.

## **9. Conclusions**


The complete system works well in a laboratory environment. The signals from eye movement and voice commands are translated into actual movements of the chair, allowing people who are disabled and cannot move their hands or even their head to move freely through spaces. There is still no full version that can run the obstacle-avoidance system at the same time as the chair is being controlled with eye movement; this should be the next step for further work. As intended, the four main control systems that give the wheelchair more compatibility and adaptability to patients with different disorders were successfully completed. This allows the chair to be moved with the eyes by those who cannot speak; speech recognition was included for people who cannot move, and directional buttons (joystick) for any other users. Many problems arose when trying to interface with the systems already built by the manufacturer; the use of magnetic inductors is one of the temporary solutions that should be eliminated, even though the resulting joystick emulation works well. These inductors produce considerable power loss, noticeably reducing the in-use time of the batteries. They also introduce a small delay in the use of the Windows Vista Speech Recognition software and some faults into our system, since this user interface sometimes does not recognize what it was expected to, which is not good enough for a system like ours that requires a quick response to commands. This project demonstrated how intelligent control systems can be applied to improve existing products. The use of intelligent algorithms broadened the possibilities of interpretation and manipulation.

## **Author details**

Pedro Ponce, Arturo Molina and Rafael Mendoza *Escuela de Ingeniería y Arquitectura, Instituto Tecnológico y de Estudios Superiores de Monterrey, Mexico City, Mexico* 

## **Acknowledgement**

This work is supported by NEDO Japan (New Energy and Industrial Technology Development Organization) project on "Intelligent RT Software Project".

## **10. References**

[1] International Classification of Functioning, Disability and Health: ICF. World Health Organization, Geneva, 2001, 228 pp.
[5] P. Ponce and F. D. Ramirez, *Intelligent Control Systems with LabVIEW*. United Kingdom: Springer, 2009.
[7] R. Barea, L. Boquete, M. Mazo, E. López and L. M. Bergasa, *Aplicación de electrooculografía para ayuda a minusválidos* [Application of electrooculography as an aid for the disabled]. Alcalá de Henares, Madrid, Spain: Universidad de Alcalá.
[15] F. D. Ramirez and D. Mendez, *Neuro-Fuzzy Navigation System for Mobile Robots*. Electronics and Communications Engineering Project, Instituto Tecnológico y de Estudios Superiores de Monterrey Campus Ciudad de México, 2007.

**Chapter 13** 

**A Hybrid of Fuzzy and Fuzzy Self-Tuning PID Controller for Servo Electro-Hydraulic System**

Kwanchai Sinthipsomboon, Issaree Hunsacharoonroj, Josept Khedari, Watcharin Po-ngaen and Pornjit Pratumsuwan

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/48614

© 2012 Sinthipsomboon et al., licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

**1. Introduction** 

The application of hydraulic actuation to heavy-duty equipment reflects the ability of the hydraulic circuit to transmit large forces and to be easily controlled. It has many distinct advantages, such as response accuracy, the self-lubricating and heat-transfer properties of the fluid, relatively large torques, large torque-to-inertia ratios, high loop gains, relatively high stiffness and small position error. Although the high cost of hydraulic components and the power unit, loss of power due to leakage, inflexibility, nonlinear response, and error-prone low-power operation tend to limit the use of hydraulic drives, they nevertheless constitute a large subset of all industrial drives and are extensively used in the transportation and manufacturing industries (Merrit, 1976; Rong-Fong Fung *et al*, 1997; Aliyari *et al*, 2007).

The Servo Electro-Hydraulic System (SEHS) is perhaps the most important such system because it takes advantage of both the large output power of traditional hydraulic systems and the rapid response of electric systems. However, there are also many challenges in the design of SEHS — for example, highly nonlinear phenomena such as fluid compressibility, the flow/pressure relationship and dead-band due to internal leakage and hysteresis, as well as the many uncertainties of hydraulic systems due to linearization. Therefore, it is quite difficult to achieve high-precision servo control using linear control methods (Rong-Fong Fung *et al*, 1997; Aliyari *et al*, 2007; Pratumsuwan *et al*, 2010).

The classical PID controller is the most popular control tool in many industrial applications because it can improve both the transient response and the steady-state error of the system at the same time. Moreover, it has a simple architecture and conceivable physical intuition

