**Part 1**

## **Applications**

**1** 

**Modular Robotic Approach** 

**for Endoluminal Surgery –** 

*1The University of Tokyo 2Scuola Superiore Sant'Anna 3Italian Institute of Technology* 

*4Waseda University* 

*1,4Japan 2,3Italy* 

**– Wireless Robotic Modules and a** 

Masakatsu G. Fujie, Arianna Menciassi and Paolo Dario

Kanako Harada, Ekawahyu Susilo, Takao Watanabe, Kazuya Kawamura,

The trend in surgical robots is moving from traditional master-slave robots to miniaturized devices for screening and simple surgical operations (Cuschieri, A. 2005). For example, capsule endoscopy (Moglia, A. 2007) has been conducted worldwide over the last five years with successful outcomes. To enhance the dexterity of commercial endoscopic capsules, capsule locomotion has been investigated using legged capsules (Quirini, M. 2008) and capsules driven by external magnetic fields (Sendoh, M. 2003; Ciuti, G. 2010; Carpi, F. 2009). Endoscopic capsules with miniaturized arms have also been studied to determine their potential for use in biopsy (Park, S.-K. 2008). Furthermore, new surgical procedures known as natural orifice transluminal endoscopic surgery (NOTES) and Single Port Access surgery are accelerating the development of innovative endoscopic devices (Giday, S. 2006; Bardaro, S.J. 2006). These advanced surgical devices show potential for the future development of minimally invasive and endoluminal surgery. However, the implementable functions in such devices are generally limited owing to space constraints. Moreover, advanced capsules or endoscopes with miniaturized arms have rather poor dexterity because the diameter of such arms must be small (i.e. a few millimeters), which results in a small force being

A modular surgical robotic system known as the ARES (Assembling Reconfigurable Endoluminal Surgical system) system has been proposed based on the aforementioned motivations (Harada, K. 2009; Harada, K. 2010; Menciassi, A. 2010). The ARES system is designed for screening and interventions in the gastrointestinal (GI) tracts to overcome the intrinsic limitations of single-capsules or endoscopic devices. In the proposed system,

**1. Introduction** 

generated at the tip.

**Reconfigurable Master Device** 

**in Surgical Applications** 

## **Modular Robotic Approach in Surgical Applications – Wireless Robotic Modules and a Reconfigurable Master Device for Endoluminal Surgery –**

Kanako Harada, Ekawahyu Susilo, Takao Watanabe, Kazuya Kawamura, Masakatsu G. Fujie, Arianna Menciassi and Paolo Dario

*1The University of Tokyo 2Scuola Superiore Sant'Anna 3Italian Institute of Technology 4Waseda University 1,4Japan 2,3Italy* 

## **1. Introduction**

The trend in surgical robots is moving from traditional master-slave robots to miniaturized devices for screening and simple surgical operations (Cuschieri, A. 2005). For example, capsule endoscopy (Moglia, A. 2007) has been conducted worldwide over the last five years with successful outcomes. To enhance the dexterity of commercial endoscopic capsules, capsule locomotion has been investigated using legged capsules (Quirini, M. 2008) and capsules driven by external magnetic fields (Sendoh, M. 2003; Ciuti, G. 2010; Carpi, F. 2009). Endoscopic capsules with miniaturized arms have also been studied to determine their potential for use in biopsy (Park, S.-K. 2008). Furthermore, new surgical procedures known as natural orifice transluminal endoscopic surgery (NOTES) and Single Port Access surgery are accelerating the development of innovative endoscopic devices (Giday, S. 2006; Bardaro, S.J. 2006). These advanced surgical devices show potential for the future development of minimally invasive and endoluminal surgery. However, the implementable functions in such devices are generally limited owing to space constraints. Moreover, advanced capsules or endoscopes with miniaturized arms have rather poor dexterity because the diameter of such arms must be small (i.e. a few millimeters), which results in a small force being generated at the tip.

A modular surgical robotic system known as the ARES (Assembling Reconfigurable Endoluminal Surgical system) system has been proposed based on the aforementioned motivations (Harada, K. 2009; Harada, K. 2010; Menciassi, A. 2010). The ARES system is designed for screening and interventions in the gastrointestinal (GI) tracts to overcome the intrinsic limitations of single-capsules or endoscopic devices. In the proposed system,

Modular Robotic Approach in Surgical Applications

detailed examination after the procedure is complete.

Fig. 1. Proposed procedures for the ARES system

operation.

**2.2 Advantages of the modular approach in surgical applications** 

The modular approach has great potential to provide many advantages to surgical applications. These advantages are summarized below using the ARES system as shown in Fig.2. The numbering of the items in Fig.2 is correlated with the following numbering. i. The topology of the modular surgical robot can be customized for each patient according to the location of the disease and the size of the body cavity in which the modular robot is deployed. A set of functional modules such as cameras, needles and forceps can be selected for each patient based on the necessary diagnosis and surgical

ii. The modular approach facilitates delivery of more components inside a body cavity that has small entrance/exit hole(s). As there are many cavities in the human body, the modular approach would benefit treatment in such difficult-to-reach places. Because

– Wireless Robotic Modules and a Reconfigurable Master Device for Endoluminal Surgery – 5

according to preoperative planning by repeated docking and undocking of the modules (the undocking mechanism and electrical contacts between modules are necessary for reconfiguration, but they have not been implemented in the presented design). The robotic modules are controlled via wireless bidirectional communication with a master device operated by the surgeon, while the progress in procedure is observed using intraoperative imaging devices such as fluoroscopy and cameras mounted on the modules. After the surgical tasks are completed, the robot reconfigures itself to a snake-like shape to pass through the pyloric sphincter and travel to examine the small intestine and the colon, or it completely disassembles itself into individual modules so that it can be brought out without external aid. One of the modules can bring a biopsy tissue sample out of the body for

miniaturized robotic modules are ingested and assembled in the stomach cavity. The assembled robot can then change its configuration according to the target location and task.

Modular surgical robots are interesting owing to their potential for application as selfreconfigurable modular robots and innovative surgical robots. Many self-reconfigurable modular robots have been investigated worldwide (Yim, M. 2007; Murata, S. 2007) with the goal of developing systems that are robust and adaptive to the working environment. Most of these robots have been designed for autonomous exploration or surveillance tasks in unstructured environments; therefore, there are no strict constraints regarding the number of modules, modular size or working space. Because the ARES has specific applications and is used in the GI tract environment, it raises many issues that have not been discussed in depth in the modular robotic field. Modular miniaturization down to the ingestible size is one of the most challenging goals. In addition, a new interface must be developed so that surgeons can intuitively maneuver the modular surgical robot.

The purpose of this paper is to clarify the advantages of the modular approach in surgical applications, as well as to present proof of concept of the modular robotic surgical system.

The current paper is organized as follows: Section 2 describes the design of the ARES system. Section 3 details the design and prototyping of robotic modules, including the experimental results. Section 4 describes a reconfigurable master device designed for the robotic modules, and its preliminary evaluation is reported.

### **2. Design of the modular surgical system**

### **2.1 Clinical indications and proposed procedures**

The clinical target of the ARES system is the entire GI tract, i.e., the esophagus, stomach, small intestine, and colon. Among GI tract pathologies that can benefit from modular robotic features, biopsy for detection of early cancer in the upper side of the stomach (the fundus and the cardia) was selected as the surgical task to be focused on as a first step. Stomach cancer is the second leading cause of cancer-related deaths worldwide (World Health Organization 2006), and stomach cancer occurring in the upper side of the stomach has the worst outcome in terms of the 5-year survival ratio (Pesic, M. 2004). Thus, early diagnosis of cancer utilizing an advanced endoluminal device may lead to better prognosis. The stomach has a large volume (about 1400 ml) when distended, which provides working space to assemble the ingested robotic modules and change the topology of the assembled robot inside (i.e. reconfiguration). Each robotic module should be small enough to be swallowed and pass through the whole GI tract. Because the size of the commercial endoscopic capsules (11 mm in diameter and 26 mm in length (Moglia, A. 2007)) has already been shown to be acceptable for the majority of patients as an ingestible device, each module needs to be miniaturized to this size before being applied to clinical cases.

The surgical procedures proposed for the ARES system (Harada, K. 2010) are shown in Fig. 1. Prior to the surgical procedure, the patient drinks a liquid to distend the stomach to a volume of about 1400 ml. Next, the patient ingests 10-15 robotic modules that complete the assembly process before the liquid naturally drains away from the stomach in 10-20 minutes. The number of the modules swallowed depends on the target tasks and is determined in advance based on the pre-diagnosis. Magnetic self-assembly in the liquid using permanent magnets was selected for this study since its feasibility has already been demonstrated (Nagy, Z. 2007). Soon after the assembly, the robot configures its topology 4 Robotic Systems – Applications, Control and Programming

miniaturized robotic modules are ingested and assembled in the stomach cavity. The assembled robot can then change its configuration according to the target location and task. Modular surgical robots are interesting owing to their potential for application as selfreconfigurable modular robots and innovative surgical robots. Many self-reconfigurable modular robots have been investigated worldwide (Yim, M. 2007; Murata, S. 2007) with the goal of developing systems that are robust and adaptive to the working environment. Most of these robots have been designed for autonomous exploration or surveillance tasks in unstructured environments; therefore, there are no strict constraints regarding the number of modules, modular size or working space. Because the ARES has specific applications and is used in the GI tract environment, it raises many issues that have not been discussed in depth in the modular robotic field. Modular miniaturization down to the ingestible size is one of the most challenging goals. In addition, a new interface must be developed so that

The purpose of this paper is to clarify the advantages of the modular approach in surgical applications, as well as to present proof of concept of the modular robotic surgical system. The current paper is organized as follows: Section 2 describes the design of the ARES system. Section 3 details the design and prototyping of robotic modules, including the experimental results. Section 4 describes a reconfigurable master device designed for the

The clinical target of the ARES system is the entire GI tract, i.e., the esophagus, stomach, small intestine, and colon. Among GI tract pathologies that can benefit from modular robotic features, biopsy for detection of early cancer in the upper side of the stomach (the fundus and the cardia) was selected as the surgical task to be focused on as a first step. Stomach cancer is the second leading cause of cancer-related deaths worldwide (World Health Organization 2006), and stomach cancer occurring in the upper side of the stomach has the worst outcome in terms of the 5-year survival ratio (Pesic, M. 2004). Thus, early diagnosis of cancer utilizing an advanced endoluminal device may lead to better prognosis. The stomach has a large volume (about 1400 ml) when distended, which provides working space to assemble the ingested robotic modules and change the topology of the assembled robot inside (i.e. reconfiguration). Each robotic module should be small enough to be swallowed and pass through the whole GI tract. Because the size of the commercial endoscopic capsules (11 mm in diameter and 26 mm in length (Moglia, A. 2007)) has already been shown to be acceptable for the majority of patients as an ingestible device, each module

The surgical procedures proposed for the ARES system (Harada, K. 2010) are shown in Fig. 1. Prior to the surgical procedure, the patient drinks a liquid to distend the stomach to a volume of about 1400 ml. Next, the patient ingests 10-15 robotic modules that complete the assembly process before the liquid naturally drains away from the stomach in 10-20 minutes. The number of the modules swallowed depends on the target tasks and is determined in advance based on the pre-diagnosis. Magnetic self-assembly in the liquid using permanent magnets was selected for this study since its feasibility has already been demonstrated (Nagy, Z. 2007). Soon after the assembly, the robot configures its topology

needs to be miniaturized to this size before being applied to clinical cases.

surgeons can intuitively maneuver the modular surgical robot.

robotic modules, and its preliminary evaluation is reported.

**2. Design of the modular surgical system 2.1 Clinical indications and proposed procedures**  according to preoperative planning by repeated docking and undocking of the modules (the undocking mechanism and electrical contacts between modules are necessary for reconfiguration, but they have not been implemented in the presented design). The robotic modules are controlled via wireless bidirectional communication with a master device operated by the surgeon, while the progress in procedure is observed using intraoperative imaging devices such as fluoroscopy and cameras mounted on the modules. After the surgical tasks are completed, the robot reconfigures itself to a snake-like shape to pass through the pyloric sphincter and travel to examine the small intestine and the colon, or it completely disassembles itself into individual modules so that it can be brought out without external aid. One of the modules can bring a biopsy tissue sample out of the body for detailed examination after the procedure is complete.

Fig. 1. Proposed procedures for the ARES system

## **2.2 Advantages of the modular approach in surgical applications**

The modular approach has great potential to provide many advantages to surgical applications. These advantages are summarized below using the ARES system as shown in Fig.2. The numbering of the items in Fig.2 is correlated with the following numbering.


Modular Robotic Approach in Surgical Applications

**3.1 Design and prototyping of the robotic modules** 

smaller than the theoretical value as explained in the next subsection.

mode is selected, the motor can be driven with a resolution of 0.178º.

The aforementioned brushless DC motor came with a dedicated motor driving board (SSD04, Namiki Precision Jewel Co., Ltd., 19.6 mm × 34.4 mm × 3 mm). This board only allows driving of one motor; hence, two boards are required for a robotic module with 2 DOFs. Because there was not sufficient space for the boards in the robotic module, a custom made high density control board was designed and developed in-house. This control board consisted of one CC2430 microcontroller (Texas Instrument, USA) as the main wireless controller and three sets of A3901 dual bridge motor drivers (Allegro MicroSystem, Inc., USA). The fabricated board is 9.6 mm in diameter, 2.5 mm in thickness and 0.37 g in weight, which is compatible with swallowing. The A3901 motor driver chip was originally intended for a brushed DC motor, but a software commutation algorithm was implemented to control a brushless DC motor as well. An IEEE 802.15.4 wireless personal area network (WPAN) was introduced as an embedded feature (radio peripheral) of the microcontroller. The implemented algorithm enables control of the selected brushless DC motor in Back Electro-Motive Force (BEMF) feedback mode or slow speed stepping mode. When the stepping

For the modular approach, each control board shall be equipped with a wired locating system for intra-modular communication in addition to the wireless communication. Aside from wireless networking, the wired locating system, which is not implemented in the presented design, would be useful for identification of the sequence of the docked modules

**3. Robotic modules** 

• Controller

– Wireless Robotic Modules and a Reconfigurable Master Device for Endoluminal Surgery – 7

Figure 3 shows the design and prototypes of the Structural Module and the Biopsy Module (Harada, K. 2009, Harada, K. 2010). The Structural Module has two degrees of freedom (±90° of bending and 360° of rotation). The Structural Module contains a Li-Po battery (20 mAh, LP2-FR, Plantraco Ltd., Canada), two brushless DC geared motors that are 4 mm in diameter and 17.4 mm in length (SBL04-0829PG337, Namiki Precision Jewel Co. Ltd., Japan) and a custom-made motor control board capable of wireless control (Susilo, E. 2009). The stall torque of the selected geared motor is 10.6 mNm and the speed is 112 rpm when controlled by the developed controller. The bending mechanism is composed of a worm and a spur gear (9:1 gear reduction), whereas the rotation mechanism is composed of two spur gears (no gear reduction). All gears (DIDEL SA, Switzerland) were made of nylon, and they were machined to be implemented in the small space of the capsule. Two permanent magnets (Q-05-1.5-01-N, Webcraft GMbH, Switzerland) were attached at each end of the module to help with self-alignment and modular docking. The module is 15.4 mm in diameter and 36.5 mm in length; it requires further miniaturization before clinical application. The casing of the prototype was made of acrylic plastic and fabricated by 3D rapid prototyping (Invison XT 3-D Modeler, 3D systems, Inc., USA). The total weight is 5.6 g. Assuming that the module would weigh 10 g with the metal chassis and gears, the maximum torque required for lifting two connected modules is 5.4 mNm for both the bending DOF and rotation DOF. Assuming that the gear transmission efficiency for the bending mechanism is 30%, the stall torque for the bending DOF is 28.6 mNm. On the other hand, the stall torque for the rotation DOF is 8.5 mNm when the transmission efficiency for the rotation mechanism is 80%. The torque was designed to have sufficient force for surgical operation, but the transmission efficiency of the miniaturized plastic gears was much

several functional modules can be used simultaneously, the modular robot may perform rather complicated tasks that a single endoscopic capsule or an endoscopic device is not capable of conducting. For example, if more than two camera modules are employed, the surgeon can conduct tasks while observing the site from different directions.


As these advantages suggest, a modular surgical robot would be capable of achieving rather complicated tasks that have not been performed using existing endoluminal surgical devices. These advantages are valid for modular robots that work in any body cavity with a small entrance and exit. Moreover, this approach may be introduced to NOTES or Single Port Access surgery, in which surgical devices must reach the abdominal cavity through a small incision.

In Section 3, several robotic modules are proposed, and the performance of these modules is reported to show the feasibility of the proposed surgical system.

Fig. 2. Advantages of the modular approach in surgical applications

## **3. Robotic modules**

6 Robotic Systems – Applications, Control and Programming

iii. Surgical tools of relatively large diameter can be brought into the body cavity. Conventionally, small surgical forceps that can pass through an endoscopic channel of a few millimeters have been used for endoluminal surgery. Conversely, surgical devices that have the same diameter as an endoscope can be used in the modular surgical system. Consequently, the force generated at the tip of the devices would be rather

iv. The surgical system is more adaptive to the given environment and robust to failures. Accordingly, it is not necessary for the surgical robot to equip all modules that might be necessary in the body because the surgeons can decide whether to add modules with different functionalities, even during the surgical operation. After use, the modules can be detached and discarded if they are not necessary in the following procedures. Similarly, a module can be easily replaced with a new one in case of malfunction. As these advantages suggest, a modular surgical robot would be capable of achieving rather complicated tasks that have not been performed using existing endoluminal surgical devices. These advantages are valid for modular robots that work in any body cavity with a small entrance and exit. Moreover, this approach may be introduced to NOTES or Single Port Access surgery, in which surgical devices must reach the abdominal cavity through a

In Section 3, several robotic modules are proposed, and the performance of these modules is

(iii)

(iv)

large, and the performance of the functional devices would be high.

reported to show the feasibility of the proposed surgical system.

Fig. 2. Advantages of the modular approach in surgical applications

directions.

small incision.

(i)

(ii)

several functional modules can be used simultaneously, the modular robot may perform rather complicated tasks that a single endoscopic capsule or an endoscopic device is not capable of conducting. For example, if more than two camera modules are employed, the surgeon can conduct tasks while observing the site from different

## **3.1 Design and prototyping of the robotic modules**

Figure 3 shows the design and prototypes of the Structural Module and the Biopsy Module (Harada, K. 2009, Harada, K. 2010). The Structural Module has two degrees of freedom (±90° of bending and 360° of rotation). The Structural Module contains a Li-Po battery (20 mAh, LP2-FR, Plantraco Ltd., Canada), two brushless DC geared motors that are 4 mm in diameter and 17.4 mm in length (SBL04-0829PG337, Namiki Precision Jewel Co. Ltd., Japan) and a custom-made motor control board capable of wireless control (Susilo, E. 2009). The stall torque of the selected geared motor is 10.6 mNm and the speed is 112 rpm when controlled by the developed controller. The bending mechanism is composed of a worm and a spur gear (9:1 gear reduction), whereas the rotation mechanism is composed of two spur gears (no gear reduction). All gears (DIDEL SA, Switzerland) were made of nylon, and they were machined to be implemented in the small space of the capsule. Two permanent magnets (Q-05-1.5-01-N, Webcraft GMbH, Switzerland) were attached at each end of the module to help with self-alignment and modular docking. The module is 15.4 mm in diameter and 36.5 mm in length; it requires further miniaturization before clinical application. The casing of the prototype was made of acrylic plastic and fabricated by 3D rapid prototyping (Invison XT 3-D Modeler, 3D systems, Inc., USA). The total weight is 5.6 g. Assuming that the module would weigh 10 g with the metal chassis and gears, the maximum torque required for lifting two connected modules is 5.4 mNm for both the bending DOF and rotation DOF. Assuming that the gear transmission efficiency for the bending mechanism is 30%, the stall torque for the bending DOF is 28.6 mNm. On the other hand, the stall torque for the rotation DOF is 8.5 mNm when the transmission efficiency for the rotation mechanism is 80%. The torque was designed to have sufficient force for surgical operation, but the transmission efficiency of the miniaturized plastic gears was much smaller than the theoretical value as explained in the next subsection.

## • Controller

The aforementioned brushless DC motor came with a dedicated motor driving board (SSD04, Namiki Precision Jewel Co., Ltd., 19.6 mm × 34.4 mm × 3 mm). This board only allows driving of one motor; hence, two boards are required for a robotic module with 2 DOFs. Because there was not sufficient space for the boards in the robotic module, a custom made high density control board was designed and developed in-house. This control board consisted of one CC2430 microcontroller (Texas Instrument, USA) as the main wireless controller and three sets of A3901 dual bridge motor drivers (Allegro MicroSystem, Inc., USA). The fabricated board is 9.6 mm in diameter, 2.5 mm in thickness and 0.37 g in weight, which is compatible with swallowing. The A3901 motor driver chip was originally intended for a brushed DC motor, but a software commutation algorithm was implemented to control a brushless DC motor as well. An IEEE 802.15.4 wireless personal area network (WPAN) was introduced as an embedded feature (radio peripheral) of the microcontroller. The implemented algorithm enables control of the selected brushless DC motor in Back Electro-Motive Force (BEMF) feedback mode or slow speed stepping mode. When the stepping mode is selected, the motor can be driven with a resolution of 0.178º.

For the modular approach, each control board shall be equipped with a wired locating system for intra-modular communication in addition to the wireless communication. Aside from wireless networking, the wired locating system, which is not implemented in the presented design, would be useful for identification of the sequence of the docked modules

Modular Robotic Approach in Surgical Applications

**3.2 Performance of the Structural Module** 

tasks can be achieved using different manufacturing processes.

Measured Angle (deg.)

0

0 45 90 135 180

45

90

135

180

the target.

45

0


Measured Angle (deg.)



measurement (right) (Menciassi, A. 2010)

90

– Wireless Robotic Modules and a Reconfigurable Master Device for Endoluminal Surgery – 9

The Biopsy Module can generate a force of 7.1 N at its tip, and can also open the grasping parts to a width of 19 mm with an opening angle of 90 degrees. These values are much larger than those of conventional endoscopic forceps, which are 2-4 mm in diameter. As a

In conventional endoscopy, forceps are inserted through endoscopic channels that are parallel to the direction of the endoscopic view, which often results in the forceps hiding the target. Conversely, the Biopsy Module can be positioned at any angle relative to the endoscopic view owing to the modular approach, thereby allowing adequate approach to

The mechanical performance of the bending and rotation DOFs of the Structural Module was measured in preliminary tests (Menciassi, A. 2010), and the results are summarized in Fig.4. The bending angle was varied by up to ± 90° in steps of 10° three times in succession. The measured range of the bending angle was -86.0° to +76.3°, and the maximum error was 15.8°. The rotation angle was increased from 0° to 180° in steps of 45° three times in succession, and the measured range of the rotational angle was between 0° and 166.7° with a maximum error of 13.3°. The difference between the driven angle and the measured angle was due to backlash of the gears and the lack of precision and stiffness of the casing made by 3D rapid prototyping. Regardless of the errors and the hysteresis, the repeatability was sufficient for the intended application for both DOFs. These results indicate that the precision of each motion can be improved by changing the materials of the gears and the casing. Since the motor can be controlled with a resolution of 0.178°, very precise surgical

Commanded Angle (deg.)

Fig. 4. Bending angle measurement (left), rotation angle measurement (middle), and torque

In addition to the angle measurements, both bending and rotation torque were measured. The torque was measured by connecting cylindrical parts with permanent magnets at both ends until the bending/rotational motion stopped. The length and weight of each cylinder was designed in advance, and several types of cylinders were prepared. The measured bending torque was 6.5 mNm and the rotation torque was 2.2 mNm. The figure also shows one module lifting up two modules attached to its bending mechanism as a demonstration. The performance in terms of precision and generated torque, which are very important for reconfiguration and surgical tasks, was sufficient; however, the precision was limited owing

demonstration, Figure 3 shows the Biopsy Module holding a coin weighing 7.5 g.

in real time. The wired locating system is composed of three lines, one for serial multidrop communication, one for a peripheral locator and one as a ground reference. When the modules are firmly connected, the intra-modular communication can be switched from wireless to wired to save power while maintaining the predefined network addresses. When one module is detached intentionally or by mistake, it will switch back to wireless mode.

• Battery

The battery capacity carried by each module may differ from one to another (e.g. from 10 mAh to 50 mAh) depending on the available space inside the module. For the current design, a 20 mAh Li-Po battery was selected. Continuous driving of the selected motor on its maximum speed using a 20 mAh Li-Po battery was found to last up to 17 minutes. A module does not withdraw power continuously because the actuation mechanisms can maintain their position when there is no current to the motor owing to its high gear reduction (337:1). A module consumes power during actuation, but its power use is very low in stand-by mode.

• Biopsy Module

The Biopsy Module is a Functional Module that can be used to conduct diagnosis. The grasping mechanism has a worm and two spur gears, which allows wide opening of the grasping parts. The grasping parts can be hidden in the casing at the maximum opening to prevent tissue damage during ingestion. The motor and other components used for the Biopsy Module are the same as for the Structural Module. The brushless DC geared motors (SBL04-0829PG337, Namiki Precision Jewel Co. Ltd., Japan), the control board, a worm gear and two spur gears (9:1 gear reduction) were implemented in the Biopsy Module. A permanent magnet (Q-05-1.5-01-N, Webcraft GMbH, Switzerland) was placed at one side to be connected to another Structural Module.

Fig. 3. Design and prototypes of the structural module (left) and the biopsy module (right)

The Biopsy Module can generate a force of 7.1 N at its tip, and can also open the grasping parts to a width of 19 mm with an opening angle of 90 degrees. These values are much larger than those of conventional endoscopic forceps, which are 2-4 mm in diameter. As a demonstration, Figure 3 shows the Biopsy Module holding a coin weighing 7.5 g.

In conventional endoscopy, forceps are inserted through endoscopic channels that are parallel to the direction of the endoscopic view, which often results in the forceps hiding the target. Conversely, the Biopsy Module can be positioned at any angle relative to the endoscopic view owing to the modular approach, thereby allowing adequate approach to the target.

## **3.2 Performance of the Structural Module**

8 Robotic Systems – Applications, Control and Programming

in real time. The wired locating system is composed of three lines, one for serial multidrop communication, one for a peripheral locator and one as a ground reference. When the modules are firmly connected, the intra-modular communication can be switched from wireless to wired to save power while maintaining the predefined network addresses. When one module is detached intentionally or by mistake, it will switch back to wireless mode.

The battery capacity carried by each module may differ from one to another (e.g. from 10 mAh to 50 mAh) depending on the available space inside the module. For the current design, a 20 mAh Li-Po battery was selected. Continuous driving of the selected motor on its maximum speed using a 20 mAh Li-Po battery was found to last up to 17 minutes. A module does not withdraw power continuously because the actuation mechanisms can maintain their position when there is no current to the motor owing to its high gear reduction (337:1). A module consumes power during actuation, but its power use is very

The Biopsy Module is a Functional Module that can be used to conduct diagnosis. The grasping mechanism has a worm and two spur gears, which allows wide opening of the grasping parts. The grasping parts can be hidden in the casing at the maximum opening to prevent tissue damage during ingestion. The motor and other components used for the Biopsy Module are the same as for the Structural Module. The brushless DC geared motors (SBL04-0829PG337, Namiki Precision Jewel Co. Ltd., Japan), the control board, a worm gear and two spur gears (9:1 gear reduction) were implemented in the Biopsy Module. A permanent magnet (Q-05-1.5-01-N, Webcraft GMbH, Switzerland) was placed at one side to

Fig. 3. Design and prototypes of the structural module (left) and the biopsy module (right)

• Battery

low in stand-by mode. • Biopsy Module

be connected to another Structural Module.

The mechanical performance of the bending and rotation DOFs of the Structural Module was measured in preliminary tests (Menciassi, A. 2010), and the results are summarized in Fig.4. The bending angle was varied by up to ± 90° in steps of 10° three times in succession. The measured range of the bending angle was -86.0° to +76.3°, and the maximum error was 15.8°. The rotation angle was increased from 0° to 180° in steps of 45° three times in succession, and the measured range of the rotational angle was between 0° and 166.7° with a maximum error of 13.3°. The difference between the driven angle and the measured angle was due to backlash of the gears and the lack of precision and stiffness of the casing made by 3D rapid prototyping. Regardless of the errors and the hysteresis, the repeatability was sufficient for the intended application for both DOFs. These results indicate that the precision of each motion can be improved by changing the materials of the gears and the casing. Since the motor can be controlled with a resolution of 0.178°, very precise surgical tasks can be achieved using different manufacturing processes.

Fig. 4. Bending angle measurement (left), rotation angle measurement (middle), and torque measurement (right) (Menciassi, A. 2010)

In addition to the angle measurements, both bending and rotation torque were measured. The torque was measured by connecting cylindrical parts with permanent magnets at both ends until the bending/rotational motion stopped. The length and weight of each cylinder was designed in advance, and several types of cylinders were prepared. The measured bending torque was 6.5 mNm and the rotation torque was 2.2 mNm. The figure also shows one module lifting up two modules attached to its bending mechanism as a demonstration. The performance in terms of precision and generated torque, which are very important for reconfiguration and surgical tasks, was sufficient; however, the precision was limited owing

Modular Robotic Approach in Surgical Applications

**4.1 Design and prototyping of the reconfigurable master device** 

it takes only a few seconds to connect one module to another.

**4. Reconfigurable master device** 

redundancy.

– Wireless Robotic Modules and a Reconfigurable Master Device for Endoluminal Surgery – 11

One main advantage of using a modular approach in surgical applications is the adaptivity to the given environment as mentioned in Section 2.2. Wherever the robotic platform is deployed in the GI tract, the robotic topology can be changed based on preoperative plans or the in-situ situation to fit in any particular environment. This dynamic changing and reshaping of the robotic topology should be reflected on the user interface. Since it is possible for a robotic topology to have redundant DOFs, the master device for the modular surgical system needs to be able to handle the redundancy that is inherent to modular robots. Based on these considerations, we propose a reconfigurable master device that resembles the robotic platform (Fig.6). When the assembled robot changes its topology, the master device follows the same configuration. The robotic module shown in Fig. 6 has a diameter of 15.4 mm, while a module of the reconfigurable master device has a diameter of 30 mm. The master modules can be easily assembled or disassembled using set screws, and

Each robotic module is equipped with two motors as described in the previous section; thus, each master module is equipped with two potentiometers (TW1103KA, Tyco Electronics) that are used as angular position sensors. Calculating the angular position of each joint of the reconfigurable master device is quite straightforward. A common reference voltage is sent from a data acquisition card to all potentiometers, after which the angular position can be calculated from the feedback readings. Owing to the identical configuration, the angle of each joint of the robotic modules can be easily determined, even if the topology has

Fig. 6. Robotic modules (top line) and the reconfigurable master device (bottom line): one

module (left), assembled modules (middle) and prototypes (right)

to the aforementioned fabrication problems. The thin walls of the casing made of acrylic plastic were easily deformed, which caused friction between the parts. The casing made of metal or PEEK and tailor-made metal gears with high precision will improve the mechanism rigidity and performance, thus producing the optimal stability.

## **3.3 Possible designs of robotic modules**

Figure 5 shows various designs of robotic modules that can be implemented in the modular surgical robot. The modules can be categorized into three types: structural modules, functional modules, and other modules. Structural modules are used to configure a robotic topology. Functional modules are used for diagnosis or intervention, while other modules can be added to enhance the performance and robustness of the robotic system. Obviously, an assembled robot made of different types of modules (i.e. a robot with high heterogeneity) may provide high dexterity, but the self-assembly in the stomach and control of the modules would become more difficult. To optimize the level of heterogeneity, self-assembly of the robotic modules must be developed so that the reconfiguration of the robotic topology following the self-assembly can be planned in advance. Employing pre-assembled modular arms or tethered modules can be another option to facilitate assembly in a body cavity; however, this would require further anesthesia, and it would hinder the promotion of massive screening.

Fig. 5. Various designs of the robotic modules

## **4. Reconfigurable master device**

10 Robotic Systems – Applications, Control and Programming

to the aforementioned fabrication problems. The thin walls of the casing made of acrylic plastic were easily deformed, which caused friction between the parts. The casing made of metal or PEEK and tailor-made metal gears with high precision will improve the mechanism

Figure 5 shows various designs of robotic modules that can be implemented in the modular surgical robot. The modules can be categorized into three types: structural modules, functional modules, and other modules. Structural modules are used to configure a robotic topology. Functional modules are used for diagnosis or intervention, while other modules can be added to enhance the performance and robustness of the robotic system. Obviously, an assembled robot made of different types of modules (i.e. a robot with high heterogeneity) may provide high dexterity, but the self-assembly in the stomach and control of the modules would become more difficult. To optimize the level of heterogeneity, self-assembly of the robotic modules must be developed so that the reconfiguration of the robotic topology following the self-assembly can be planned in advance. Employing pre-assembled modular arms or tethered modules can be another option to facilitate assembly in a body cavity; however, this would require further anesthesia, and it would hinder the promotion of

rigidity and performance, thus producing the optimal stability.

**3.3 Possible designs of robotic modules** 

Fig. 5. Various designs of the robotic modules

massive screening.

## **4.1 Design and prototyping of the reconfigurable master device**

One main advantage of using a modular approach in surgical applications is the adaptivity to the given environment as mentioned in Section 2.2. Wherever the robotic platform is deployed in the GI tract, the robotic topology can be changed based on preoperative plans or the in-situ situation to fit in any particular environment. This dynamic changing and reshaping of the robotic topology should be reflected on the user interface. Since it is possible for a robotic topology to have redundant DOFs, the master device for the modular surgical system needs to be able to handle the redundancy that is inherent to modular robots. Based on these considerations, we propose a reconfigurable master device that resembles the robotic platform (Fig.6). When the assembled robot changes its topology, the master device follows the same configuration. The robotic module shown in Fig. 6 has a diameter of 15.4 mm, while a module of the reconfigurable master device has a diameter of 30 mm. The master modules can be easily assembled or disassembled using set screws, and it takes only a few seconds to connect one module to another.

Each robotic module is equipped with two motors as described in the previous section; thus, each master module is equipped with two potentiometers (TW1103KA, Tyco Electronics) that are used as angular position sensors. Calculating the angular position of each joint of the reconfigurable master device is quite straightforward. A common reference voltage is sent from a data acquisition card to all potentiometers, after which the angular position can be calculated from the feedback readings. Owing to the identical configuration, the angle of each joint of the robotic modules can be easily determined, even if the topology has redundancy.

Fig. 6. Robotic modules (top line) and the reconfigurable master device (bottom line): one module (left), assembled modules (middle) and prototypes (right)

Modular Robotic Approach in Surgical Applications

plane.

Combination

Master device

Simulator

Fig. 7. Three topologies used in the experiments

from the camera module in Fig.6.

– Wireless Robotic Modules and a Reconfigurable Master Device for Endoluminal Surgery – 13

A bar was selected as the target object instead of a sphere because the height of the collision point is different for each topology when the object appears in the same position in the 2D

The experiment was designed so that a bar appears at random in the shared workspace. The bar represents a target area at which the Biopsy Module needs to collect tissue samples, and this experiment is a simple example to select one topology among three choices given that the arm can reach the target. We assumed that this choice may vary depending on the user, and this experiment was designed to determine if the reconfigurability of the master device,

During the experiment, the operator of the reconfigurable master device could hear a beeping sound when the distal end of the arm (i.e. the grasping part of the biopsy module) touched the bar. The task designed for the experiments was to move the arm of the reconfigurable master device as quickly as possible, touch the bar in CG, and then maintain its position for three seconds. The plane in which the bar stands is shown in grids (Fig.9), and the operator obtains 3D perception by observing these grids. The plane with the grids is the same for all topologies. The angle of the view was set so that the view is similar to that

Five subjects (a-e) participated in the experiments, none of whom were surgeons. Each subject was asked to freely move the master device to learn how to operate it; however, this practice was allowed for one minute before starting the experiments. Each subject started from Topology I, then tested Topology II and finally Topology III. The time needed to touch the bar and maintain it for three seconds was measured. This procedure was repeated ten times for each topology with a randomized position of the bar. During the procedure, the bar appeared at random; however, it always appeared in the shared workspace to ensure

Topology I Topology II Topology III

i.e. customization of the robot, provides advantages and improves performance.

The advantages of the proposed master device include intuitive manipulation. For example, the rotational movement of a structural module used to twist the arm is limited to ± 180°, and the master module also has this limitation. This helps surgeons intuitively understand the range of the motion and the reachable working space of the modules. Using a conventional master manipulator or an external console, it is possible that the slave manipulator cannot move owing to its mechanical constraints, while the master manipulator can still move. However, using the proposed master device, the surgeon can intuitively understand the mechanical constraints by manipulating the master device during practice/training. Furthermore, the position of the master arm can indicate where the robotic modules are, even if they are outside of the camera module's view. These characteristics increase the safety of the operation. This feature is important because the entire robotic system is placed inside the body. In other surgical robotic systems, the position or shape of the robotic arms is not important as they are placed outside of the body and can be seen during operation. Unlike other master devices, it is also possible for two or more surgeons to move the reconfigurable master device together at the same time using multi arms with redundant DOFs.

### **4.2 Evaluation**

A simulation-based evaluation setup was selected to simplify the preliminary evaluation of the feasibility of the reconfigurable master device. The authors previously developed the Slave Simulator to evaluate workloads for a master-slave surgical robot (Kawamura, K. 2006). The Slave Simulator can show the motion of the slave robot in CG (Computer Graphics), while the master input device is controlled by an operator. Because the simulator can virtually change the parameters of the slave robot or its control, it is easy to evaluate the parameters as well as the operability of the master device. This Slave Simulator was appropriately modified for the ARES system. The modified Slave Simulator presents the CG models of the robotic modules to the operator. The dimension and DOFs of each module in CG were determined based on the design of the robotic modules. The angle of each joint is given by the signal from the potentiometers of the reconfigurable master device, and the slave modules in CG move in real time to reproduce the configuration of the master device. This Slave Simulator is capable of altering joint positions and the number of joints of the slave arms in CG so that the workspace of the reconfigurable master device can be reproduced in a virtual environment for several types of topologies. The simulator is composed of a 3D viewer that uses OpenGL and a physical calculation function. This function was implemented to detect a collision between the CG modules and an object placed in the workspace.

To simplify the experiments to evaluate the feasibility of the proposed master device and usefulness of the developed simulator, only one arm of the reconfigurable master device was used. Three topologies that consist of one Biopsy Module and one or two Structural Module(s) were selected as illustrated in Fig.7. Topology I consists of a Structural Module and a Biopsy Module, and the base is fixed so that the arm appears with an angle of 45 degrees. One Structural Module is added to Topology I to configure Topology II, and Topology III is identical to Topology II, but placed at 0 degrees. Both Topology II and Topology III have redundant DOFs. The projection of the workspace of each arm and the shared workspace are depicted in Fig.8. A target object on which the arm works in the experiments must be placed in this shared area, which makes it easy to compare topologies. 12 Robotic Systems – Applications, Control and Programming

The advantages of the proposed master device include intuitive manipulation. For example, the rotational movement of a structural module used to twist the arm is limited to ± 180°, and the master module also has this limitation. This helps surgeons intuitively understand the range of the motion and the reachable working space of the modules. Using a conventional master manipulator or an external console, it is possible that the slave manipulator cannot move owing to its mechanical constraints, while the master manipulator can still move. However, using the proposed master device, the surgeon can intuitively understand the mechanical constraints by manipulating the master device during practice/training. Furthermore, the position of the master arm can indicate where the robotic modules are, even if they are outside of the camera module's view. These characteristics increase the safety of the operation. This feature is important because the entire robotic system is placed inside the body. In other surgical robotic systems, the position or shape of the robotic arms is not important as they are placed outside of the body and can be seen during operation. Unlike other master devices, it is also possible for two or more surgeons to move the reconfigurable master device together at the same time using

A simulation-based evaluation setup was selected to simplify the preliminary evaluation of the feasibility of the reconfigurable master device. The authors previously developed the Slave Simulator to evaluate workloads for a master-slave surgical robot (Kawamura, K. 2006). The Slave Simulator can show the motion of the slave robot in CG (Computer Graphics), while the master input device is controlled by an operator. Because the simulator can virtually change the parameters of the slave robot or its control, it is easy to evaluate the parameters as well as the operability of the master device. This Slave Simulator was appropriately modified for the ARES system. The modified Slave Simulator presents the CG models of the robotic modules to the operator. The dimension and DOFs of each module in CG were determined based on the design of the robotic modules. The angle of each joint is given by the signal from the potentiometers of the reconfigurable master device, and the slave modules in CG move in real time to reproduce the configuration of the master device. This Slave Simulator is capable of altering joint positions and the number of joints of the slave arms in CG so that the workspace of the reconfigurable master device can be reproduced in a virtual environment for several types of topologies. The simulator is composed of a 3D viewer that uses OpenGL and a physical calculation function. This function was implemented to detect a collision between the CG modules and an object

To simplify the experiments to evaluate the feasibility of the proposed master device and usefulness of the developed simulator, only one arm of the reconfigurable master device was used. Three topologies that consist of one Biopsy Module and one or two Structural Module(s) were selected as illustrated in Fig.7. Topology I consists of a Structural Module and a Biopsy Module, and the base is fixed so that the arm appears with an angle of 45 degrees. One Structural Module is added to Topology I to configure Topology II, and Topology III is identical to Topology II, but placed at 0 degrees. Both Topology II and Topology III have redundant DOFs. The projection of the workspace of each arm and the shared workspace are depicted in Fig.8. A target object on which the arm works in the experiments must be placed in this shared area, which makes it easy to compare topologies.

multi arms with redundant DOFs.

**4.2 Evaluation** 

placed in the workspace.

A bar was selected as the target object instead of a sphere because the height of the collision point is different for each topology when the object appears in the same position in the 2D plane.

The experiment was designed so that a bar appears at random in the shared workspace. The bar represents a target area at which the Biopsy Module needs to collect tissue samples, and this experiment is a simple example to select one topology among three choices given that the arm can reach the target. We assumed that this choice may vary depending on the user, and this experiment was designed to determine if the reconfigurability of the master device, i.e. customization of the robot, provides advantages and improves performance.

Fig. 7. Three topologies used in the experiments

During the experiment, the operator of the reconfigurable master device could hear a beeping sound when the distal end of the arm (i.e. the grasping part of the biopsy module) touched the bar. The task designed for the experiments was to move the arm of the reconfigurable master device as quickly as possible, touch the bar in CG, and then maintain its position for three seconds. The plane in which the bar stands is shown in grids (Fig.9), and the operator obtains 3D perception by observing these grids. The plane with the grids is the same for all topologies. The angle of the view was set so that the view is similar to that from the camera module in Fig.6.

Five subjects (a-e) participated in the experiments, none of whom were surgeons. Each subject was asked to freely move the master device to learn how to operate it; however, this practice was allowed for one minute before starting the experiments. Each subject started from Topology I, then tested Topology II and finally Topology III. The time needed to touch the bar and maintain it for three seconds was measured. This procedure was repeated ten times for each topology with a randomized position of the bar. During the procedure, the bar appeared at random; however, it always appeared in the shared workspace to ensure

Modular Robotic Approach in Surgical Applications

the Effort workload was high for Topology III.

robot and interface as they prefer.

Work Load: NASA-TLX Score

Table 1. Experimental results

by the subject.

– Wireless Robotic Modules and a Reconfigurable Master Device for Endoluminal Surgery – 15

summarized in Table 1. For each item, a smaller value indicates a more favorable evaluation

Considering the results of the time and workload score, Topology II was most difficult. The difference between Topology I and III was interesting. Two of the subjects ((b) and (c)) preferred Topology I, which did not have a redundant DOF. Conversely, three of the subjects ((a), (d) and (e)) preferred Topology III because they could select the path to reach the target owing to the redundant DOF. The average scores of the NASA TLX parameters shown in Fig.10 suggest that the Physical Demand workload was high for Topology I, while

The two subjects who preferred Topology I rather than Topology III claimed that it was not easy to determine where the bar was located when Topology III was used owing to the lack of 3D perception. In addition, they reported that the bar seemed to be placed far from the base. However, the bar appeared randomly, but in the same area; therefore, the bar that appeared in the experiment that employed Topology III was not placed farther from the base when compared to the experiments that used Topology I or Topology II. Accordingly, these two subjects may have had difficulty obtaining 3D perception from the gridded plane. In Topology III, the arm was partially out of view in the initial position; thus, the operator needed to obtain 3D perception by seeing the grids. It is often said that most surgeons can obtain 3D perception even if they use a 2D camera, and our preliminary experimental results imply that this ability might differ by individual. Some people appear to obtain 3D perception primarily by seeing the relative positions between the target and the tool they move. Redundant DOFs may also be preferred by operators with better 3D perception capability. Although the experiments were preliminary, there must be other factors that accounted for the preference of the user. Indeed, it is likely that the preferable topology varies depending on the user, and the developed simulator would be useful to evaluate these variations. The proposed reconfigurable master device will enable individual surgeons to customize the

Topology Subject Average a b c d e

II 7.6 6.2 4.8 5.5 6.7 6.1 III 4.9 4.3 5.6 4.4 4.7 4.8

I 30.7 11.3 28.3 32.0 73.3 35.1 II 47.6 26.7 28.0 53.0 68.3 44.7 III 37.0 5.0 24.3 14.3 61.3 28.4

II 2 3 2 2 2 2.2 III 1 2 3 1 1 1.6

Time (s) I 5.7 4.1 4.0 5.8 5.0 4.9

Preference I 3 1 1 3 3 2.2

that the arm could reach it. After finishing the experiment, the subjects were asked to fill in a questionnaire (described below) for each topology. The subjects were also asked which topology they preferred.

Fig. 8. Workspace of each topology and the shared workspace

Fig. 9. Simulator and the master device during one test

A NASA TLX questionnaire (NASA TLX (website)) was used to objectively and quantitatively evaluate the workload that the subjects felt during the experiments. This method has versatile uses, and we selected this method also because it was used to evaluate the workload in a tele-surgery environment (Kawamura, K. 2006). This method evaluates Metal Demand, Physical Demand, Temporal Demand, Performance, Effort and Frustration, and gives a score that represents the overall workload that the subject felt during the task.

## **4.3 Results**

The time spent conducting the given task, the workload score evaluated using the NASA TLX questionnaire and the preference of the topology determined by the subjects are 14 Robotic Systems – Applications, Control and Programming

that the arm could reach it. After finishing the experiment, the subjects were asked to fill in a questionnaire (described below) for each topology. The subjects were also asked which

Fig. 8. Workspace of each topology and the shared workspace

Fig. 9. Simulator and the master device during one test

**4.3 Results** 

A NASA TLX questionnaire (NASA TLX (website)) was used to objectively and quantitatively evaluate the workload that the subjects felt during the experiments. This method has versatile uses, and we selected this method also because it was used to evaluate the workload in a tele-surgery environment (Kawamura, K. 2006). This method evaluates Metal Demand, Physical Demand, Temporal Demand, Performance, Effort and Frustration, and gives a score that represents the overall workload that the subject felt during the task.

The time spent conducting the given task, the workload score evaluated using the NASA TLX questionnaire and the preference of the topology determined by the subjects are

topology they preferred.

summarized in Table 1. For each item, a smaller value indicates a more favorable evaluation by the subject.

Considering the results of the time and workload score, Topology II was most difficult. The difference between Topology I and III was interesting. Two of the subjects ((b) and (c)) preferred Topology I, which did not have a redundant DOF. Conversely, three of the subjects ((a), (d) and (e)) preferred Topology III because they could select the path to reach the target owing to the redundant DOF. The average scores of the NASA TLX parameters shown in Fig.10 suggest that the Physical Demand workload was high for Topology I, while the Effort workload was high for Topology III.

The two subjects who preferred Topology I rather than Topology III claimed that it was not easy to determine where the bar was located when Topology III was used owing to the lack of 3D perception. In addition, they reported that the bar seemed to be placed far from the base. However, the bar appeared randomly, but in the same area; therefore, the bar that appeared in the experiment that employed Topology III was not placed farther from the base when compared to the experiments that used Topology I or Topology II. Accordingly, these two subjects may have had difficulty obtaining 3D perception from the gridded plane. In Topology III, the arm was partially out of view in the initial position; thus, the operator needed to obtain 3D perception by seeing the grids. It is often said that most surgeons can obtain 3D perception even if they use a 2D camera, and our preliminary experimental results imply that this ability might differ by individual. Some people appear to obtain 3D perception primarily by seeing the relative positions between the target and the tool they move. Redundant DOFs may also be preferred by operators with better 3D perception capability.

Although the experiments were preliminary, there must be other factors that accounted for the preference of the user. Indeed, it is likely that the preferable topology varies depending on the user, and the developed simulator would be useful to evaluate these variations. The proposed reconfigurable master device will enable individual surgeons to customize the robot and interface as they prefer.


Table 1. Experimental results

Modular Robotic Approach in Surgical Applications

In: *Surgeon*, 3(3), pp. 125–138.

15(6), pp. 373–377.

2699-2704.

235–243.

14(1), pp. 71–78.

18 (2), 025032.

procedures. In: *Robotica*, 28, pp. 199-207.

preliminary results, In: *Robotica*, 28, pp. 171-183.

*Conf. Advanced Intelligent Mechatronics*. pp. 1–6.

survival rate. In: *Arch Oncol*, 12, pp. 51–53.

In: *Sensors and Actuators A*, 156(1), pp.49-58.

NASA TLX. available from http://humansystems.arc.nasa.gov/groups/TLX/

*Robotics and Biomimetics*, pp.1420-25.

Springer-Verlag, 978-1441911254.

– Wireless Robotic Modules and a Reconfigurable Master Device for Endoluminal Surgery – 17

Carpi, F. & Pappone, C. (2009). Magnetic maneuvering of endoscopic capsules by means of a robotic navigation system. In: *IEEE Trans Biomed Eng*, 56(5), pp. 1482-90. Ciuti, G.; Valdastri, P., Menciassi, A. & Dario, P. (2010). Robotic magnetic steering and

Cuschieri, A (2005). Laparoscopic surgery: current status, issues and future developments.

Giday, S.; Kantsevoy, S. & Kalloo, A. (2006). Principle and history of natural orifice

Harada, K.; Susilo, E., Menciassi, A. & Dario, P. (2009). Wireless reconfigurable modules for

Harada, K.; Oetomo, D., Susilo, E., Menciassi, A., Daney, D., Merlet, J.-P. & Dario, P. (2010).

Kawamura, K.; Kobayashi, Y & Fujie, M. G. (2006). Development of Real-Time Simulation

Menciassi, A.; Valdastri, P., Harada, K. & Dario, P. (2010). Single and Multiple Robotic

Moglia, A.; Menciassi, A. Schurr, M. & Dario, P. (2007). Wireless capsule endoscopy: From

Murata, S. & Kurokawa, H. (2007). Self-reconfigurable robots. In: *IEEE Rob. Autom Mag.*,

Nagy, Z.; Abbott, J. J. & Nelson, B. J. (2007). The magnetic self-aligning hermaphroditic

Park, S. K.; Koo, K. I., Bang, S. M., Park J. Y., Song, S. Y., & Cho, D. G. (2008). A novel

Pesic, M.; Karanikolic, A., Djordjevic, N., Katic, V., Rancic, Z., Radojkovic, M., Ignjatovic N.

Quirini, M.; Menciassi, A., Scapellato, S., Stefanini, C., and Dario, P. (2008). Design and

Susilo, E.; Valdastri, P., Menciassi, A. & Dario, P. (2009). A miniaturized wireless control

capsule endoscope. In: *IEEE Trans. Magnetics*, 39(5), pp. 3232–34.

locomotion of capsule endoscope for diagnostic and surgical endoluminal

translumenal endoscopic surgery (notes). In: *Minim. Invasive Ther. Allied Technol.,*

robotic endoluminal surgery. In: *Proc. IEEE Int. Conf. on Robotics and Automation*, pp.

A Reconfigurable Modular Robotic Endoluminal Surgical System: Vision and

for Workload Quantization in Robotic Tele-surgery. In: *Proc. IEEE Int. Conf. on* 

Capsules for Endoluminal Diagnosis and Surgery. In: *Surgical Robotics - System Applications and Visions*, Rosen, J., Hannaford, B. & Satava, M., pp. 313-354.,

diagnostic devices to multipurpose robotic systems. In: *Biomed Microdevices*, 9, pp.

connector: A scalable approach for modular microrobotics. In: *Proc. IEEE/ASME Int.* 

microactuator for microbiopsy in capsular endoscopes. In: *J. Micromech. Microeng*.,

& Pesic, I. (2004). The importance of primary gastric cancer location in 5-year

fabrication of a motor legged capsule for the active exploration of the gastrointestinal tract. In: *Proc. IEEE/ASME Trans. Mechatronics*, 13, pp. 169–179. Sendoh, M.; Ishiyama, K. & Arai, K. (2003). Fabrication of magnetic actuator for use in a

platform for robotic capsular endoscopy using advanced pseudokernel approach.

Fig. 10. NASA TLX parameters for three topologies

## **5. Conclusion**

A modular robot was proposed for endoluminal surgery. The design, prototyping and evaluation of the modules were reported. Although there are some issues related to the fabrication problems, the results of the performance tests show the feasibility of the modular surgical system. A reconfigurable master device has also been proposed, and its feasibility was evaluated by simulation-based experiments. The preliminary results showed that the preferred topology may vary depending on the user. Moreover, the reconfigurable master device would enable each surgeon to customize the surgical system according to his/her own preferences. Development of the robotic modules and the reconfigurable master device provided proof of concept of the modular robotic system for endoluminal surgery, suggesting that the modular approach has great potential for surgical applications.

## **6. Acknowledgments**

This study was supported in part by the European Commission in the framework of the ARES Project (Assembling Reconfigurable Endoluminal Surgical System, NEST-2003-1- ADVENTURE/15653), by the European Union Institute in Japan at Waseda University (EUIJ Waseda, http://www.euij-waseda.jp/eng/) within the framework of its Research Scholarship Programme, and by the Global COE Program "Global Robot Academia" from the Ministry of Education, Culture, Sports, Science and Technology of Japan. The authors are grateful to Mr. Nicodemo Funaro for manufacturing the prototypes and Ms. Sara Condino for her invaluable technical support.

## **7. References**

Bardaro, S. J. & Swanström, L. (2006). Development of advanced endoscopes for Natural Orifice Transluminal Endoscopic Surgery (NOTES). In: *Minim. Invasive Ther. Allied Technol.*, 15(6), pp. 378–383.

16 Robotic Systems – Applications, Control and Programming

Physical Demand

Topology I Topology II Topology III

Temporal Demand

Effort

Fig. 10. NASA TLX parameters for three topologies

**5. Conclusion** 

surgical applications.

**7. References** 

**6. Acknowledgments** 

Condino for her invaluable technical support.

*Technol.*, 15(6), pp. 378–383.

Frustration

Performance

A modular robot was proposed for endoluminal surgery. The design, prototyping and evaluation of the modules were reported. Although there are some issues related to the fabrication problems, the results of the performance tests show the feasibility of the modular surgical system. A reconfigurable master device has also been proposed, and its feasibility was evaluated by simulation-based experiments. The preliminary results showed that the preferred topology may vary depending on the user. Moreover, the reconfigurable master device would enable each surgeon to customize the surgical system according to his/her own preferences. Development of the robotic modules and the reconfigurable master device provided proof of concept of the modular robotic system for endoluminal surgery, suggesting that the modular approach has great potential for

This study was supported in part by the European Commission in the framework of the ARES Project (Assembling Reconfigurable Endoluminal Surgical System, NEST-2003-1- ADVENTURE/15653), by the European Union Institute in Japan at Waseda University (EUIJ Waseda, http://www.euij-waseda.jp/eng/) within the framework of its Research Scholarship Programme, and by the Global COE Program "Global Robot Academia" from the Ministry of Education, Culture, Sports, Science and Technology of Japan. The authors are grateful to Mr. Nicodemo Funaro for manufacturing the prototypes and Ms. Sara

Bardaro, S. J. & Swanström, L. (2006). Development of advanced endoscopes for Natural

Orifice Transluminal Endoscopic Surgery (NOTES). In: *Minim. Invasive Ther. Allied* 

Mental Demand


**2** 

*USA* 

**Target Point Manipulation** 

Jadav Das and Nilanjan Sarkar *Vanderbilt University, Nashville, TN* 

**Inside a Deformable Object** 

Target point manipulation inside a deformable object by a robotic system is necessary in many medical and industrial applications such as breast biopsy, drug injection, suturing, precise machining of deformable objects etc. However, this is a challenging problem because of the difficulty of imposing the motion of the internal target point by a finite number of actuation points located at the boundary of the deformable object. In addition, there exist several other important manipulative operations that deal with deformable objects such as whole body manipulation [1], shape changing [2], biomanipulation [3] and tumor manipulation [4] that have practical applications. The main focus of this chapter is the target point manipulation inside a deformable object. For instance, a positioning operation called linking in the manufacturing of seamless garments [5] requires manipulation of internal points of deformable objects. Mating of a flexible part in electric industry also results in the positioning of mated points on the object. In many cases these points cannot be manipulated directly since the points of interest in a mating part is inaccessible because of contact with a mated part. Additionally, in medical field, many diagnostic and therapeutic procedures require accurate needle targeting. In case of needle breast biopsy [4] and prostate cancer brachytherapy [6], needles are used to access a designated area to remove a small amount of tissue or to implant radio-active seed at the targeted area. The deformation causes the target to move away from its original location. To clarify the situation we present a schematic of needle insertion for breast biopsy procedure as shown in Figure 1. When tip of the needle reaches the interface between two different types of tissue, its further insertion will push the tissue, instead of piercing it, causing unwanted deformations. These deformations move the target away from its original location as shown in Figure 1(b). In this case, we cannot manipulate the targeted area directly because it is internal to the organ. It must be manipulated by controlling some other points where forces can be applied as shown in Figure 1(c). Therefore, in some cases one would need to move the positioned points to the desired locations of these deformable objects (e.g., mating two deformable parts for sewing seamlessly) while in other cases one may need to preserve the original target location (e.g., guiding the tumor to fall into the path of needle insertion). In either of these situations, the ability of a robotic system to control the target of the deformable object becomes important,

To control the position of the internal target point inside a deformable object requires appropriate contact locations on the surface of the object. Therefore, we address the issue of

**1. Introduction** 

which is the focus of this chapter.


## **Target Point Manipulation Inside a Deformable Object**

Jadav Das and Nilanjan Sarkar *Vanderbilt University, Nashville, TN USA* 

## **1. Introduction**

18 Robotic Systems – Applications, Control and Programming

World Health Organisation, "Fact sheet n.297," Available from: http://www.who.int/

Yim, M.; Shen, W., Salemi, B., Rus, D., Moll, M., Lipson, H., Klavins, E. & Chirikjian, G.

(2007). Modular self-reconfigurable robot systems [Grand Challenges of Robotics].

mediacen-ter/factsheets/ fs297, 2006.

In: *IEEE Rob. Autom Mag.*, 14(1), pp. 865–872.

Target point manipulation inside a deformable object by a robotic system is necessary in many medical and industrial applications such as breast biopsy, drug injection, suturing, precise machining of deformable objects etc. However, this is a challenging problem because of the difficulty of imposing the motion of the internal target point by a finite number of actuation points located at the boundary of the deformable object. In addition, there exist several other important manipulative operations that deal with deformable objects such as whole body manipulation [1], shape changing [2], biomanipulation [3] and tumor manipulation [4] that have practical applications. The main focus of this chapter is the target point manipulation inside a deformable object. For instance, a positioning operation called linking in the manufacturing of seamless garments [5] requires manipulation of internal points of deformable objects. Mating of a flexible part in electric industry also results in the positioning of mated points on the object. In many cases these points cannot be manipulated directly since the points of interest in a mating part is inaccessible because of contact with a mated part. Additionally, in medical field, many diagnostic and therapeutic procedures require accurate needle targeting. In case of needle breast biopsy [4] and prostate cancer brachytherapy [6], needles are used to access a designated area to remove a small amount of tissue or to implant radio-active seed at the targeted area. The deformation causes the target to move away from its original location. To clarify the situation we present a schematic of needle insertion for breast biopsy procedure as shown in Figure 1. When tip of the needle reaches the interface between two different types of tissue, its further insertion will push the tissue, instead of piercing it, causing unwanted deformations. These deformations move the target away from its original location as shown in Figure 1(b). In this case, we cannot manipulate the targeted area directly because it is internal to the organ. It must be manipulated by controlling some other points where forces can be applied as shown in Figure 1(c). Therefore, in some cases one would need to move the positioned points to the desired locations of these deformable objects (e.g., mating two deformable parts for sewing seamlessly) while in other cases one may need to preserve the original target location (e.g., guiding the tumor to fall into the path of needle insertion). In either of these situations, the ability of a robotic system to control the target of the deformable object becomes important, which is the focus of this chapter.

To control the position of the internal target point inside a deformable object requires appropriate contact locations on the surface of the object. Therefore, we address the issue of

Target Point Manipulation Inside a Deformable Object 21

and Isto [12] presented a motion planner for manipulating deformable linear objects using two cooperating robotic arms to tie self-knots and knots around simple static objects. Zhang et al. [13] presented a microrobotic system that is capable of picking up and releasing operation of microobjects. Sun et al. [14] presented a cooperation task of controlling the reference motion and the deformation when handling a deformable object by two manipulators. In [15], Tavasoli et al. presented two-time scale control design for trajectory tracking of two cooperating planar rigid robots moving a flexible beam. However, to the best of our knowledge the works on manipulating an internal target point inside a deformable object are rare [4, 5]. Mallapragada et al. [4] developed an external robotic system to position the tumor in image-guided breast biopsy procedures. In their work, three linear actuators manipulate the tissue phantom externally to position an embedded target inline with the needle during insertion. In [5] Hirai et al. developed a robust control law for manipulation of 2D deformable parts using tactile and vision feedback to control the motion of the deformable object with respect to the position of selected reference points. These works are very important to ours present application, but they did not address the optimal

A wide variety of modeling approaches have been presented in the literature dealing with computer simulation of deformable objects [16]. These are mainly derived from physicallybased models to produce physically valid behaviors. Mass-spring models are one of the most common forms of deformable objects. A general mass-spring model consists of a set of point masses connected to its neighbors by massless springs. Mass-spring models have been used extensively in facial animation [17], cloth motion [18] and surgical simulation [19]. Howard and Bekey [20] developed a generalized method to model an elastic object with the connections of springs and dampers. Finite element models have been used in the computer simulation to model facial tissue and predict surgical outcomes [21, 22]. However, the works

In order to manipulate the target point to the desired location, we must know the appropriate contact locations for effecting the desired motion. There can be an infinite number of possible ways of choosing the contact location based on the object shapes and task to be performed. Appropriate selection of the contact points is an important issue for performing certain tasks. The determination of optimal contact points for rigid object was extensively studied by many researchers with various stability criteria. Salisbury [23] and Kerr [24] discussed that a stable grasp was achieved if and only if the grasp matrix is full row rank. Abel et al. [25] modelled the contact interaction by point contact with Coulomb friction and they stated that optimal grasp has minimum dependency on frictional forces. Cutkosky [26] discussed that the size and shape of the object has less effect on the choice of grasp than by the tasks to be performed after examining a variety of human grasps. Ferrari et al. [27] defined grasp quality to minimize either the maximum value or sum of the finger forces as optimality criteria. Garg and Dutta [28] shown that the internal forces required for grasping deformable objects vary with size of object and finger contact angle. In [29], Watanabe and Yoshikawa investigated optimal contact points on an arbitrary shaped object in 3D using the concept of required external force set. Ding et al. proposed an algorithm for computing form closure grasp on a 3D polyhedral object using local search strategy in [30]. In [31, 32], various concepts and methodologies of robot grasping of rigid objects were reviewed. Cornella et al. [33] presented a mathematical approach to obtain optimal solution of contact points using the dual theorem of nonlinear programming. Saut et al. [34]

locations of the contact points for effecting the desired motion.

on controlling an internal point in a deformable object are not attempted.

determining the optimal contact locations for manipulating a deformable object such that the internal target point can be positioned to the desired location by three robotic fingers using minimum applied forces. A position-based PI controller is developed to control the motion of the robotic fingers such that the robotic fingers apply minimum force on the surface of the object to position the internal target point to the desired location. However, the controller for target position control is non-collocated since the internal target point is not directly actuated by the robotic fingers. It is known in the literature that non-collocated control of a deformable object is not passive, which may lead to instability [7]. In order to protect the object and the robotic fingers from physical damage and also in order to diminish the deterioration of performance caused by unwanted oscillation, it is indispensable to build stable interaction between the robotic fingers and the object. Here we consider that the plant (i.e., the deformable object) is passive and does not generate any energy. So, in order to have stable interaction, it is essential that the controller for the robotic fingers must be stable. Thus, we present a new passivity-based non-collocated controller for the robotic fingers to ensure safe and accurate position control of the internal target point. The passivity theory states that a system is passive if the energy flowing in exceeds the energy flowing out. Creating a passive interface adds the required damping force to make the output energy lower than the input energy. To this end we develop a passivity observer (PO) and a passivity controller (PC) based on [8] for individual robotic finger where PO monitors the net energy flow out of the system and PC will supply the necessary damping force to make PO positive. Our approach extends the concept of PO and PC in [8] to multipoint contacts with the deformable object.

Fig. 1. Schematics of needle breast biopsy procedure: (a) needle insertion, (b) target movement, and (c) target manipulation

The remainder of this chapter is organized as follows: we discuss various issues and prior research in Section 2. The problem description is stated in Section 3. Section 4 outlines the mathematical modelling of the deformable object. A framework for optimal contact locations is presented in Section 5. The control methods are discussed in Section 6. The effectiveness of the derived control law is demonstrated by simulation in Section 7. Finally, the contributions of this work and the future directions are discussed in Section 8.

## **2. Issues and prior research**

A considerable amount of work on multiple robotic systems has been performed during the last few decades [9-11, 12-15]. Mostly, the position and/or force control of multiple manipulators handling a rigid object were studied in [9-11]. However, there were some works on handling deformable object by multiple manipulators presented in [12-15]. Saha 20 Robotic Systems – Applications, Control and Programming

determining the optimal contact locations for manipulating a deformable object such that the internal target point can be positioned to the desired location by three robotic fingers using minimum applied forces. A position-based PI controller is developed to control the motion of the robotic fingers such that the robotic fingers apply minimum force on the surface of the object to position the internal target point to the desired location. However, the controller for target position control is non-collocated since the internal target point is not directly actuated by the robotic fingers. It is known in the literature that non-collocated control of a deformable object is not passive, which may lead to instability [7]. In order to protect the object and the robotic fingers from physical damage and also in order to diminish the deterioration of performance caused by unwanted oscillation, it is indispensable to build stable interaction between the robotic fingers and the object. Here we consider that the plant (i.e., the deformable object) is passive and does not generate any energy. So, in order to have stable interaction, it is essential that the controller for the robotic fingers must be stable. Thus, we present a new passivity-based non-collocated controller for the robotic fingers to ensure safe and accurate position control of the internal target point. The passivity theory states that a system is passive if the energy flowing in exceeds the energy flowing out. Creating a passive interface adds the required damping force to make the output energy lower than the input energy. To this end we develop a passivity observer (PO) and a passivity controller (PC) based on [8] for individual robotic finger where PO monitors the net energy flow out of the system and PC will supply the necessary damping force to make PO positive. Our approach extends the concept of PO and PC in [8] to multi-

Fig. 1. Schematics of needle breast biopsy procedure: (a) needle insertion, (b) target

(a) (b) (c)

the contributions of this work and the future directions are discussed in Section 8.

The remainder of this chapter is organized as follows: we discuss various issues and prior research in Section 2. The problem description is stated in Section 3. Section 4 outlines the mathematical modelling of the deformable object. A framework for optimal contact locations is presented in Section 5. The control methods are discussed in Section 6. The effectiveness of the derived control law is demonstrated by simulation in Section 7. Finally,

A considerable amount of work on multiple robotic systems has been performed during the last few decades [9-11, 12-15]. Mostly, the position and/or force control of multiple manipulators handling a rigid object were studied in [9-11]. However, there were some works on handling deformable object by multiple manipulators presented in [12-15]. Saha

point contacts with the deformable object.

movement, and (c) target manipulation

**2. Issues and prior research** 

and Isto [12] presented a motion planner for manipulating deformable linear objects using two cooperating robotic arms to tie self-knots and knots around simple static objects. Zhang et al. [13] presented a microrobotic system that is capable of picking up and releasing operation of microobjects. Sun et al. [14] presented a cooperation task of controlling the reference motion and the deformation when handling a deformable object by two manipulators. In [15], Tavasoli et al. presented two-time scale control design for trajectory tracking of two cooperating planar rigid robots moving a flexible beam. However, to the best of our knowledge the works on manipulating an internal target point inside a deformable object are rare [4, 5]. Mallapragada et al. [4] developed an external robotic system to position the tumor in image-guided breast biopsy procedures. In their work, three linear actuators manipulate the tissue phantom externally to position an embedded target inline with the needle during insertion. In [5] Hirai et al. developed a robust control law for manipulation of 2D deformable parts using tactile and vision feedback to control the motion of the deformable object with respect to the position of selected reference points. These works are very important to ours present application, but they did not address the optimal locations of the contact points for effecting the desired motion.

A wide variety of modeling approaches have been presented in the literature dealing with computer simulation of deformable objects [16]. These are mainly derived from physicallybased models to produce physically valid behaviors. Mass-spring models are one of the most common forms of deformable objects. A general mass-spring model consists of a set of point masses connected to its neighbors by massless springs. Mass-spring models have been used extensively in facial animation [17], cloth motion [18] and surgical simulation [19]. Howard and Bekey [20] developed a generalized method to model an elastic object with the connections of springs and dampers. Finite element models have been used in the computer simulation to model facial tissue and predict surgical outcomes [21, 22]. However, the works on controlling an internal point in a deformable object are not attempted.

In order to manipulate the target point to the desired location, we must know the appropriate contact locations for effecting the desired motion. There can be an infinite number of possible ways of choosing the contact location based on the object shapes and task to be performed. Appropriate selection of the contact points is an important issue for performing certain tasks. The determination of optimal contact points for rigid object was extensively studied by many researchers with various stability criteria. Salisbury [23] and Kerr [24] discussed that a stable grasp was achieved if and only if the grasp matrix is full row rank. Abel et al. [25] modelled the contact interaction by point contact with Coulomb friction and they stated that optimal grasp has minimum dependency on frictional forces. Cutkosky [26] discussed that the size and shape of the object has less effect on the choice of grasp than by the tasks to be performed after examining a variety of human grasps. Ferrari et al. [27] defined grasp quality to minimize either the maximum value or sum of the finger forces as optimality criteria. Garg and Dutta [28] shown that the internal forces required for grasping deformable objects vary with size of object and finger contact angle. In [29], Watanabe and Yoshikawa investigated optimal contact points on an arbitrary shaped object in 3D using the concept of required external force set. Ding et al. proposed an algorithm for computing form closure grasp on a 3D polyhedral object using local search strategy in [30]. In [31, 32], various concepts and methodologies of robot grasping of rigid objects were reviewed. Cornella et al. [33] presented a mathematical approach to obtain optimal solution of contact points using the dual theorem of nonlinear programming. Saut et al. [34]

Target Point Manipulation Inside a Deformable Object 23

Result [42]: The number of manipulated points must be greater than or equal to that of the

In our present case, we assume that the number of positioned points is one, since we are trying to control the position of the target. Hence, ideally the number of contact points would also be one. But practically, we assume that there are two constraints: (1) we do not want to apply shear force on the deformable object to avoid the damage to the surface, and (2) we can only apply control force directed into the deformable object. We cannot pull the surface since the robotic fingers are not attached to the surface. Thus we need to control the

In this context, there exists a theorem on the force direction closure in mechanics that helps us determining the equivalent number of compressive forces that can replace one

*Theorem* [43]: A set of wrenches **w** can generate force in any direction if and only if there exists a three-tuple of wrenches 123 {,,} **www** whose respective force directions 1**f** *,* 2**f** *,* 3**f**

> 1 23 αβγ

we need three control forces distributed around the object such that the end points of their direction vectors draw a non-zero triangle that includes their common origin point. With such an arrangement we can realize any arbitrary displacement of the target point. Thus the

*Problem statement*: Given the number of actuation points, the initial target and its desired locations, find appropriate contact locations and control action such that the target point is positioned to its desired location by controlling the boundary points of the object with

Consider a schematic in Figure 2 where three robotic fingers are positioning an internal target (point A) in a deformable object to the desired location (point B). We assume that all the end-effectors of the robotic fingers are in contact with the deformable object such that

The coordinate systems are defined as follows: *w* is the task coordinate system, *o* is the object coordinate system, fixed on the object and *i* is the *i*-th robotic finger coordinate system, fixed on the *i*-th end-effectors located at the grasping point. In order to formulate the optimal contact locations, we model the deformable object using mass-spring-damper systems. The point masses are located at the nodal points and a Voigt element [20] is inserted between them. Figure 3 shows a single layer of the deformable object. Each element is labeled as *Ej* for *j NE* = 1,2, , where *NE* is total number of elements in a single layer.

is total number of point masses. *k* and *c* are the spring stiffness and the damping coefficient, respectively. Assume that no moment exerts on each mesh point. Then, the

**f ff** + += 0 (1)

*i ii x y* , *i N* = 1,2,3,..., where, *N*

are constants. The ramification of this theorem for our problem is that

positioned points in order to realize any arbitrary displacement.

i. Two of the three directions 1**f** , 2**f** , 3**f** are independent ii. A strictly positive combination of the three directions is zero.

unconstrained force in a 2D plane.

γ

**4. Deformable object modelling** 

they can apply only push on the object as needed.

Position vector of the *i*-th mesh point is defined as **p** = [ ]*<sup>T</sup>*

resultant force exerted on the mesh point, **p***<sup>i</sup>* , can be calculated as

problem can be stated as:

minimum force.

satisfy:

where α , β, and

position of the target by applying only unidirectional compressive force.

presented a method for solving the grasping force optimization problem of multi-fingered dexterous hand by minimizing a cost function. All these works are based on grasp of rigid objects.

There are also a few works based on deformable object grasping. Like Gopalakrishnan and Goldberg [35] proposed a framework for grasping deformable parts in assembly lines based on form closure properties for grasping rigid parts. Foresti and Pellegrino [36] described an automatic way of handling deformable objects using vision technique. The vision system worked along with a hierarchical self-organizing neural network to select proper grasping points in 2D. Wakamatsu et al. [37] analyzed grasping of deformable objects and introduced bounded force closure. However, position control of an internal target point in a deformable object by multi-fingered gripper has not been attempted. In our work, we address the issue of determining the optimal contact locations for manipulating a deformable object such that the internal target point can be positioned to the desired location by three robotic fingers using minimum applied forces.

The idea of passivity can be used to guarantee the stable interaction without exact knowledge of model information. Anderson and Spong [38] published the first solid result by passivation of the system using scattering theory. A passivity based impedance control strategy for robotic grasping and manipulation was presented by Stramigioli et al. [39]. Recently, Hannaford and Ryu [40] proposed a time-domain passivity control based on the energy consumption principle. The proposed algorithm did not require any knowledge about the dynamics of the system. They presented a PO and a PC to ensure stability under a wide variety of operating conditions. The PO can measure energy flow in and out of one or more subsystems in real-time by confining their analysis to system with very fast sampling rate. Meanwhile the PC, which is an adaptive dissipation element, absorbs exactly net energy output measured by the PO at each time sample. In [41], a model independent passivity-based approach to guarantee stability of a flexible manipulator with a noncollocated sensor-actuator pair is presented. This technique uses an active damping element to dissipate energy when the system becomes active. In our work we use the similar concept of PO and PC to ensure stable interaction between the robotic fingers and the deformable object. Our work also extends the concept of PO and PC for multi-point contact with the deformable object.

## **3. Problem description**

Consider a case in which multiple robotic fingers are manipulating a deformable object in a 2D plane to move an internal target point to a desired location. Before we discuss the design of the control law, we present a result from [42] to determine the number of actuation points required to position the target at an arbitrary location in a 2D plane. The following definitions are given according to the convention in [42].

Manipulation points: are defined as the points that can be manipulated directly by robotic fingers. In our case, the manipulation points are the points where the external robotic fingers apply forces on the deformable object.

Positioned points: are defined as the points that should be positioned indirectly by controlling manipulation points appropriately. In our case, the target is the position point.

The control law to be designed is non-collocated since the internal target point is not directly actuated by the robotic fingers. The following result is useful in determining the number of actuation points required to accurately position the target at the desired location.

22 Robotic Systems – Applications, Control and Programming

presented a method for solving the grasping force optimization problem of multi-fingered dexterous hand by minimizing a cost function. All these works are based on grasp of rigid

There are also a few works based on deformable object grasping. Like Gopalakrishnan and Goldberg [35] proposed a framework for grasping deformable parts in assembly lines based on form closure properties for grasping rigid parts. Foresti and Pellegrino [36] described an automatic way of handling deformable objects using vision technique. The vision system worked along with a hierarchical self-organizing neural network to select proper grasping points in 2D. Wakamatsu et al. [37] analyzed grasping of deformable objects and introduced bounded force closure. However, position control of an internal target point in a deformable object by multi-fingered gripper has not been attempted. In our work, we address the issue of determining the optimal contact locations for manipulating a deformable object such that the internal target point can be positioned to the desired location by three robotic fingers

The idea of passivity can be used to guarantee the stable interaction without exact knowledge of model information. Anderson and Spong [38] published the first solid result by passivation of the system using scattering theory. A passivity based impedance control strategy for robotic grasping and manipulation was presented by Stramigioli et al. [39]. Recently, Hannaford and Ryu [40] proposed a time-domain passivity control based on the energy consumption principle. The proposed algorithm did not require any knowledge about the dynamics of the system. They presented a PO and a PC to ensure stability under a wide variety of operating conditions. The PO can measure energy flow in and out of one or more subsystems in real-time by confining their analysis to system with very fast sampling rate. Meanwhile the PC, which is an adaptive dissipation element, absorbs exactly net energy output measured by the PO at each time sample. In [41], a model independent passivity-based approach to guarantee stability of a flexible manipulator with a noncollocated sensor-actuator pair is presented. This technique uses an active damping element to dissipate energy when the system becomes active. In our work we use the similar concept of PO and PC to ensure stable interaction between the robotic fingers and the deformable object. Our work also extends the concept of PO and PC for multi-point contact with the

Consider a case in which multiple robotic fingers are manipulating a deformable object in a 2D plane to move an internal target point to a desired location. Before we discuss the design of the control law, we present a result from [42] to determine the number of actuation points required to position the target at an arbitrary location in a 2D plane. The following

Manipulation points: are defined as the points that can be manipulated directly by robotic fingers. In our case, the manipulation points are the points where the external robotic fingers

Positioned points: are defined as the points that should be positioned indirectly by controlling manipulation points appropriately. In our case, the target is the position point. The control law to be designed is non-collocated since the internal target point is not directly actuated by the robotic fingers. The following result is useful in determining the number of

actuation points required to accurately position the target at the desired location.

objects.

using minimum applied forces.

deformable object.

**3. Problem description** 

apply forces on the deformable object.

definitions are given according to the convention in [42].

Result [42]: The number of manipulated points must be greater than or equal to that of the positioned points in order to realize any arbitrary displacement.

In our present case, we assume that the number of positioned points is one, since we are trying to control the position of the target. Hence, ideally the number of contact points would also be one. But practically, we assume that there are two constraints: (1) we do not want to apply shear force on the deformable object to avoid the damage to the surface, and (2) we can only apply control force directed into the deformable object. We cannot pull the surface since the robotic fingers are not attached to the surface. Thus we need to control the position of the target by applying only unidirectional compressive force.

In this context, there exists a theorem on the force direction closure in mechanics that helps us determining the equivalent number of compressive forces that can replace one unconstrained force in a 2D plane.

*Theorem* [43]: A set of wrenches **w** can generate force in any direction if and only if there exists a three-tuple of wrenches 123 {,,} **www** whose respective force directions 1**f** *,* 2**f** *,* 3**f** satisfy:


$$
\alpha \mathbf{f}\_1 + \beta \mathbf{f}\_2 + \gamma \mathbf{f}\_3 = 0 \tag{1}
$$

where α , β , and γ are constants. The ramification of this theorem for our problem is that we need three control forces distributed around the object such that the end points of their direction vectors draw a non-zero triangle that includes their common origin point. With such an arrangement we can realize any arbitrary displacement of the target point. Thus the problem can be stated as:

*Problem statement*: Given the number of actuation points, the initial target and its desired locations, find appropriate contact locations and control action such that the target point is positioned to its desired location by controlling the boundary points of the object with minimum force.

### **4. Deformable object modelling**

Consider a schematic in Figure 2 where three robotic fingers are positioning an internal target (point A) in a deformable object to the desired location (point B). We assume that all the end-effectors of the robotic fingers are in contact with the deformable object such that they can apply only push on the object as needed.

The coordinate systems are defined as follows: *w* is the task coordinate system, *o* is the object coordinate system, fixed on the object and *i* is the *i*-th robotic finger coordinate system, fixed on the *i*-th end-effectors located at the grasping point. In order to formulate the optimal contact locations, we model the deformable object using mass-spring-damper systems. The point masses are located at the nodal points and a Voigt element [20] is inserted between them. Figure 3 shows a single layer of the deformable object. Each element is labeled as *Ej* for *j NE* = 1,2, , where *NE* is total number of elements in a single layer. Position vector of the *i*-th mesh point is defined as **p** = [ ]*<sup>T</sup> i ii x y* , *i N* = 1,2,3,..., where, *N* is total number of point masses. *k* and *c* are the spring stiffness and the damping coefficient, respectively. Assume that no moment exerts on each mesh point. Then, the resultant force exerted on the mesh point, **p***<sup>i</sup>* , can be calculated as

Target Point Manipulation Inside a Deformable Object 25

where, **n r**( ) *i i* is the unit inner normal of *i*-th contact and *<sup>i</sup> f* denotes the *i*-th finger's force. We assume that the contact forces should exist in the friction cone to manipulate objects

are the three contact point locations measured anti-clockwise with respect to the x axis as shown in Figure 4. In addition, we assume that the normal forces have to be non-negative to

A physically realizable grasping configuration can be achieved if the surface normals at three contact points positively span the plane so that they do not all lie in the same halfplane [44]. Therefore, a realizable grasp can be achieved if the pair-wise angles satisfy the

> θθ

A unique solution to realizable grasping may not always exist. Therefore, we develop an optimization technique that minimizes the total force applied on to the object to obtain a particular solution. The optimal locations of the contact points would be the solution of the

> 0 360 ≤ ≤ θ

3 1

= **w nr** <sup>=</sup> *iii i f*

≥ 0 *<sup>i</sup> f* , 1,2,3 *i* =

*<sup>i</sup>* , 1,2,3 *i* =

( )

θ , 2 2 **r** ( ) θ, and

≥ 0 *<sup>i</sup> f* , *i* = 1,2,3 (4)

low ≤ ≤*<sup>i</sup>* high , *i j* , 1,2,3 = , *i j* ≠ (5)

θ1 , θ<sup>2</sup> , and θ3

(6)

without slip of the fingertip. Now we need to find three distinct points, 1 1 **r** ( )

, on the boundary of the object such that Equation (3) is satisfied. Here,

avoid separation and slippage at the contact points, i.e.,

Fig. 4. Three fingers grasp of a planar object

θ

following optimization problem.

 θθ

θ

 θθ

 θmin ≤−≤ *i j* max , *i j* , 1,2,3 = , *i j* ≠

 θmin ≤−≤ | | *i j* max ,

θ

following constraints

min **f f***<sup>T</sup>* sunject to

3 3 **r** ( ) θ

$$\mathbf{w}\_{i} = -\frac{\partial \mathbf{U}}{\partial \mathbf{p}\_{i}}\tag{2}$$

where, *U* denotes the total potential energy of the object

Fig. 2. Schematic of the robotic fingers manipulating a deformable object

Fig. 3. Model of a deformable object with interconnected mass-spring-damper

### **5. Framework for optimal contact locations**

We develop an optimization technique that satisfies the force closure condition for three fingers planar grasp. The resultant wrench for the contacts of three robotic fingers is given by

$$\mathbf{w} = \sum\_{i=1}^{3} f\_i \mathbf{n}\_i(\mathbf{r}\_i) \; \text{ (} \forall \mathbf{w} \in \mathfrak{R}^2 \text{) (} \exists f\_i \ge 0, \ 1 \le i \le 3 \text{)}\tag{3}$$

24 Robotic Systems – Applications, Control and Programming

∂ = − ∂ **w**

*<sup>U</sup>* (2)

**p** *<sup>i</sup> i*

where, *U* denotes the total potential energy of the object

Fig. 2. Schematic of the robotic fingers manipulating a deformable object

Fig. 3. Model of a deformable object with interconnected mass-spring-damper

( )

We develop an optimization technique that satisfies the force closure condition for three fingers planar grasp. The resultant wrench for the contacts of three robotic fingers is given by

*f* , <sup>2</sup> ( ) ( 0, 1 3) ∀ ∈ℜ ∃ ≥ ≤ ≤ **w** *<sup>i</sup> f i* (3)

**5. Framework for optimal contact locations** 

3 1

= **w nr** <sup>=</sup> *iii i*

where, **n r**( ) *i i* is the unit inner normal of *i*-th contact and *<sup>i</sup> f* denotes the *i*-th finger's force. We assume that the contact forces should exist in the friction cone to manipulate objects without slip of the fingertip. Now we need to find three distinct points, 1 1 **r** ( ) θ , 2 2 **r** ( ) θ , and 3 3 **r** ( ) θ , on the boundary of the object such that Equation (3) is satisfied. Here, θ1 , θ <sup>2</sup> , and θ 3 are the three contact point locations measured anti-clockwise with respect to the x axis as shown in Figure 4. In addition, we assume that the normal forces have to be non-negative to avoid separation and slippage at the contact points, i.e.,

$$f\_i \ge 0 \text{, } i = 1 \text{, } 2 \text{,3} \tag{4}$$

Fig. 4. Three fingers grasp of a planar object

A physically realizable grasping configuration can be achieved if the surface normals at three contact points positively span the plane so that they do not all lie in the same halfplane [44]. Therefore, a realizable grasp can be achieved if the pair-wise angles satisfy the following constraints

$$\left|\theta\_{\text{min}} \le \theta\_i - \theta\_j \le \theta\_{\text{max}} \; \; \; \prime \; \theta\_{\text{low}} \le \theta\_i \le \theta\_{\text{high}} \; \; \; i, j = 1, 2, 3 \; \; \; i \ne j \tag{5}$$

A unique solution to realizable grasping may not always exist. Therefore, we develop an optimization technique that minimizes the total force applied on to the object to obtain a particular solution. The optimal locations of the contact points would be the solution of the following optimization problem.

min **f f***<sup>T</sup>* sunject to

$$\mathbf{w} = \sum\_{i=1}^{3} f\_i \mathbf{n}\_i(\mathbf{r}\_i)$$

$$\theta\_{\text{min}} \le \left| \theta\_i - \theta\_j \right| \le \theta\_{\text{max}} \quad i, j = 1, 2, 3 \; \; \; i \ne j \tag{6}$$

$$f\_i \ge 0 \; \; \; i = 1, 2, 3$$

$$0 \le \theta\_i \le 360^\circ \; \; \; i = 1, 2, 3$$

Target Point Manipulation Inside a Deformable Object 27

A passivity-based control approach based on energy monitoring is developed for deformable object manipulation to guarantee passivity (and consequently stability) of the system. The main reason to use passivity-based control is to ensure stability without the need of having an accurate model of the deformable object. It is not possible to develop a precise dynamic model of a deformable object due to complex nonlinear internal dynamics as well as variation in geometry and mechanical properties. Thus passivity based control is an ideal candidate to ensure stability since it is a model independent technique. The basic idea is to use a PO to monitor the energy generated by the controller and to dissipate the excess energy using a PC when the controller becomes active [41], without the need for

We develop a network model with PO and PC similar to [41] as shown in Figure 5. The PO monitors the net energy flow of the individual finger's controller. When the energy becomes negative, PC dissipates excess energy from the individual controller. Similar to [41] energy is defined as the integral of the inner product between conjugate input and output, which may or may not correspond to physical energy. Definition of passivity [41] states that the energy applied to a passive network must be positive for all time. Figure 5 shows a network representation of the energetic behavior of this control system. The block diagram in Figure 5 is partitioned into three elements: the trajectory generator, the controller and the plant. Each controller corresponds to one finger. Since three robotic fingers are used for planar

The connection between the controller and the plant is a physical interface at which conjugate variables ( *<sup>i</sup> f* , *<sup>i</sup> v* ; where *<sup>i</sup> f* is the force applied by *i*-th finger and *<sup>i</sup> v* is the velocity of *i*-th finger) define physical energy flow between controller and plant. The forces and

The desired target velocity is obtained by differentiating (8) with respect to time and is

*<sup>T</sup>*

where, *<sup>d</sup> x* and *<sup>d</sup> y* are the desired target velocities, respectively. The desired target velocity

The trajectory generator essentially computes the desired target velocity along the direction of actuation of the robotic fingers. If the direction of actuation of the robotic fingers, **n***<sup>i</sup>* , and desired target velocity, **p***<sup>d</sup>* , are known with respect to a global reference frame then the trajectory generator computes the desired target velocity along the direction of actuation of

*<sup>T</sup> fff* <sup>123</sup> **f** = [ ] (13)

*<sup>T</sup> vvv* <sup>123</sup> **v** = [ ] (14)

*d dd* **p** = [ ] *x y* (15)

*di d i v* = ⋅ **p n** (16)

**6.2 Passivity-based control** 

**Passivity Observer (PO)** 

velocities are given by

the fingers using Equation (16).

given by

modeling the dynamics of the plant (deformable object).

manipulation, three individual controller transfer energy to the plant.

along the direction of actuation of the *i*-th robotic finger is given by

Once we get the optimal contact locations, all three robotic fingers can be located at their respective positions to effect the desired motion at those contact points.

## **6. Design of the controller**

In this section, a control law for the robotic fingers is developed to guide a target from any point A to an arbitrary point B within the deformable object as shown in Figure 2.

### **6.1 Target position control**

At any given time-step, point A is the actual location of the target and point B is the desired location of the target. **n1**, **n2** and **n3** are unit vectors which determine the direction of force application of the actuation points with respect to the global reference frame *<sup>w</sup>* . Let assume, **p***d* is the position vector of point B and **p** is the position vector of point A. Referring to Figure 2, the position vector of point A is given by

$$\mathbf{p} = \begin{bmatrix} x & y \end{bmatrix}^{\mathrm{T}} \tag{7}$$

where, *x* and *y* are the position coordinates of point A in the global reference frame *<sup>w</sup>* . The desired target position is represented by point B whose position vector is given by

$$\mathbf{p}\_d = \begin{bmatrix} \mathbf{x}\_d & \mathbf{y}\_d \end{bmatrix}^r \tag{8}$$

where, *<sup>d</sup> x* and *<sup>d</sup> y* are the desired target position coordinates. The target position error, **e,** is given by

$$\mathbf{e} = \mathbf{p}\_d - \mathbf{p} \tag{9}$$

Once the optimal contact locations are determined from Equation (6), the planner generates the desired reference locations for these contact points by projecting the error vector between the desired and the actual target locations in the direction of the applied forces, which is given by

$$\mathbf{e}\mathbf{\bar{e}} = \mathbf{e} \cdot \mathbf{n}\_{\parallel} \tag{10}$$

where,

$$\mathbf{m}\_1 = \begin{bmatrix} m\_{xi} & m\_{yi} \end{bmatrix}^T \tag{11}$$

All robotic fingers are controlled by their individual controllers using the following proportional-integral (PI) control law

$$f\_i = \mathbf{K}\_{\text{Pl}} \mathbf{e}\_i^\* + \mathbf{K}\_{\text{il}} \Big| \mathbf{e}\_i^\* dt \quad , \quad i = 1, 2, 3 \tag{12}$$

where, *KPi* , and *KIi* are the proportional and integral gains. Note that in the control law (12), mechanical properties of the deformable object are not required. Forces applied by the fingers on the surface of the deformable object are calculated by projecting the error vector in the direction of the applied forces. But the Equation (12) does not guarantee that the system will be stable. Thus a passivity-based control approach based on energy monitoring is developed to guarantee the stability of the system.

### **6.2 Passivity-based control**

26 Robotic Systems – Applications, Control and Programming

Once we get the optimal contact locations, all three robotic fingers can be located at their

In this section, a control law for the robotic fingers is developed to guide a target from any

At any given time-step, point A is the actual location of the target and point B is the desired location of the target. **n1**, **n2** and **n3** are unit vectors which determine the direction of force application of the actuation points with respect to the global reference frame *<sup>w</sup>* . Let assume, **p***d* is the position vector of point B and **p** is the position vector of point A.

where, *x* and *y* are the position coordinates of point A in the global reference frame *<sup>w</sup>* . The

where, *<sup>d</sup> x* and *<sup>d</sup> y* are the desired target position coordinates. The target position error, **e,** is

Once the optimal contact locations are determined from Equation (6), the planner generates the desired reference locations for these contact points by projecting the error vector between the desired and the actual target locations in the direction of the applied forces,

= [ ] **ni**

All robotic fingers are controlled by their individual controllers using the following

where, *KPi* , and *KIi* are the proportional and integral gains. Note that in the control law (12), mechanical properties of the deformable object are not required. Forces applied by the fingers on the surface of the deformable object are calculated by projecting the error vector in the direction of the applied forces. But the Equation (12) does not guarantee that the system will be stable. Thus a passivity-based control approach based on energy monitoring

desired target position is represented by point B whose position vector is given by

**p** =[ ]*<sup>T</sup> x y* (7)

*d dd x y* (8)

**ep p** = −*d* (9)

\* = ⋅ **<sup>i</sup> e en** *<sup>i</sup>* (10)

*<sup>T</sup> n n xi yi* (11)

\* \* = + **e e** *i Pi i Ii i <sup>f</sup> K K dt* , 1,2,3 *<sup>i</sup>* <sup>=</sup> (12)

point A to an arbitrary point B within the deformable object as shown in Figure 2.

respective positions to effect the desired motion at those contact points.

Referring to Figure 2, the position vector of point A is given by

[] **p** = *<sup>T</sup>*

**6. Design of the controller** 

**6.1 Target position control** 

given by

where,

which is given by

proportional-integral (PI) control law

is developed to guarantee the stability of the system.

A passivity-based control approach based on energy monitoring is developed for deformable object manipulation to guarantee passivity (and consequently stability) of the system. The main reason to use passivity-based control is to ensure stability without the need of having an accurate model of the deformable object. It is not possible to develop a precise dynamic model of a deformable object due to complex nonlinear internal dynamics as well as variation in geometry and mechanical properties. Thus passivity based control is an ideal candidate to ensure stability since it is a model independent technique. The basic idea is to use a PO to monitor the energy generated by the controller and to dissipate the excess energy using a PC when the controller becomes active [41], without the need for modeling the dynamics of the plant (deformable object).

### **Passivity Observer (PO)**

We develop a network model with PO and PC similar to [41] as shown in Figure 5. The PO monitors the net energy flow of the individual finger's controller. When the energy becomes negative, PC dissipates excess energy from the individual controller. Similar to [41] energy is defined as the integral of the inner product between conjugate input and output, which may or may not correspond to physical energy. Definition of passivity [41] states that the energy applied to a passive network must be positive for all time. Figure 5 shows a network representation of the energetic behavior of this control system. The block diagram in Figure 5 is partitioned into three elements: the trajectory generator, the controller and the plant. Each controller corresponds to one finger. Since three robotic fingers are used for planar manipulation, three individual controller transfer energy to the plant.

The connection between the controller and the plant is a physical interface at which conjugate variables ( *<sup>i</sup> f* , *<sup>i</sup> v* ; where *<sup>i</sup> f* is the force applied by *i*-th finger and *<sup>i</sup> v* is the velocity of *i*-th finger) define physical energy flow between controller and plant. The forces and velocities are given by

$$\mathbf{f} = \begin{bmatrix} f\_1 & f\_2 & f\_3 \end{bmatrix}^r \tag{13}$$

$$\mathbf{v} = \begin{bmatrix} v\_1 & v\_2 & v\_3 \end{bmatrix}^r \tag{14}$$

The desired target velocity is obtained by differentiating (8) with respect to time and is given by

$$
\dot{\mathbf{p}}\_d = \begin{bmatrix} \dot{\mathbf{x}}\_d & \dot{y}\_d \end{bmatrix}^T \tag{15}
$$

where, *<sup>d</sup> x* and *<sup>d</sup> y* are the desired target velocities, respectively. The desired target velocity along the direction of actuation of the *i*-th robotic finger is given by

$$
\boldsymbol{\sigma}\_{d\boldsymbol{i}} = \dot{\mathbf{p}}\_d \cdot \mathbf{n}\_i \tag{16}
$$

The trajectory generator essentially computes the desired target velocity along the direction of actuation of the robotic fingers. If the direction of actuation of the robotic fingers, **n***<sup>i</sup>* , and desired target velocity, **p***<sup>d</sup>* , are known with respect to a global reference frame then the trajectory generator computes the desired target velocity along the direction of actuation of the fingers using Equation (16).

Target Point Manipulation Inside a Deformable Object 29

and ≠ 0 *<sup>i</sup> f* . Therefore the trajectory generator is not passive because it has a velocity source as a power source. It is shown that even if the system has an active term, stability is guaranteed as long as the active term is not dependent on the system states [45]. Therefore,

Here, we consider that the plant is passive. Now we design a PO for sufficiently small time-

( ) ( ( ) ( ) ( ) ( ))

*Et T ft v t ft vt* (17)

*T ft v t T ft vt* , ∀ ≥ 0 *<sup>k</sup> t* , 1,2,3 *i* = (18)

*E k f kv k* (19)

*E k f kv k* (20)

*E k f kv k* (21)

*E k f kv k* (22)

<sup>1122</sup> () () () () +≥+ *TPT P Ek Ek Ek Ek ii ii* , 0 ∀ ≥*k* (23)

*i k i j di j i j i j*

where, Δ*T* is the sampling period and *tj* = ×Δ *j T* . In normal passive operation, ( ) *E t <sup>i</sup> <sup>j</sup>* should always be positive. In case when ( ) 0 *E t i j* < , the PO indicates that the *i*-th controller is generating energy and going to be active. The sufficient condition to make the whole system

The active and passive port can be recognized by monitoring the conjugate signal pair of each port in real time. A port is active if *fv* < 0 that means energy is flowing out of the network system and it is passive if *fv* ≥ 0 , that means energy is flowing into the network

> ( 1) ( ) ( ) if ( ) ( ) 0 ( ) ( 1) if ( ) ( ) 0 − + <sup>&</sup>gt; <sup>=</sup> − ≤

> ( 1) ( ) ( ) if ( ) ( ) 0 ( ) ( 1) if ( ) ( ) 0 − − <sup>&</sup>lt; <sup>=</sup> − ≥

( 1) ( ) ( ) if ( ) ( ) 0 ( ) ( 1) if ( ) ( ) 0 − − <sup>&</sup>lt; <sup>=</sup> − ≥

( 1) ( ) ( ) if ( ) ( ) 0 ( ) ( 1) if ( ) ( ) 0 − + <sup>&</sup>gt; <sup>=</sup> − ≤

where, 1 ( ) *<sup>T</sup> E k <sup>i</sup>* and 2 ( ) *<sup>T</sup> E k <sup>i</sup>* are the energy flowing in and out at the trajectory side of the controller port, respectively, whereas 1 ( ) *<sup>P</sup> E k <sup>i</sup>* and 2 ( ) *<sup>P</sup> E k <sup>i</sup>* are the energy flowing in and out at the plant side of the controller port, respectively. So the time domain passivity condition is

*i i i*

*i i i*

*i i di*

*i i di*

*T i i di i di*

*T i i di i di*

*P i ii ii*

*P i ii ii*

*E k f kv k f kv k E k*

*E k f kv k f kv k E k*

*E k f kv k f kv k E k*

*E k f kv k f kv k E k*

= Δ <sup>−</sup> *k*

passivity of the plant and controllers is sufficient to ensure system stability.

0

() () ()()

*i j di j ij ij*

=

*j*

0 0

= = Δ ≥ <sup>Δ</sup> *k k*

*j j*

system. The input and output energy can be computed as [46]

*i T*

*i T*

*i P*

*i P*

Net energy output of an individual controller is given by

1

2

1

2

1

*T*

1

2

*T*

2

1

*P*

1

2

*P*

2

where *k* means the *k*-th step sampling time.

step Δ*T* as:

given by

passive can be written as

The connections between the trajectory generator and the controller, which traditionally consist of a one-way command information flow, are modified by the addition of a virtual feedback of the conjugate variable [41]. For the system shown in Figure 5, output of the trajectory generator is the desired target velocity, *di v* , along direction of *i*-th finger and output of the controller is calculated from Equation (12).

Fig. 5. Network representation of the control system. α<sup>1</sup>*<sup>i</sup>* andα<sup>2</sup>*<sup>i</sup>* are the adjustable damping elements at each port, *i*=1,2,3

For both connections, virtual feedback is the force applied by the robotic fingers. Integral of the inner product between trajectory generator output ( *di v* ) and its conjugate variable ( *<sup>i</sup> f* ) defines "virtual input energy." The virtual input energy is generated to give a command to the controller, which transmits the input energy to the plant through the controller in the form of "real output energy." Real output energy is the physical energy that enters to the plant (deformable object) at the point where the robotic finger is in contact with the object. Therefore, the plant is a three-port system since three fingers manipulate the object. The conjugate pair that represents the power flow is *<sup>i</sup> f* , *<sup>i</sup> v* (the force and the velocity of *i*-th finger, respectively). The reason for defining virtual input energy is to transfer the source of energy from the controllers to the trajectory generator. Thus the controllers can be represented as two-ports which characterize energy exchange between the trajectory generator and the plant. Note that the conjugate variables that define power flow are discrete time values and so the analysis is confined to systems having a sampling rate substantially faster than the system dynamics.

For regulating the target position during manipulation, = 0 *di v* . Hence the trajectory generator is passive since it does not generate energy. However, for target tracking, ≠ 0 *di v*

28 Robotic Systems – Applications, Control and Programming

The connections between the trajectory generator and the controller, which traditionally consist of a one-way command information flow, are modified by the addition of a virtual feedback of the conjugate variable [41]. For the system shown in Figure 5, output of the trajectory generator is the desired target velocity, *di v* , along direction of *i*-th finger and

> α<sup>1</sup>*<sup>i</sup>* andα

For both connections, virtual feedback is the force applied by the robotic fingers. Integral of the inner product between trajectory generator output ( *di v* ) and its conjugate variable ( *<sup>i</sup> f* ) defines "virtual input energy." The virtual input energy is generated to give a command to the controller, which transmits the input energy to the plant through the controller in the form of "real output energy." Real output energy is the physical energy that enters to the plant (deformable object) at the point where the robotic finger is in contact with the object. Therefore, the plant is a three-port system since three fingers manipulate the object. The conjugate pair that represents the power flow is *<sup>i</sup> f* , *<sup>i</sup> v* (the force and the velocity of *i*-th finger, respectively). The reason for defining virtual input energy is to transfer the source of energy from the controllers to the trajectory generator. Thus the controllers can be represented as two-ports which characterize energy exchange between the trajectory generator and the plant. Note that the conjugate variables that define power flow are discrete time values and so the analysis is confined to systems having a sampling rate

For regulating the target position during manipulation, = 0 *di v* . Hence the trajectory generator is passive since it does not generate energy. However, for target tracking, ≠ 0 *di v*

<sup>2</sup>*<sup>i</sup>* are the adjustable damping

output of the controller is calculated from Equation (12).

Fig. 5. Network representation of the control system.

substantially faster than the system dynamics.

elements at each port, *i*=1,2,3

and ≠ 0 *<sup>i</sup> f* . Therefore the trajectory generator is not passive because it has a velocity source as a power source. It is shown that even if the system has an active term, stability is guaranteed as long as the active term is not dependent on the system states [45]. Therefore, passivity of the plant and controllers is sufficient to ensure system stability.

Here, we consider that the plant is passive. Now we design a PO for sufficiently small timestep Δ*T* as:

$$E\_i(t\_k) = \Delta T \sum\_{j=0}^k \left( f\_i(t\_j) v\_{di}(t\_j) - f\_i(t\_j) v\_i(t\_j) \right) \tag{17}$$

where, Δ*T* is the sampling period and *tj* = ×Δ *j T* . In normal passive operation, ( ) *E t <sup>i</sup> <sup>j</sup>* should always be positive. In case when ( ) 0 *E t i j* < , the PO indicates that the *i*-th controller is generating energy and going to be active. The sufficient condition to make the whole system passive can be written as

$$
\Delta T \sum\_{j=0}^{k} f\_i(t\_j) v\_{il}(t\_j) \ge \Delta T \sum\_{j=0}^{k} f\_i(t\_j) v\_i(t\_j) \text{ , } \forall t\_k \ge 0 \text{ , } i = 1, 2, 3 \tag{18}
$$

where *k* means the *k*-th step sampling time.

The active and passive port can be recognized by monitoring the conjugate signal pair of each port in real time. A port is active if *fv* < 0 that means energy is flowing out of the network system and it is passive if *fv* ≥ 0 , that means energy is flowing into the network system. The input and output energy can be computed as [46]

$$E\_{1i}^{\top}(k) = \begin{cases} E\_{1i}^{\top}(k-1) + f\_i(k)\upsilon\_{\vec{u}i}(k) & \text{if } f\_i(k)\upsilon\_{\vec{u}i}(k) > 0\\ E\_{1i}^{\top}(k-1) & \text{if } f\_i(k)\upsilon\_{\vec{u}i}(k) \le 0 \end{cases} \tag{19}$$

$$E\_{2i}^{\top}(k) = \begin{cases} E\_{2i}^{\top}(k-1) - f\_i(k)\upsilon\_{di}(k) & \text{if } f\_i(k)\upsilon\_{di}(k) < 0\\ E\_{2i}^{\top}(k-1) & \text{if } f\_i(k)\upsilon\_{di}(k) \ge 0 \end{cases} \tag{20}$$

$$E\_{1i}^{\mathcal{P}}(k) = \begin{cases} E\_{1i}^{\mathcal{P}}(k-1) - f\_i(k)\upsilon\_i(k) & \text{if } f\_i(k)\upsilon\_i(k) < 0\\ E\_{1i}^{\mathcal{P}}(k-1) & \text{if } f\_i(k)\upsilon\_i(k) \ge 0 \end{cases} \tag{21}$$

$$E\_{2i}^{\mathcal{P}}(k) = \begin{cases} E\_{2i}^{\mathcal{P}}(k-1) + f\_i(k)\upsilon\_i(k) & \text{if } f\_i(k)\upsilon\_i(k) > 0\\ E\_{2i}^{\mathcal{P}}(k-1) & \text{if } f\_i(k)\upsilon\_i(k) \le 0 \end{cases} \tag{22}$$

where, 1 ( ) *<sup>T</sup> E k <sup>i</sup>* and 2 ( ) *<sup>T</sup> E k <sup>i</sup>* are the energy flowing in and out at the trajectory side of the controller port, respectively, whereas 1 ( ) *<sup>P</sup> E k <sup>i</sup>* and 2 ( ) *<sup>P</sup> E k <sup>i</sup>* are the energy flowing in and out at the plant side of the controller port, respectively. So the time domain passivity condition is given by

$$E\_{1i}^T(k) + E\_{1i}^P(k) \ge E\_{2i}^T(k) + E\_{2i}^P(k), \ \forall k \ge 0 \tag{23}$$

Net energy output of an individual controller is given by

Target Point Manipulation Inside a Deformable Object 31

We perform extensive simulations of positioning an internal target point to a desired location in a deformable object by external robotic fingers to demonstrate the feasibility of the concept. We discretize the deformable object with elements of mass-spring-damper. We choose m=0.006 kg for each point mass, k=10 N/m for spring constant and c=5 Ns/m for damping coefficient. With this parameter set up, we present four different simulation

In Task 1, we present the optimal contact locations of various objects using three robotic fingers such that an internal target point is positioned to the desired location with minimum force. The optimal contact locations are computed using Equation (6) as shown in Figures 6 to 8. In these figures, the base of the arrow represents the initial target location and the arrow head denotes the desired location of the target point. The contact locations are depicted by the bold red dots on the periphery of the deformable object. Note that in determining the optimal contact locations, we introduced minimum angle constraints between any two robotic fingers to achieve a physically realizable grasping configuration.

1

3

θ1 , θ <sup>2</sup> , θ

199.9o, (c) 7.54o, 182.54o, 327.54o, and (d) 48.59o, 88.59o, 234.39o

1

*<sup>i</sup> f k* are the PCs' outputs at trajectory and plant sides of the controller

*<sup>i</sup> f k* are the modified outputs at trajectory and plant sides

1 2

2

<sup>3</sup> ): (a) 59.98o, 204.9o, 244.9o, (b) 14.96o, 159.9o,

1

3

(a) (b)

(c) (d)

3

where, ( ) *<sup>t</sup>*

tasks. **Task 1:**  *<sup>i</sup> f k* and ( ) *<sup>p</sup>*

of the controller ports, respectively.

**7. Simulation and discussion** 

2

2

Fig. 6. Optimal contact locations (

3

*<sup>i</sup> f k* and ( ) *<sup>P</sup>*

ports, respectively. ( ) *<sup>T</sup>*

$$\begin{aligned} E\_i(k) &= E\_{1i}^T(k) - E\_{2i}^P(k) + E\_{1i}^P(k) - E\_{2i}^T(k) \\ &+ \alpha\_{1i}(k-1)\upsilon\_{il}(k-1)^2 + \alpha\_{2i}(k-1)\upsilon\_i(k-1)^2 \end{aligned} \tag{24}$$

where, the last two terms are the energy dissipated at the previous time step. 1 α *i* ( 1) *k* − and 2 α *i* ( 1) *k* − are the damping coefficient calculated based on PO discussed below.

### **Passivity Controller (PC)**

In order to dissipate excess energy of the controlled system, a damping force should be applied to its moving parts depending on the causality of the port. As it is well known, such a force is a function of the system's velocities giving the physical damping action on the system. Mathematically, the damping force is given by

$$f\_d = \alpha v \tag{25}$$

where α is the adjustable damping factor and *v* is the velocity. From this simple observation, it seems necessary to measure and use the velocities of the robotic fingers in the control algorithm in order to enhance the performance by means of controlling the damping forces acting on the systems. On the other hand, velocities measurements are not always available and in these cases position measurements can be used to estimate velocities and therefore to inject damping.

When the observed energy becomes negative, the damping coefficient is computed using the following relation (which obeys the constitutive Equation (25)). Therefore, the algorithm used for a 2-port network with impedance causality (i.e., velocity input, force output) at each port is given by the following steps:

1. Two series PCs are designed for several cases as given below:

Case 1: If *E k <sup>i</sup>*() 0 ≥ , i.e., if the output energy is less than the input energy, there is no need to activate any PCs.

Case 2: If *E k <sup>i</sup>* () 0 < , i.e., if the output energy is more than the input energy, i.e., 2 1 () () > *P T Ek Ek i i* , then we need to activate only the plant side PC.

$$\begin{aligned} \alpha\_{1i}(k) &= 0\\ \alpha\_{2i}(k) &= -\mathcal{E}\_i(k) \;/\; v\_i(k)^2 \end{aligned} \tag{26}$$

Case 3: Similarly, if *E k <sup>i</sup>*() 0 < , 2 1 () () > *T P Ek Ek i i* , then we need to activate only the trajectory side PC.

$$\begin{aligned} \alpha\_{1i}(k) &= -E\_i(k) / \upsilon\_{di}(k)^2\\ \alpha\_{2i}(k) &= 0 \end{aligned} \tag{27}$$

2. The contributions of PCs are converted into power variables as

$$\begin{aligned} f\_i^t(k) &= \alpha\_{1i}(k)\upsilon\_{il}(k) \\ f\_i^p(k) &= \alpha\_{2i}(k)\upsilon\_i(k) \end{aligned} \tag{28}$$

3. Modified outputs are

$$\begin{aligned} f\_i^{\;\!\!\! }(k) &= f\_i(k) + f\_i^{\;\!\!\! }(k) \\ f\_i^{\;\!\!\!\/ }(k) &= f\_i(k) + f\_i^{\;\!\!\/ }(k) \end{aligned} \tag{29}$$

where, ( ) *<sup>t</sup> <sup>i</sup> f k* and ( ) *<sup>p</sup> <sup>i</sup> f k* are the PCs' outputs at trajectory and plant sides of the controller ports, respectively. ( ) *<sup>T</sup> <sup>i</sup> f k* and ( ) *<sup>P</sup> <sup>i</sup> f k* are the modified outputs at trajectory and plant sides of the controller ports, respectively.

## **7. Simulation and discussion**

We perform extensive simulations of positioning an internal target point to a desired location in a deformable object by external robotic fingers to demonstrate the feasibility of the concept. We discretize the deformable object with elements of mass-spring-damper. We choose m=0.006 kg for each point mass, k=10 N/m for spring constant and c=5 Ns/m for damping coefficient. With this parameter set up, we present four different simulation tasks.

### **Task 1:**

30 Robotic Systems – Applications, Control and Programming

( 1) ( 1) ( 1) ( 1)

In order to dissipate excess energy of the controlled system, a damping force should be applied to its moving parts depending on the causality of the port. As it is well known, such a force is a function of the system's velocities giving the physical damping action on the

> *f v <sup>d</sup>* =α

observation, it seems necessary to measure and use the velocities of the robotic fingers in the control algorithm in order to enhance the performance by means of controlling the damping forces acting on the systems. On the other hand, velocities measurements are not always available and in these cases position measurements can be used to estimate velocities and

When the observed energy becomes negative, the damping coefficient is computed using the following relation (which obeys the constitutive Equation (25)). Therefore, the algorithm used for a 2-port network with impedance causality (i.e., velocity input, force output) at

Case 1: If *E k <sup>i</sup>*() 0 ≥ , i.e., if the output energy is less than the input energy, there is no

( ) ( )/ ( )

*k Ek vk*

( ) ( )/ ( )

1 2

*f k kv k f k kv k* α

*i i di*

*i ii*

α

= =

() () () () () ()

() () () () () () = + = + *T t i ii P p i ii*

*f k f k f k*

*k Ek v k*

Case 3: Similarly, if *E k <sup>i</sup>*() 0 < , 2 1 () () > *T P Ek Ek i i* , then we need to activate only the trajectory

*i ii*

() 0 < , i.e., if the output energy is more than the input energy, i.e.,

2

2

is the adjustable damping factor and *v* is the velocity. From this simple

+ − −+ − −

*i di i i*

 α

2 2

*k v k k vk* (24)

α*i*

(25)

(26)

(27)

(28)

*<sup>f</sup> <sup>k</sup> <sup>f</sup> <sup>k</sup> <sup>f</sup> <sup>k</sup>* (29)

( 1) *k* − and

1212

*T P PT iiiii*

=−+−

*Ek E k E k E k E k*

where, the last two terms are the energy dissipated at the previous time step. 1

( 1) *k* − are the damping coefficient calculated based on PO discussed below.

α

system. Mathematically, the damping force is given by

2 α*i*

where

α

therefore to inject damping.

each port is given by the following steps:

need to activate any PCs.

Case 2: If *E k <sup>i</sup>*

side PC.

3. Modified outputs are

1. Two series PCs are designed for several cases as given below:

2 1 () () > *P T Ek Ek i i* , then we need to activate only the plant side PC.

1

α

α

*i*

() 0

() 0 = − = *i idi*

*k*

*t*

*p*

*k*

= = −

2

1 2

α

α

2. The contributions of PCs are converted into power variables as

*i*

**Passivity Controller (PC)** 

1 2 () () () () ()

> In Task 1, we present the optimal contact locations of various objects using three robotic fingers such that an internal target point is positioned to the desired location with minimum force. The optimal contact locations are computed using Equation (6) as shown in Figures 6 to 8. In these figures, the base of the arrow represents the initial target location and the arrow head denotes the desired location of the target point. The contact locations are depicted by the bold red dots on the periphery of the deformable object. Note that in determining the optimal contact locations, we introduced minimum angle constraints between any two robotic fingers to achieve a physically realizable grasping configuration.

Fig. 6. Optimal contact locations (θ1 , θ <sup>2</sup> , θ<sup>3</sup> ): (a) 59.98o, 204.9o, 244.9o, (b) 14.96o, 159.9o, 199.9o, (c) 7.54o, 182.54o, 327.54o, and (d) 48.59o, 88.59o, 234.39o

Target Point Manipulation Inside a Deformable Object 33

In Task 2, we present a target positioning operation when the robotic fingers are not located at their optimal contact locations. For instance, we choose that the robotic fingers are located at 0, 120, and 240 degrees with respect to the x-axis as shown in Figure 9. We assume that the initial position of the target is at the center of the section of the deformable object, i.e., (0, 0) mm. The goal is to position the target at the desired location (5, 5) mm with a smooth

*y*

2

Fig. 9. Deformable object with contact points located at 0, 120 and 240 degrees with respect


x (m)

Fig. 10. The desired (red dashed) and the actual (blue solid) straight lines when robotic

0 1 2 3 4 5 6

desired actual

*x* 1

x (mm)

fingers are located at 0, 120, and 240 degrees with respect to *x*-axis

0

1

2

3

y (mm)

4

5

6


3


0

y (m)

0.02

0.04

0.06

**Task 2:** 

to *x*-axis

Fig. 7. Optimal contact locations (θ1 , θ <sup>2</sup> , θ <sup>3</sup> ): (a) 0o, 170o, 253.8o, (b) 29.07o, 116.93o, 233.86o, (c) 0o, 175o, 320o, and (d) 76.93o, 116.93o, 261.94o

Fig. 8. Optimal contact locations (θ1 , θ <sup>2</sup> , θ<sup>3</sup> ): (a) 25.18o, 199.48o, 262.22o, (b) 0o, 175o, 262.62o, (c) 141.05o, 303.66o, 343.66o and (d) 96.37o, 169.35o, 288.29o

### **Task 2:**

32 Robotic Systems – Applications, Control and Programming

1

2

3

1 2

(b)

3

(d)

<sup>3</sup> ): (a) 0o, 170o, 253.8o, (b) 29.07o, 116.93o, 233.86o,

3

(b)

1

3

(d)

<sup>3</sup> ): (a) 25.18o, 199.48o, 262.22o, (b) 0o, 175o, 262.62o,

2 1

1 2

(a)

1 2

(c)

(c) 0o, 175o, 320o, and (d) 76.93o, 116.93o, 261.94o

3

(a)

(c)

(c) 141.05o, 303.66o, 343.66o and (d) 96.37o, 169.35o, 288.29o

Fig. 8. Optimal contact locations (

Fig. 7. Optimal contact locations (

2

1

3

1

θ1 , θ <sup>2</sup> , θ

2

θ1 , θ <sup>2</sup> , θ

3

2

3

In Task 2, we present a target positioning operation when the robotic fingers are not located at their optimal contact locations. For instance, we choose that the robotic fingers are located at 0, 120, and 240 degrees with respect to the x-axis as shown in Figure 9. We assume that the initial position of the target is at the center of the section of the deformable object, i.e., (0, 0) mm. The goal is to position the target at the desired location (5, 5) mm with a smooth

Fig. 9. Deformable object with contact points located at 0, 120 and 240 degrees with respect to *x*-axis

Fig. 10. The desired (red dashed) and the actual (blue solid) straight lines when robotic fingers are located at 0, 120, and 240 degrees with respect to *x*-axis

Target Point Manipulation Inside a Deformable Object 35

In Task 3, we consider the same task as discussed above under Task 2 but the robotic fingers are positioned at their optimal contact locations (Figure 8(a)) and the target is following the desired straight line trajectory. In this case, PCs are not turned on while performing the task. A simple position based PI controller generates the control command based on the error between the desired and the actual location of the target. Figure 13 shows that the target

Fig. 13. The desired (red dashed) and the actual (blue solid) straight lines when PCs are not

0 1 2 3 4 5 6

desired actual

x (mm)

0 1 2 3 4 5 6 7 8 9 10

time (sec.)

Fig. 14. Controller forces when PCs are not turned on

f 1 f 2 f 3


Controller forces (N)

0

1

2

3

y (mm)

4

5

6

**Task 3:** 

turned on

straight line trajectory. In this simulation, we choose *KPi* =1000 and *KIi* =1000, *i*=1,2,3. Figure 10 shows the actual and desired position trajectories of the target point. It is noticed that there is some error present in the tracking of the desired trajectory. Robotic fingers forces generated by the PI controller are presented in Figure 11 and the POs are falling to negative as shown in Figure 12. Negative values of POs signify that the interaction between the robotic fingers and the deformable object is not stable.

Fig. 11. Controller forces when robotic fingers are located at 0, 120, and 240 degrees with respect to *x*-axis

Fig. 12. POs when robotic fingers are located at 0, 120, and 240 degrees with respect to *x*-axis

### **Task 3:**

34 Robotic Systems – Applications, Control and Programming

straight line trajectory. In this simulation, we choose *KPi* =1000 and *KIi* =1000, *i*=1,2,3. Figure 10 shows the actual and desired position trajectories of the target point. It is noticed that there is some error present in the tracking of the desired trajectory. Robotic fingers forces generated by the PI controller are presented in Figure 11 and the POs are falling to negative as shown in Figure 12. Negative values of POs signify that the interaction between the

Fig. 11. Controller forces when robotic fingers are located at 0, 120, and 240 degrees with

0 1 2 3 4 5 6 7 8 9 10

time (sec.)

0 1 2 3 4 5 6 7 8 9 10

0 1 2 3 4 5 6 7 8 9 10

Fig. 12. POs when robotic fingers are located at 0, 120, and 240 degrees with respect to *x*-axis

0 1 2 3 4 5 6 7 8 9 10

time (sec.)

robotic fingers and the deformable object is not stable.

f 1 f 2 f 3

0 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8



0.05


0

E1 (Nm)

E2 (Nm)

E3 (Nm)

0

0.01

Controller forces (N)

respect to *x*-axis

In Task 3, we consider the same task as discussed above under Task 2 but the robotic fingers are positioned at their optimal contact locations (Figure 8(a)) and the target is following the desired straight line trajectory. In this case, PCs are not turned on while performing the task. A simple position based PI controller generates the control command based on the error between the desired and the actual location of the target. Figure 13 shows that the target

Fig. 13. The desired (red dashed) and the actual (blue solid) straight lines when PCs are not turned on

Fig. 14. Controller forces when PCs are not turned on

Target Point Manipulation Inside a Deformable Object 37

tracked the desired position trajectory. Robotic fingers forces generated by the PI controller are presented in Figure 14. Force values in Figure 14 are quite less than those in Figure 11 because of the optimal contact location of the robotic fingers. However, the POs for robotic fingers 2 and 3 are become negative as shown in Figure 15. Negative values of the POs signify that the output energy of the 2-port network is greater than the input energy. Since the plant is considered to be passive, the only source of generating extra energy is the controller that makes the whole system unstable. So we must engage passivity controller to modify the controller output by dissipating the extra amount of

In Task 4, the PCs are turned on and the robotic fingers are commanded to effect the desired motion of the target. The PCs are activated when the POs cross zero from a positive value. The required damping forces are generated to dissipate only the excess amount of energy generated by the controller. In this case, the target tracks the desired straight line trajectory well with the POs remaining positive. Figure 16 represents the actual and the desired trajectories of the target when PCs are turned on. For this case, the PCs on the plant side are only activated whereas the PCs on the trajectory side remain idle. Figure 17 shows the PCs forces generated at the plant side when the POs cross zero. The POs become positive during interaction between the robotic fingers and the object as shown in Figure 18. Hence, the stability of the overall system is guaranteed. The PCs on the trajectory side are shown in Figure 19, which are all zeros. The modified controller outputs to move the target point are

Fig. 16. The desired (red dashed) and the actual (blue solid) straight lines when PCs are

0 1 2 3 4 5 6

desired actual

x (mm)

energy. **Task 4:** 

shown in Figure 20.

turned on

0

1

2

3

y (mm)

4

5

6

Fig. 15. (a) POs for three robotic fingers when PCs are not turned on, (b) a magnified version of (a) for first few seconds

tracked the desired position trajectory. Robotic fingers forces generated by the PI controller are presented in Figure 14. Force values in Figure 14 are quite less than those in Figure 11 because of the optimal contact location of the robotic fingers. However, the POs for robotic fingers 2 and 3 are become negative as shown in Figure 15. Negative values of the POs signify that the output energy of the 2-port network is greater than the input energy. Since the plant is considered to be passive, the only source of generating extra energy is the controller that makes the whole system unstable. So we must engage passivity controller to modify the controller output by dissipating the extra amount of energy.

### **Task 4:**

36 Robotic Systems – Applications, Control and Programming

0 1 2 3 4 5 6 7 8 9 10

0 1 2 3 4 5 6 7 8 9 10

0 1 2 3 4 5 6 7 8 9 10

time (sec.)

0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1

0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1

Fig. 15. (a) POs for three robotic fingers when PCs are not turned on, (b) a magnified version

0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1

time (sec.)

(b)

(a)

of (a) for first few seconds







0

5 x 10-7

0

E2 (Nm)

E3 (Nm)

1 x 10-6

E1 (Nm)

0

0.01

0

5 x 10-3

0

E2 (Nm)

E3 (Nm)

5 x 10-3

E1 (Nm)

0

0.01

In Task 4, the PCs are turned on and the robotic fingers are commanded to effect the desired motion of the target. The PCs are activated when the POs cross zero from a positive value. The required damping forces are generated to dissipate only the excess amount of energy generated by the controller. In this case, the target tracks the desired straight line trajectory well with the POs remaining positive. Figure 16 represents the actual and the desired trajectories of the target when PCs are turned on. For this case, the PCs on the plant side are only activated whereas the PCs on the trajectory side remain idle. Figure 17 shows the PCs forces generated at the plant side when the POs cross zero. The POs become positive during interaction between the robotic fingers and the object as shown in Figure 18. Hence, the stability of the overall system is guaranteed. The PCs on the trajectory side are shown in Figure 19, which are all zeros. The modified controller outputs to move the target point are shown in Figure 20.

Fig. 16. The desired (red dashed) and the actual (blue solid) straight lines when PCs are turned on

Target Point Manipulation Inside a Deformable Object 39

0 1 2 3 4 5 6 7 8 9 10

0 1 2 3 4 5 6 7 8 9 10

0 1 2 3 4 5 6 7 8 9 10

time (sec.)

Fig. 19. PCs forces at the trajectory side when PCs are turned on

f 1 P f 2 P f 3 P

Fig. 20. Modified controller forces when PCs are turned on


0

0.1

0.2

0.3

Modified controller forces (N)

0.4

0.5

0.6

0.7

0.8


1


1


0

0

0

t

t

t

f

3 (N)

f

2 (N)

f

1 (N)

1

0 1 2 3 4 5 6 7 8 9 10

time (sec.)

Fig. 17. Required forces supplied by PCs at the plant side when PCs are turned on

Fig. 18. POs for three robotic fingers when PCs are turned on

38 Robotic Systems – Applications, Control and Programming

0 1 2 3 4 5 6 7 8 9 10

0 1 2 3 4 5 6 7 8 9 10

0 1 2 3 4 5 6 7 8 9 10

time (sec.)

Fig. 17. Required forces supplied by PCs at the plant side when PCs are turned on

0 1 2 3 4 5 6 7 8 9 10

0 1 2 3 4 5 6 7 8 9 10

0 1 2 3 4 5 6 7 8 9 10

time (sec.)

Fig. 18. POs for three robotic fingers when PCs are turned on




E1 (Nm)

E2 (Nm)

E3 (Nm)

0

0.01


0.4

0

0

0.2

0.4

0.2

p

f

p

p

f

3 (N)

f

2 (N)

1 (N)

0

0.1

Fig. 19. PCs forces at the trajectory side when PCs are turned on

Fig. 20. Modified controller forces when PCs are turned on

Target Point Manipulation Inside a Deformable Object 41

[11] M. Namvar and F. Aghili, Adaptive force-motion control of coordinated robots

[12] M. Saha and P. Isto, Manipulation planning for deformable linear objects, IEEE

[13] Y. Zhang, B. K. Chen, X. Liu and Y. Sun, Autonomous robotic pick-and-place of microobjects, IEEE Transactions on Robotics, vol. 26, issue 1, pp. 200-207, 2010. [14] D. Sun and Y. H. Liu, Modeling and impedance control of a two-manipulator system

[15] A. Tavasoli, M. Eghtesad and H. Jafarian, Two-time scale control and observer design

[17] Y. Zhang, E. C. Prakash and E. Sung, A new physical model with multilayer architecture

on Visualization and Computer Graphics, vol. 10, issue 3, pp. 339-352, 2004. [18] M. Meyer, G. Debunne, M. Desbrun and A. H. Barr, Interactive animation of cloth-like

[19] A. Joukhadar and C. Laugier, Dynamic simulation: model, basic algorithms, and

[20] A. M. Howard, and G. A. Bekey, Recursive learning for deformable object manipulation, Proc. of International Conference on Advanced Robotics, pp. 939-944, 1997. [21] R. M. Koch, M. H. Gross, F. R. Carls, D. F. Von Buren, G. Fankhauser, Y. I. H. Parish,

[22] S. D. Pieper, D. R. Laub, Jr., and J. M. Rosen, A finite element facial model for simulating plastic surgery, Plastic and Reconstructive Surgery, 96(5), pp. 1100-1105, Oct. 1995. [23] J. K. Salisbury, Kinematic and force analysis of articulated hands, PhD Thesis,

[24] J. Kerr, Analysis of multifingered hand, PhD Thesis, Department of Mechanical

[25] J. M. Abel, W. Holzmann and J. M. McCarthy, On grasping planar objects with two

[26] M. R. Cutkosky, Grasping and fine manipulation for automated manufacturing, PhD

[27] C. Ferrari and J. Canny, Planning optimal grasp, IEEE International Conference on

[28] S. Garg and A. Dutta, Grasping and manipulation of deformable objects based on

Motion and Manipulation, pp. 419-434, A. K. Peters Ltd., 1997.

Engineering, Standford University, Standford, CA, 1984.

Robotics and Automation, pp. 2290-2295, 1992.

Transactions on Robotics, vol. 23, issue 6, pp. 1141-1150, 2007.

Robotics, vol. 21, issue 4, pp. 678-694, 2005.

Control, vol. 119, no. 4, pp. 736-742 ,1997.

MERL Technical Report, TR97-19, 1997.

vol. 12, no. 1, pp. 1-12(12), 2001.

Conf. Proc., pp. 421-428, 1996.

1985.

Pittsburgh, PA, 1985.

Vol. 3, No. 2, pp. 107-114, 2006.

interacting with geometrically unknown environments, IEEE Transactions on

handling a flexible beam, ASME Journal of Dynamic Systems, Measurement, and

for trajectory tracking of two cooperating robot manipulators moving a flexible beam, Robotics and Autonomous Systems, vol. 57, issue 2, pp. 212-221, 2009. [16] S. F. F. Gibson and B. Mirtich, A survey of deformable modeling in computer graphics,

for facial expression animation using dynamics adaptive mesh, IEEE Transactions

objects in virtual reality, The Journal of Visualization and Computer Animation,

optimization, In J.-P. Laumond and M. Overmars, editors, Algorithms for Robotic

Simulating facial surgery using finite element models, In ACM SIGGRAPH 96

Department of Mechanical Engineering, Standford University, Standford, CA, 1982.

articulated fingers, IEEE Journal of Robotics and Automation, vol. 1, pp. 211-214,

Thesis, Department of Mechanical Engineering, Carnegie Mellon University,

internal force requirements, International Journal of Advanced Robotic Systems,

### **8. Conclusion and future work**

In this chapter, an optimal contact formulation and a control action are presented in which a deformable object is manipulated externally by three robotic fingers such that an internal target point is positioned to the desired location. First, we formulated an optimization technique to determine the contact locations around the periphery of the object so that the target can be manipulated with minimum force applied on the object. The optimization technique considers a model of the deformable object. However, it is difficult to build an exact model of the deformable object in general due to nonlinear elasticity, friction, parameter variations, and other uncertainties. Therefore, we considered a coarse model of the deformable object to determine the optimal contact locations which is more realizable. A time-domain passivity control scheme with adjustable dissipative elements has been developed to guarantee the stability of the whole system. Extensive simulation results validate the optimal contact formulation and stable interaction between the robotic fingers and the object.

## **9. References**


40 Robotic Systems – Applications, Control and Programming

In this chapter, an optimal contact formulation and a control action are presented in which a deformable object is manipulated externally by three robotic fingers such that an internal target point is positioned to the desired location. First, we formulated an optimization technique to determine the contact locations around the periphery of the object so that the target can be manipulated with minimum force applied on the object. The optimization technique considers a model of the deformable object. However, it is difficult to build an exact model of the deformable object in general due to nonlinear elasticity, friction, parameter variations, and other uncertainties. Therefore, we considered a coarse model of the deformable object to determine the optimal contact locations which is more realizable. A time-domain passivity control scheme with adjustable dissipative elements has been developed to guarantee the stability of the whole system. Extensive simulation results validate the optimal contact formulation and stable interaction between the robotic fingers

[1] D. Sun and Y. H. Liu, Modeling and impedance control of a two-manipulator system

[2] P. Dang, F. L. Lewis, K. Subbarao and H. Stephanou, Shape control of flexible structure

[3] X. Liu, K. Kim, Y. Zhang and Y. Sun, Nanonewton force sensing and control in

[4] V. G. Mallapragada, N. Sarkar and T. K. Podder, Robot-assisted real-time tumor

[5] S. Hirai and T. Wada, Indirect simultaneous positioning of deformable objects with multi

[6] S. Nath, Z. Chen, N. Yue, S. Trumpore and R. Peschel, Dosimetric effects of needle

[7] A. Albu-Schaffer, C. Ott and G. Hirzinger, Constructive energy shaping based

International Conference on Robotics and Automation, pp. 1387-1393, 2005. [8] B. Hannaford and R. Jee-Hwan, Time-domain passivity control of haptic interfaces, IEEE

[9] S. Ali A. Moosavian and R. Rastegari, Multiple-arm space free-flying robots for

[10] Z. Li, S. S. Ge and Z. Wang, Robust adaptive control of coordinated multiple mobile

Transactions on Robotics and Automation, vol. 18, pp. 1-10, 2002.

manipulators, Mechatronics, vol. 18, issues 5-6, pp. 239-250, 2008.

handling a flexible beam, ASME Journal of Dynamic Systems, Measurement, and

using potential field method, 17th IEEE International Conference on Control

microrobotic cell manipulation, The International Journal of Robotics Research, vol.

manipulation for breast biopsy, IEEE Transactions on Robotics, vol. 25, issue 2, pp.

pinching fingers based on uncertain model, Robotica, Millennium Issue on

divergence in prostate seed implant using 125I and 103Pd radioactive seeds,

impedance control for a class of underactuated Euler-Lagrange systems, IEEE

manipulating objects with force tracking restrictions, Robotics and Autonomous

**8. Conclusion and future work** 

Control, vol. 119, pp. 736-742, 1997.

28, issue 8, pp. 1065-1076, 2009.

316-324, 2009.

Applications, Texas, USA, pp. 540-546, 2008.

Grasping and Manipulation, vol. 18, pp. 3-11, 2000.

Medical Physics, vol. 27, pp. 1058-1066, 2000.

System, vol. 54, issue 10, pp. 779-788, 2006.

and the object.

**9. References** 


**3** 

*Korea* 

**Novel Assistive Robot for Self-Feeding** 

*Korea National Rehabilitation Research Institute, Korea National Rehabilitation Center* 

Assistive robots, with which users can interact directly, have attracted worldwide attention. They can assist people with disabilities and older persons in the activities of daily living. Assistive robots could be employed for improving quality of life as they can be adjusted according to demographic changes. There are several crucial issues to be considered with regard to these robots, such as customizing them according to the specific culture of the

In Korea, the official number of registered people with disabilities due to illnesses, injuries, and the natural aging process has already exceeded two million (Employment Development Institute, 2009). More than one-third of these disabled people are the elderly. Moreover, due to longer life spans and a decline in birthrate, the elderly make up over 10% of the population in

In order to achieve efficient caregiving for people with disabilities and elderly persons, caregivers should physically interact with the people. For example, caregivers have to assist people in performing the routine activities of their daily lives, such as eating, changing clothes, changing their posture, moving from one location to another, and bathing. Among these activities, eating meals is one of the most essential daily activities. In this regard, caregivers must interact with people frequently to assist with food selection, feeding interval, etc. Existing robotic technologies can be utilized to take over the functions of the caregivers. Thus, assistive robots represent one of the solutions by which disabled or elderly

The design of assistive robots to help with self-feeding depends strongly on the specific culture of the user. Korean food consists chiefly of boiled rice, soup, and side dishes such as Kimchi. The procedure of having a meal is as follows: the user eats the boiled rice first and then the side dishes. These steps are performed repetitively. In comparison with foreign boiled rice, Korean boiled rice sticks together very well after cooking. Handling this sticky boiled rice can be problematic. In addition, Korean soup includes meat, noodles, and various vegetables, thus the existing feeding robots find it difficult to handle Korean foods. Various assistive robots have been developed since the late 1980s, as shown in Fig. 1. Handy1 (Topping & Smith, 1999) is an assistive robot for daily activities such as eating, drinking, washing, shaving, teeth cleaning, and applying make-up. Handy1 consists of a five-DOF (degree-of-freedom) robot, a gripper, and a tray unit. The major function of Handy1 is to help with eating. Handy1 allows a user to select food from any part of the tray. A cup is attached to enable users to drink water with their meal. The walled columns of a

Korea. As a result, effective caregiving with restricted resources is an urgent problem.

people can receive support for performing the activities of daily life.

users as well as ensuring cost-effectiveness (Mann, 2005).

**1. Introduction** 

Won-Kyung Song and Jongbae Kim


## **3**

## **Novel Assistive Robot for Self-Feeding**

Won-Kyung Song and Jongbae Kim

*Korea National Rehabilitation Research Institute, Korea National Rehabilitation Center Korea* 

### **1. Introduction**

42 Robotic Systems – Applications, Control and Programming

[29] T. Watanabe and T. Yoshikawa, Grasping optimization using a required external force

[30] D. Ding, Y. H. Liu and S. Wang, Compuattion of 3D form-closure grasps, IEEE Transactions on Robotics and Autonamtion, Vol. 17, pp. 515-522, 2001. [31] A. Bicchi and V. Kumar, Robotic grasping and contact: a review, Proc. IEEE International Conference on Robotics and Automation, pp. 348-353, 2000.

[33] J. Cornella, R. Suarez, R. Carloni and C. Melchiorri, Dual programming based approach

[34] P. Saut, C. Remond, V. Perdereau and M. Drouin, Online computation of grasping force

[35] K. Gopalakrishanan and K. Goldberg, D-space and deform closure grasps of deformable parts, International Journal of Robotics Research, vol. 24, pp. 889-910, 2005. [36] G. L. Foresti and F. A. Pellegrino, Automatic visual recognition of deformable objects for

[37] H. Wakamatsu, S. Hirai and K. Iwata, Static analysis of deformable object grasping

[38] R. J. Anderson and M. W. Spong, Asymptotic stability for force reflecting teleoperators

[39] S. Stramigioli, C. Melchiorri and S. Andreotti, A passivity-based control scheme for

[40] B. Hannaford and J.-H. Ryu, Time domain passivity control of haptic interfaces, IEEE International Conference on Robotics and Automation, vol.2, pp. 1863-1869, 2001. [41] J.-H. Ryu, D.-S. Kwon and B. Hannaford, Control of a flexible manipulator with

[42] T. Wada, S. Hirai, S. Kawamura and N. Kamiji, Robust manipulation of deformable

[43] V. D. Nguyen, Constructing force-closure grasps, IEEE International Conference on

[44] J. Ponce and B. Faverjon, On computing three-finger force-closure grasps of polygonal

[45] J. E. Colgate and N. Hogan, Robust control of dynamically interacting systems,

[46] J. J.-H. Ryu and C. Preusche, Stable bilateral control of teleoperators under time-varying

for optimal grasping force distribution, Journal of Mechatronics, Vol. 18, pp. 348-

in multi-fingered hands, Proc. IEEE/RSJ International Conference on Intelligent

grasping and manipulation, IEEE Transaction on Systems, Man, and Cybernetics:

based on bounded force closure, IEEE International Conference on Robotics and

with time delays, in, 1989. IEEE International Conference on Robotics and

robotic grasping and manipulation, The 38th Conference on Decision and Control,

noncollocated feedback: time-domain passivity approach, IEEE Transactions on

objects by a simple PID feedback, IEEE International Conference on Robotics and

objects, Fifth International Conference on Advanced Robotics, vol.2, pp. 1018-1023,

communication delay: time domain passivity approach, IEEE International

[32] M. T. Mason, Mechanics of robotic manipulation, The MIT Press.

Applications and Reviews, vol. 34, pp. 325-333, 2004.

Robots and Systems, pp. 1223-1228, 2005.

Automation, vol.4, pp. 3324-3329, 1996.

Automation, vol.3, pp. 1618-1625, 1989.

Phoenix, Arizona, USA, 1999.

Robotics, vol. 20, pp. 776-780, 2004.

Automation, vol.1, pp. 85-90, 2001.

1991.

Robotics and Automation, pp. 1368-1373, 1986.

International Journal of Control, vol. 48, pp. 65 - 88, 1988.

Conference on Robotics and Automation, pp. 3508-3513, 2007.

2007.

356, 2008.

set, IEEE Transactions on Automation Science and Engineering, Vol. 4, pp. 52-66,

Assistive robots, with which users can interact directly, have attracted worldwide attention. They can assist people with disabilities and older persons in the activities of daily living. Assistive robots could be employed for improving quality of life as they can be adjusted according to demographic changes. There are several crucial issues to be considered with regard to these robots, such as customizing them according to the specific culture of the users as well as ensuring cost-effectiveness (Mann, 2005).

In Korea, the official number of registered people with disabilities due to illnesses, injuries, and the natural aging process has already exceeded two million (Employment Development Institute, 2009). More than one-third of these disabled people are the elderly. Moreover, due to longer life spans and a decline in birthrate, the elderly make up over 10% of the population in Korea. As a result, effective caregiving with restricted resources is an urgent problem.

In order to achieve efficient caregiving for people with disabilities and elderly persons, caregivers should physically interact with the people. For example, caregivers have to assist people in performing the routine activities of their daily lives, such as eating, changing clothes, changing their posture, moving from one location to another, and bathing. Among these activities, eating meals is one of the most essential daily activities. In this regard, caregivers must interact with people frequently to assist with food selection, feeding interval, etc. Existing robotic technologies can be utilized to take over the functions of the caregivers. Thus, assistive robots represent one of the solutions by which disabled or elderly people can receive support for performing the activities of daily life.

The design of assistive robots to help with self-feeding depends strongly on the specific culture of the user. Korean food consists chiefly of boiled rice, soup, and side dishes such as Kimchi. The procedure of having a meal is as follows: the user eats the boiled rice first and then the side dishes. These steps are performed repetitively. In comparison with foreign boiled rice, Korean boiled rice sticks together very well after cooking. Handling this sticky boiled rice can be problematic. In addition, Korean soup includes meat, noodles, and various vegetables, thus the existing feeding robots find it difficult to handle Korean foods.

Various assistive robots have been developed since the late 1980s, as shown in Fig. 1. Handy1 (Topping & Smith, 1999) is an assistive robot for daily activities such as eating, drinking, washing, shaving, teeth cleaning, and applying make-up. Handy1 consists of a five-DOF (degree-of-freedom) robot, a gripper, and a tray unit. The major function of Handy1 is to help with eating. Handy1 allows a user to select food from any part of the tray. A cup is attached to enable users to drink water with their meal. The walled columns of a food dish serve an important purpose: the food can be scooped on to the dish without any resultant mixing of food items.

(a) Handy1 (b) Winsford feeder (c) Neater Eater

2009).

food.

Section 5.

**2.1 Users** 

**2. Self-feeding robot system** 

configurations of the self-feeding robot system.

Novel Assistive Robot for Self-Feeding 45

The Mealtime Partner Dining System (Mealtime Partners) is positioned in front of a user's mouth. Three bowls can rotate in front of the mouth. The spoon picks up the food and then moves a short distance toward the preset location of the mouth. This system reduces the chances of the spoon slipping on wet food because the underside of the spoon is wiped off after scooping. Because of the way the system is positioned, the user does not need to lean toward the feeder. In some systems, a beverage straw is located beside the spoon (Pourmohammadali, 2007). Other systems are designed for multiple users (Guglielmelli,

Most feeding systems scoop the food with a spoon. Those systems are not suitable for use in the case of boiled rice, which is a staple food in Korea. In addition, some systems have a single dish, and thus different types of food might be mixed during scooping. My Spoon uses the grasping function to pick up food, but this system has difficulty serving Korean rice due to its fixed grasping strength and the grasping openness of the gripper. As a result, My Spoon's gripper sometimes gets a lot of rice attached to its surface. The previously mentioned self-feeding robotic systems also have difficulty scooping this staple Korean

Feeding robots enable users to enjoy food independently during mealtimes. After preparing food, users can choose when they want to eat the desired food. We developed an assistive robot for self-feeding by taking into consideration the feedback of user candidates and clinical experts. We evaluated the self-feeding robot by performing a series of user tests. The overall process, i.e., formulating a concept, design, and evaluation involves feedback from users and clinical experts. The development process is performed on the basis of the

In this paper, we introduce a newly designed self-feeding robot that will be suitable in the case of Korean food, including sticky rice, and we report the results of tests that were performed with several disabled people. In Section 2, we will present an overview of the new self-feeding robot for Korean food. Basic operational procedures of the self-feeding robot will be presented in Section 3. Section 4 contains the results and discussions of tests performed with users with spinal cord injuries. Finally, we will present the conclusion in

In this section, we will present an overview of the users, requirements, and system

The primary users of self-feeding robots are people with physical disabilities who have difficulty moving their upper limbs. Such people include those suffering from high-level spinal cord injuries, cerebral palsy, and muscular diseases. For example, people with cervical level-4 spinal cord injuries have difficulty moving their upper limbs and retain full movement only above their necks. Some people afflicted with cerebral palsy cannot move their arms and hands, and they often have difficulty moving their necks. When the spoon of a self-feeding robot approaches such a user's mouth, that person has a hard time putting the food in his or her mouth. People with muscular diseases have weak muscle movements. Even though they can move their hands, they have limited motor functions in their elbows and shoulder joints. We can also include senior citizens who have difficulties with the motor

philosophy of participatory action design (Ding et al., 2007).

 (d) My Spoon (e) Meal Buddy (f) Mealtime Partner Dining System

Fig. 1. Feeding systems

The Winsford feeder (Sammons Preston; Hermann et al., 1999) is a mechanical self-feeding system. It uses a mechanical pusher to fill a spoon and a pivoting arm to raise the spoon to the user's mouth that is at a preset position. The plate is rotated to place more food in front of the pusher. The user can choose from two input devices: a chin switch and a rocker switch.

Neater Eater (Neater Solutions) has two versions: a manual–operation-type and an automatic-operation-type system. Neater Eater consists of a two-DOF arm and one dish. Two types of food can be present on the one dish. The manual type can be used to suppress the tremors of a user's upper limbs while he or she eats.

My Spoon (Soyama et al., 2003) is suitable in the case of Japanese food. It consists of a five-DOF manipulator, a gripper, and a meal tray. The meal tray has four rectangular cells. My Spoon combines several pre-programmed motions: automatic operation, semiautomatic operation, and manual operation. The semiautomatic operation allows a user to select food. The manual operation can change the position in which the food is held. The input device can be selected from among the following: the chin joystick, reinforcement joystick, and switch. The end-effector of the robotic arm has one spoon and one fork, which move together to realize the grasping motion. During the grasping process, the gap between the spoon and the fork changes and thus the end-effector grasps the food. Then the robot moves to a predefined position in front of the user's mouth, and the fork moves backward to enable the user to eat the food off the spoon.

Meal Buddy (Sammons Preston) has a three-DOF robotic arm and three bowls that can be mounted on a board using magnets. After the system scoops the food, the robotic arm scrapes the surplus food off the spoon with the rod on the bowls.

44 Robotic Systems – Applications, Control and Programming

food dish serve an important purpose: the food can be scooped on to the dish without any

(a) Handy1 (b) Winsford feeder (c) Neater Eater

(d) My Spoon (e) Meal Buddy (f) Mealtime Partner

The Winsford feeder (Sammons Preston; Hermann et al., 1999) is a mechanical self-feeding system. It uses a mechanical pusher to fill a spoon and a pivoting arm to raise the spoon to the user's mouth that is at a preset position. The plate is rotated to place more food in front of the pusher. The user can choose from two input devices: a chin switch and a rocker switch. Neater Eater (Neater Solutions) has two versions: a manual–operation-type and an automatic-operation-type system. Neater Eater consists of a two-DOF arm and one dish. Two types of food can be present on the one dish. The manual type can be used to suppress

My Spoon (Soyama et al., 2003) is suitable in the case of Japanese food. It consists of a five-DOF manipulator, a gripper, and a meal tray. The meal tray has four rectangular cells. My Spoon combines several pre-programmed motions: automatic operation, semiautomatic operation, and manual operation. The semiautomatic operation allows a user to select food. The manual operation can change the position in which the food is held. The input device can be selected from among the following: the chin joystick, reinforcement joystick, and switch. The end-effector of the robotic arm has one spoon and one fork, which move together to realize the grasping motion. During the grasping process, the gap between the spoon and the fork changes and thus the end-effector grasps the food. Then the robot moves to a predefined position in front of the user's mouth, and the fork moves backward to enable

Meal Buddy (Sammons Preston) has a three-DOF robotic arm and three bowls that can be mounted on a board using magnets. After the system scoops the food, the robotic arm

Dining System

resultant mixing of food items.

Fig. 1. Feeding systems

the tremors of a user's upper limbs while he or she eats.

scrapes the surplus food off the spoon with the rod on the bowls.

the user to eat the food off the spoon.

The Mealtime Partner Dining System (Mealtime Partners) is positioned in front of a user's mouth. Three bowls can rotate in front of the mouth. The spoon picks up the food and then moves a short distance toward the preset location of the mouth. This system reduces the chances of the spoon slipping on wet food because the underside of the spoon is wiped off after scooping. Because of the way the system is positioned, the user does not need to lean toward the feeder. In some systems, a beverage straw is located beside the spoon (Pourmohammadali, 2007). Other systems are designed for multiple users (Guglielmelli, 2009).

Most feeding systems scoop the food with a spoon. Those systems are not suitable for use in the case of boiled rice, which is a staple food in Korea. In addition, some systems have a single dish, and thus different types of food might be mixed during scooping. My Spoon uses the grasping function to pick up food, but this system has difficulty serving Korean rice due to its fixed grasping strength and the grasping openness of the gripper. As a result, My Spoon's gripper sometimes gets a lot of rice attached to its surface. The previously mentioned self-feeding robotic systems also have difficulty scooping this staple Korean food.

Feeding robots enable users to enjoy food independently during mealtimes. After preparing food, users can choose when they want to eat the desired food. We developed an assistive robot for self-feeding by taking into consideration the feedback of user candidates and clinical experts. We evaluated the self-feeding robot by performing a series of user tests. The overall process, i.e., formulating a concept, design, and evaluation involves feedback from users and clinical experts. The development process is performed on the basis of the philosophy of participatory action design (Ding et al., 2007).

In this paper, we introduce a newly designed self-feeding robot that will be suitable in the case of Korean food, including sticky rice, and we report the results of tests that were performed with several disabled people. In Section 2, we will present an overview of the new self-feeding robot for Korean food. Basic operational procedures of the self-feeding robot will be presented in Section 3. Section 4 contains the results and discussions of tests performed with users with spinal cord injuries. Finally, we will present the conclusion in Section 5.

## **2. Self-feeding robot system**

In this section, we will present an overview of the users, requirements, and system configurations of the self-feeding robot system.

## **2.1 Users**

The primary users of self-feeding robots are people with physical disabilities who have difficulty moving their upper limbs. Such people include those suffering from high-level spinal cord injuries, cerebral palsy, and muscular diseases. For example, people with cervical level-4 spinal cord injuries have difficulty moving their upper limbs and retain full movement only above their necks. Some people afflicted with cerebral palsy cannot move their arms and hands, and they often have difficulty moving their necks. When the spoon of a self-feeding robot approaches such a user's mouth, that person has a hard time putting the food in his or her mouth. People with muscular diseases have weak muscle movements. Even though they can move their hands, they have limited motor functions in their elbows and shoulder joints. We can also include senior citizens who have difficulties with the motor

Novel Assistive Robot for Self-Feeding 47

system can be made with the bowls located in front of a user's mouth. However, some senior user candidates hate having the bowls right in front of their mouth; they prefer to eat

Other comments of user candidates are as follows: plain machines that can serve simple dishes with water are required. When a caregiver goes out for a while, a user needs to be able to eat cereal with milk with the help of a self-feeding robot. The water supply tool should be located next to the user's body. The meal tray must have a cover to protect the food from dust contamination. The cost should be reasonable, e.g., the price should be between US\$1800 and \$2700. Obviously, a low-cost system is preferred. The feeding robot should be able to deal with noodles. The feeding robot should be able to accommodate the

We concentrate on rice handling. We do not take into account the handling of soup in this development. We will handle the requirements of feeding Korean soup in a future version's design. Instead of handling Korean soup via a self-feeding robot, a user drinks soup stored

Technically, we considered four types of feeding robots in order to ensure that it can grip

In the first concept, a number of bowls are located in front of a user's mouth, and the food is presented by the spoon with a short traveling distance. For example, if there are three bowls, one bowl has rice and two bowls have side dishes. However, two side dishes are not enough to constitute a satisfying meal. In general, Korean people eat three or four side dishes with

(a) First concept (b) Second concept

(c) Third concept (d) Fourth concept

In the second concept, the bowls are located in the upper front of a user's mouth, and then the food drops from the bottom of a bowl. The food is placed in the spoon by a dropping motion, and then the spoon approaches the user's mouth. This method requires the mechanism of food

Fig. 3. Design concepts of a feeding robot. (Third concept is chosen.)

dropping on the spoon. This method could be suitable for a bite-sized rice cake.

in a cup. Generally, we assume that a user drinks soup or water through a straw.

the food like ordinary people. Thus, we focus mainly on using a tabletop tray.

posture of the user. Finally, the feeding robot should be lightweight.

and release boiled rice effectively, as shown in Fig. 3.

boiled rice at a time. Therefore, we need four or five bowls.

functions of their upper limbs, e.g., the fragile elderly, among the abovementioned disabled people. It is clear that the number of overall target users of self-feeding robots will be growing in the near future.

## **2.2 Requirements of a self-feeding robot**

We surveyed a group of people with disabilities as well as clinical experts to learn about the requirements of a feeding robot (Song et al., 2010a, 2010b). The focus group consisted of a person with a spinal cord injury and a person with cerebral palsy. The clinical experts included occupational therapists and medical doctors of physical medicine and rehabilitation.

The major findings of the survey are as follows. Firstly, a user should be able to control the feeding interval for the desired food. In the case of caregiving, one of the common problems is the difficulty in controlling the feeding interval. People with spinal cord injury are able to talk quickly and can therefore manage a short feeding interval. However, people with cerebral palsy have difficulty representing their intentions quickly when the feeding interval is too short.

Fig. 2. Korean food on a standard food container. (From the lower left-hand side going counterclockwise: rice, soup, and side dishes.)

Secondly, the specialists and the user candidates believe that the feeding systems are designed more for western-style food. Those systems are not suitable for Korean food, which includes boiled rice, soup, and side dishes. A user eats one of the side dishes and then the boiled rice in turn. These steps are performed repetitively during mealtime. In comparison with foreign boiled rice, Korean boiled rice sticks together very well after cooking. One of the problems of self-feeding systems is handling this sticky boiled rice. In addition, Korean soup includes meat, noodles, and various vegetables. Therefore, existing feeding robots have difficulty handling Korean foods (Fig. 2).

Thirdly, a feeding robot should be suitable for use in private homes and facilities. From an economic point of view, a feeding robot is effective in facilities that have many persons with upper-limb disability. Such facilities do not have enough caregivers to help with feeding due to the heavily time-consuming nature of this task. Thus, a robot reduces the burden of caregiving for feeding. A feeding robot can also be used in an ordinary home to improve the quality of life of the users and their families. Members of a family can face each other and freely enjoy talking. The other members of the family can go out for a few hours because they are freed from the burden of having to help with feeding.

The location of bowls or a tray is another important factor. According to Korean culture, the location of bowls or a tray is strongly related to the dignity of the person. A simple feeding 46 Robotic Systems – Applications, Control and Programming

functions of their upper limbs, e.g., the fragile elderly, among the abovementioned disabled people. It is clear that the number of overall target users of self-feeding robots will be

We surveyed a group of people with disabilities as well as clinical experts to learn about the requirements of a feeding robot (Song et al., 2010a, 2010b). The focus group consisted of a person with a spinal cord injury and a person with cerebral palsy. The clinical experts included occupational therapists and medical doctors of physical medicine and

The major findings of the survey are as follows. Firstly, a user should be able to control the feeding interval for the desired food. In the case of caregiving, one of the common problems is the difficulty in controlling the feeding interval. People with spinal cord injury are able to talk quickly and can therefore manage a short feeding interval. However, people with cerebral palsy have difficulty representing their intentions quickly when the feeding interval

Fig. 2. Korean food on a standard food container. (From the lower left-hand side going

Secondly, the specialists and the user candidates believe that the feeding systems are designed more for western-style food. Those systems are not suitable for Korean food, which includes boiled rice, soup, and side dishes. A user eats one of the side dishes and then the boiled rice in turn. These steps are performed repetitively during mealtime. In comparison with foreign boiled rice, Korean boiled rice sticks together very well after cooking. One of the problems of self-feeding systems is handling this sticky boiled rice. In addition, Korean soup includes meat, noodles, and various vegetables. Therefore, existing

Thirdly, a feeding robot should be suitable for use in private homes and facilities. From an economic point of view, a feeding robot is effective in facilities that have many persons with upper-limb disability. Such facilities do not have enough caregivers to help with feeding due to the heavily time-consuming nature of this task. Thus, a robot reduces the burden of caregiving for feeding. A feeding robot can also be used in an ordinary home to improve the quality of life of the users and their families. Members of a family can face each other and freely enjoy talking. The other members of the family can go out for a few hours because

The location of bowls or a tray is another important factor. According to Korean culture, the location of bowls or a tray is strongly related to the dignity of the person. A simple feeding

growing in the near future.

rehabilitation.

is too short.

**2.2 Requirements of a self-feeding robot** 

counterclockwise: rice, soup, and side dishes.)

feeding robots have difficulty handling Korean foods (Fig. 2).

they are freed from the burden of having to help with feeding.

system can be made with the bowls located in front of a user's mouth. However, some senior user candidates hate having the bowls right in front of their mouth; they prefer to eat the food like ordinary people. Thus, we focus mainly on using a tabletop tray.

Other comments of user candidates are as follows: plain machines that can serve simple dishes with water are required. When a caregiver goes out for a while, a user needs to be able to eat cereal with milk with the help of a self-feeding robot. The water supply tool should be located next to the user's body. The meal tray must have a cover to protect the food from dust contamination. The cost should be reasonable, e.g., the price should be between US\$1800 and \$2700. Obviously, a low-cost system is preferred. The feeding robot should be able to deal with noodles. The feeding robot should be able to accommodate the posture of the user. Finally, the feeding robot should be lightweight.

We concentrate on rice handling. We do not take into account the handling of soup in this development. We will handle the requirements of feeding Korean soup in a future version's design. Instead of handling Korean soup via a self-feeding robot, a user drinks soup stored in a cup. Generally, we assume that a user drinks soup or water through a straw.

Technically, we considered four types of feeding robots in order to ensure that it can grip and release boiled rice effectively, as shown in Fig. 3.

In the first concept, a number of bowls are located in front of a user's mouth, and the food is presented by the spoon with a short traveling distance. For example, if there are three bowls, one bowl has rice and two bowls have side dishes. However, two side dishes are not enough to constitute a satisfying meal. In general, Korean people eat three or four side dishes with boiled rice at a time. Therefore, we need four or five bowls.

Fig. 3. Design concepts of a feeding robot. (Third concept is chosen.)

In the second concept, the bowls are located in the upper front of a user's mouth, and then the food drops from the bottom of a bowl. The food is placed in the spoon by a dropping motion, and then the spoon approaches the user's mouth. This method requires the mechanism of food dropping on the spoon. This method could be suitable for a bite-sized rice cake.

Novel Assistive Robot for Self-Feeding 49

intact round-shaped spoon to serve food to a user's mouth safely. If an end-effector has an unusual shape, then it might pose a danger to the user as it approaches his or her face. The two proposed arms with their different end-effectors mimic Korean eating behavior. Specifically, Koreans use a spoon and steel chopsticks during mealtime, as shown in Fig. 5 (a) and (b). Some people use a spoon and chopsticks simultaneously [Fig. 5 (c)]. In the designed system, the gripper of a grab-arm and the spoon of a spoon-arm take on the roles of chopsticks and a spoon, respectively. Many Korean caregivers pick up food with chopsticks and then put it on a spoon in order to serve food to users such as children and patients. In that sense, the proposed two-armed system stems from Korean eating tools.

(a) (b) (c)

Fig. 5. (a) A spoon for scooping food. (b) Chopsticks for picking up food. (c) A person using

A spoon-arm has two degrees of freedom (DOF) in order to transfer food on the spoon without changing the orientation of the spoon. A grab-arm includes a three-DOF SCARA joint for the planar motion, a one-DOF prismatic joint for the up and down motion, and a gripper. The overall number of DOFs of a dual-arm without a gripper is six, as shown in Fig.

R3

Axis #3

P1\*

Connection Bar

Axis #1

Arm #2

Fig. 6. The joint configuration of a novel feeding robot for Korean foods. P1 (Prismatic Joint

R1

Spoon Arm #1

R2

Axis #2

Gripper

R4 Axis #4

Axis #3

Tray

P2

Axis #6

Axis #5

R5

The feeding robot uses a microcontroller unit to control a spoon-arm and a grab-arm, as shown in Fig. 7. We add a small-sized PC with a touch screen to enable the user to enjoy

a spoon and chopsticks

6. The feeding robot can use an ordinary cafeteria tray.

#1) is optionally applied. R = Revolute. P = Prismatic

In the third concept, the system with a food tray is located on a table. The robotic arm picks up food and then moves it to a user's mouth. These tasks are divided into two steps: one is picking up the food and the other is moving the food to the user's mouth. Two arms can be used to perform the above two tasks, respectively. One of the user candidates pointed out the easy installation of the feeding robots, especially a dual-arm manipulator. This is significant because some caregivers might be elderly people who are not familiar with brand-new machines.

Finally, one bowl is located in front of the user's mouth. The mixed food with rice is loaded in that bowl. Some users do not like the mixed food, even though they do prefer a simple feeding system.

We decided on the third concept, which is located on a table, based on the opinions of specialists and user candidates.

## **2.3 Design of the feeding robot**

We have developed a simple robotic system that has a dual-arm manipulator that can handle Korean food such as boiled rice in an ordinary food container, as shown in Fig. 4. We divide a self-feeding task into two subtasks: picking up/releasing food and transferring food to a user's mouth. The first robotic arm (a spoon-arm, Arm #1) uses a spoon to transfer the food from a container on a table to a user's mouth. The second robotic arm (a grab-arm, Arm #2) picks food up from a container and then puts it on the spoon of a spoon-arm.

Fig. 4. Assistive robot for self-feeding. Spoon-arm (Arm #1) uses a spoon to transfer the food from a container to a user's mouth. Grab-arm (Arm #2) picks up the food from a container and then loads it onto the spoon of Arm #1

The two arms have different functions. The design of the end-effectors of the two arms could be chosen effectively. To pick up or release the food stably, a grab-arm can use an odd-shaped gripper, as shown in the bottom left-hand side of Fig. 4, because that gripper does not need to approach a user's mouth. However, the end-effector of a spoon-arm has an 48 Robotic Systems – Applications, Control and Programming

In the third concept, the system with a food tray is located on a table. The robotic arm picks up food and then moves it to a user's mouth. These tasks are divided into two steps: one is picking up the food and the other is moving the food to the user's mouth. Two arms can be used to perform the above two tasks, respectively. One of the user candidates pointed out the easy installation of the feeding robots, especially a dual-arm manipulator. This is significant because some caregivers might be elderly people who are not familiar with

Finally, one bowl is located in front of the user's mouth. The mixed food with rice is loaded in that bowl. Some users do not like the mixed food, even though they do prefer a simple

We decided on the third concept, which is located on a table, based on the opinions of

We have developed a simple robotic system that has a dual-arm manipulator that can handle Korean food such as boiled rice in an ordinary food container, as shown in Fig. 4. We divide a self-feeding task into two subtasks: picking up/releasing food and transferring food to a user's mouth. The first robotic arm (a spoon-arm, Arm #1) uses a spoon to transfer the food from a container on a table to a user's mouth. The second robotic arm (a grab-arm, Arm #2) picks food up from a container and then puts it on the spoon of a spoon-arm.

Fig. 4. Assistive robot for self-feeding. Spoon-arm (Arm #1) uses a spoon to transfer the food from a container to a user's mouth. Grab-arm (Arm #2) picks up the food from a container

Gripper Spoon

Tray

Spoon-Arm (Arm #1)

The two arms have different functions. The design of the end-effectors of the two arms could be chosen effectively. To pick up or release the food stably, a grab-arm can use an odd-shaped gripper, as shown in the bottom left-hand side of Fig. 4, because that gripper does not need to approach a user's mouth. However, the end-effector of a spoon-arm has an

brand-new machines.

specialists and user candidates.

**2.3 Design of the feeding robot** 

Grab-arm (Arm #2)

Gripper

and then loads it onto the spoon of Arm #1

feeding system.

intact round-shaped spoon to serve food to a user's mouth safely. If an end-effector has an unusual shape, then it might pose a danger to the user as it approaches his or her face.

The two proposed arms with their different end-effectors mimic Korean eating behavior. Specifically, Koreans use a spoon and steel chopsticks during mealtime, as shown in Fig. 5 (a) and (b). Some people use a spoon and chopsticks simultaneously [Fig. 5 (c)]. In the designed system, the gripper of a grab-arm and the spoon of a spoon-arm take on the roles of chopsticks and a spoon, respectively. Many Korean caregivers pick up food with chopsticks and then put it on a spoon in order to serve food to users such as children and patients. In that sense, the proposed two-armed system stems from Korean eating tools.

Fig. 5. (a) A spoon for scooping food. (b) Chopsticks for picking up food. (c) A person using a spoon and chopsticks

A spoon-arm has two degrees of freedom (DOF) in order to transfer food on the spoon without changing the orientation of the spoon. A grab-arm includes a three-DOF SCARA joint for the planar motion, a one-DOF prismatic joint for the up and down motion, and a gripper. The overall number of DOFs of a dual-arm without a gripper is six, as shown in Fig. 6. The feeding robot can use an ordinary cafeteria tray.

Fig. 6. The joint configuration of a novel feeding robot for Korean foods. P1 (Prismatic Joint #1) is optionally applied. R = Revolute. P = Prismatic

The feeding robot uses a microcontroller unit to control a spoon-arm and a grab-arm, as shown in Fig. 7. We add a small-sized PC with a touch screen to enable the user to enjoy

Novel Assistive Robot for Self-Feeding 51

provide enough space behind a food container; therefore, the grab-arm should be located on

 Fig. 9. The design of a novel feeding robot for Korean food. A spoon-arm (lower left-handside figure) for transferring food and a grab-arm (lower right-hand-side figure) for picking

The spoon-arm has two additional variables, namely the motorized prismatic motion toward a user's mouth, and the manual change of the link length between the first axis and the second axis of the grab-arm. Fig. 11 shows the overall workspace of the spoon of a grabarm. In accordance with the position of a user's mouth, the predefined location in front of

The height of the spoon on the spoon-arm is 250–381 mm with respect to the surface of a table. The height of the spoon-arm depends on the table height. We assume that the height of a table is 730–750 mm. The spoon could be located at a height of 980–1131 mm with respect to the ground. According to the statistics of the Korean disabled, the height of a user's mouth could be 1018 mm (female) and 1087 mm (male), as shown in Table 1. Thus,

> Male 1261 174 1087 Female 1187 169 1018

The distance from a crown to a mouth

Mouse height on a wheelchair

the height of the spoon corresponds with the height of the user's mouth.

wheelchair

the left-hand side of a food container.

up and releasing food

the mouth is adjusted when the system is installed.

Item Sitting height on a

Table 1. Statistics of wheelchair users (unit: mm)

entertainment and to test various kinds of user interfaces. During mealtime, a user wants to enjoy multimedia such as movies or music. In addition, the small-sized PC has a Windows operating system, and we can effectively add assistive devices for human computer interaction, i.e., switches, a joystick, and biosignal interface devices.

Fig. 7. Block diagram of feeding robot

The microcontroller unit allows a user or a caregiver to select the following: operation modes (automatic/semiautomatic/manual mode), the shape and size of a container, the location of the mouth, the robot's speed, the time to stay in front of the mouth, and so on. Depending on the types of food, a user also selects the divided grasping region in each container and the grab strength of the gripper of the grab-arm. Our system will be capable of selecting the above parameters. A user can save the parameters of various kinds of food. We expect that in a community, different members will be able to exchange their robots effectively by exchanging their individual parameters via the Internet.

The grasping regions of boiled rice in a bowl could be defined in 3D space because the bowl should be over 50 mm in height. The grasping volume of dishes could be defined as shown in Fig. 8. Our team is making the prototype of the proposed feeding robot. Fig. 9 shows the designed appearance of the proposed self-feeding robot.

Fig. 8. The definition of grasping volume in containers

In order to use a conventional food container, we decided that the length of links of the grab-arm should cover the whole area of a food container. A grab-arm is located behind a container or on the left-hand side of a container. A grab-arm can chiefly be located behind a food container on a standard table, as shown in Fig. 10. The lap board of a bed does not 50 Robotic Systems – Applications, Control and Programming

entertainment and to test various kinds of user interfaces. During mealtime, a user wants to enjoy multimedia such as movies or music. In addition, the small-sized PC has a Windows operating system, and we can effectively add assistive devices for human computer

> PC (Mobile Internet Device)

(Additional) Input Device

The microcontroller unit allows a user or a caregiver to select the following: operation modes (automatic/semiautomatic/manual mode), the shape and size of a container, the location of the mouth, the robot's speed, the time to stay in front of the mouth, and so on. Depending on the types of food, a user also selects the divided grasping region in each container and the grab strength of the gripper of the grab-arm. Our system will be capable of selecting the above parameters. A user can save the parameters of various kinds of food. We expect that in a community, different members will be able to exchange their robots

USB Hub

Micro Controller Unit

Joystick/ Switch

(Battery)

The grasping regions of boiled rice in a bowl could be defined in 3D space because the bowl should be over 50 mm in height. The grasping volume of dishes could be defined as shown in Fig. 8. Our team is making the prototype of the proposed feeding robot. Fig. 9 shows the

In order to use a conventional food container, we decided that the length of links of the grab-arm should cover the whole area of a food container. A grab-arm is located behind a container or on the left-hand side of a container. A grab-arm can chiefly be located behind a food container on a standard table, as shown in Fig. 10. The lap board of a bed does not

12 3 4 5 6

9 10 11 15 16 17

1 <sup>2</sup> <sup>3</sup> 4

Gripper Direction

Gripper Tip

Gripper Tip

<sup>6</sup> <sup>7</sup>

interaction, i.e., switches, a joystick, and biosignal interface devices.

Arm #2 Power Unit

effectively by exchanging their individual parameters via the Internet.

Gripper Direction Gripper Tip Gripper Direction

designed appearance of the proposed self-feeding robot.

Fig. 8. The definition of grasping volume in containers

1 2 3

4 5 6

7 8 9

Fig. 7. Block diagram of feeding robot

Arm #1

provide enough space behind a food container; therefore, the grab-arm should be located on the left-hand side of a food container.

Fig. 9. The design of a novel feeding robot for Korean food. A spoon-arm (lower left-handside figure) for transferring food and a grab-arm (lower right-hand-side figure) for picking up and releasing food

The spoon-arm has two additional variables, namely the motorized prismatic motion toward a user's mouth, and the manual change of the link length between the first axis and the second axis of the grab-arm. Fig. 11 shows the overall workspace of the spoon of a grabarm. In accordance with the position of a user's mouth, the predefined location in front of the mouth is adjusted when the system is installed.

The height of the spoon on the spoon-arm is 250–381 mm with respect to the surface of a table. The height of the spoon-arm depends on the table height. We assume that the height of a table is 730–750 mm. The spoon could be located at a height of 980–1131 mm with respect to the ground. According to the statistics of the Korean disabled, the height of a user's mouth could be 1018 mm (female) and 1087 mm (male), as shown in Table 1. Thus, the height of the spoon corresponds with the height of the user's mouth.


Table 1. Statistics of wheelchair users (unit: mm)

Novel Assistive Robot for Self-Feeding 53

only the spoon-arm, and a caregiver takes the role of the grab-arm. We explain the two arm

A dual-arm robotic system is applied in accordance with an original design concept. If a caregiver prepares food, users can eat the food on the basis of their intentions. A grab-arm picks up the desired food on a food container, and the arm releases the food on the spoon of a spoon-arm. The spoon-arm moves the spoon to the user's mouth. Then the user can eat the

Fig. 12. The sequential motions of the self-feeding robot in dual-arm configuration. From the

The self-feeding robots have three operation modes: an automatic mode, a semiautomatic mode, and a manual mode. The automatic mode has a fixed serving sequence of dishes; users only push a start button when they want to eat the next food on a spoon. In a semiautomatic mode, a user can choose the dishes on the basis of their intention. In this mode, a user can have the dishes that they want to eat. In a manual mode, the user can choose food and control the posture of the robot. In all three modes, the user can select the

In experiments on handling boiled rice, we observed that releasing rice is as important as picking up rice. The stickiness of the boiled rice can change depending on its temperature. Slightly cool rice is difficult to release from the gripper. In order to solve this problem, the feeding robot automatically puts the gripper of the grab-arm in water before grasping the food. The water is located in a bowl next to the rice. When this is done, the gripper can release the rice on the spoon because the stickiness of the rice has decreased. Fig. 12 shows

The amount of rice picked up is adjusted on the basis of actual experiments on rice grasping. A gripper's mechanism is the simple opening/closing of gripper fingers via a linear

top left-hand side, the robot puts a gripper into a bowl of water and then grasps rice

configurations of the feeding robot in the following sections.

**3.1 Dual-arm configuration** 

feeding timing when they want eat.

the whole operation of the self-feeding robot.

food on the spoon.

Fig. 10. The configuration of a grab-arm

Fig. 11. The work space of a spoon-arm

### **3. Basic operation of the self-feeding robot**

We built two arm configurations of the developed self-feeding robot: a dual-arm configuration and a single-arm configuration. A dual-arm configuration follows an original design concept using both a spoon-arm and a grab-arm. A single-arm configuration uses 52 Robotic Systems – Applications, Control and Programming

Fig. 10. The configuration of a grab-arm

Fig. 11. The work space of a spoon-arm

**3. Basic operation of the self-feeding robot** 

We built two arm configurations of the developed self-feeding robot: a dual-arm configuration and a single-arm configuration. A dual-arm configuration follows an original design concept using both a spoon-arm and a grab-arm. A single-arm configuration uses only the spoon-arm, and a caregiver takes the role of the grab-arm. We explain the two arm configurations of the feeding robot in the following sections.

## **3.1 Dual-arm configuration**

A dual-arm robotic system is applied in accordance with an original design concept. If a caregiver prepares food, users can eat the food on the basis of their intentions. A grab-arm picks up the desired food on a food container, and the arm releases the food on the spoon of a spoon-arm. The spoon-arm moves the spoon to the user's mouth. Then the user can eat the food on the spoon.

Fig. 12. The sequential motions of the self-feeding robot in dual-arm configuration. From the top left-hand side, the robot puts a gripper into a bowl of water and then grasps rice

The self-feeding robots have three operation modes: an automatic mode, a semiautomatic mode, and a manual mode. The automatic mode has a fixed serving sequence of dishes; users only push a start button when they want to eat the next food on a spoon. In a semiautomatic mode, a user can choose the dishes on the basis of their intention. In this mode, a user can have the dishes that they want to eat. In a manual mode, the user can choose food and control the posture of the robot. In all three modes, the user can select the feeding timing when they want eat.

In experiments on handling boiled rice, we observed that releasing rice is as important as picking up rice. The stickiness of the boiled rice can change depending on its temperature. Slightly cool rice is difficult to release from the gripper. In order to solve this problem, the feeding robot automatically puts the gripper of the grab-arm in water before grasping the food. The water is located in a bowl next to the rice. When this is done, the gripper can release the rice on the spoon because the stickiness of the rice has decreased. Fig. 12 shows the whole operation of the self-feeding robot.

The amount of rice picked up is adjusted on the basis of actual experiments on rice grasping. A gripper's mechanism is the simple opening/closing of gripper fingers via a linear

Novel Assistive Robot for Self-Feeding 55

arm. The next step is similar to that of the dual-arm robotic arm. A caregiver only provides a user with food when the spoon is empty in the home position. From the perspective of the caregiver, he or she can reduce the amount of time needed to check or wait while the user is chewing. From the perspective of the user, the food can be eaten when he or she wants to eat. Although a user may have difficulty choosing food in an automatic manner, he or she can chew the food sufficiently without considering the next spoon serving from a caregiver. From an economical point of view, the single-arm configuration has a lower cost in comparison with the dual-arm configuration. Applying a spoon-arm has advantages in facilities such as hospitals or nursing homes. One caregiver supports more than one consumer. That means one caregiver can serve food on the spoons of multiple users' spoonarms in turn. This configuration is especially useful if the labor cost of a caregiver is not

At first, we designed the self-feeding robot on the basis of users' opinions in order to develop a practical system. After constructing the prototype of the self-feeding robot, we performed a user test using seven people with disabilities via the developed self-feeding robot. Participants used the self-feeding robot to eat Korean food, and we collected

In the users' opinions, a self-feeding robot could be useful when a user stays at home alone. The self-feeding robot can be used in two situations: one is solitary eating and the other is eating together. The most important role is in supporting self-feeding without a caregiver when people with disabilities stay in their homes alone and caregivers prepare the food in

Some users prefer using a large spoon. For example, a spoon could be a Chinese-style spoon. If a spoon is large, then it can be used to feed wet food. However, a female user could prefer to use a small-sized spoon. The size of the spoon should be customized according to the user preferences. We will consider several spoons with various sizes as well as various depths. Users sometimes request quick motion of the robotic arm. A user wants to be able to adjust the motion speed of the self-feeding robot. The adjusting speed should be customized to the

The spoon should tilt to the user's mouth in order to unload the food on a spoon easily when the spoon arm is positioned in the user's mouth. Technically, the spoon should tilt to a user with cerebral palsy because such a user can move his/her head to a limited extent. If the self-feeding robot does not have a tilting function, then the user will struggle to eat the food on the spoon. Specifically, if the robot has a range detection sensor around the spoon, then the robot could move more intelligently in front of the user's mouth. That means a spoon automatically tilts in front of a user's mouth. If a user's mouth moves, the preprogrammed motion is not suitable. If a spoon tilts in the wrong position, food could drop down on the floor. Some people with cerebral palsy or muscular disease have trouble

Users and experts want to eat food comfortably using a spoon. For instance, the proposed system moves the spoon in front of the user's face. At that time, the spoon moves to the user's mouth along the sagittal plane, as shown in Figs. 15 (a) and (b). Some users complain

moving their neck, and thus the tilting function of a spoon should be an option.

high, as in developing countries.

**4.1 Users' overall feedback on the self-feeding robot** 

**4. User test** 

feedback.

advance.

user.

actuator. The weight of the rice corresponding to one grasping motion increases depending on the open/close width (Fig. 13) of the fingers of the gripper when grasping begins, as shown in Fig. 13. The default open/close width of the gripper is 32 mm in order to grasp an average of 10 g rice. The close width of the gripper makes the grasping force to food. Thus, we can grasp various foods by adjusting the open/close width of the gripper.

Fig. 13. Amount of rice in a single grasp

## **3.2 Single-arm configuration**

A spoon-arm could be used independently without a grab-arm, as shown in Fig. 14. The caregiver manually picks up the food on a tray and puts the food on the spoon of a spoon-

Fig. 14. The motions of the self-feeding robot in single-arm configuration. The caregiver picks food up on the spoon of the self-feeding robot, and then the user presses the button to receive the food

arm. The next step is similar to that of the dual-arm robotic arm. A caregiver only provides a user with food when the spoon is empty in the home position. From the perspective of the caregiver, he or she can reduce the amount of time needed to check or wait while the user is chewing. From the perspective of the user, the food can be eaten when he or she wants to eat. Although a user may have difficulty choosing food in an automatic manner, he or she can chew the food sufficiently without considering the next spoon serving from a caregiver. From an economical point of view, the single-arm configuration has a lower cost in comparison with the dual-arm configuration. Applying a spoon-arm has advantages in facilities such as hospitals or nursing homes. One caregiver supports more than one consumer. That means one caregiver can serve food on the spoons of multiple users' spoonarms in turn. This configuration is especially useful if the labor cost of a caregiver is not high, as in developing countries.

## **4. User test**

54 Robotic Systems – Applications, Control and Programming

actuator. The weight of the rice corresponding to one grasping motion increases depending on the open/close width (Fig. 13) of the fingers of the gripper when grasping begins, as shown in Fig. 13. The default open/close width of the gripper is 32 mm in order to grasp an average of 10 g rice. The close width of the gripper makes the grasping force to food. Thus,

A spoon-arm could be used independently without a grab-arm, as shown in Fig. 14. The caregiver manually picks up the food on a tray and puts the food on the spoon of a spoon-

Fig. 14. The motions of the self-feeding robot in single-arm configuration. The caregiver picks food up on the spoon of the self-feeding robot, and then the user presses the button to

Open/Close Width of a Gripper Fingers

we can grasp various foods by adjusting the open/close width of the gripper.

Fig. 13. Amount of rice in a single grasp

**3.2 Single-arm configuration** 

receive the food

At first, we designed the self-feeding robot on the basis of users' opinions in order to develop a practical system. After constructing the prototype of the self-feeding robot, we performed a user test using seven people with disabilities via the developed self-feeding robot. Participants used the self-feeding robot to eat Korean food, and we collected feedback.

### **4.1 Users' overall feedback on the self-feeding robot**

In the users' opinions, a self-feeding robot could be useful when a user stays at home alone. The self-feeding robot can be used in two situations: one is solitary eating and the other is eating together. The most important role is in supporting self-feeding without a caregiver when people with disabilities stay in their homes alone and caregivers prepare the food in advance.

Some users prefer using a large spoon. For example, a spoon could be a Chinese-style spoon. If a spoon is large, then it can be used to feed wet food. However, a female user could prefer to use a small-sized spoon. The size of the spoon should be customized according to the user preferences. We will consider several spoons with various sizes as well as various depths.

Users sometimes request quick motion of the robotic arm. A user wants to be able to adjust the motion speed of the self-feeding robot. The adjusting speed should be customized to the user.

The spoon should tilt to the user's mouth in order to unload the food on a spoon easily when the spoon arm is positioned in the user's mouth. Technically, the spoon should tilt to a user with cerebral palsy because such a user can move his/her head to a limited extent. If the self-feeding robot does not have a tilting function, then the user will struggle to eat the food on the spoon. Specifically, if the robot has a range detection sensor around the spoon, then the robot could move more intelligently in front of the user's mouth. That means a spoon automatically tilts in front of a user's mouth. If a user's mouth moves, the preprogrammed motion is not suitable. If a spoon tilts in the wrong position, food could drop down on the floor. Some people with cerebral palsy or muscular disease have trouble moving their neck, and thus the tilting function of a spoon should be an option.

Users and experts want to eat food comfortably using a spoon. For instance, the proposed system moves the spoon in front of the user's face. At that time, the spoon moves to the user's mouth along the sagittal plane, as shown in Figs. 15 (a) and (b). Some users complain

Novel Assistive Robot for Self-Feeding 57

gripper should be cleaned frequently. The amount of grabbed food could be adjustable. A

Other comments are as follows: the robots should be less noisy, have a small-sized input device, serve water, and enable users to eat Korean soup. Users who do not have experience eating food by themselves have a hard time using a self-feeding robot for the first time. Such a person needs to be trained on how to use it. Some users want to eat noodles, such as

The filtering of an involuntary input signal of a user should be considered. For instance, in order to reduce malfunctions, a user's input could be ignored immediately after a previous input. One user who can push a button by himself prefers to use buttons rather than joysticks. One user prefers using a small-sized joystick. Two joysticks are better than one joystick with buttons. The length of a joystick should be adjustable. Buttons should be large sized. The operation without a PC is required. A user who has limited head motion likes to use the long joystick because that user cannot make a fine motion. The unification of an input device of a wheelchair and a self-feeding robot should be considered. Wireless input

In the single-arm configuration, the caregiver picks food up instead of the grab-arm and then loads food on a spoon. The user makes an input signal to move the spoon to the user's mouth. After the user eats food on a spoon, it returns to the home position upon receiving a

The single-arm configuration is useful to a caregiver as well as a user. From the perspective of a user, the feeding timing could be changed freely on the basis of a user's intention. The user can chew food sufficiently. When a user watches television, she leisurely eats food. Some users complain about a single-arm configuration. That means a caregiver must stay with a user even though the single arm is used. The single-arm configuration is useful for a caregiver when a user has a meal with his family because the caregiver does not need to move food to the user's mouth. Thus, a caregiver likes to use

The users and caregivers are satisfied even though picking up food should be performed manually. Food frequently drops down on the floor when a caregiver serves food to the user's mouth. However, a user can reduce the instances of dropping food in the case of a single-arm configuration because the spoon is fixed on the spoon-arm and thus the user can

A system should be easy to control for people with disabilities and the elderly. Many users

In this study, we analyzed the self-feeding robot on the basis of two input devices, namely buttons and joysticks, as shown in Fig. 16. Most input devices are developed for hand manipulation. If a user has hand function, then the easiest way is to use the system is with his or her hands. That means the simplest input device is a set of buttons. However, when a user only uses neck motion to make a control input, he or she has difficulty handling input

have only head motion, and thus the importance of input devices was mentioned.

devices with dexterity. Table 2 shows the properties of the input devices.

stop or pause function is required.

devices are preferred.

the single-arm configuration.

**4.3 Input devices** 

estimate the spoon posture correctly.

spaghetti. Most users wish to use a low-cost feeding robot.

**4.2 Discussion of a single-arm configuration** 

command or some time interval, as with a dual-arm configuration.

about a threatening feeling when the spoon moves on the sagittal plane. If the spoon approaches a user's mouth from the side, then a user can feel safer. Those motions are similar to most people's motions when they eat food. In addition, a user wants to touch the side surface of a spoon. As a remedy, we will consider how the spoon approaches, as shown in Figs. 15 (c), (d), and (e). However, the sideways approach may require a large installation area for the self-feeding robot.

Fig. 15. Spoon approaching modes. (a) The sagittal plane on which a spoon moves. (b), (c), (d), and (e) represent the top view of a spoon when a robot approaches a user's mouth. The red object means a spoon. (c), (d) and (e) are more comfortable then (b)

A spoon should have a safety function because the spoon frequently comes in contact with a user's mouth. A spoon can be fixed at the tip of an arm with a spring as a component to guarantee safety. The spoon can be connected to magnets. When a large force acts on the spoon, the magnet's connection with the spoon could be detached for the user's safety. A user who has spasticity needs the compliance of a spoon.

Users request a smoother motion of a spoon when it contains food. In addition, the vibration of a spoon should be reduced at that time.

Users want a small-sized self-feeding robot for easy installation on a desk. In addition, a user wants to be able to adjust the spoon level of the self-feeding robot. For example, a user who uses a power wheelchair has various height levels from a table. When users first use the self-feeding robot, the level of the spoon on the robotic arm should be adjustable.

As a basic function, food rarely drops on a table when a user fails to eat the round-shaped food. When a user eats food that is too stiff, grasping failure occurs. Therefore, we consider the motion, i.e., the speed, of a spoon as well as the shape of a spoon.

Some users want to eat rice with a side dish simultaneously. In general, the disabled person who lives alone and receives a caregiving service usually eats rice and a side dish on one spoon at the same time. Some people eat rice mixed with side dishes. The self-feeding robot should mimic that task. The self-feeding robot optionally puts two kinds of food, i.e., rice and a side dish, on a large spoon simultaneously.

The spoon should be returned to the home position after a predefined time interval. The robot has two ways to return to the home position of a spoon: one is by making an input signal, and the other is determining a time interval.

Sometimes a robot should regrasp food in order to avoid grabbing too much of a side dish. When a robot tries to grab some curry rice, the gripper shape should be changed. The 56 Robotic Systems – Applications, Control and Programming

about a threatening feeling when the spoon moves on the sagittal plane. If the spoon approaches a user's mouth from the side, then a user can feel safer. Those motions are similar to most people's motions when they eat food. In addition, a user wants to touch the side surface of a spoon. As a remedy, we will consider how the spoon approaches, as shown in Figs. 15 (c), (d), and (e). However, the sideways approach may require a large installation

Fig. 15. Spoon approaching modes. (a) The sagittal plane on which a spoon moves. (b), (c), (d), and (e) represent the top view of a spoon when a robot approaches a user's mouth. The

A spoon should have a safety function because the spoon frequently comes in contact with a user's mouth. A spoon can be fixed at the tip of an arm with a spring as a component to guarantee safety. The spoon can be connected to magnets. When a large force acts on the spoon, the magnet's connection with the spoon could be detached for the user's safety. A

Users request a smoother motion of a spoon when it contains food. In addition, the vibration

Users want a small-sized self-feeding robot for easy installation on a desk. In addition, a user wants to be able to adjust the spoon level of the self-feeding robot. For example, a user who uses a power wheelchair has various height levels from a table. When users first use

As a basic function, food rarely drops on a table when a user fails to eat the round-shaped food. When a user eats food that is too stiff, grasping failure occurs. Therefore, we consider

Some users want to eat rice with a side dish simultaneously. In general, the disabled person who lives alone and receives a caregiving service usually eats rice and a side dish on one spoon at the same time. Some people eat rice mixed with side dishes. The self-feeding robot should mimic that task. The self-feeding robot optionally puts two kinds of food, i.e., rice

The spoon should be returned to the home position after a predefined time interval. The robot has two ways to return to the home position of a spoon: one is by making an input

Sometimes a robot should regrasp food in order to avoid grabbing too much of a side dish. When a robot tries to grab some curry rice, the gripper shape should be changed. The

the self-feeding robot, the level of the spoon on the robotic arm should be adjustable.

red object means a spoon. (c), (d) and (e) are more comfortable then (b)

the motion, i.e., the speed, of a spoon as well as the shape of a spoon.

user who has spasticity needs the compliance of a spoon.

of a spoon should be reduced at that time.

and a side dish, on a large spoon simultaneously.

signal, and the other is determining a time interval.

area for the self-feeding robot.

gripper should be cleaned frequently. The amount of grabbed food could be adjustable. A stop or pause function is required.

Other comments are as follows: the robots should be less noisy, have a small-sized input device, serve water, and enable users to eat Korean soup. Users who do not have experience eating food by themselves have a hard time using a self-feeding robot for the first time. Such a person needs to be trained on how to use it. Some users want to eat noodles, such as spaghetti. Most users wish to use a low-cost feeding robot.

The filtering of an involuntary input signal of a user should be considered. For instance, in order to reduce malfunctions, a user's input could be ignored immediately after a previous input. One user who can push a button by himself prefers to use buttons rather than joysticks. One user prefers using a small-sized joystick. Two joysticks are better than one joystick with buttons. The length of a joystick should be adjustable. Buttons should be large sized. The operation without a PC is required. A user who has limited head motion likes to use the long joystick because that user cannot make a fine motion. The unification of an input device of a wheelchair and a self-feeding robot should be considered. Wireless input devices are preferred.

### **4.2 Discussion of a single-arm configuration**

In the single-arm configuration, the caregiver picks food up instead of the grab-arm and then loads food on a spoon. The user makes an input signal to move the spoon to the user's mouth. After the user eats food on a spoon, it returns to the home position upon receiving a command or some time interval, as with a dual-arm configuration.

The single-arm configuration is useful to a caregiver as well as a user. From the perspective of a user, the feeding timing could be changed freely on the basis of a user's intention. The user can chew food sufficiently. When a user watches television, she leisurely eats food. Some users complain about a single-arm configuration. That means a caregiver must stay with a user even though the single arm is used. The single-arm configuration is useful for a caregiver when a user has a meal with his family because the caregiver does not need to move food to the user's mouth. Thus, a caregiver likes to use the single-arm configuration.

The users and caregivers are satisfied even though picking up food should be performed manually. Food frequently drops down on the floor when a caregiver serves food to the user's mouth. However, a user can reduce the instances of dropping food in the case of a single-arm configuration because the spoon is fixed on the spoon-arm and thus the user can estimate the spoon posture correctly.

### **4.3 Input devices**

A system should be easy to control for people with disabilities and the elderly. Many users have only head motion, and thus the importance of input devices was mentioned.

In this study, we analyzed the self-feeding robot on the basis of two input devices, namely buttons and joysticks, as shown in Fig. 16. Most input devices are developed for hand manipulation. If a user has hand function, then the easiest way is to use the system is with his or her hands. That means the simplest input device is a set of buttons. However, when a user only uses neck motion to make a control input, he or she has difficulty handling input devices with dexterity. Table 2 shows the properties of the input devices.

Novel Assistive Robot for Self-Feeding 59

We carried out the user tests with user candidates, including people with spinal cord injuries. After they actually ate food using the developed self-feeding robot, we collected their feedback to determine their satisfaction score in accordance with input devices. The users ate food using the self-feeding robot with each of the input devices. The results of the users' feedback pertained to the self-feeding robot as well as the input devices. The users rated their satisfaction with the input device activity on a scale of 1 to 10, as with the Canadian Occupational Performance Measure (Pendleton, 2001). A score of 10 indicates the highest level of satisfaction. Most users were satisfied with the self-feeding system that had a dual joystick, as shown in Table 3. This indicates that the use of joysticks is more comfortable than that of buttons. In addition, the self-feeding system operates well. Based on the analysis of users' feedback, the key factors affecting the handling of joysticks with regard to a user's neck motion are as follows: the distance between the joysticks, the moving angle of the joysticks, and the length of the joysticks. We will perform a comparison study

Input Device Buttons Joysticks SCI #1 6 8 SCI #2 8 8 SCI #3 1 8 SCI #4 1 8 SCI #5 1 6 SCI #6 1 7 SCI #7 1 8 Average Score 2.7 7.6 Table 3. Satisfaction score of input devices when users eat food via a self-feeding robot

We have developed a novel assistive robot for self-feeding that is capable of handling Korean food, including sticky rice. This paper presents the overall operation of the selffeeding robot. The proposed robot has three distinguishing points: handling sticky rice, using an ordinary meal tray, and a modular design that can be divided into two arms. Users are people with physical disabilities who have limited arm function. During the development of the robot, we considered the feedback provided by several users and experts. In addition, the user candidates tested the actual the self-feeding robot. It was determined that the input device has the most important role. Many users prefer a dual joystick for self-feeding. Most of the users who participated in the experiments gave us positive feedback. Some users were impressed that they were able to eat their desired food when they wanted to eat it. In future work, we will add several functions to the robot, including improving the reliability of basic operations and adding a safety feature. We will

**4.4 Satisfaction evaluation of the self-feeding robot** 

between commercialized feeding systems and the developed system.

also simplify the system components and perform user evaluations.

(max = 10)

**5. Concluding remarks** 

Fig. 16. Input devices of a self-feeding robot. (a) Buttons, (b) Joysticks


Table 2. Input devices for the self-feeding robot

## **4.3.1 Buttons**

The self-feeding robot has a basic input device consisting of a set of buttons. We usually use six buttons that correspond with start, return, and four directions. The buttons were originally developed to check the basic operations of the self-feeding robot. However, quadriplegics who can only move their neck and head would have difficulty pushing the button with their chin. Because the buttons are out of the field of view, the user has a hard time knowing where the buttons are and whether or not they are pushed. Additionally, pushing these buttons requires excessive force and can result in muscle fatigue in a user's neck. Thus, a user who uses his or her neck would have difficulty pushing the buttons. Users prefer to use joysticks rather than buttons. On the basis of users' opinions, we tested input devices that have joysticks.

## **4.3.2 Joysticks**

Two joysticks are employed. Originally, it was determined that a user wants to use two joysticks rather than one joystick and buttons. The length of the joystick is modified from 10 to 53 mm according to users' opinions. Because of the gap between the two joysticks, a user can easily manipulate one of the joysticks without any resulting malfunction of the other joystick.

The length of the joystick depends on user preference. Some users prefer a long joystick while others like a short one. Most users prefer the wide gap between the two joysticks because a short gap can result in the malfunction of the unused joystick. The moving angles of a joystick are ±25°. Users express satisfaction with the flexibility of the joystick and its silicon cover, which can be connected to the user's skin.

58 Robotic Systems – Applications, Control and Programming

(a) (b)

After modification

Input Device Buttons Dual Shock Type Joypad
