**4.2.4 Morphological computation**

Closely related to embodiment is the concept of *morphological computation* (Pfeifer & Iida, 2005; Pfeifer et al., 2006). The idea makes explicit that the body can do more than provide a grounding for concepts: through its very morphology, it can perform computations that no longer need to be handled by the controller itself.

The classic example of this is the passive dynamic walker (McGeer, 1990), a pair of legs that can walk down a slope in a biologically realistic manner with no active controllers (or indeed any form of computation) involved at all. Pfeifer et al. (2006) offer additional examples: the eye of a house-fly is constructed to intrinsically compensate for motion parallax (Franceschini et al., 1992), and a similar design can facilitate the vision of a moving robot. Another example is the "Yokoi hand" (Yokoi et al., 2004), whose flexible and elastic material allows it to naturally adapt to an object it is grasping without any need for an external controller to evaluate the shape of the object beforehand.
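The mechanics behind this can be illustrated with the rimless wheel, commonly used as the simplest model of passive dynamic walking. The sketch below is a minimal Python illustration rather than a model of McGeer's actual machine, and the leg length, inter-leg angle, slope and initial push are arbitrary illustrative values: the stance leg behaves as an inverted pendulum, each touchdown of the next leg dissipates some energy, and the post-impact leg speed nevertheless converges to a steady value, i.e. stable "walking" emerges without any controller.

```python
import numpy as np

# Rimless-wheel sketch of passive dynamic walking (illustrative parameters).
g = 9.81      # gravity [m/s^2]
l = 1.0       # leg (spoke) length [m]
alpha = 0.20  # half of the angle between adjacent legs [rad]
gamma = 0.08  # slope angle [rad]
dt = 1e-4     # integration step [s]

def take_step(omega):
    """Integrate one stance phase and return the post-impact leg speed,
    or None if the wheel falls back before the next leg touches down."""
    theta = gamma - alpha              # stance-leg angle just after touchdown
    while theta < gamma + alpha:       # until the next leg hits the slope
        omega += (g / l) * np.sin(theta) * dt   # inverted-pendulum dynamics
        theta += omega * dt
        if omega <= 0.0:
            return None                # not enough energy to vault over the top
    return np.cos(2 * alpha) * omega   # angular momentum lost at touchdown

omega = 0.6  # initial push [rad/s]
for step in range(1, 11):
    omega = take_step(omega)
    if omega is None:
        print(f"step {step}: fell back down the slope")
        break
    print(f"step {step}: post-impact leg speed = {omega:.3f} rad/s")
```

Changing the slope or the inter-leg angle shifts or destroys this steady gait, which is precisely the sense in which the morphology itself performs the "computation".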

A completely different type of gripper that fulfils a similar role to the Yokoi hand (namely the ability to grasp an object without evaluating its shape beforehand) is presented by Brown et al. (2010). This gripper takes the shape of a ball filled with a granular material that contracts and hardens when a vacuum is applied. When pressed against an object, it reshapes to fit the object and can then lift it. Here, the morphology of the gripper goes as far as removing the need for any joints at all, thus significantly reducing the computational requirements associated with it.

From the perspective of Rob's robot, the key insight from these examples is that a suitably designed body can reduce the computations required within the controller itself by offloading them onto the embodiment. At the same time, Rob also identifies a shortcoming, namely that there are no examples of humanoids that exploit this to a significant degree. Rather, the demonstrations mostly involve small robots with a very limited behavioral repertoire, designed solely to illustrate the utility of a particular embodiment in a specific case. Although Rob finds the approach itself promising, he feels that it still has to move beyond this focus on limited behavioral repertoires.

### **4.2.5 Dynamic Field Theory**


Dynamic Field Theory (DFT) is a mathematical framework based on concepts from dynamical systems theory and on constraints from neurophysiology. A field represents a population of neurons whose activation evolves continuously in response to external stimuli. Amari (1977) studied the properties of such networks as a model of the activation observed in cortical tissue. A field has the structure of a recurrent neural network in which, depending on the relative location of the neurons within the field, connections provide local excitation or global inhibition (Fig. 4).
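In its standard one-dimensional form (following Amari, 1977, and Schöner, 2008), the dynamics of the field activation $u(x,t)$ can be written as

$$\tau\,\dot{u}(x,t) = -u(x,t) + h + S(x,t) + \int w(x-x')\,f\big(u(x',t)\big)\,dx',$$

where $h<0$ is the resting level, $S(x,t)$ is the external input, $f$ is a sigmoidal output function and $w$ is the interaction kernel combining local excitation with global (or surround) inhibition; it is this kernel that gives rise to the self-stabilized peaks of activation described below.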

Fig. 4. Typical activations in a dynamic field, from Schöner (2008).

Fields are used to represent perceptual features, movements or cognitive decisions, e.g. position, orientation, color or speed. The dynamics of these fields allow the formation of activation peaks, which are the units of representation in DFT (Schöner, 2008). Different configurations of one or more fields are possible, with the designer responsible for choosing a suitable connectivity and for tuning the parameters. The result is a continuously adaptive system that responds dynamically to any change in the external stimuli.
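To make the notion of a peak concrete, the following sketch simulates a one-dimensional field of the kind described above. It is a minimal illustration rather than code from any of the works cited here, and the kernel, threshold and input parameters are arbitrary choices: a localized input pushes the field over threshold at one location, and the local-excitation/global-inhibition kernel then stabilizes a self-sustained peak there, which survives even after the input is removed.

```python
import numpy as np

# Minimal 1-D dynamic neural field (Amari-type); all parameters are illustrative.
N = 101                      # number of field sites
x = np.arange(N)
tau = 10.0                   # time constant (in units of the time step)
h = -3.0                     # negative resting level
beta = 1.5                   # steepness of the sigmoid output function

def f(u):
    """Sigmoidal output function."""
    return 1.0 / (1.0 + np.exp(-beta * u))

def kernel(dist, c_exc=3.0, sigma=4.0, c_inh=0.9):
    """Local excitation (Gaussian) combined with global inhibition."""
    return c_exc * np.exp(-dist**2 / (2 * sigma**2)) - c_inh

# Precompute the interaction matrix w(x - x').
W = kernel(x[:, None] - x[None, :])

# Localized external input centred at site 50.
S = 4.0 * np.exp(-(x - 50)**2 / (2 * 3.0**2))

u = h * np.ones(N)           # start at the resting level
for t in range(400):
    if t == 300:
        S[:] = 0.0           # remove the input after 300 steps
    du = -u + h + S + W @ f(u)
    u += du / tau

# With these parameters the peak is self-sustained: activation at the stimulated
# site stays above threshold after the input is gone, while distant sites are
# suppressed below the resting level by the global inhibition.
print("activation at site 50:", round(u[50], 2))
print("activation at site 10:", round(u[10], 2))
```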

Rob has studied the properties of dynamic fields and their potential as building blocks of a robust cognitive architecture. Among the most attractive features of this approach is the possibility of Hebbian-type learning that exploits the short-term memory implicit in the field dynamics. Long-term memory, decision-making mechanisms, robustness to noise (also implicit in the dynamics of the fields) and single-shot learning are all important tools that can, and should, be included in any cognitive architecture. Several applications modeling experiments on human behavior (Dineva, 2005; Johnson et al., 2008; Lowe et al., 2010) and robotic implementations (Bicho et al., 2000; Erlhagen et al., 2006; Zibner et al., 2011) have demonstrated DFT's potential.
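One common way such long-term memory is realized in DFT models is a slow "memory trace" that builds up wherever the activation field currently holds a peak and later preshapes the field toward previously activated locations. The toy snippet below shows only the accumulation mechanism; the parameters are made up and the fast field's output is mocked as a fixed bump rather than computed as in the previous sketch.

```python
import numpy as np

# Sketch of a DFT-style memory trace: a slow field that accumulates where the
# fast activation field u holds a peak.  f(u) is mocked as a fixed bump here.
N, tau_mem = 101, 200.0
x = np.arange(N)
f_u = np.exp(-(x - 50)**2 / (2 * 3.0**2))   # stand-in for f(u), peak at site 50

u_mem = np.zeros(N)
for t in range(2000):
    # The trace builds up only where there is suprathreshold activity.
    u_mem += (-u_mem + f_u) * (f_u > 0.5) / tau_mem

print("trace at site 50:", round(u_mem[50], 2))   # approaches 1
print("trace at site 10:", round(u_mem[10], 2))   # stays 0
```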

Nonetheless, from Rob's perspective, the current work with dynamic fields still needs to overcome a number of challenges. Dynamic field controllers are at present designed by hand, through elaborate explorations of the parameter space, to solve a very specific problem. Although these models nicely illustrate that decision-making can be grounded directly in sensory-motor experiences, their learning ability is limited and a particular model does not generalize well to other tasks, even though modular combinations of different models, each solving a particular task, seem possible (Johnson et al., 2008; Simmering et al., 2008; Simmering & Spencer, 2008).



### **4.3 Interim summary 2**

To summarize, it seems likely that embodied approaches are the future, both from a theoretical perspective (given the insights from the cognitive sciences about the functioning of the human mind) and from a practical one (given the numerous examples of the associated benefits in robotic applications), and this is therefore the approach Rob would prefer when designing his robot. However, mindful of the successful applications of symbolic approaches, he leaves the door open to interactions between the two: dynamic tools could be used to deal with unexpected circumstances and, once the resulting information has stabilized, a symbolic algorithm could operate on it and feed its results back to the body.

There are, however, significant challenges still to overcome for embodied approaches as well. Most of the existing work, while demonstrating that an embodied approach is indeed viable, currently focuses on proof-of-concept solutions to specific problems. These limitations go hand in hand with the development of the physical platforms needed to test new models.

**6. References**

Amari, S.-I. (1977). Dynamics of pattern formation in lateral-inhibition type neural fields, *Biological Cybernetics* 27: 77–87.

Ambrose, R. O., Aldridge, H., Askew, R. S., Burridge, R. R., Bluethmann, W., Diftler, M., Lovchik, C., Magruder, D. & Rehnmark, F. (2000). Robonaut: NASA's space humanoid, *IEEE Intelligent Systems* 15: 57–63. URL: *http://dx.doi.org/10.1109/5254.867913*

Anderson, M. L. (2003). Embodied cognition: a field guide, *Artificial Intelligence* 149: 91–130.

Anybots (2008). Dexter. URL: *http://www.anybots.com/*

Bao, Z., McCulloch, I., et al. (2009). *Organic Field-Effect Transistors VIII: 3–5 August 2009, San Diego, California, United States*, Proceedings of SPIE – the International Society for Optical Engineering, SPIE. URL: *http://books.google.com/books?id=IjGvSgAACAAJ*

Bicho, E., Mallet, P. & Schöner, G. (2000). Target representation on an autonomous vehicle with low-level sensors, *The International Journal of Robotics Research* 19(5): 424–447. URL: *http://dx.doi.org/10.1177/02783640022066950*

Bonini, L., Rozzi, S., Serventi, F. U., Simone, L., Ferrari, P. F. & Fogassi, L. (2010). Ventral premotor and inferior parietal cortices make distinct contributions to action organization and intention understanding, *Cerebral Cortex* 20: 1372–1385.

Breazeal, C. & Scassellati, B. (2002). Robots that imitate humans, *Trends in Cognitive Sciences* 6(11): 481–487.

Brown, E., Rodenberg, N., Amend, J., Mozeika, A., Steltz, E., Zakin, M. R., Lipson, H. & Jaeger, H. M. (2010). Universal robotic gripper based on the jamming of granular material, *Proceedings of the National Academy of Sciences USA* 107(44): 18809–18814.

Bullock, I. & Dollar, A. (2011). Classifying human manipulation behavior, *Proceedings of the IEEE International Conference on Rehabilitation Robotics*.

Cangelosi, A. & Riga, T. (2006). An embodied model for sensorimotor grounding and grounding transfer: experiments with epigenetic robots, *Cognitive Science* 30: 673–689.

Cattaneo, L., Fabbri-Destro, M., Boria, S., Pieraccini, C., Monti, A., Cossu, G. & Rizzolatti, G. (2007). Impairment of actions chains in autism and its possible role in intention understanding, *Proceedings of the National Academy of Sciences USA* 104(45): 17825–17830.

Chersi, F., Thill, S., Ziemke, T. & Borghi, A. M. (2010). Sentence processing: linking language to motor chains, *Frontiers in Neurorobotics* 4(4): DOI:10.3389/fnbot.2010.00004.

Chrisley, R. & Ziemke, T. (2003). *Encyclopedia of Cognitive Science*, Macmillan Publishers, chapter Embodiment, pp. 1102–1108.
