**1. Introduction**

Learning in humans involves approximately 100 billion neurons (brain cells). Individually, a neuron can do very little; working together, neurons are extremely useful and powerful.

© 2016 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Each neuron is formed by a body, or soma, an axon that sends information and thousands of dendrites that receive data. The more dendritic connections there are, the more learning can take place. It can be seen that, even at this basic functional level, learning is without a doubt a complex activity. It involves not only the brain cells and their connections but also factors such as attention, memory, motivation and stress. There are even different views of how learning can occur, such as empiricism, innatism and constructivism [1].

In humans, learning is directly linked not only to the number of neural connections but also to external factors related to the subject's state. This motivates the search for a learning entity free of unnecessary passions and of limitations that could hinder or restrict its learning ability. Machine learning is the response to this paradigm: several algorithms have arisen whose main objective is to make computers learn, as can be seen in [2], even though each may have particular objectives according to its task. Machine learning algorithms have already demonstrated their competence at learning in different engineering and science problems [3–5]. However, there is still no versatile, generalised algorithm, i.e., a "do-it-all" algorithm that can be used in many situations regardless of the nature of the task itself.

In machine learning, there is no single algorithm that can solve all problems [6]. To address this, a machine learning algorithm is created from three principles: Learning from Demonstration, Reinforcement Learning and Artificial Immune Systems. The aim is to obtain an algorithm that combines the advantages of these techniques. The resulting algorithm keeps all the advantages of the techniques used, plus some of the Artificial Immune System characteristics from **Table 2**. **Table 1** shows the advantages of CODA.


**Table 1.** Advantages that characterise CODA algorithm.
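To give an intuition for how demonstration and reinforcement learning can reinforce each other, the sketch below seeds a Q-table by replaying an expert trajectory and then refines it with standard epsilon-greedy Q-learning. This is a generic illustration only, not the CODA algorithm itself (CODA is presented later in the chapter); the environment, parameter values and names are invented for this sketch, and the Artificial Immune System component is not modelled here.

```python
import random

# Toy environment: a linear corridor of 5 states; the goal is state 4.
N_STATES = 5
ACTIONS = [-1, +1]    # move left / move right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def step(state, action):
    """One environment transition; reward 1.0 only on reaching the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def q_update(Q, s, a, r, s2):
    """Standard one-step Q-learning update."""
    best_next = max(Q[(s2, b)] for b in ACTIONS)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

# 1) Learning from demonstration: record one expert trajectory (always move
#    right), then replay it several times so the goal reward propagates
#    backwards through the Q-table before any autonomous exploration.
demo, s, done = [], 0, False
while not done:
    s2, r, done = step(s, +1)
    demo.append((s, +1, r, s2))
    s = s2
for _ in range(10):
    for (s, a, r, s2) in demo:
        q_update(Q, s, a, r, s2)

# 2) Reinforcement learning: epsilon-greedy episodes refine the
#    demonstration-seeded table instead of starting from scratch.
random.seed(0)
for _ in range(50):
    s = 0
    for _ in range(20):
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda b: Q[(s, b)])
        s2, r, done = step(s, a)
        q_update(Q, s, a, r, s2)
        s = s2
        if done:
            break

greedy = [max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(N_STATES - 1)]
print(greedy)  # greedy action per non-goal state; +1 means "toward the goal"
```

The demonstration replay gives the agent a useful value estimate before it acts on its own, so the reinforcement learning phase starts from a policy that already reaches the goal rather than from random behaviour.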

This document is organised as follows. Section 1 discusses the theory of how learning from demonstration, reinforcement learning and Artificial Immune Systems have been used to develop the CODA algorithm, in order to help the reader understand how useful these methods are for the algorithm presented in Section 3.

Section 5 explains the CODA algorithm by presenting its pseudocode and simulations. Section 6 describes the application where CODA will be used. Sections 7 and 8 provide further discussion and the conclusions of this chapter, respectively.
