There are several hardware implementations of cellular automata reported in the scientific literature (for a recent review, see [12]). Most of them are particular implementations for specific applications. The so-called "cellular automata machines," including CAM (Cellular Automata Machine, a project started in the 1980s at MIT [13]) and CEPRA (Cellular Processing Architectures, first prototyped in the 1990s at Darmstadt University, Germany [14]), never reached industry. Both projects combine serial processing and pipeline techniques to emulate the parallelism of the cellular automata architecture.

Hence, the first paradox of cellular automata: a model of massive parallelism, reduced to sequential computation.

3.2. The problem of synthesis

The synthesis of cellular automata implies establishing the structure and functionality (including the states per cell, the dimension, the topology, and the local rule) that may solve a certain problem or perform a specific computation. This problem of synthesis is the root problem of massively parallel computing: the decomposition of a computing problem into elementary tasks to be performed repeatedly by a large number of cells for a certain number of cycles. For cellular automata, the problem of synthesis was approached, for instance, from the perspective of evolutionary computing (similar to genetic algorithms) [15], but it still remains the biggest challenge of cellular automata applications.

In terms of computation theory, Wolfram considers the model of cellular automata to be universal, which means that it can solve any computable problem (claiming equivalence with the Turing machine [16]). There is no general acceptance of the computing universality of the model, but particular rules were proved to be computationally universal [16–18], meaning that they can perform any function (which, again, does not mean that they can solve any computing problem). In spite of such theoretical results, a huge challenge still remains: there is no algorithm or method for the synthesis of particular cellular automata.

This is the second paradox of cellular automata: its computing capability is not reflected in applications, because there is no synthesis method. As massively parallel computing systems, cellular automata seem ideal for hardware implementation. Theoretically, they can compute any function; however, the difficulty of synthesis was a major drawback in the development of applications.

3.3. Complexity and self-organization

In the introductory section, we mentioned two features of the cellular automata model: self-organization and complexity. Depending on the context, different authors looked at cellular automata from both perspectives: as complex systems that, starting from a random initial state, manifest the self-organization property, or as simple, regular systems that exhibit a very complex, random behavior [1, 4, 7, 8].

In [19], a possible reconciliation of the two perspectives includes them both in the concept of "apparent complexity," understood as a "complex phenomenal appearance backed by a structurally simple generative rule" (in order to be efficiently used, the term "apparent complexity"

From Natural to Artificial Intelligence - Algorithms and Applications

3.4. Variations of the model

The last important issue is related to the fact that the actual field of cellular automata research, particularly in application development, has adopted many variations of the ideal model (for which the theory was developed). We will mention the most important ones, those that are significant for the topic of this chapter. For a detailed overview, see [3].

Hybrid or inhomogeneous cellular automata: either the cellular space is inhomogeneous (different neighborhood structures and cell properties, topological variations), or the local rules vary across the cellular space. Local rules may also be modified after a number of time steps, in order to obtain a specific processing of the global configuration. Block hybrid cellular automata are a particular case of inhomogeneous cellular automata: the cellular structure is divided into homogeneous subdomains or blocks.
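As an illustration, the following minimal Python sketch (all names, sizes, and rule choices are ours, not taken from the chapter) implements a one-dimensional hybrid automaton in which each cell applies its own elementary (radius-1) Wolfram rule; an alternating rule assignment turns it into a simple block hybrid automaton:

```python
# Sketch of a hybrid (inhomogeneous) 1-D binary cellular automaton:
# each cell applies its own elementary Wolfram rule number.
# Periodic boundaries; the rule assignment below is illustrative only.

def step_hybrid(state, rules):
    """Advance one time step; rules[i] is the Wolfram rule number of cell i."""
    n = len(state)
    nxt = []
    for i in range(n):
        # neighborhood (left, self, right) encoded as a 3-bit index
        idx = (state[(i - 1) % n] << 2) | (state[i] << 1) | state[(i + 1) % n]
        nxt.append((rules[i] >> idx) & 1)  # output bit of cell i's own rule
    return nxt

state = [0] * 8
state[4] = 1                  # single seed cell
rules = [90, 150] * 4         # alternating rules -> a block hybrid automaton
for _ in range(5):
    state = step_hybrid(state, rules)
```

With `rules` set to a single repeated value, the function reduces to an ordinary homogeneous elementary automaton, which makes the hybrid model easy to test against the classical one.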

Automata with structured state space: the cell's state is considered to be a combination of several significant parameters or state values. As in finite-state machine design, such structuring leads to a global simplification, mainly regarding the dimension of the local rule space.

Multilevel cellular automata: a more complex model, built as a hierarchical structure of interconnected cellular automata (the model is not simply a multidimensional structure).

Self-programmable cellular automata: the local rules change in time, depending on the evolution, and may be implemented as multilevel cellular automata.

Asynchronous automata: the cells' states are updated in a certain order, taking into account the new values obtained for the neighboring cells already updated.
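The difference between synchronous and asynchronous updating can be sketched in a few lines of Python (the fixed left-to-right sweep and the rule are our illustrative choices; other update orders are possible):

```python
# Asynchronous vs. synchronous update of a 1-D elementary automaton.
# In the asynchronous sweep, cells are updated in place, left to right,
# so each cell already sees the new values of previously updated neighbors.

def rule_bit(rule, l, c, r):
    """Output bit of an elementary Wolfram rule for neighborhood (l, c, r)."""
    return (rule >> ((l << 2) | (c << 1) | r)) & 1

def step_async(state, rule=30):
    for i in range(len(state)):          # fixed left-to-right sweep
        l = state[(i - 1) % len(state)]  # already holds the NEW value
        r = state[(i + 1) % len(state)]  # still holds the OLD value
        state[i] = rule_bit(rule, l, state[i], r)
    return state

def step_sync(state, rule=30):
    n = len(state)
    return [rule_bit(rule, state[(i - 1) % n], state[i], state[(i + 1) % n])
            for i in range(n)]

seed = [0, 0, 0, 1, 0, 0, 0]
a, b = step_async(list(seed)), step_sync(list(seed))   # a != b in general
```

Even after a single step from the same seed, the two disciplines generally produce different configurations, which is why theoretical results for the synchronous model do not carry over automatically.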

Cellular automata with a supplementary memory layer: the computation of the following states takes into account the "history" of the system, that is, a number of previous states of each cell. These previous states are stored in the memory layer. This model is practically a network of elementary processors.

Nondeterministic and probabilistic cellular automata: the next state is established in a nondeterministic manner or according to a certain probability distribution. Due to its versatility, this model can be successfully used in complicated modeling applications.
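A common probabilistic variant applies a deterministic rule and then perturbs the result with a small flip probability; the following Python sketch uses rule 30 and the noise model purely as illustrative choices:

```python
import random

# Sketch of a probabilistic 1-D automaton: the deterministic next state
# (here rule 30, chosen only for illustration) is kept with probability
# 1 - p and flipped with probability p (a simple "noisy rule").

def step_probabilistic(state, rule=30, p=0.05, rng=random):
    n = len(state)
    nxt = []
    for i in range(n):
        idx = (state[(i - 1) % n] << 2) | (state[i] << 1) | state[(i + 1) % n]
        bit = (rule >> idx) & 1
        if rng.random() < p:      # random perturbation of the deterministic rule
            bit ^= 1
        nxt.append(bit)
    return nxt

rng = random.Random(42)           # fixed seed for reproducibility
s = [rng.randint(0, 1) for _ in range(16)]
for _ in range(10):
    s = step_probabilistic(s, p=0.05, rng=rng)
```

Setting `p = 0` recovers the deterministic automaton, so the probabilistic model strictly generalizes the classical one.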

In composite systems, the basic cellular automata model is considered as a space that interacts with autonomous mobile structures or agents that evolve in the cellular space. This model simplifies the modeling effort, for example, in the case of particle diffusion.

The list above is not exhaustive but gives an image of the versatility of cellular automata developments, starting from the very restrictive, regular, infinite ideal model. At the other extreme, there are random Boolean networks, often called n-k networks, which allow any inhomogeneous set of rules and also random connectivity.
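For contrast with the regular model, an n-k network can be sketched as follows (parameters, seeds, and helper names are our illustrative choices): each of the N nodes is wired to K randomly chosen inputs and evaluates its own random Boolean function, stored as a truth table.

```python
import random

# Sketch of an N-K random Boolean network: random wiring and a random
# Boolean function (truth table with 2**K entries) per node.

def make_nk_network(n, k, rng):
    inputs = [[rng.randrange(n) for _ in range(k)] for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step_nk(state, inputs, tables):
    nxt = []
    for i in range(len(state)):
        idx = 0
        for j in inputs[i]:               # pack the K input bits into an index
            idx = (idx << 1) | state[j]
        nxt.append(tables[i][idx])
    return nxt

rng = random.Random(7)
inputs, tables = make_nk_network(n=12, k=2, rng=rng)
s = [rng.randint(0, 1) for _ in range(12)]
for _ in range(20):
    s = step_nk(s, inputs, tables)
```

Once generated, the wiring and the truth tables are fixed, so the network itself evolves deterministically; only its construction is random.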

There are two ways to consider the output of such a generator: either as a one-bit digital signal (given by the evolution of the state of one particular cell) or as an n-bit output (a subset of the global configuration). The statistical properties depend both on the rule and on the initial state, and on the choice of the n-bit vector (because of the intrinsic correlation of neighboring cells).

Cellular Automata and Randomization: A Structural Overview
http://dx.doi.org/10.5772/intechopen.79812

Note that all cellular automata will eventually enter an attraction cycle; the length of the trajectory is related not only to the rule and the initial state but also to the dimension of the automaton. This is why the "infinity" assumption is often mentioned in theoretical demonstrations regarding the statistical properties of cellular automata randomizers. For better properties, larger automata should be used.

Wolfram analyzed the properties of several rules and proposed a generator based on the so-called "rule 30," with a one-bit signal generated by the central cell [10]. Figure 5 presents the results of the simulation of a 64-cell linear cellular automaton with rule 30 (Verilog HDL simulation), for 25 time steps.

Further research was oriented toward discovering better rules and initial configurations (seeds) for improving the statistical properties of the sequence. A simple solution in this direction implies the use of two different rules, thus obtaining heterogeneous cellular automata. Hortensius et al. proved in [20] that the results obtained with heterogeneous rules (two different local rules) are better than those obtained in homogeneous automata using either of those rules. Commonly used rules are listed in Appendix 1.

Figure 5. The output of a 64-cell linear cellular automaton with rule 30 (Verilog HDL simulation).
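The one-bit scheme described above can be sketched in software (the chapter's own experiments used Verilog HDL; this Python version, with its sizes and single-seed initial state, is only illustrative). A small helper also illustrates the attraction-cycle remark by counting the steps until a small automaton revisits a configuration:

```python
# Sketch of a rule 30 randomizer: a circular register of cells is
# iterated and the central cell's state is read out as a one-bit stream.

RULE30 = 30

def step(state):
    n = len(state)
    return tuple(((RULE30 >> ((state[(i - 1) % n] << 2)
                              | (state[i] << 1)
                              | state[(i + 1) % n])) & 1) for i in range(n))

def one_bit_stream(n_cells, n_steps):
    """One-bit output: the central cell's state over n_steps iterations."""
    state = tuple(1 if i == n_cells // 2 else 0 for i in range(n_cells))
    bits = []
    for _ in range(n_steps):
        state = step(state)
        bits.append(state[n_cells // 2])
    return bits

def cycle_entry_steps(n_cells, max_steps=10_000):
    """Steps until a (small) automaton revisits a configuration."""
    state = tuple(1 if i == n_cells // 2 else 0 for i in range(n_cells))
    seen = {state}
    for t in range(1, max_steps):
        state = step(state)
        if state in seen:
            return t
        seen.add(state)
    return None

bits = one_bit_stream(64, 25)     # the setting of the Figure 5 experiment
```

For a 4-cell automaton there are only 16 configurations, so a repeat occurs within 16 steps; the trajectory length grows with the number of cells, which is why larger automata give better randomizers.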

A global theory of all the models derived from cellular automata does not exist at present and probably never will. Because most of the applications in this field are derived empirically and also tested empirically, this is not truly a weak point, as it can be compensated by appropriate experiments.
