**3.1 Hawkins' new theory of neocortical computation**

In 2021, Jeff Hawkins published a fascinating book entitled *"A Thousand Brains: A New Theory of Intelligence"* [9]. In this book, accompanied by additional publications
from others at Numenta [50–53], Hawkins proposes a new theory of neocortical computation informed by extensive neuroscience research. Hawkins identifies Cortical Columns (CCs) as the key compute units within the human neocortex, which contains roughly 150,000 such CCs (see **Figure 2**). The neocortex gains its intelligence through the CCs' ability to model sensory information in structured Reference Frames (RFs) and to continuously update these models as the sensors interact with the environment.

Each CC is computationally powerful and capable of modeling complete objects. As illustrated in **Figure 2**, complete models of an object exist in multiple CCs across different sensory modalities (e.g., vision, hearing, touch) and across different hierarchy layers at different scales. Multiple CCs, across sensory modalities and layer hierarchies, can communicate and reach consensus via a voting process. This contrasts with the traditional strictly hierarchical view, in which complexity and sophistication increase with ascending hierarchical layers. Hawkins suggests that the greater intelligence demonstrated by higher-functioning mammals is mainly due to an increase in the total number of CCs and not necessarily due to increased complexity or sophistication of the CCs themselves. Furthermore, in contrast to current DNNs that separate training and inference, CCs that store and process information in RFs can support online, concurrent, and continuous learning and can dynamically adapt to changes in sensory input.
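The voting process can be illustrated with a toy sketch. This is not Numenta's algorithm: the object names, evidence scores, and the log-probability pooling rule below are all illustrative assumptions, chosen only to show how several modality-specific columns, each holding its own complete model, can reach a consensus.

```python
import math

# Hypothetical candidate objects known to every column.
OBJECTS = ["coffee cup", "pen", "phone"]

def column_belief(evidence):
    """One column's normalized belief over OBJECTS.
    `evidence` maps object names to (made-up) evidence scores."""
    scores = [evidence.get(o, 1e-6) for o in OBJECTS]
    total = sum(scores)
    return [s / total for s in scores]

def vote(beliefs):
    """Pool the columns' beliefs by summing log-probabilities
    and return the consensus object."""
    pooled = [sum(math.log(b[i]) for b in beliefs)
              for i in range(len(OBJECTS))]
    return OBJECTS[pooled.index(max(pooled))]

# Two modality-specific columns (touch and vision) each model the same cup,
# as in the coffee-cup example of Figure 2.
touch = column_belief({"coffee cup": 0.7, "pen": 0.2, "phone": 0.1})
vision = column_belief({"coffee cup": 0.6, "phone": 0.3, "pen": 0.1})
print(vote([touch, vision]))  # prints "coffee cup"
```

Multiplicative pooling is just one plausible consensus rule; the point is only that each column casts a full-object vote rather than passing partial features up a hierarchy.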

Recent discussions in the DL community seem to align with the basic ideas of Hawkins' theory. Continually learning structured models of the world via interactions with the environment can help overcome the brittleness and catastrophic forgetting associated with DNNs [54]. With the ability to continually learn and adapt to new

#### **Figure 2.**

*Figure taken from [51, 53]. Left: Traditional hierarchical view where the complexity of recognized features increases up the hierarchy, with complete objects detected only at the top. Right: Hawkins' view where the neocortex contains about 150,000 cortical columns (CCs) and multiple CCs across different sensory modalities and different hierarchy layers can all learn complete models of objects and can communicate to reach a consensus on the output. For example, a coffee cup can be quickly recognized through touch and vision, wherein two sets of CCs with modality-specific models of the coffee cup at different scales all vote together to determine the object.*

inputs from the environment, the need to achieve near-perfect accuracy through an extensive offline training process can be alleviated. This can potentially reduce offline training cost and complexity.
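The contrast with the offline train-then-deploy cycle can be sketched in a few lines. The following is a minimal illustration, not Numenta's method: a column-like model keeps a per-object feature prototype and blends in each new observation as it arrives, so learning and inference proceed concurrently; the learning rate and feature vectors are made-up.

```python
def update_prototype(prototype, observation, lr=0.1):
    """Blend a new observation into the stored prototype
    (a simple online/continual update; `lr` is illustrative)."""
    return [p + lr * (o - p) for p, o in zip(prototype, observation)]

proto = [0.0, 0.0]  # initial (empty) model of an object's features
stream = [[1.0, 2.0], [1.2, 1.8], [0.9, 2.1]]  # incoming sensory observations

for obs in stream:
    proto = update_prototype(proto, obs)  # the model adapts with every input
```

No separate training phase exists here: each observation immediately refines the stored model, which is the property the text attributes to RF-based columns.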
