**4.2 Cortical columns with RFs (ongoing work)**

The current phase of our research aims to leverage the high synergy between cortical columns and TNN mini-columns. We plan to extend the TNN microarchitecture model to incorporate CCs consisting of RFs, making it generic enough to accomplish integration across diverse sensory modalities (see **Figure 7**).

In contrast to TNNs, which employ purely feed-forward processing, CCs store structured information about their inputs in Reference Frames (RFs) and process and update that stored information through feedback connections. This continuous predict-sense-update feedback loop effectively introduces an additional dimension of "memory" that retains past observations and patterns; such "sequential" behavior is missing in feed-forward TNNs. Hence, TNN mini-columns combined with a feedback mechanism implementing the predict-sense-update loop can be used to build cortical columns. As shown in **Figure 7**, multiple CCs targeting different sensory modalities interact with each other and seek consensus on the output via voting within and across sensory modalities.

Each CC, irrespective of its sensory modality, broadly implements two components: 1) a *Reference Frame* that maintains a "map" of the sensory information, and 2) an *Agent* that achieves goal-oriented behavior based on information from the Reference Frame and the input signals. The Agent comprises two TNN-style mini-columns performing unsupervised clustering and supervised classification. The Reference Frame involves three functionalities, which map to three types of mini-columns: the *Where* Column, the *What* Column, and the *Output* Column. In the context of visual object recognition, these three mini-columns together build models of objects by tracking the locations of features on the object. The *Output* Column is envisioned to be very similar to the TNN mini-column discussed previously and determines the object identity. The *Where* and *What* Columns take the outputs of the *Output* Column as feedback. The *Where* Column generates the location of the sensor on the object based on this feedback and the latest movement information from the Agent. The *What* Column predicts object features based on the result from the *Where* Column and updates its model based on the actual sensory input and the feedback from the *Output* Column.
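To make the predict-sense-update loop concrete, the following is a minimal illustrative sketch in Python. It is not an implementation of the proposed microarchitecture; the class, method, and attribute names (`CorticalColumn`, `step`, `feature_map`, etc.) are hypothetical placeholders, and the *Output* Column's identity computation is a stand-in for a TNN-style mini-column.

```python
# Minimal sketch of one CC's predict-sense-update loop (hypothetical names).

class CorticalColumn:
    """Reference Frame (Where / What / Output mini-columns) plus Agent state."""

    def __init__(self):
        self.location = (0, 0)    # Where column: current sensor location on the object
        self.feature_map = {}     # What column: learned model, location -> feature
        self.object_id = None     # Output column: current object hypothesis

    def step(self, movement, sensed_feature):
        # Where: derive the new sensor location from the latest movement
        # (in the full model, also from feedback sent by the Output column).
        self.location = (self.location[0] + movement[0],
                         self.location[1] + movement[1])

        # What: predict the feature expected at this location under the current model.
        predicted = self.feature_map.get(self.location)

        # Update: if the prediction disagrees with the actual sensory input, revise the model.
        if predicted != sensed_feature:
            self.feature_map[self.location] = sensed_feature

        # Output: derive an object identity from the accumulated feature map
        # (placeholder for the TNN-style Output mini-column's classification).
        self.object_id = frozenset(self.feature_map.items())
        return self.object_id


# Usage: drive one CC with a short sequence of (movement, sensed feature) pairs.
cc = CorticalColumn()
for move, feat in [((1, 0), "edge"), ((0, 1), "corner")]:
    cc.step(move, feat)
```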


#### **Figure 7.**

*Cortical columns computing system (C3S) architecture consisting of multiple CCs targeting multiple sensory modalities and interacting with each other to form a consensus on the output via voting. Each CC broadly consists of five TNN-style mini-columns: the Where, What, and Output mini-columns that together implement the Reference Frame (RF), and the unsupervised and supervised mini-columns comprising the Agent. For visual object recognition, the respective functionalities of the three RF mini-columns are: derive the location of the sensor on the object, map features to locations, and derive the object ID based on the feature map. In contrast to feed-forward TNNs, each CC learns through feedback from its output and possesses a form of "memory" in the learning process.*

In order to apply this model to other time-series applications, a key component that needs to be investigated is the pre-processing of diverse sensory signals to extract abstract "location-feature" pairs from the raw signals. Developing a detailed, parameterized design template for a generic CC containing the five mini-column types is largely ongoing research.
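As a complement to **Figure 7**, the following is a minimal sketch of how consensus across CCs could be formed via voting. It assumes simple majority voting over the object hypotheses emitted by each modality's CC; the function name `vote` and the example labels are hypothetical, and the actual voting scheme within and across modalities remains part of the ongoing design work.

```python
# Minimal sketch of cross-modal consensus via majority voting (hypothetical names).
from collections import Counter

def vote(column_outputs):
    """Return the majority object hypothesis and the fraction of CCs that agree."""
    tally = Counter(column_outputs)
    winner, count = tally.most_common(1)[0]
    return winner, count / len(column_outputs)

# Usage: suppose three CCs (e.g., vision, touch, audition) each output an object ID.
consensus, agreement = vote(["mug", "mug", "bottle"])
print(consensus, agreement)   # -> mug 0.666...
```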
