**Towards Model-Based Brain Imaging with Multi-Scale Modeling**

### Lars Schwabe and Youwei Zheng

*Adaptive and Regenerative Software Systems, Dept. of Computer Science and Electrical Engineering, Universität Rostock, Rostock, Germany*

### **1. Introduction**

Brain imaging has been a key technology in advancing our understanding of the neuronal basis of cognition. However, in order to fully unleash the power of brain imaging it needs to be combined with other sources of information, such as gene expression and behavioural performance, as well as with computational models.

In so-called "model-based brain imaging", computational models of how the brain processes information are employed in order to interpret the data. A prominent example is the use of models from reinforcement learning to interpret responses in the basal ganglia or frontal cortex. However, while such model-based brain imaging certainly adds a new level of explanation beyond the mere description of the evoked neuronal activations, it will always be limited by the spatial and temporal resolution of the employed imaging technologies.

Here, we argue that the application of multi-scale modeling, which bridges the gap between the various spatial and temporal scales, is a necessary next step in the analysis of brain imaging data. For example, it will then be possible to simulate the effects of pharmacologically altered membrane currents on brain-wide network dynamics and to compare the simulation results with recorded brain imaging data.

We emphasize that one important benefit of models is to operationalize as many assumptions as possible (Section 2). Then, we summarize the state of the art in Neuroinformatics tool support, which is necessary for multi-scale models in model-based brain imaging. We also argue that even without complete physical models, which bridge the various scales, a data-driven approach using partly phenomenological models can be pursued, but this calls for new Neuroinformatics tools (Section 3). We summarize some of our recent work in network modeling of the visual system, where we performed systematic model comparisons (Section 4), and we close by identifying a challenging test case for multi-scale modeling in model-based brain imaging, namely the investigation of the neuronal basis of the self (Section 5).

### **2. Two dimensions of theory-dependence of observations in neuroscience**

Brain imaging studies are already heavily dependent on various kinds of models. We argue that a key advantage of models in brain imaging (and in the interpretation of neuronal data


in general) is to make assumptions explicit in order to deal with the fact that all observations are theory-dependent. Interestingly, we can distinguish two dimensions of theory-dependence in brain imaging (**Figure 1a**): *First*, the dependence on the physical theories upon which measurement devices like magnetic resonance imaging (MRI) scanners are built; *second*, the dependence on computational theories to derive teleological explanations of brain activity. While the former is shared with other scientific disciplines, the latter is a more recent advancement. Let us consider these two dimensions of theory-dependence in greater detail.

Fig. 1. Theory-dependence of observations and Bayesian model comparison. **a)** In brain imaging one can identify two dimensions of theory-dependence of observations: an implicit dependence on the theories underlying the measurement devices (such as the working of an MRI scanner), and a more recent explicit theory-dependence, where computational theories are used in order to interpret neuronal activations in a teleological manner with formally defined computational theories. **b)** Graphical model for Bayesian model comparison. Models *M* are parameterized with parameters θ. Once prior distributions P(θ|*M*) and forward models P(**x**|θ) are defined, different models *M* (each of which is parameterized with a θ) can be compared with each other.

### **2.1 Theory-dependence of the measurement process**

Since the work of Fleck (Fleck, 1979), Kuhn (Kuhn, 1962), and also Popper (Popper, 1972) in the last century we know that all scientific observations are theory-laden, i. e. even the most basic "facts" depend on a certain theoretical background. These ideas were developed with physics as the main application domain, because in the last century physics experienced many radical changes of its fundamental theories. In brain imaging the theory-dependence of observations becomes most visible when considering the measurement process. For example, it is still not clear how signals from functional magnetic resonance imaging (fMRI) should best be interpreted in terms of neuronal and synaptic activations, and models are needed to ensure a proper interpretation of the measured brain activation in terms of neuronal activity (Almeida and Stetter, 2002). The situation is similar for research in


electroencephalography (EEG), where source localization methods aim at solving the inverse problem of estimating the sources of electrical activity inside the head based on the measured EEG activity outside the head. This kind of theory-dependence is usually *implicit*, but can be made explicit in terms of a model for the measurement process as part of the data analysis.
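Such a measurement model can be made concrete. The following sketch illustrates the EEG inverse problem with a minimum-norm (Tikhonov-regularized) source estimate for a deliberately tiny toy setup; the lead-field matrix and all numbers are hypothetical stand-ins, not a realistic head model:

```python
# Minimum-norm source estimation for a toy EEG forward model.
# Hypothetical setup: 2 electrodes, 3 candidate sources. The lead-field
# matrix L maps source amplitudes s to electrode readings x = L s.

L = [[1.0, 0.5, 0.2],   # sensitivity of electrode 1 to each source
     [0.3, 0.8, 0.6]]   # sensitivity of electrode 2 to each source

s_true = [0.0, 1.0, 0.0]          # only the middle source is active
x = [sum(L[i][j] * s_true[j] for j in range(3)) for i in range(2)]

lam = 0.01  # Tikhonov regularization strength

# Gram matrix G = L L^T + lam * I (2x2), inverted analytically.
G = [[sum(L[i][k] * L[j][k] for k in range(3)) + (lam if i == j else 0.0)
      for j in range(2)] for i in range(2)]
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
G_inv = [[ G[1][1] / det, -G[0][1] / det],
         [-G[1][0] / det,  G[0][0] / det]]

# Minimum-norm estimate: s_hat = L^T (L L^T + lam I)^{-1} x
w = [sum(G_inv[i][j] * x[j] for j in range(2)) for i in range(2)]
s_hat = [sum(L[i][j] * w[i] for i in range(2)) for j in range(3)]

print([round(v, 2) for v in s_hat])
```

The estimate is smeared across all three sources because the problem is underdetermined, but its largest amplitude falls on the truly active source; this is exactly the sense in which a source-localization result depends on the assumed measurement model.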

### **2.2 Theory-dependence of the interpretation of the data**

Another more recent kind of theory-dependence in neuroscience is not implicit, but explicit. Here, the neuronal activity in the nervous system is interpreted in terms of a (computational) function for the organism in a teleological manner. Teleological explanations in biology have a long history, but what makes their re-appearance in neuroscience attractive is that nowadays they are articulated in at least a semi-formal, and increasingly often in an explicitly formal, manner. A prime example is reward learning. Here, algorithms from reinforcement learning are used to provide a basis for the semantic interpretation of the neuronal activations (Schultz, 2002). In other words, the neuronal circuitry in the brain is considered as a "wetware" on which algorithms are running (with all the associated constraints brought about by the slow processing speed and sluggishness of nervous systems compared to today's computers). Here, the empirical observations are *intentionally* theory-dependent, which we think is noteworthy and a new tool in the methodological toolbox to investigate complex biological systems.
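The algorithmic core of this interpretation can be stated in a few lines. The sketch below shows TD(0) value learning, whose prediction-error term is the quantity commonly related to dopaminergic responses; the environment (a three-state chain with a terminal reward) and all constants are our own illustrative choices:

```python
# Temporal-difference (TD(0)) value learning: the prediction error delta
# is the quantity commonly mapped onto dopamine signals (Schultz, 2002).
# Toy environment (illustrative): a chain of 3 states, reward 1.0 in the last.

n_states, alpha, gamma = 3, 0.1, 1.0
V = [0.0] * n_states          # value estimates, initialized to zero

for episode in range(500):
    for s in range(n_states):
        reward = 1.0 if s == n_states - 1 else 0.0
        v_next = V[s + 1] if s < n_states - 1 else 0.0
        delta = reward + gamma * v_next - V[s]   # reward prediction error
        V[s] += alpha * delta                    # learning update

print([round(v, 2) for v in V])
```

As learning proceeds, the prediction error migrates backwards along the chain and vanishes once the values are learned, mirroring the classic shift of dopaminergic responses from reward delivery to reward-predicting cues.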

### **2.3 Models make the theory-dependencies explicit**

Both kinds of theory-dependence can be made explicit using models, which then allows for systematic comparisons between competing models on the basis of experimental observations. The approach of Bayesian model comparison is widely accepted as a principled method for comparing different models. **Figure 1b** shows the graphical model (graphical models are a marriage of graph theory and probability theory used in artificial intelligence and statistics) for Bayesian model comparison. Different models *M* can be parameterized with a parameter vector θ. Once prior distributions P(θ|*M*) are specified, observed data **x** can be used in order to compare different models *M1* and *M2* using the so-called posterior odds P(*M1*|**x**)/P(*M2*|**x**), or the posterior distributions over model parameters for a certain model can be inspected. Performing the necessary calculations, such as evaluating or estimating the integral ∫P(**x**|θ,*M*)P(θ|*M*)dθ, is technically demanding, and defining the prior distributions P(θ|*M*) can be non-trivial. Still, this way of model comparison can serve as a general framework for model-based brain imaging even with multi-scale models, i. e. the particular definition of P(**x**|θ) will need to incorporate both kinds of theory-dependence (see Section 3.3).
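For low-dimensional θ the evidence integral can simply be evaluated on a grid. The following sketch compares two hypothetical models of the same data, differing only in their prior over a single parameter; the data values and distributions are invented for illustration:

```python
# Bayesian model comparison sketch: two hypothetical models M1, M2 of data x
# (Gaussian likelihood with unknown mean theta). M1 places its prior near 0,
# M2 near 2. The evidence P(x|M) = integral P(x|theta) P(theta|M) dtheta is
# estimated on a grid; posterior odds follow under equal model priors.

import math

def gauss(v, mu, sigma):
    return math.exp(-0.5 * ((v - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

x = [1.8, 2.2, 1.9, 2.1]            # observed data (made up)

def evidence(prior_mu, prior_sigma=0.5, lik_sigma=0.5):
    total = 0.0
    for i in range(1001):           # grid over theta in [-5, 5], step 0.01
        theta = -5.0 + 0.01 * i
        lik = 1.0
        for xi in x:
            lik *= gauss(xi, theta, lik_sigma)
        total += lik * gauss(theta, prior_mu, prior_sigma) * 0.01
    return total

ev1 = evidence(prior_mu=0.0)   # evidence for M1
ev2 = evidence(prior_mu=2.0)   # evidence for M2
odds = ev1 / ev2               # posterior odds P(M1|x)/P(M2|x), equal priors
print(odds)
```

Since the data cluster around 2, the evidence strongly favours *M2*; this is the same computation that, scaled up, underlies the model comparisons discussed in the following sections.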

### **3. Neuroinformatics support**

Neuroinformatics is an emerging discipline, which provides methods and tool support for neuroscience. Prime examples are databases for neuronal data, and tools for modeling and simulation. Let us briefly review selected advances in this field in order to identify ways in which Neuroinformatics could contribute to model-based brain imaging with multi-scale models.


### **3.1 Neuroinformatics support for sharing and analysis of data**

The brain imaging community has always been very progressive in terms of sharing data and tools. While new analysis methods could certainly be developed within a single method-oriented laboratory, it is the community-wide evaluation of new methods, which provides their ultimate test. The free sharing of tools is certainly helpful.

In the field of fMRI studies, the SPM software (http://www.fil.ion.ucl.ac.uk/spm) is probably the most widely used open source software, and comparing models given observed data is a well-developed feature of SPM. It is the tool of choice for applying Dynamic Causal Models (DCMs) to brain imaging data, i. e. performing Bayesian model comparison with Neural Mass Models (NMMs), which are models accounting for averaged population activity in a whole cortical area with a greatly simplified local circuit architecture. Other software packages for specialized tasks such as multivariate pattern classification exist, but the advantage of a mature platform with a long tradition such as SPM is that one can almost consider it as a "software ecosystem" for data analysis, as other tools and toolboxes can easily be integrated into analysis workflows.

In the field of EEG studies, the situation is more diverse. This may be due to the fact that EEG as a brain-imaging modality has been (and still is) in a "renaissance" phase, i. e. it is recognized that new analysis methods can pull out much more information about brain states than plain averaging. Prominent tools for EEG analysis are EEGLAB (Delorme and Makeig, 2004) and the FieldTrip toolbox (Oostenveld et al., 2011), both of which support time-frequency analysis and source localization, namely dipole-based localization and the beamformer algorithm, respectively. EEGLAB is probably the best first choice for applying Independent Component Analysis (ICA) to EEG data (Onton and Makeig, 2006). Even though the application of ICA to EEG is controversial, recent advances suggest that properly adapted variants of ICA can yield physiologically plausible results (Hyvärinen et al., 2010). SPM can also be applied for EEG analysis, and it supports DCMs for EEG data. Another prominent tool for EEG analysis is Cartool (http://brainmapping.unige.ch/cartool), which supports a so-called topographic analysis of event-related potentials (Pascual-Marqui et al., 1995; Murray et al., 2008) as well as distributed source localization. In contrast to the other, Matlab-based tools, Cartool is an application for the Windows operating system with a graphical user interface and currently no possibility for external scripting, i. e. it cannot be part of an automated toolchain.

In terms of data sharing, it is notable that for more than 10 years the fMRI community has had the possibility to share raw data (Editorial, 2000). Initially, this was viewed rather skeptically, but it has facilitated the development of new analysis methods (Van Horn and Ishai, 2007). Unfortunately, it appears that EEG researchers do not typically take such an approach and are still far more protective of "their" data. As of now, this also applies to data sharing in neurophysiology. Only some neurophysiology researchers freely share their data via web pages. However, progress and standardization can be expected from efforts taken by, for example, the International Neuroinformatics Coordinating Facility or actions of major funding agencies and publishers.

Taken together, the sharing of tools and data from various imaging modalities (including neurophysiology) is an essential prerequisite for applying multi-scale modeling to model-based brain imaging. Very likely, there will never be a single tool to cover all requirements for data analysis. Instead, a "software ecosystem" with loose coupling between components


(such as in terms of specifications for data and file formats and ultimately implementations using service-oriented architectures) appears as a promising solution. The existing Matlab-based tools come closest to such an ecosystem.

### **3.2 Neuroinformatics support for multi-scale modeling**

In order to evaluate the forward model P(**x**|θ) for multi-scale models, one needs to account for the observed data in terms of the neuronal activity of synaptically coupled neurons. Do we currently have enough knowledge in order to define such models mathematically, not to mention their numerical simulation? For example, for fMRI it is still not clear if the signal reflects pre- or postsynaptic activity (Logothetis, 2008), or excitatory or inhibitory synaptic activity (Lauritzen and Gold, 2003). For EEG forward models, the anisotropy of the conductivities may matter (Güllmar et al., 2010), and linking local field potentials to the spiking/synaptic activity is also a current research topic (Rasch et al., 2009).

For example, we have recently combined NMMs and anisotropic head-models in order to simulate EEG activity (Zimmermann et al., 2011) as an attempt at a multi-scale forward model. Here, we argue that even without a complete physical description of how spiking activity causes, for example, EEG signals, one could still pursue model-based brain imaging with multi-scale models, namely by using phenomenological models for those parts of the full model where a physical description is currently not available. The procedure of Bayesian model comparison is blind to the physical plausibility or "truth" of a model P(**x**|θ), but only compares probabilistic models with each other. Thus, by using data from multiple imaging modalities and employing phenomenological models for the boundaries between different scales of modeling, one could iteratively improve these models. Of course, one needs to accept that parts of such multi-scale models are incomplete. From a pragmatic perspective, this shifts the focus from finding a "true" model or physical principle to bridge the gap between different scales to the very practical questions: How to simulate forward models, and how to share models?
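A minimal sketch of such a phenomenological bridge between scales is a convolution model: simulated population activity is mapped to a predicted imaging signal by an assumed hemodynamic-like impulse response. The kernel shape and all constants below are illustrative assumptions, not a validated hemodynamic model:

```python
# Phenomenological forward model sketch: simulated population firing rates
# are convolved with an assumed hemodynamic-like impulse response to yield
# a predicted fMRI-like signal. All constants are illustrative.

import math

dt = 0.1                                  # seconds per time step
T = 300                                   # number of steps (30 s total)
rate = [1.0 if 50 <= t < 70 else 0.0 for t in range(T)]   # 2 s activity burst

# Gamma-shaped impulse response h(t) ~ t^a * exp(-t/b), peaking around 6 s.
a, b = 6.0, 1.0
kernel = [(t * dt) ** a * math.exp(-(t * dt) / b) for t in range(T)]
peak = max(kernel)
kernel = [k / peak for k in kernel]       # normalize to unit peak

# Discrete convolution: predicted signal y(t) = sum_k rate(k) * h(t - k) * dt
y = [sum(rate[k] * kernel[t - k] for k in range(t + 1)) * dt for t in range(T)]

print(round(max(y), 2), y.index(max(y)) * dt)
```

The predicted signal peaks several seconds after the neuronal burst, illustrating how even a crude phenomenological kernel captures the sluggishness that any forward model P(**x**|θ) for fMRI must account for; within Bayesian model comparison, the kernel parameters could themselves be treated as part of θ.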

Sharing models in terms of scripts for established simulators like NEURON, GENESIS or NEST has been a major step towards facilitating the exchange of models. Recently, these efforts have been extended by the development of simulator-independent model descriptions, like generating models via Python scripts (Davison et al., 2008), renewed interest in NeuroML (Goddard et al., 2001), or the recent NineML initiative (http://ninml.org). Compared to Systems Biology, however, corresponding efforts in Computational Neuroscience are less developed in terms of model exchange (De Schutter, 2008), because a standard as widely accepted as the Systems Biology Markup Language (SBML) is still missing, but one is actively being developed within the Neuroinformatics community. As of now, web portals like ModelDB (http://senselab.med.yale.edu/modeldb) or, for vision science, Visiome (http://visiome.neuroinf.jp) are an excellent source of models in terms of scripts for simulators. Hopefully, in the near future, such portals will also host model descriptions (as compared to simulator scripts), similar to the BioModels database for Systems Biology models (http://www.ebi.ac.uk/biomodels). However, we argue that it is essentially the multiplicity of different (spatial and temporal) scales of neuronal models that makes similar efforts a challenge for Neuroinformatics.


dominant controversy relates to the question: To what extent is neuronal selectivity in sensory systems determined by the afferent feedforward connections vs. intra-cortical processing (or even feedback from higher visual areas)? Previous theoretical studies already investigated a so-called "balanced regime" of neuronal networks, where excitatory and inhibitory inputs are balanced and largely cancel each other out. We have investigated such an operating regime in joint experimental-modeling studies.

Fig. 2. Orientation tuning models and Bayesian posterior over models. **a)** Illustrations of circuits, which may compute orientation-tuned responses in area V1. **b)** Bayesian posterior for models enumerated in terms of the strength of local recurrent inhibition (x-axis) and excitation (y-axis), given the data reported in (Mariño et al., 2005b). (Figure 2b is taken from Stimberg et al. (2009), by permission of Oxford University Press)

A study by Stimberg et al. (2009) employed a Bayesian approach to investigate data reported first by Mariño et al. (2005b). Most importantly, a class of models was defined, which includes all the different variants shown in **Figure 2a**, by considering the strength of recurrent excitation and inhibition within a network model of an approx. 1 mm² cortical patch as parameters in a forward model P(**x**|θ). Then, using this forward model, the posterior probabilities over the model parameters were computed, given the experimental data (**Figure 2b**). It turned out that a recurrent regime (**Figure 2a**, right icon) is the most probable regime. The details of this calculation and the simulations are given in Stimberg et al. (2009). A few aspects of this study are important to emphasize here: *First*, the data to be explained by the network model was already postprocessed data from multiple experimentally recorded neurons, i. e. it was not aimed at accounting for every recorded spike. *Second*, compared to the NMMs used in DCMs the network model referred to a smaller scale than accessible by current brain imaging studies (< 1 mm²). *Third*, this model used parameter values for the model neurons (such as membrane conductances), which are at best good guesses informed by other studies, but way beyond a comprehensive

### **3.3 Is there any Neuroinformatics support for multi-facet modeling?**

In addition to the different scales, neuronal systems can also be described at different levels of abstraction, such as in terms of the neuronal dynamics, but also in terms of the computations carried out by the neuronal "wetware". Marr distinguished between the computational problem, the algorithmic solution, and a "wetware" used to execute the algorithm (Marr, 1982). This distinction is very close to a typical computer science approach, where the algorithms are clearly distinct from the hardware on which they are running. Such an apparently clear separation has been questioned recently, because the algorithms and the neuronal "wetware" may not be as independent as previously thought (Noë, 2005). Computational Neuroscience researchers have produced hypotheses that aim at bridging the gap between mere mechanistic descriptions and computational properties by essentially proposing transformations from algorithmic descriptions to neuronal circuitry. Is there any Neuroinformatics support for such multi-facet modeling? Can multi-facet modeling be considered within the framework of Bayesian model comparison?

The answer to the first question is simply "no". As of now, there is no tool support to explicitly formulate such multi-facet models, but we have recently started to address this problem (Ansorg and Schwabe, 2010) by taking inspiration from software engineering. How can such multi-facet models fit into the framework of Bayesian model comparison? One simply needs to define a forward model P(**x** | θ) for the observed data, but for multi-facet models this calls for defining a computational model *and* a transformation into the "wetware". If proper description languages for such multi-facet models and transformations were available, one could compare different hypotheses about the computations in neuronal circuits in a data-driven way. Readily available candidates for such multi-facet modeling include hypotheses about the role of dopamine in reward learning (Schultz, 2002), or postulates about population codes in sensory systems (Ma et al., 2006).
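To make the idea concrete, the first candidate can be sketched in a few lines of purely illustrative Python: the algorithmic facet is temporal-difference (TD) learning of reward predictions (cf. Schultz, 2002), and a hypothetical transformation into "wetware" maps the TD error onto a phasic dopamine rate. The chain of states, the learning parameters, and the linear rate mapping are all assumptions made for illustration, not a model from the literature.

```python
# Algorithmic facet: TD learning over a fixed chain of states, with the
# reward delivered at the last state. Returns the per-state TD errors of
# the final episode -- the quantity one would compare with recordings.
def td_learning(rewards, alpha=0.1, gamma=1.0, episodes=200):
    n = len(rewards)
    v = [0.0] * (n + 1)               # v[n] is the terminal value (0)
    for _ in range(episodes):
        deltas = []
        for s in range(n):
            delta = rewards[s] + gamma * v[s + 1] - v[s]
            v[s] += alpha * delta     # standard TD(0) update
            deltas.append(delta)
    return deltas

deltas = td_learning(rewards=[0.0, 0.0, 1.0])

# Hypothetical transformation to the mechanistic facet: a phasic
# dopamine rate (Hz) as baseline plus gain times the TD error.
dopamine = [5.0 + 10.0 * d for d in deltas]
```

After learning, the TD errors (and hence the modeled phasic dopamine responses) at the fully predicted reward vanish; comparing different candidate transformations against recorded dopamine activity is exactly the kind of data-driven multi-facet comparison argued for above.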

### **4. Selected advances in cortical microcircuit models**

The NMMs employed in DCMs are a major step beyond the mere statistical models embodied in, for example, effective connectivity analyses. However, these NMMs often assume a simplified cortical architecture. In our modeling of visual cortical networks we also employed mean-field firing rate models as in NMMs (as well as more detailed models with so-called "spiking neurons"). We could show that single neuron responses in primary visual cortex (V1) are best explained when the local cortical microcircuits are assumed to operate in a balance between strong recurrent excitation and inhibition (Mariño et al., 2005a; Stimberg et al., 2009), and that inter-areal feedback into V1 may play a crucial role in "lateral inhibition" (Schwabe et al., 2006a; Ichida et al., 2007; Schwabe et al., 2010). To the best of our knowledge, such and other recent advances in cortical microcircuit models have not yet been implemented into NMMs used in brain imaging. Here we review some of these advances and outline how they could be used in model-based brain imaging with multi-scale models.

### **4.1 The operating regime of local cortical computations**

More than 50 years after the discovery of orientation tuning in V1, there are still controversial discussions about the underlying neuronal circuits (**Figure 2a**). Probably the dominant controversy relates to the question: To what extent is neuronal selectivity in sensory systems determined by the afferent feedforward connections vs. intra-cortical processing (or even feedback from higher visual areas)? Previous theoretical studies already investigated a so-called "balanced regime" of neuronal networks, where excitatory and inhibitory inputs are balanced and largely cancel each other out. We have investigated such an operating regime in joint experimental-modeling studies.

Fig. 2. Orientation tuning models and Bayesian posterior over models. **a)** Illustrations of circuits, which may compute orientation-tuned responses in area V1. **b)** Bayesian posterior for models enumerated in terms of the strength of local recurrent inhibition (x-axis) and excitation (y-axis), given the data reported in (Mariño et al., 2005b). (Figure 2b is taken from Stimberg et al. (2009), by permission of Oxford University Press)

A study by Stimberg et al. (2009) employed a Bayesian approach to investigate data reported first by Mariño et al. (2005b). Most importantly, a class of models was defined, which includes all the different variants shown in **Figure 2a**, by considering the strengths of recurrent excitation and inhibition within a network model of an approx. 1 mm² cortical patch as parameters θ in a forward model P(**x** | θ). Then, using this forward model, the posterior probabilities over the model parameters were computed, given the experimental data (**Figure 2b**). It turned out that a recurrent regime (**Figure 2a**, right icon) is the most probable regime. The details of this calculation and the simulations are given in Stimberg et al. (2009). A few aspects of this study are important to emphasize here: *First*, the data to be explained by the network model were already postprocessed data from multiple experimentally recorded neurons, i.e. the aim was not to account for every recorded spike. *Second*, compared to the NMMs used in DCMs, the network model referred to a smaller scale than is accessible by current brain imaging studies (<1 mm²). *Third*, this model used parameter values for the model neurons (such as membrane conductances) that are at best good guesses informed by other studies, as a comprehensive characterization in a single preparation is well beyond reach. *Finally*, the class of models was sufficiently restricted so that one could investigate the full posterior distribution.
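The grid-based posterior computation described above can be sketched as follows. This is a minimal stand-in, not the Stimberg et al. (2009) pipeline: `predict` replaces the actual network simulation with a toy forward model, and the Gaussian likelihood width `sigma`, the data value, and the grid are arbitrary illustrative choices.

```python
import math

def predict(exc, inh):
    # Placeholder forward model: maps the coupling strengths (the model
    # parameters theta) to a predicted summary statistic of the responses.
    return exc - 0.8 * inh

def grid_posterior(data, sigma, grid):
    # Gaussian log-likelihood of the summary datum under each model on
    # the grid; a flat prior over the grid is assumed.
    logp = {(e, i): -0.5 * ((data - predict(e, i)) / sigma) ** 2
            for e, i in grid}
    m = max(logp.values())                       # for numerical stability
    w = {k: math.exp(v - m) for k, v in logp.items()}
    z = sum(w.values())
    return {k: v / z for k, v in w.items()}      # normalized posterior

# Enumerate models by excitation/inhibition strength (Figure 2b style grid).
grid = [(e / 10, i / 10) for e in range(11) for i in range(11)]
post = grid_posterior(data=0.5, sigma=0.1, grid=grid)
best = max(post, key=post.get)                   # most probable regime
```

With the real network simulation substituted for `predict`, the resulting map over the grid is exactly a posterior of the kind shown in **Figure 2b**.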

Hence, this study demonstrates that large-scale network simulations can be used successfully in a Bayesian model comparison (of models within a properly defined class). Note that while the employed model certainly lacks many potentially biophysically relevant details, it is far more realistic than the NMMs used in DCMs. We argue that the most relevant way in which it goes beyond the NMMs is *not* the level of biophysical realism but the fact that it operates the model network in a "balanced" regime. In such a regime, recently termed an "inhibition stabilized network" (Ozeki et al., 2009) because recurrent excitation would make the network unstable in the absence of recurrent inhibition, even small external inputs can be amplified via local recurrent connections. The dynamic properties of such strong local recurrent connections have not yet been considered in DCMs, which address inter-areal networks and use greatly simplified local circuit models.
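Such an inhibition-stabilized ("balanced") regime can be illustrated with a minimal two-population mean-field rate model. All coupling strengths below are illustrative choices, not fitted values; the defining property is that recurrent excitation alone (w_ee > 1) would be unstable, while recurrent inhibition stabilizes the network.

```python
def simulate(w_ee=2.5, w_ei=2.0, w_ie=2.5, w_ii=1.5,
             h_e=1.0, h_i=1.0, dt=0.01, steps=5000):
    """Euler-integrate tau*dE/dt = -E + [w_ee*E - w_ei*I + h_e]+ and the
    analogous equation for I (tau = 1, [.]+ is rectification)."""
    relu = lambda x: max(x, 0.0)
    E = I = 0.0
    for _ in range(steps):
        dE = -E + relu(w_ee * E - w_ei * I + h_e)
        dI = -I + relu(w_ie * E - w_ii * I + h_i)
        E += dt * dE
        I += dt * dI
    return E, I

E, I = simulate()
print(round(E, 3), round(I, 3))  # converges to the stable fixed point (0.4, 0.8)
```

Although w_ee = 2.5 means the excitatory population alone is unstable, the network settles into a stable steady state; this is the dynamical property argued above to be missing from the local circuit models used in current DCMs.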

### **4.2 Contextual effects and "lateral inhibition"**

In another series of joint experimental-modeling studies we investigated such inter-areal networks, but we focused on the detailed microcircuits of the inter-areal connections. This is a key question one needs to deal with when interpreting large-scale brain activations in terms of network models. Here, the spatial scales of the connections need to be identified (see Angelucci et al. (2002) for the corresponding anatomy of feedback within the visual system), but network simulations need to be conducted in order to predict the physiological responses.

Fig. 3. Surround suppression in network models and data-model comparison. **a)** Illustration of surround suppression: When the surround of a neuron's classical receptive field is also stimulated, then the response is reduced. **b)** Illustration of a recurrent network model of many recurrently coupled orientation hypercolumns in area V1. They receive feedforward inputs from the lateral geniculate nucleus (LGN), are interconnected via long-range horizontal connections within V1 and reciprocally to another retinotopically organized extra-striate visual area (here: area MT). **c)** Summary of surround suppression data from macaque V1 from two experimental conditions (stimulus in the classical receptive field at high, y-axis, and low, x-axis, contrast). The ellipse indicates the 50% confidence region of the measured surround suppression in N=63 cells. The lines show predicted surround suppression of the recurrent network model from Schwabe et al. (2006) for increasing strength of the *inter*-areal feedback connections to local inhibitory cells for moderate (dashed) and strong *intra*-areal (solid) lateral inhibition. (Figure 3c is taken from Schwabe et al. (2010), with permission from Elsevier).


A prominent phenomenon of interest in visual neuroscience is surround suppression. When another stimulus in the surround of a classical receptive field of a neuron is shown, then the response is suppressed compared to the stimulation of only the classical receptive field. **Figure 3a** illustrates this for visually responsive neurons tuned to orientation. The concept of "lateral inhibition" is usually invoked to explain this surround suppression, i.e. recurrent connections between different 1 mm² patches implement a competition via mutual suppression. In a modeling study we predicted that feedback from extrastriate areas into V1 (see **Figure 3b** for an illustration of the model architecture) may play a major role in mediating this "lateral inhibition" (Schwabe et al., 2006a), which in this model is an inter-areal inhibition. Interestingly, we also predicted that stimuli with a large separation from the receptive field of the recorded neuron could even facilitate (and not only suppress) the responses when the classical receptive field is stimulated at low contrast. Later we confirmed this prediction experimentally (Ichida et al., 2007). These studies show that stimulus-driven responses and their modulation can depend on the stimulus properties and are mediated by inter-areal connections. Most importantly, such studies could inform the stimulus design in brain-imaging experiments (Harrison et al., 2007). While the models of the 1 mm² patches of cortex (see Section 4.1) are currently at the limit of the spatial resolution accessible to fMRI, the spatially more extended models considered in these studies are in principle directly applicable as network models in DCMs, but now with cortical areas explicitly modeled as spatially extended patches of cortex.
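The kind of data-model comparison used in these studies can be sketched as follows, with hypothetical numbers throughout: sweep a model parameter (here, the feedback strength to inhibitory cells), predict the surround suppression under two stimulus contrasts, and test whether each prediction falls inside the 50% confidence region of the measured suppression. The ellipse is approximated as axis-aligned, and `predict_suppression` is a stand-in for the actual network simulation.

```python
import math

def predict_suppression(feedback):
    # Stand-in for the network simulation: returns (low-contrast,
    # high-contrast) suppression strengths as a function of the
    # feedback strength; the linear form is an illustrative assumption.
    return 0.2 + 0.4 * feedback, 0.4 + 0.3 * feedback

def inside_ellipse(point, mean, axes, scale=1.18):
    # Axis-aligned ellipse test; scale ~ sqrt(median of chi-squared with
    # 2 dof) = sqrt(2*ln 2) gives the 50% region of a 2-D Gaussian.
    dx = (point[0] - mean[0]) / axes[0]
    dy = (point[1] - mean[1]) / axes[1]
    return math.hypot(dx, dy) <= scale

mean, axes = (0.45, 0.6), (0.15, 0.12)   # hypothetical data summary
for fb in [0.0, 0.5, 1.0]:
    print(fb, inside_ellipse(predict_suppression(fb), mean, axes))
```

With these illustrative numbers, only the intermediate feedback strength lands inside the confidence region, mirroring the observation that some predicted suppression strengths fall within the 50% region while others do not.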

In our investigation of inter-areal networks we also performed systematic comparisons between model predictions and the experimentally recorded single neuron responses, but here we did *not* apply a Bayesian approach (Schwabe et al., 2010), and this can serve to highlight an important distinction to be respected for the comparison of network models, namely between the definition of a "noise model" and variability due to true heterogeneity of the recorded neurons. Let **y** = *f*(Input; θ) denote the functional dependence between a sensory input into a neuronal network model, parameterized by θ, and a mean output **y** predicted by, for example, a NMM. Since the experimental observations **x** are usually much more variable than predicted by such a mean output, one could formulate a noise model, such as **x** = **y** + ε with ε being additive Gaussian noise, for the observations **x**, which shall account for the observed variability in the data. We have experimentally measured the strength of surround suppression of neurons with an oriented stimulus in the classical receptive field brought about by stimulation at more distant visual field locations (the "far surround"). A summary of the measured surround suppression strengths is shown in **Figure 3c** in terms of the error ellipse (50% confidence) around the estimated mean suppression for two experimental conditions (high contrast stimulus in the center, y-axis, vs. low contrast stimulus, x-axis). Clearly, the measured suppression strengths are variable, but we assumed that this variability is due to the fact that we recorded different types of neurons without being able to distinguish them based on the extracellular recordings. For example, it is conceivable that some neurons were excitatory while others were inhibitory, some neurons may operate in a different local network neighborhood than others, etc. In Schwabe et al. (2010) we hypothesized that this variability is due to different strengths of the intra-areal vs. inter-areal connections (the model parameter θ), and we simulated the network model with different parameter values. As we increased, for example, the strength of the feedback connections in the model (to local inhibitory neurons in V1), the model predicted different strengths of surround suppression (see lines in **Figure 3c**). For certain strengths the predicted suppression is within the 50% confidence region, while for others it is outside.

This study shows that one can respect the variability in the data in terms of heterogeneity of the underlying microcircuits and still use experimental data with NMM-like network models in order to learn something about the actual microcircuits: Here, we found that *within the class of models we considered* the models with stronger feedback projections to inhibitory neurons in the model V1 produce quantitatively better matches to the measured surround suppression than models with less feedback to these inhibitory neurons; see Schwabe et al. (2010) for more details. Of course, stochastic models respecting heterogeneity in single cell properties and network connections, or models for large-scale simulations, could be described with model parameters θ that capture such heterogeneity and hence would be directly usable within a Bayesian model comparison.

### **5. The neuronal basis of the self as a test case**

Multi-scale models will soon enter model-based brain imaging studies. The method of DCMs can be extended to include cortical circuit models, which take into account, for example, the exquisite balance between excitation and inhibition, or the retinotopy and spatial scales of intra- and inter-areal connections. This could also lead to a re-evaluation of many already published brain imaging studies, where a "neuronal activation" was associated with a certain informally described function. One such discipline, which lacks more formalized models suitable for model-based brain imaging, is the emerging field of "the neuroscience of the self". In other words, we argue that addressing the fundamental question of how the brain organizes self-related computations is a challenging test case for model-based brain imaging with multi-scale models, in particular because of the lack of computational models and the need for microcircuit models to ensure a proper interpretation of the already available imaging data.

### **5.1 Computational modeling of self-related processing**

We argue that investigating self-related processing in the brain in order to determine the neuronal basis of the first-person perspective, or "selfhood" (Blanke and Metzinger, 2009), is a challenging but do-able test case for model-based brain imaging with multi-scale models. Of course, one could ask: Why address such problems, given that we still haven't resolved the circuitry underlying orientation tuning in V1?

What makes investigating the neuronal basis of the self a challenging test case is that we are currently lacking proper computational models for it, but brain-imaging studies suggest that certain brain regions and networks (including the so-called default mode network) may play an important role. It has been shown experimentally that by using incongruent multi-sensory stimulations the apparently hard-wired body scheme and body image of a subject can be disturbed, as evident in the so-called "rubber hand illusion" (Botvinick and Cohen, 1998) or an extension of this to the full body (Lenggenhager et al., 2007). By applying computational concepts developed for the visual and sensory-motor system, we proposed computational models for such self-related processing, which do not at all refer to a "self" but only to (multi)sensory signals relevant for self-related tasks. For example, vestibular signals are likely to be of importance (Schwabe and Blanke, 2008), as well as proper multi-sensory integration in sensory-motor loops (Schwabe and Blanke, 2007; Kannape et al., 2010). We have conceptualized the sense of self as a set of learning and inference algorithms for such self-related sensory signals, in particular vestibular signals, which are running on a neuronal "wetware". Thus, from a computational perspective, the processing of self-related (multi)sensory signals may be very similar to the processing of, say, visual signals in the visual system. Accordingly, such computational theories can be tested in a similar manner, but the main challenge is to control the sensory stimulation. Ideally one would like to exert fine-grained control over the vestibular stimulations, but in fMRI studies this is only possible via caloric or galvanic stimulation, which are far from the stimulation encountered in more natural scenarios. Thus, in order to investigate the self via brain imaging, one shall investigate the brain activity of whole bodies in action. As of now this is only possible with brain-wide intracranial recordings as pursued in, for example, the Neurotycho project (http://neurotycho.org), because EEG in behaving human subjects is very noisy. In the Neurotycho project, the full-body motions of freely behaving monkeys are also recorded. Unfortunately, the vestibular system is widely distributed (Lopez and Blanke, 2011). This makes large-scale recordings necessary, so that animal studies may be the method of choice for the foreseeable future. Initiatives such as the Neurotycho project provide valuable data, which can also enter a Bayesian model comparison, where the forward models may now also account for the full-body motions (recorded via motion-capture technology). Of course, simulating such forward models calls for combining models at various scales and coupling them with each other via phenomenological models.

### **5.2 The functional role of the temporoparietal junction**

Model-based brain imaging with multi-scale models can be performed in a truly data-driven manner once the model classes are defined. We have argued that it is already applicable even in the absence of a complete physical model for relating, for example, spikes to measured EEG activity (given that proper Neuroinformatics tool support is available). Then, applying such techniques could help to decipher the computational role and network connectivity of those brain areas where we currently have only a rather limited grasp on their role for cognition. One such area is the temporoparietal junction (TPJ). Very briefly, the TPJ has been implicated in theory of mind (Young et al., 2010), mental perspective taking and out-of-body experiences (Blanke and Mohr, 2005), and as part of an attention-management network (Corbetta et al., 2008). Having available formal model descriptions of such theories in computational terms (which is not yet possible, as we don't yet have proper support for multi-facet modeling), and a proposal for how to transform them into neuronal activations, would allow for systematically comparing such theories using data. The need to formally describe such computational theories becomes evident by inspecting the rather descriptive nature of many imaging experiments. As of now, the computational theories imported from, for example, research in the visual and sensory-motor system seem to be the most promising candidates to explain TPJ activations in computational terms, namely as part of a (Bayesian) model a subject is using internally in order to infer the state of another person's mind (Kilner et al., 2007), as part of an attention-management network (Corbetta et al., 2008), or simply as vestibular error signals in imagined (but not actually carried out) full-body movements in the case of mental perspective taking. In the case of distributed

What makes investigating the neuronal basis of the self a challenging test case is that we are currently lacking proper computational models for that, but brain-imaging studies suggest that certain brain regions and networks (including the so-called default mode network) may play an important role. It has been shown experimentally that by using incongruent multi-sensory stimulations the apparently hard-wired body scheme and body image of a subject can be disturbed as evident in the so-called "rubber hand illusion" (Botvinick and Cohen, 1998) or an extension of this to the full body (Lenggenhager et al., 2007). By applying computational concepts developed for the visual and sensory-motor system, we proposed computational models for such self-related processing, which do not at all refer to a "self" but only to (multi)sensory signals relevant

suppression is within the 50% confidence region while for others it is outside.

would be directly useable within a Bayesian model comparison.
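The within-class comparison described above can be sketched in a few lines. The feedback strengths, the "measured" suppression index, and the confidence region below are hypothetical placeholders for illustration, not the values from Schwabe et al. (2010), and the linear stand-in for the network simulation is purely schematic:

```python
# Sketch: compare models from one class (parameterized by the strength of
# feedback to inhibitory neurons) against a measured suppression index with
# a confidence region. All numbers are hypothetical illustrations.

def predicted_suppression(feedback_strength):
    # Toy stand-in for a full network simulation: stronger feedback to
    # inhibitory neurons yields stronger surround suppression.
    return 0.2 + 0.5 * feedback_strength

measured = 0.55          # hypothetical suppression index from the data
ci_50 = (0.45, 0.65)     # hypothetical 50% confidence region

for g_fb in [0.2, 0.5, 0.7, 1.0]:
    pred = predicted_suppression(g_fb)
    inside = ci_50[0] <= pred <= ci_50[1]
    print(f"feedback={g_fb:.1f} predicted={pred:.2f} "
          f"within 50% CI: {inside} |error|={abs(pred - measured):.2f}")
```

Ranking the candidate models by such a goodness-of-fit criterion is what singles out the stronger-feedback models within the considered class.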

### **5. The neuronal basis of the self as a test case**

Multi-scale models will soon enter model-based brain imaging studies. The method of DCMs can be extended to include cortical circuit models, which take into account, for example, the exquisite balance between excitation and inhibition, or the retinotopy and spatial scales of intra- and inter-areal connections. This could also lead to a re-evaluation of many already published brain imaging studies, where a "neuronal activation" was associated with a certain informally described function. One such discipline, which lacks more formalized models suitable for model-based brain imaging, is the emerging field of "the neuroscience of the self". In other words, we argue that addressing the fundamental question of how the brain organizes self-related computations is a challenging test case for model-based brain imaging with multi-scale models, in particular because of the lack of computational models and the need for microcircuit models to ensure a proper interpretation of the already available imaging data.

### **5.1 Computational modeling of self-related processing**

We argue that investigating self-related processing in the brain in order to determine the neuronal basis of the first-person perspective, or "selfhood" (Blanke and Metzinger, 2009), is a challenging but doable test case for model-based brain imaging with multi-scale models. Of course, one could ask: Why address such problems, given that we still haven't resolved the circuitry underlying orientation tuning in V1?

What makes investigating the neuronal basis of the self a challenging test case is that we are currently lacking proper computational models for it, but brain-imaging studies suggest that certain brain regions and networks (including the so-called default mode network) may play an important role. It has been shown experimentally that by using incongruent multi-sensory stimulation the apparently hard-wired body scheme and body image of a subject can be disturbed, as evident in the so-called "rubber hand illusion" (Botvinick and Cohen, 1998) or an extension of this to the full body (Lenggenhager et al., 2007). By applying computational concepts developed for the visual and sensory-motor system, we proposed computational models for such self-related processing, which do not at all refer to a "self" but only to (multi)sensory signals relevant for self-related tasks. For example, vestibular signals are likely to be of importance (Schwabe and Blanke, 2008), as is proper multi-sensory integration in sensory-motor loops (Schwabe and Blanke, 2007; Kannape et al., 2010). We have conceptualized the sense of self as a set of learning and inference algorithms for such self-related sensory signals, in particular vestibular signals, which are running on a neuronal "wetware". Thus, from a computational perspective, the processing of self-related (multi)sensory signals may be very similar to the processing of, say, visual signals in the visual system. Accordingly, such computational theories can be tested in a similar manner, but the main challenge is to control the sensory stimulation. Ideally, one would like to exert fine-grained control over the vestibular stimulation, but in fMRI studies this is only possible via caloric or galvanic stimulation, which is far from the stimulation encountered in more natural scenarios. Thus, in order to investigate the self via brain imaging, one shall investigate the brain activity of whole bodies in action. As of now, this is only possible with brain-wide intracranial recordings as pursued in, for example, the Neurotycho project (http://neurotycho.org), because EEG in behaving human subjects is very noisy. In the Neurotycho project, the full-body motions of freely behaving monkeys are also recorded. Unfortunately, the vestibular system is widely distributed (Lopez and Blanke, 2011). This makes large-scale recordings necessary, so that animal studies may be the method of choice for the foreseeable future. Initiatives such as the Neurotycho project provide valuable data, which can also enter a Bayesian model comparison, where the forward models may now also account for the full-body motions (recorded via motion-capture technology). Of course, simulating such forward models calls for combining models at various scales and coupling them with each other via phenomenological models.
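One family of candidate models treats self-related signals with the same probabilistic machinery used for other senses. As a minimal sketch under Gaussian assumptions (not a specific model from this chapter), Bayesian cue combination fuses a vestibular and a visual estimate of, say, head rotation, weighting each cue by its reliability:

```python
# Minimal sketch of Bayesian cue combination for two noisy estimates of the
# same quantity (e.g., head rotation velocity). Under Gaussian assumptions,
# the optimal fused estimate weights each cue by its precision (1/variance).

def fuse(mu_a, var_a, mu_b, var_b):
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    w_b = 1.0 - w_a
    mu = w_a * mu_a + w_b * mu_b
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)  # fused variance is smaller
    return mu, var

# Hypothetical numbers: the vestibular cue says 10 deg/s (noisy), vision
# says 14 deg/s (more reliable); the fused estimate lies closer to vision.
mu, var = fuse(10.0, 4.0, 14.0, 1.0)
print(f"fused estimate: {mu:.2f} deg/s, variance {var:.2f}")
```

Incongruent stimulation of the kind used in the rubber-hand and full-body illusions corresponds, in such a scheme, to feeding the fusion step deliberately conflicting cues.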

### **5.2 The functional role of the temporoparietal junction**

Model-based brain imaging with multi-scale models can be performed in a truly data-driven manner once the model classes are defined. We have argued that it is already applicable even in the absence of a complete physical model for relating, for example, spikes to measured EEG activity (given that proper Neuroinformatics tool support is available). Applying such techniques could then help to decipher the computational role and network connectivity of those brain areas where we currently have only a rather limited grasp of their role in cognition. One such area is the temporoparietal junction (TPJ). Very briefly, the TPJ has been implicated in the theory of mind (Young et al., 2010), in mental perspective taking and out-of-body experiences (Blanke and Mohr, 2005), and as part of an attention-management network (Corbetta et al., 2008). Having available formal model descriptions of such theories in computational terms (which is not yet possible, as we don't have proper support for multi-facet modeling), and a proposal for how to transform them into neuronal activations, would allow for systematically comparing such theories using data. The need to formally describe such computational theories becomes evident by inspecting the rather descriptive nature of many imaging experiments. As of now, the computational theories imported from, for example, research in the visual and sensory-motor system seem to be the most promising candidates to explain TPJ activations in computational terms, namely as part of a (Bayesian) model a subject is using internally in order to infer the state of another person's mind (Kilner et al., 2007), as part of an attention-management network (Corbetta et al., 2008), or simply as a vestibular error signal in imagined (but not actually carried out) full-body movements in the case of mental perspective taking. In the case of distributed brain activations, and in a still rather exploratory phase of investigating self-related brain networks, combining multi-facet modeling and (even phenomenological) multi-scale models may be most fruitful.

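Phenomenological coupling across scales can be as simple as convolving a simulated population firing rate (fast scale) with a canonical hemodynamic response to obtain a BOLD-like signal (slow scale). The gamma-shaped kernel and all parameters below are generic textbook choices for illustration, not the forward model of any study cited here:

```python
import math

# Phenomenological forward-model sketch: map a simulated population firing
# rate to a slow BOLD-like signal by convolving it with a gamma-shaped
# hemodynamic response function (HRF). Parameters are illustrative only.

def hrf(t, peak=6.0):
    # Gamma-like kernel with its maximum at t = peak seconds.
    if t < 0:
        return 0.0
    return (t / peak) ** 2 * math.exp(-2.0 * (t - peak) / peak)

dt = 0.5                                   # seconds per sample
rate = [1.0 if 10 <= i * dt < 12 else 0.0  # brief burst of activity at 10-12 s
        for i in range(120)]
kernel = [hrf(i * dt) for i in range(60)]

# Discrete convolution: bold[n] = sum_k rate[n-k] * kernel[k] * dt
bold = [sum(rate[n - k] * kernel[k] * dt
            for k in range(min(n + 1, len(kernel))))
        for n in range(len(rate))]

peak_time = max(range(len(bold)), key=lambda n: bold[n]) * dt
print(f"BOLD-like response peaks at about {peak_time:.1f} s")
```

The same pattern, with a more detailed spiking or neural-mass model replacing the toy rate trace, is what allows multi-scale simulations to be compared against imaging data without a complete biophysical model of the measurement.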

### **6. Conclusion and future research**

In summary, we have emphasized that a Bayesian model comparison is the most systematic method to compare different models on the basis of measurements. The explicit use of models makes the theory-dependence of observations explicit and emphasizes that within a Bayesian model comparison we compare only models within a class (or between classes) of models, as opposed to finding a single "true" model. Our work in modeling orientation tuning and surround suppression further emphasizes this: The employed models are certainly more detailed than the NMMs currently employed in brain imaging, but they are also far too simplistic to account for many biophysical details. Still, our comparison of such network models within a class of models for a given data set is a valuable example of a systematic model comparison.

We also argued that multi-scale modeling, combined with multi-facet modeling, will be an important method for investigating even rather challenging neuroscientific questions, such as explaining the neural basis of the self in terms of computational and neuronal models. This can be done even in the absence of a complete multi-level model for relating spikes to macroscopic data from brain imaging, namely by employing phenomenological models. As a consequence, we see the development of supporting Neuroinformatics tools for the data-driven comparison of multi-scale and multi-facet models as an important step for future research.
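The comparison advocated here can be made concrete: given log model evidences for each candidate (e.g., free-energy approximations from a DCM-style fit; the numbers and model names below are invented for illustration), posterior model probabilities and Bayes factors follow directly:

```python
import math

# Sketch of Bayesian model comparison: convert (hypothetical) log model
# evidences into posterior model probabilities, assuming a flat prior over
# the model class. Differences in log evidence are log Bayes factors.

log_evidence = {"weak_feedback": -1250.0,
                "strong_feedback": -1243.0,
                "no_feedback": -1260.0}

# Softmax over log evidences = posterior probabilities under a flat prior.
m = max(log_evidence.values())
unnorm = {k: math.exp(v - m) for k, v in log_evidence.items()}
z = sum(unnorm.values())
posterior = {k: v / z for k, v in unnorm.items()}

best = max(posterior, key=posterior.get)
log_bf = log_evidence["strong_feedback"] - log_evidence["weak_feedback"]
print(f"best model: {best} (p = {posterior[best]:.3f})")
print(f"log Bayes factor strong vs. weak feedback: {log_bf:.1f}")
```

Subtracting the maximum before exponentiating keeps the computation numerically stable for the large negative log evidences typical of real data sets.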

### **7. References**

Almeida R, Stetter M (2002) Modeling the link between functional imaging and neuronal activity: synaptic metabolic demand and spike rates. NeuroImage 17:1065-79 Available at: http://www.ncbi.nlm.nih.gov/pubmed/12377179 [Accessed May 4, 2011].

Angelucci A, Levitt JB, Walton EJS, Hupe J-M, Bullier J, Lund JS (2002) Circuits for local and global signal integration in primary visual cortex. The Journal of Neuroscience 22:8633-46 Available at: http://www.ncbi.nlm.nih.gov/pubmed/12351737.

Ansorg R, Schwabe L (2010) Domain-Specific Modeling as a Pragmatic Approach to Neuronal Model Descriptions. In Brain Informatics, Lecture Notes in Computer Science (6334) Springer, p. 168-179.

Blanke O, Metzinger T (2009) Full-body illusions and minimal phenomenal selfhood. Trends in Cognitive Sciences 13:7-13 Available at: http://www.ncbi.nlm.nih.gov/pubmed/19058991.

Blanke O, Mohr C (2005) Out-of-body experience, heautoscopy, and autoscopic hallucination of neurological origin: Implications for neurocognitive mechanisms of corporeal awareness and self-consciousness. Brain Research Reviews 50:184-99 Available at: http://www.ncbi.nlm.nih.gov/pubmed/16019077.

Botvinick M, Cohen J (1998) Rubber hands "feel" touch that eyes see. Nature 391:756 Available at: http://www.ncbi.nlm.nih.gov/pubmed/9486643.

Corbetta M, Patel G, Shulman GL (2008) The reorienting system of the human brain: from environment to theory of mind. Neuron 58:306-24 Available at: http://www.ncbi.nlm.nih.gov/pubmed/18466742.

Davison AP, Brüderle D, Eppler J, Kremkow J, Muller E, Pecevski D, Perrinet L, Yger P (2008) PyNN: A Common Interface for Neuronal Network Simulators. Frontiers in Neuroinformatics 2:11 Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2634533&tool=pmcentrez&rendertype=abstract [Accessed September 13, 2010].

De Schutter E (2008) Why are computational neuroscience and systems biology so separate? PLoS Computational Biology 4:e1000078 Available at: http://www.ncbi.nlm.nih.gov/pubmed/18516226.

Delorme A, Makeig S (2004) EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. Journal of Neuroscience Methods 134:9-21 Available at: http://www.ncbi.nlm.nih.gov/pubmed/15102499 [Accessed May 3, 2011].

Editorial (2000) A debate over fMRI data sharing. Nature Neuroscience 3:845-6 Available at: http://www.ncbi.nlm.nih.gov/pubmed/10966604 [Accessed May 3, 2011].

Fleck L (1979) The Genesis and Development of a Scientific Fact. T. J. Merton & R. K. Trenn, eds. Chicago: University of Chicago Press.

Goddard NH, Hucka M, Howell F, Cornelis H, Shankar K, Beeman D (2001) Towards NeuroML: model description methods for collaborative modelling in neuroscience. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences 356:1209-28 Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1088511&tool=pmcentrez&rendertype=abstract [Accessed May 3, 2011].

Güllmar D, Haueisen J, Reichenbach JR (2010) Influence of anisotropic electrical conductivity in white matter tissue on the EEG/MEG forward and inverse solution. A high-resolution whole head simulation study. NeuroImage 51:145-63 Available at: http://www.ncbi.nlm.nih.gov/pubmed/20156576 [Accessed May 3, 2011].

Harrison LM, Stephan KE, Rees G, Friston KJ (2007) Extra-classical receptive field effects measured in striate cortex with fMRI. NeuroImage 34:1199-208 Available at: http://www.ncbi.nlm.nih.gov/pubmed/17169579.

Hyvärinen A, Ramkumar P, Parkkonen L, Hari R (2010) Independent component analysis of short-time Fourier transforms for spontaneous EEG/MEG analysis. NeuroImage 49:257-71 Available at: http://www.ncbi.nlm.nih.gov/pubmed/19699307.

Ichida JM, Schwabe L, Bressloff PC, Angelucci A (2007) Response Facilitation From the "Suppressive" Receptive Field Surround of Macaque V1 Neurons. Journal of Neurophysiology:2168-2181.

Kannape OA, Schwabe L, Tadi T, Blanke O (2010) The limits of agency in walking humans. Neuropsychologia 48:1628-36 Available at: http://www.ncbi.nlm.nih.gov/pubmed/20144893 [Accessed August 4, 2010].

Kilner JM, Friston KJ, Frith CD (2007) The mirror-neuron system: a Bayesian perspective. Neuroreport 18:619-23 Available at: http://www.ncbi.nlm.nih.gov/pubmed/17413668.

Kuhn TS (1962) The Structure of Scientific Revolutions. Chicago: University of Chicago Press.

Lauritzen M, Gold L (2003) Brain Function and Neurophysiological Correlates of Signals Used in Functional Neuroimaging. Journal of Neuroscience 23:3972-3980.

Lenggenhager B, Tadi T, Metzinger T, Blanke O (2007) Video ergo sum: manipulating bodily self-consciousness. Science (New York, N.Y.) 317:1096-9 Available at: http://www.ncbi.nlm.nih.gov/pubmed/17717189.

Logothetis NK (2008) What we can do and what we cannot do with fMRI. Nature 453:869-78 Available at: http://www.ncbi.nlm.nih.gov/pubmed/18548064 [Accessed May 3, 2011].

Lopez C, Blanke O (2011) The thalamocortical vestibular system in animals and humans. Brain Research Reviews Available at: http://www.ncbi.nlm.nih.gov/pubmed/21223979 [Accessed March 17, 2011].

Ma WJ, Beck JM, Latham PE, Pouget A (2006) Bayesian inference with probabilistic population codes. Nature Neuroscience 9:1432-8 Available at: http://www.ncbi.nlm.nih.gov/pubmed/17057707.

Mariño J, Schummers J, Lyon DC, Schwabe L, Beck O, Wiesing P, Obermayer K, Sur M (2005) Invariant computations in local cortical networks with balanced excitation and inhibition. Nature Neuroscience 8:194-201 Available at: http://www.ncbi.nlm.nih.gov/pubmed/15665876.

Marr D (1982) Vision. W.H. Freeman & Co Ltd.

Murray MM, Brunet D, Michel CM (2008) Topographic ERP analyses: a step-by-step tutorial review. Brain Topography 20:249-64 Available at: http://www.ncbi.nlm.nih.gov/pubmed/18347966.

Noë A (2005) Action in Perception. Cambridge, MA: MIT Press.

Onton J, Makeig S (2006) Information-based modeling of event-related brain dynamics. Progress in Brain Research 159:99-120.

Oostenveld R, Fries P, Maris E, Schoffelen J-M (2011) FieldTrip: Open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data. Computational Intelligence and Neuroscience 2011:156869 Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3021840&tool=pmcentrez&rendertype=abstract [Accessed April 29, 2011].

Ozeki H, Finn IM, Schaffer ES, Miller KD, Ferster D (2009) Inhibitory stabilization of the cortical network underlies visual surround suppression. Neuron 62:578-92 Available at: http://www.ncbi.nlm.nih.gov/pubmed/19477158.

Pascual-Marqui RD, Michel CM, Lehmann D (1995) Segmentation of brain electrical activity into microstates: model estimation and validation. IEEE Transactions on Biomedical Engineering 42:658-65 Available at: http://www.ncbi.nlm.nih.gov/pubmed/7622149 [Accessed May 3, 2011].

Popper K (1972) Objective Knowledge: An Evolutionary Approach. New York: Oxford University Press.

Rasch M, Logothetis NK, Kreiman G (2009) From neurons to circuits: linear estimation of local field potentials. Journal of Neuroscience 29:13785-96 Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2924964&tool=pmcentrez&rendertype=abstract.

Schultz W (2002) Getting formal with dopamine and reward. Neuron 36:241-63 Available at: http://www.ncbi.nlm.nih.gov/pubmed/12383780.

Schwabe L, Blanke O (2007) Cognitive neuroscience of ownership and agency. Consciousness and Cognition 16:661-6 Available at: http://www.ncbi.nlm.nih.gov/pubmed/17920522 [Accessed September 7, 2010].

Schwabe L, Blanke O (2008) The vestibular component in out-of-body experiences: a computational approach. Frontiers in Human Neuroscience 2:17 Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2610253&tool=pmcentrez&rendertype=abstract [Accessed September 14, 2010].

Schwabe L, Ichida JM, Shushruth S, Mangapathy P, Angelucci A (2010) Contrast-dependence of surround suppression in Macaque V1: Experimental testing of a recurrent network model. NeuroImage 1:1-16 Available at: http://www.ncbi.nlm.nih.gov/pubmed/20079853.

Schwabe L, Obermayer K, Angelucci A, Bressloff PC (2006) The role of feedback in shaping the extra-classical receptive field of cortical neurons: a recurrent network model. Journal of Neuroscience 26:9117-29 Available at: http://www.ncbi.nlm.nih.gov/pubmed/16957068.

Stimberg M, Wimmer K, Martin R, Schwabe L, Mariño J, Schummers J, Lyon DC, Sur M, Obermayer K (2009) The operating regime of local computations in primary visual cortex. Cerebral Cortex 19:2166-80 Available at: http://www.ncbi.nlm.nih.gov/pubmed/19221143.

Van Horn JD, Ishai A (2007) Mapping the human brain: new insights from FMRI data sharing. Neuroinformatics 5:146-53 Available at: http://www.ncbi.nlm.nih.gov/pubmed/17917125 [Accessed May 3, 2011].

Young L, Camprodon JA, Hauser M, Pascual-Leone A, Saxe R (2010) Disruption of the right temporoparietal junction with transcranial magnetic stimulation reduces the role of beliefs in moral judgments. PNAS 107:6753-8 Available at: http://www.ncbi.nlm.nih.gov/pubmed/20351278.

Zimmermann U, Petersen S, Schwabe L, Rienen U van (2011) Combination of Neural-Mass Models With Anisotropic Head Models to Simulate EEG-Signals. In CEM2011 8th Int Conference on Computation in Electromagnetics, Warsaw.

**6** 

**Functional Brain Imaging Using Non-Invasive Non-Ionizing Methods: Towards Multimodal and Multiscale Imaging**

### Irene Karanasiou

*School of Electrical and Computer Engineering National Technical University of Athens, Greece* 

### **1. Introduction**

Current and future trends in functional neuroimaging focus on the combination and synchronous application of imaging modalities by integrating more than one measure of brain function, e.g., hemodynamic and electrophysiological (EEG and fMRI). These multimodal approaches aim at achieving sufficient temporal and spatial resolution in order to localize neural activity and identify the functional connectivity between different brain regions, hypothesizing that the multi-modal information represents the same neural networks (Laufs et al., 2008).

In parallel, besides the impressive recent advancements in neuroimaging research, arguably even more outstanding advances have been reported in molecular medicine and genetics research. In this context, current and future trends in medical research aim at bridging biomolecular information and neural function through studies in anatomic and functional biomedical imaging, focusing on methods to discover novel markers influencing specific traits in psychiatric and neurological diseases. The new field of imaging genetics uses neuroimaging methods to assess the impact of genetic variance on the human brain. Ideally, several imaging methods are implemented in combination to achieve an optimal characterization of structural and functional parameters. The latter are statistically related to the genotype, resulting in a form of a genetic association study. Such procedures, acting as a mediator between genetic polymorphisms and psychiatric disease risk, may shed light on the relevant underlying neural processes (Hariri et al. 2006). Although this approach is still relatively novel, the emerging literature and initial results hold great promise that it may contribute to the understanding of the pathophysiology of complex psychiatric and neurological disorders. Importantly, the future trends in neuroimaging envision imaging of the chemical functions of the organ cells and even real-time images of the genes and proteins at work within cells. These will convey sophisticated fingerprints of disease processes and better assessment of the effectiveness of curative procedures. Overall, it is evident that profiling of the molecular changes in disease will also expand the scope of body imaging.

In this scientific and technological milieu, the widely-acknowledged "THz gap" makes this region of the electromagnetic spectrum a scientific frontier, with critical questions in physics, chemistry, biology, and medicine that need answers. All molecules (biological,
