Applications of Artificial Intelligence in the Classification of Magnetic Resonance Images: Advances and Perspectives

*Aron Hernandez-Trinidad, Blanca Olivia Murillo-Ortiz, Rafael Guzman-Cabrera and Teodoro Cordova-Fraga*

### **Abstract**

This chapter examines the advances and perspectives of the applications of artificial intelligence (AI) in the classification of magnetic resonance (MR) images. It focuses on the development of AI-based automatic classification models that have achieved competitive results compared to the state-of-the-art. Accurate and efficient classification of MR images is essential for medical diagnosis but can be challenging due to the complexity and variability of the data. AI offers tools and techniques that can effectively address these challenges. The chapter first addresses the fundamentals of artificial intelligence applied to the classification of medical images, including machine learning techniques and convolutional neural networks. Here, recent advances in the use of AI to classify MRI images in various clinical applications, such as brain tumor detection, are explored. Additionally, advantages and challenges associated with implementing AI models in clinical settings are discussed, such as the interpretability of results and integration with existing radiology systems. Prospects for AI in MR image classification are also highlighted, including the combination of multiple imaging modalities and the use of more advanced AI approaches such as reinforcement learning and generative models.

**Keywords:** artificial intelligence, deep learning, medical imaging, convolutional neural networks, computer-aided diagnosis, automatic classification models

### **1. Introduction**

Medical imaging plays a pivotal role in the diagnosis and treatment of diseases, offering intricate visual insights into the human body [1]. Among the array of available imaging techniques, magnetic resonance imaging (MRI) has witnessed substantial growth in adoption due to its capacity for capturing high-resolution images that exhibit exceptional contrast between soft tissues [2]. The accessibility of magnetic resonance imaging has surged, thanks to advancements in technology and heightened recognition of its clinical value. These images, obtained from various anatomical regions and under diverse protocols, furnish indispensable information about anatomical structures, functions, and potential abnormalities [3]. Nevertheless, the interpretation of these MR images presents formidable challenges. Manual analysis by radiologists can be labor-intensive, reliant on expertise, and vulnerable to interobserver variations. Furthermore, the burgeoning volume of images for each patient underscores the imperative for precise and efficient analysis to bolster clinical decision-making [4].

#### **Figure 1.**

*Organization. We first review MRI images. Next, we introduce common AI models that have been applied to learn those MRI images. Then, we investigate MRI applications that employ AI models. Finally, we discuss the evaluation metrics that are proposed to evaluate how well these AI models perform.*

In this context, the application of artificial intelligence (AI) in the classification of magnetic resonance images has emerged as a promising solution [5]. AI holds the potential to process large volumes of images swiftly and accurately, thereby bolstering clinicians in the early detection, characterization, and ongoing monitoring of diseases [6]. Leveraging machine learning techniques and convolutional neural networks, the development of automatic classification models for medical images has demonstrated their competitiveness in comparison to traditional methods [7]. These models excel in discerning subtle patterns and features within MR images, thus facilitating precise diagnoses and prognoses for a myriad of conditions. **Figure 1** illustrates the organization of this chapter.

In summation, given the current landscape of medical imaging with the expanding availability of magnetic resonance images and the compelling need for precise and efficient analysis to underpin clinical decisions, the application of artificial intelligence in image classification is a field of research and development of profound significance [8]. By uniting the computational prowess of AI with the rich, intricate information offered by MR imaging, the potential exists to elevate the accuracy and efficiency of medical diagnosis, ushering in fresh possibilities for patient care.

### **2. Overview of MRI images**

Magnetic Resonance Imaging (MRI) is a non-invasive medical imaging technique that plays a pivotal role in modern healthcare by providing detailed cross-sectional images of the body's internal structures [9]. It operates on the principle of using strong magnetic fields and radio waves to interact with the hydrogen nuclei (protons) in the body. As these protons align and then return to their natural state within the magnetic field, they emit signals that are captured and processed to generate images. MRI offers various types of images, each with unique applications. T1-weighted images provide excellent anatomical detail, while T2-weighted images are adept at detecting abnormalities like edema and lesions [10]. Proton density (PD)-weighted images emphasize proton concentration, and diffusion-weighted images (DWI) reveal water molecule movement. Functional MRI (fMRI) maps brain activity, magnetic resonance angiography (MRA) visualizes blood vessels, and magnetic resonance spectroscopy (MRS) assesses tissue chemistry [11]. These images find extensive use in clinical applications, from neuroimaging for brain and spinal conditions to musculoskeletal assessments and cardiovascular evaluations. MRI's advantages include the absence of ionizing radiation, superb soft tissue contrast, and multi-planar imaging capability [12]. However, it can be sensitive to motion artifacts, contraindicated for certain metal implants, and sometimes time-consuming for patients. Nonetheless, MRI remains an invaluable tool, offering detailed insights into the human body's internal structures and functions, thus shaping modern healthcare practices [13]. **Figure 2** illustrates a few examples using different MRI techniques from various human organs.

*Applications of Artificial Intelligence in the Classification of Magnetic Resonance Images… DOI: http://dx.doi.org/10.5772/intechopen.113826*

#### **2.1 Anatomical MRI**

One of the fundamental applications of magnetic resonance imaging (MRI) in the realm of medical diagnosis is the visualization of anatomical structures within the human body. Anatomical MRI, often referred to as structural MRI, is a cornerstone of clinical imaging. It provides detailed, high-resolution images of various body parts, offering essential insights into the morphology and integrity of tissues and organs [15].

**Figure 2.**

*Illustration of common MRI images. (a) T1-weighted MRI; left: Liver; right: Brain [neonate], (b) T2-weighted MRI; left: Prostate; middle: Brain [neonate]; right: Liver, (c) functional MRI, (d) diffusion tensor imaging, and (e) MR angiography [14].*

Anatomical MRI sequences, such as T1-weighted and T2-weighted images, play a crucial role in depicting different tissues based on their inherent physical properties. T1-weighted images offer excellent contrast between fat and water-rich tissues, making them ideal for visualizing anatomical boundaries and structures. In contrast, T2-weighted images highlight variations in water content, effectively revealing abnormalities such as edema, inflammation, or lesions [16].

These MRI sequences are instrumental in diagnosing a wide range of medical conditions. In neuroimaging, they aid in detecting brain abnormalities, such as tumors, vascular malformations, or degenerative diseases like multiple sclerosis. In musculoskeletal imaging, anatomical MRI helps identify soft tissue injuries, joint disorders, and assess the integrity of ligaments and tendons. Additionally, in abdominal imaging, it facilitates the evaluation of organs like the liver, kidneys, and gastrointestinal tract, allowing the detection of tumors, cysts, or structural anomalies.

### **2.2 Diffusion MRI**

Diffusion Magnetic Resonance Imaging (dMRI or diffusion MRI) is a specialized MRI technique that offers a unique window into the microscopic structures and tissue properties within the human body. Unlike traditional anatomical MRI, diffusion MRI focuses on the movement of water molecules within tissues, providing critical information about cellular structures and tissue microarchitecture [17].

At its core, diffusion MRI capitalizes on the inherent Brownian motion of water molecules. In biological tissues, water molecules are not stationary; instead, they exhibit random motion influenced by obstacles such as cell membranes, fibers, and other cellular structures. This random motion, known as diffusion, can be measured and quantified using diffusion MRI [18].

One of the primary measures derived from diffusion MRI is the apparent diffusion coefficient (ADC), which characterizes the rate and direction of water molecule diffusion within tissues [19]. High ADC values typically indicate free and unrestricted diffusion, often seen in areas with fluid or cystic structures. Conversely, low ADC values suggest restricted diffusion, often associated with dense cellular structures or pathologies that hinder water molecule movement.
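The relationship described above follows the mono-exponential decay model S(b) = S0 · exp(−b · ADC), which can be inverted to estimate the ADC from two acquisitions. The sketch below uses illustrative signal values and b-value, not data from any study:

```python
import numpy as np

def apparent_diffusion_coefficient(s0, sb, b):
    """Estimate the ADC (mm^2/s) from a b=0 signal s0 and a
    diffusion-weighted signal sb acquired at b-value b (s/mm^2),
    using the mono-exponential model S(b) = S0 * exp(-b * ADC)."""
    s0 = np.asarray(s0, dtype=float)
    sb = np.asarray(sb, dtype=float)
    return np.log(s0 / sb) / b

# Illustrative values: free fluid diffuses at roughly 3e-3 mm^2/s,
# while a signal that barely decays implies restricted diffusion.
b = 1000.0  # s/mm^2, a typical clinical b-value
adc_fluid = apparent_diffusion_coefficient(1000.0, 1000.0 * np.exp(-b * 3e-3), b)
adc_restricted = apparent_diffusion_coefficient(1000.0, 1000.0 * np.exp(-b * 0.7e-3), b)
print(adc_fluid, adc_restricted)
```

A high ADC (the fluid-like case) reflects free diffusion, while the lower value reflects the restricted diffusion seen in dense cellular tissue, matching the interpretation above.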

Diffusion MRI is particularly valuable in neuroimaging, where it enables the mapping of white matter tracts in the brain. By tracking the diffusion of water molecules along nerve fibers, this technique offers insights into brain connectivity and can identify abnormalities such as white matter lesions, which are common in conditions like multiple sclerosis [20].

#### **2.3 Functional MRI**

Functional magnetic resonance imaging (fMRI) is a groundbreaking application of MRI technology that provides real-time insights into the functioning of the human brain. Unlike traditional MRI, which primarily captures structural information, fMRI focuses on the brain's dynamic activity by measuring changes in blood flow and oxygenation levels [21]. At the heart of fMRI lies the concept of neurovascular coupling. When a specific region of the brain becomes active, it requires an increased supply of oxygen and glucose. To meet this demand, blood vessels in the activated area dilate and blood flow surges, leading to an increase in oxygenated hemoglobin levels. This change in blood oxygenation can be detected and visualized by fMRI [22].


Functional MRI is a non-invasive tool that has revolutionized our understanding of brain function and has numerous applications in both clinical and research settings. It enables researchers and clinicians to observe how different brain regions respond to specific tasks, stimuli, or cognitive processes [23]. One of the most prevalent applications of fMRI is functional localization. This technique helps identify critical brain areas responsible for specific functions, such as language processing, motor control, and memory formation. For instance, by instructing a subject to perform language-related tasks during an fMRI scan, researchers can pinpoint the brain regions associated with speech and language functions [24].

In the realm of cognitive neuroscience, fMRI is instrumental in studying complex cognitive processes like decision-making, emotion regulation, and working memory. By examining patterns of brain activation, researchers gain insights into the neural underpinnings of these cognitive functions, paving the way for breakthroughs in fields like psychology and psychiatry [25]. The clinical applications of fMRI are equally profound. It is extensively used in presurgical planning, particularly in cases where brain lesions or tumors are present. fMRI helps surgeons map out functional brain areas, ensuring that critical regions are preserved during surgery to minimize postoperative deficits [26].

### **2.4 Magnetic resonance angiography (MRA)**

Magnetic Resonance Angiography (MRA) is a specialized branch of MRI that focuses on imaging blood vessels, providing detailed visualizations of the vascular system, often without the invasive catheterization or iodinated contrast agents required by conventional angiography [27]. MRA has evolved as a valuable diagnostic tool in vascular medicine, offering high-resolution images of arteries and veins throughout the body. One of the key advantages of MRA is its non-invasive nature. Unlike conventional angiography, which requires the insertion of catheters and the injection of contrast agents, non-contrast MRA techniques rely solely on the principles of magnetic resonance [28]. Patients undergoing such examinations face no exposure to ionizing radiation or iodinated-contrast risks, making MRA a safer option, especially for individuals with underlying health conditions.

MRA techniques vary depending on the vascular region of interest, each optimized for a specific anatomical area. Common MRA techniques include [29]:


• *Magnetic resonance venography (MRV)* [32]: MRV is a specific application of MRA tailored to visualize veins. It is commonly used to assess deep vein thrombosis (DVT) in the extremities or to evaluate the venous system in the brain.

The clinical applications of MRA are extensive. It is routinely employed for the diagnosis and evaluation of vascular conditions, including [33]:


The integration of artificial intelligence (AI) into MRA analysis holds significant promise. AI algorithms can assist in automating the detection and quantification of vascular abnormalities, improving the efficiency and accuracy of diagnoses. Furthermore, AI-driven predictive models can provide insights into the risk of vascular events and guide personalized treatment strategies [34].

### **3. Brief introduction of AI models**

Artificial Intelligence (AI) has emerged as a transformative force in the field of medical imaging, revolutionizing the way we interpret and utilize various imaging modalities, including Magnetic Resonance Imaging (MRI) [35]. AI models, often powered by deep learning techniques, have demonstrated remarkable capabilities in extracting meaningful information from medical images, thereby aiding in disease diagnosis, treatment planning, and prognosis assessment.

At the heart of AI's impact on medical imaging are neural networks, specifically Convolutional Neural Networks (CNNs) [36]. CNNs have proven highly effective in learning complex patterns and features from images, making them well-suited for tasks such as image classification, segmentation, and object detection. These models mimic the hierarchical organization of neurons in the human brain, enabling them to recognize intricate details within medical images [37]. Two prominent AI models frequently employed in medical imaging are:

• *Convolutional neural networks (CNNs)* [38]: CNNs have become the workhorse of deep learning in medical imaging. They consist of multiple layers of convolutional and pooling operations that systematically extract hierarchical features from images. CNNs excel in tasks like image classification, where they can distinguish between normal and abnormal findings within medical images. Variants of CNNs, such as VGG16, ResNet50, and Inception, have been adapted and fine-tuned for specific medical imaging applications.

• *Recurrent neural networks (RNNs)* [39]: While CNNs dominate image-related tasks, RNNs are specialized for sequential data, making them invaluable for tasks that involve temporal information. In medical imaging, RNNs are particularly useful for processing time-series data, such as functional MRI (fMRI) or dynamic contrast-enhanced MRI (DCE-MRI). They can track changes in image sequences over time, aiding in the assessment of conditions like epilepsy or tumor response to treatment.
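To make the convolution-and-pooling operations described above concrete, here is a minimal NumPy sketch of a single CNN-style layer (convolution, ReLU, max pooling). The toy image and edge-detecting kernel are illustrative only:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling, which downsamples the feature map."""
    h, w = fmap.shape
    h, w = h - h % size, w - w % size
    return fmap[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# A vertical-edge kernel responds where intensity increases left to right.
image = np.zeros((6, 6))
image[:, 3:] = 1.0
edge_kernel = np.array([[-1.0, 1.0], [-1.0, 1.0]])
features = np.maximum(conv2d(image, edge_kernel), 0.0)  # ReLU activation
pooled = max_pool(features)
print(pooled)
```

The pooled feature map localizes the intensity boundary in the toy image, a miniature version of how stacked convolutional layers build up hierarchical features.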

AI models in medical imaging go beyond image classification. They are instrumental in tasks like image segmentation, where they identify and outline specific structures or regions of interest within an image. For instance, in MRI, AI can be used to segment tumors, blood vessels, or organs, enabling precise measurements and volumetric assessments [40]. Furthermore, AI models facilitate image registration, aligning images from different modalities or time points, which is crucial for monitoring disease progression or treatment response. They also contribute to generative models, like Generative Adversarial Networks (GANs), which create synthetic medical images for training and augmenting datasets, a particularly useful capability in situations where data is limited [41].

In the realm of AI models for MRI image analysis, a rich tapestry of architectures has emerged, each tailored to specific tasks and challenges. The U-Net architecture, with its intricate encoding and decoding pathways, stands as a stalwart for semantic segmentation tasks, particularly in medical image segmentation [42]. Its ability to capture fine-grained features and preserve spatial information has made it indispensable in delineating anatomical structures. On the other hand, the Multilayer Perceptron (MLP) showcases its prowess in handling structured data extracted from MRI images [43]. MLPs are versatile, leveraging dense layers to process information and make predictions, which suits them to various classification and regression tasks. Meanwhile, Graph Neural Networks (GNNs) have gained traction in MRI analysis by modeling complex relationships within medical data [44]. GNNs excel in tasks requiring the understanding of intricate connections, such as mapping neural pathways or identifying brain regions with functional significance. The adaptability of these architectures further underscores the dynamism of AI models in MRI image analysis, catering to the diverse needs of medical professionals and researchers.

As we delve deeper into this chapter, we will explore the various applications of AI models in the realm of MRI, shedding light on how these models are advancing our ability to extract meaningful insights from medical images. We will discuss their role in image analysis, disease detection, and prognosis assessment, emphasizing their potential to enhance clinical decision-making and patient care. Additionally, we will delve into the latest advancements and future perspectives in AI-driven MRI analysis, highlighting the ongoing research and development in this rapidly evolving field.

### **4. Deep learning techniques**

Deep learning techniques have catalyzed a transformative shift in medical image analysis, propelling the field to new heights in accuracy and efficiency [45]. In the

context of Magnetic Resonance Imaging (MRI), these techniques have proven particularly invaluable, enabling the extraction of intricate information from complex images [46]. This section explores the key deep learning techniques employed in MRI analysis, shedding light on their applications and advantages.

### **4.1 Convolutional neural networks (CNNs)**


### **4.2 Recurrent neural networks (RNNs)**


### **4.3 Generative adversarial networks (GANs)**


### **4.4 Transfer learning**

• *Pretrained models*: Transfer learning involves using pretrained deep learning models on large datasets, such as ImageNet, and fine-tuning them for specific MRI analysis tasks [47]. This approach saves computational resources and training time while benefiting from the generalization power of pretrained models.
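The fine-tuning workflow above can be sketched as follows. Because a real pretrained backbone (e.g., a ResNet50) is too heavy for an illustration, a fixed random projection stands in for the frozen feature extractor, and a closed-form ridge regression stands in for the small trainable head; all data and names here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pretrained backbone: in a real pipeline this
# would be a CNN (e.g., ResNet50) with its convolutional weights frozen.
W_frozen = rng.standard_normal((64, 64))

def extract_features(x):
    """Frozen feature extractor: its weights are never updated."""
    return np.maximum(x @ W_frozen, 0.0)

def fit_head(features, labels, lam=1e-2):
    """Train only the small task-specific head (closed-form ridge)."""
    d = features.shape[1]
    return np.linalg.solve(features.T @ features + lam * np.eye(d),
                           features.T @ labels)

# Toy task: the label depends on the first input dimension only.
x_train = rng.standard_normal((200, 64))
y_train = (x_train[:, 0] > 0).astype(float)
head = fit_head(extract_features(x_train), y_train)

x_test = rng.standard_normal((100, 64))
preds = (extract_features(x_test) @ head > 0.5).astype(float)
accuracy = float(np.mean(preds == (x_test[:, 0] > 0)))
print(accuracy)
```

Only the head's parameters are fit to the task labels, mirroring how transfer learning reuses a general-purpose feature extractor while training far fewer parameters than end-to-end learning would require.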


### **4.5 Autoencoders**

• *Feature extraction*: Autoencoders are utilized for unsupervised feature learning [48]. They compress MRI images into lower-dimensional representations, capturing salient features. These learned features can then be used for various tasks, including classification and segmentation.
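A linear autoencoder with tied weights learns the same subspace as principal component analysis, so a truncated SVD can stand in for a trained encoder in a brief sketch of the compression idea above. The synthetic "images" are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 100 samples of 64-dim vectors that really live on a
# 4-dimensional subspace plus noise, mimicking redundant image content.
latent = rng.standard_normal((100, 4))
basis = rng.standard_normal((4, 64))
X = latent @ basis + 0.01 * rng.standard_normal((100, 64))

# Truncated SVD as a stand-in for the trained linear encoder/decoder.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
encoder = Vt[:4].T            # 64 -> 4 "bottleneck" features
codes = Xc @ encoder          # compressed representation
recon = codes @ encoder.T     # decoder: back to 64 dimensions

rel_error = np.linalg.norm(Xc - recon) / np.linalg.norm(Xc)
print(codes.shape, rel_error)
```

The 4-dimensional codes capture almost all the variance, which is exactly the property that makes such learned features useful inputs for downstream classification or segmentation.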

### **4.6 Attention mechanisms**

• *Region of Interest (ROI) Attention*: Attention mechanisms enable models to focus on specific regions within an MRI scan [49]. This is particularly useful in cases where only a small part of the image contains diagnostically relevant information. Attention mechanisms help improve model accuracy by emphasizing the important areas.

### **4.7 3D CNNs**

• *Volumetric analysis*: For 3D MRI data, such as volumetric MRI or MRI video sequences, 3D CNNs are employed [50]. These models consider the spatial relationships between image slices, providing a more comprehensive understanding of the 3D structure of anatomical or pathological regions.

### **4.8 Ensemble models**

• *Improved accuracy*: Ensemble models combine predictions from multiple deep learning models, boosting overall accuracy and reducing model variability [51]. In MRI analysis, they are employed to enhance diagnostic reliability and minimize false positives.
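One common ensembling scheme, majority voting, can be sketched in a few lines, assuming each model outputs one class label per scan. The models and labels below are hypothetical:

```python
from collections import Counter

def majority_vote(predictions_per_model):
    """Combine per-scan class predictions from several models into one
    label per scan by simple majority vote (ties go to the first-seen
    label, since Counter preserves insertion order)."""
    ensembled = []
    for votes in zip(*predictions_per_model):
        ensembled.append(Counter(votes).most_common(1)[0][0])
    return ensembled

# Three hypothetical classifiers labelling four MRI scans.
model_a = ["tumor", "normal", "tumor", "normal"]
model_b = ["tumor", "tumor", "normal", "normal"]
model_c = ["tumor", "normal", "tumor", "tumor"]
print(majority_vote([model_a, model_b, model_c]))
```

Because individual models make partly independent mistakes, the vote suppresses single-model errors, which is the variance-reduction effect described above.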

### **4.9 Explainable AI (XAI) techniques**

• *Interpretability*: As AI models in MRI analysis become more sophisticated, the need for interpretability grows [52]. XAI techniques, including Grad-CAM and LIME, are applied to elucidate model decisions and provide insights into the features that influence diagnoses.

Deep learning techniques are not only transforming MRI analysis but also pushing the boundaries of what is possible in medical imaging. Their ability to handle complex data, adapt to various modalities, and continuously improve through data-driven learning positions them at the forefront of medical research and clinical applications. In the subsequent sections, we will delve into the specific applications of these techniques in MRI analysis, illustrating their impact on disease detection, prognosis assessment, and treatment planning.

### **5. AI role in realignment, normalization and registration stages in MRI**

The realignment stage in MRI is essential to ensure that the images obtained are of the highest quality possible, especially in clinical applications where patients may move during image acquisition [53]. Here, artificial intelligence has proven to be an invaluable tool in enabling accurate and efficient automation of this process. AI techniques at this stage include:


The use of artificial intelligence in motion realignment and correction not only improves the quality of MRI images, but also reduces the need for repeat studies due to inadvertent patient movements, saving time and resources. Intensity and contrast normalization is crucial to ensure that MRI images are comparable between patients and scanning sessions [54]. Here, artificial intelligence plays an essential role by adjusting image characteristics to facilitate accurate and objective analysis. AI techniques at this stage include:


Intensity and contrast normalization using artificial intelligence ensures that images are consistent and suitable for clinical interpretation and application of analysis algorithms. Image co-registration in MRI involves aligning multiple sets of images acquired in different sequences or modalities for better comparison and analysis [55]. Artificial intelligence has proven to be highly effective in automating this process. AI techniques at this stage include:



3. *Multimodal data fusion*: When multiple MRI modalities are used, artificial intelligence can fuse data from different sequences or modalities to provide a more complete and accurate view of anatomy and pathologies.

Image co-registration and data fusion with the help of artificial intelligence are critical for more accurate interpretation and better-informed clinical decision-making in applications involving multiple sets of MRI images.
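One widely used normalization step in the pipeline described above, z-score intensity normalization, can be sketched as follows; the synthetic scans are illustrative stand-ins for volumes from two different scanners:

```python
import numpy as np

def zscore_normalize(volume, mask=None):
    """Rescale MRI intensities to zero mean and unit variance,
    optionally restricted to a tissue mask, so that scans from
    different sessions or scanners become comparable."""
    volume = np.asarray(volume, dtype=float)
    voxels = volume[mask] if mask is not None else volume
    return (volume - voxels.mean()) / voxels.std()

# Two synthetic volumes with very different intensity scales.
scan_a = np.random.default_rng(0).normal(300.0, 40.0, size=(8, 8, 8))
scan_b = np.random.default_rng(1).normal(900.0, 120.0, size=(8, 8, 8))
norm_a, norm_b = zscore_normalize(scan_a), zscore_normalize(scan_b)
print(norm_a.mean(), norm_b.std())
```

After normalization both volumes share the same intensity statistics, which is the prerequisite for comparing them directly or feeding them to a single analysis model.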

### **6. AI applications in MRI**

Artificial Intelligence (AI) has revolutionized the field of MRI, offering a plethora of applications that enhance image acquisition, analysis, and clinical decision-making. The fusion of AI and MRI has ushered in a new era of medical imaging, with a wide range of applications that benefit patients and healthcare providers alike [56]. **Table 1** provides a concise overview of how AI enhances various aspects of MRI, from image quality to disease diagnosis and treatment planning.


#### **Table 1.**

*Applications of AI in magnetic resonance imaging.*

## **7. AI evaluations in MRI**

The evaluation of AI models in the context of MRI images is crucial to assess their performance, accuracy, and clinical utility. One of the fundamental tools for this evaluation is the confusion matrix [57]. The confusion matrix is a table that allows us to visualize the performance of a classification model, particularly in binary classification scenarios, where we are concerned with distinguishing between two classes: positive (disease presence) and negative (disease absence).

### **7.1 Confusion matrix**

The confusion matrix is organized in **Table 2** as follows [58]. In this confusion matrix:

• *True positives (TP)*: cases where the model correctly predicts the presence of disease.

• *True negatives (TN)*: cases where the model correctly predicts the absence of disease.

• *False positives (FP)*: cases where the model predicts disease that is not present.

• *False negatives (FN)*: cases where the model fails to detect disease that is present.


### **7.2 Key metrics derived from the confusion matrix**

Several key metrics can be calculated based on the values in the confusion matrix [59]:

• *Accuracy*: (TP + TN) / (TP + TN + FP + FN), the overall proportion of correct predictions.

• *Precision*: TP / (TP + FP), the proportion of positive predictions that are correct.

• *Recall (sensitivity)*: TP / (TP + FN), the proportion of actual positives that are detected.

• *Specificity*: TN / (TN + FP), the proportion of actual negatives that are correctly identified.

• *F1 score*: the harmonic mean of precision and recall, 2 × (precision × recall) / (precision + recall).



**Table 2.**

*Confusion matrix for AI model evaluation in MRI images.*



A well-interpreted confusion matrix can provide insights into the strengths and weaknesses of an AI model applied to MRI images. It helps in understanding where the model excels (e.g., high TP and TN) and where it needs improvement (e.g., high FP or FN). Depending on the specific medical application, the choice of evaluation metric may vary. For instance, in cancer detection, high sensitivity (recall) is often prioritized to minimize false negatives, ensuring early disease detection. In contrast, for certain rare conditions, high specificity may be crucial to avoid unnecessary interventions [60].
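The standard metrics derived from the confusion matrix can be computed directly from the four counts; the counts in this sketch are hypothetical tumor-detection results on 100 scans:

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard metrics derived from a binary confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0          # sensitivity
    specificity = tn / (tn + fp) if tn + fp else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "specificity": specificity, "f1": f1}

# Hypothetical tumor-detection results on 100 MRI scans.
m = classification_metrics(tp=40, fp=5, fn=10, tn=45)
print(m)
```

With these counts the model misses 10 tumors (FN), so its recall of 0.8 would be the number to improve in a screening setting, illustrating the metric-prioritization point made above.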

In addition to traditional evaluation metrics like accuracy, precision, recall, and F1 score, assessing the performance of AI models in MRI image analysis often involves considering other factors such as stability [61]. Stability examines how slight perturbations in the input affect the explanation provided by the model. The stability metric is calculated by dividing the number of stable explanations (those that remain consistent when the input is perturbed) by the total number of explanations generated by the model. A higher stability metric signifies that the AI model's explanations are robust and unaffected by minor variations in the input data. This metric is particularly relevant in medical imaging, where consistency and reliability of model interpretations are paramount. While metrics like stability focus on the model's response to perturbations in the input data, it's important to note that there are various evaluation metrics that do not rely on the confusion matrix but provide valuable insights into the model's performance and behavior [62].
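The stability metric described above reduces to a simple ratio. In this illustrative sketch an explanation counts as "stable" if its top salient region is unchanged after the input is perturbed; the region names are hypothetical:

```python
def stability_score(explanations, perturbed_explanations):
    """Fraction of explanations that remain consistent when the model
    input is slightly perturbed: stable explanations / total explanations."""
    stable = sum(e == p for e, p in zip(explanations, perturbed_explanations))
    return stable / len(explanations)

# Hypothetical top-salient-region labels before and after perturbation.
original = ["frontal", "parietal", "temporal", "frontal", "occipital"]
perturbed = ["frontal", "parietal", "occipital", "frontal", "occipital"]
print(stability_score(original, perturbed))  # → 0.8
```

A score close to 1.0 indicates that the model's explanations are robust to minor input variations, the property emphasized above for clinical use.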

### **8. Limitations of algorithms in magnetic resonance applications**

#### **8.1 Data size and sample requirements**

The size of data sets in magnetic resonance imaging (MRI) applications is a critical factor that can influence the effectiveness of machine learning algorithms. Large and diversified data sets are often needed to train high-precision models. However, in practice, it can be difficult to obtain large data sets, which can limit the ability of models to generalize and make accurate diagnoses [63]. In MRI applications, the availability of large data sets may be limited for various reasons, such as patient privacy or costly and time-consuming data collection. To address these restrictions, data augmentation techniques are used. These strategies involve generating new training samples from existing samples by applying controlled transformations. Common forms of data augmentation in MRI include rotation and mirroring, panning and zooming, elastic distortions, and noise addition [64]. Transfer learning is another powerful strategy to overcome sample-size restrictions in MRI applications. This technique involves leveraging machine learning models pretrained on larger, generic data sets (e.g., models trained on large-scale medical images or even non-medical images) and tailoring them for specific MRI tasks.
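A few of the label-preserving augmentation transformations listed above can be sketched with NumPy; the 4 × 4 array is an illustrative stand-in for a real MRI slice:

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(image):
    """Yield simple label-preserving variants of a 2-D MRI slice:
    rotation, mirroring, and additive Gaussian noise."""
    yield np.rot90(image)                                   # 90-degree rotation
    yield np.fliplr(image)                                  # left-right mirroring
    yield image + 0.05 * rng.standard_normal(image.shape)   # noise addition

mri_slice = np.arange(16, dtype=float).reshape(4, 4)
variants = list(augment(mri_slice))
print(len(variants), [v.shape for v in variants])
```

Each original slice thus yields several distinct training samples, multiplying the effective data set size without new acquisitions.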

### **8.2 Label quality and annotation challenges**

The quality of labels in MRI data sets is essential for training accurate machine learning models. Without accurate and consistent labels, algorithms can produce incorrect or biased results. Data annotation in MRI applications presents several unique challenges due to the detailed and medical nature of the images [65]. Some of these challenges include expert-annotator requirements, ambiguity and variability, multimodal data, and privacy and security. To address these challenges and improve label quality in MRI applications, several strategies can be employed:


By implementing these strategies, the quality of labels in MRI data sets can be improved, which in turn contributes to training more accurate and reliable machine learning models for medical applications. Furthermore, documentation and monitoring of annotation processes are essential to ensure traceability and data quality.

### **8.3 Training time and computational resources**

The time required to train machine learning models in MRI applications can be significant, especially when complex models are used. This can affect the efficiency of clinical implementation and the ability to respond in critical situations. Training time varies with the complexity of the task and the size of the data set; contributing factors include model architecture, data set size, computational resources, and hyperparameter and regularization settings [66]. Computational resources are critical to accelerating training and enabling efficient deployment of AI models in MRI applications. Key considerations include graphics processing units (GPUs) or tensor processing units (TPUs), compute clusters, cloud services, code optimization, and transfer learning [67].

Training time and computational resources are critical considerations in AI applications in MRI. Choosing efficient model architectures, optimizing hyperparameters, and accessing high-performance resources are key strategies to reduce training times and improve efficiency in deploying AI models in medical MRI applications.

### **9. Conclusions**

In this chapter, we embarked on a journey through the dynamic intersection of magnetic resonance imaging (MRI) and artificial intelligence (AI). We began by delving into the diverse world of MRI imaging, exploring its various modalities, including anatomical MRI, diffusion MRI, functional MRI (fMRI), and magnetic resonance angiography (MRA). Each modality provided a unique window into the human body, offering invaluable insights for diagnosis and treatment. As we ventured further, we unraveled the power of AI models in revolutionizing MRI image analysis. Deep Learning techniques took center stage, with convolutional neural networks (CNNs) emerging as formidable tools for feature extraction and classification. We explored their versatility across datasets, showcasing their ability to accurately detect a spectrum of medical pathologies.

Applications of AI in MRI proved boundless, from detecting brain tumors in Anatomical MRI to mapping brain activity in fMRI, and even pinpointing vascular anomalies in MRA. Each application underscored the potential to enhance clinical decision-making, optimize resource utilization, and ultimately improve patient outcomes. The evaluation of AI models extended beyond traditional metrics, introducing stability as a crucial factor. We emphasized the importance of robust, consistent model interpretations, especially in the context of medical imaging, where precision is paramount.

In conclusion, the amalgamation of MRI imaging and AI has ushered in a new era of medical diagnostics and patient care. These transformative technologies are poised to reshape the healthcare landscape, offering more accurate, efficient, and reliable tools for medical professionals. With ongoing research, collaboration, and refinement, the future holds the promise of even greater advancements, ultimately benefiting individuals worldwide.

This chapter serves as an overview of the potential of AI in MRI imaging, offering a glimpse into a future where cutting-edge technology and medical expertise converge to improve lives and redefine healthcare standards.

### **Acknowledgements**

We would like to thank the University of Guanajuato and the support of CONAHCyT (scholarship No. 893699).

### **Conflict of interest**

The authors declare no conflict of interest.


## **Author details**

Aron Hernandez-Trinidad1\*, Blanca Olivia Murillo-Ortiz2, Rafael Guzman-Cabrera3 and Teodoro Cordova-Fraga1

1 Science and Engineering Division, University of Guanajuato Leon Campus, Leon, GTO, Mexico

2 Epidemiology Research Unit IMSS No. 1 High Specialty Medicine Unit, Leon, GTO, Mexico

3 Engineering Division, University of Guanajuato Irapuato-Salamanca Campus, Salamanca, GTO, Mexico

\*Address all correspondence to: aron.hernandez@ugto.mx

© 2023 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

*Applications of Artificial Intelligence in the Classification of Magnetic Resonance Images… DOI: http://dx.doi.org/10.5772/intechopen.113826*

### **References**

[1] Hill DL et al. Medical image registration. Physics in Medicine & Biology. 2001;**46**(3):R1

[2] Kasban H, El-Bendary MAM, Salama DH. A comparative study of medical imaging techniques. International Journal of Information Science and Intelligent System. 2015;**4**(2):37-58

[3] Lundervold AS, Lundervold A. An overview of deep learning in medical imaging focusing on MRI. Zeitschrift für Medizinische Physik. 2019;**29**(2):102-127

[4] Enzmann DR. Radiology's value chain. Radiology. 2012;**263**(1):243-252

[5] Mazurowski MA et al. Deep learning in radiology: An overview of the concepts and a survey of the state of the art with focus on MRI. Journal of Magnetic Resonance Imaging. 2019;**49**(4):939-954

[6] Gore JC. Artificial intelligence in medical imaging. In: Magnetic Resonance Imaging. Elsevier; 2020. pp. A1-A4

[7] Chattopadhyay A, Maitra M. MRI-based brain tumour image detection using CNN based deep learning method. Neuroscience Informatics. 2022;**2**(4):100060

[8] Adegun AA, Viriri S, Ogundokun RO. Deep learning approach for medical image analysis. Computational Intelligence and Neuroscience. 2021;**2021**:1-9

[9] Liang Z-P, Lauterbur PC. Principles of Magnetic Resonance Imaging. WA: SPIE Optical Engineering Press Belllingham; 2000

[10] Kuperman V. Magnetic Resonance Imaging: Physical Principles and Applications. Elsevier; 2000

[11] Landini L et al. Advanced Image Processing in Magnetic Resonance Imaging. CRC Press; 2018

[12] Katti G, Ara SA, Shireen A. Magnetic resonance imaging (MRI)–a review. International Journal of Dental Clinics. 2011;**3**(1):65-70

[13] Pham TT et al. Magnetic resonance imaging (MRI) guided proton therapy: A review of the clinical challenges, potential benefits and pathway to implementation. Radiotherapy and Oncology. 2022;**170**:37-47

[14] Shamshad F et al. Transformers in medical imaging: A survey. Medical Image Analysis. 2023;**88**:102802

[15] Lenroot RK, Giedd JN. Brain development in children and adolescents: Insights from anatomical magnetic resonance imaging. Neuroscience & Biobehavioral Reviews. 2006;**30**(6):718-729

[16] Durston S et al. Anatomical MRI of the developing human brain: What have we learned? Journal of the American Academy of Child & Adolescent Psychiatry. 2001;**40**(9):1012-1020

[17] Jones DK. Diffusion MRI. Oxford University Press; 2010

[18] Le Bihan D et al. Artifacts and pitfalls in diffusion MRI. Journal of Magnetic Resonance Imaging: An Official Journal of the International Society for Magnetic Resonance in Medicine. 2006;**24**(3):478-488

[19] Sener RN. Diffusion MRI: Apparent diffusion coefficient (ADC) values in the normal brain and a classification of brain disorders based on ADC values. Computerized Medical Imaging and Graphics. 2001;**25**(4):299-326

[20] Rovaris M et al. Diffusion MRI in multiple sclerosis. Neurology. 2005;**65**(10):1526-1532

[21] Moonen CTW, Bandettini PA. Functional MRI. Vol. 3. Springer; 1999

[22] Van Zijl PCM et al. Quantitative assessment of blood flow, blood volume and blood oxygenation effects in functional magnetic resonance imaging. Nature Medicine. 1998;**4**(2):159-167

[23] DeYoe EA et al. Functional magnetic resonance imaging (FMRI) of the human brain. Journal of Neuroscience Methods. 1994;**54**(2):171-187

[24] Manan HA, Franz EA, Yahya N. Utilization of functional MRI language paradigms for pre-operative mapping: A systematic review. Neuroradiology. 2020;**62**:353-367

[25] Szaflarski JP et al. Comprehensive presurgical functional MRI language evaluation in adult patients with epilepsy. Epilepsy & Behavior. 2008;**12**(1):74-83

[26] Park KY et al. Mapping language function with task-based vs. resting-state functional MRI. PLoS One. 2020;**15**(7):e0236423

[27] Dumoulin CL, Hart HR Jr. Magnetic resonance angiography. Radiology. 1986;**161**(3):717-720

[28] Hartung MP, Grist TM, François CJ. Magnetic resonance angiography: Current status and future directions. Journal of Cardiovascular Magnetic Resonance. 2011;**13**(1):1-11

[29] Potchen EJ. Magnetic Resonance Angiography: Techniques, Indications and Practical Applications. Springer; 2006

[30] Laub GA. Time-of-flight method of MR angiography. Magnetic Resonance Imaging Clinics of North America. 1995;**3**(3):391-398

[31] Dumoulin CL. Phase contrast MR angiography techniques. Magnetic Resonance Imaging Clinics of North America. 1995;**3**(3):399-411

[32] Carpenter JP et al. Magnetic resonance venography for the detection of deep venous thrombosis: Comparison with contrast venography and duplex Doppler ultrasonography. Journal of Vascular Surgery. 1993;**18**(5):734-741

[33] Carr JC, Carroll TJ. Magnetic Resonance Angiography: Principles and Applications. Springer Science & Business Media; 2011

[34] Yasaka K et al. Impact of deep learning reconstruction on intracranial 1.5 T magnetic resonance angiography. Japanese Journal of Radiology. 2022;**40**(5):476-483

[35] Huang S-C et al. Developing medical imaging AI for emerging infectious diseases. Nature Communications. 2022;**13**(1):7060

[36] Li Z et al. A survey of convolutional neural networks: Analysis, applications, and prospects. IEEE Transactions on Neural Networks and Learning Systems. 2022;**33**(12):6999-7019

[37] Wu J. Introduction to convolutional neural networks. National Key Lab for Novel Software Technology, Nanjing University, China. 2017;**5**(23):495

[38] Albawi S, Mohammed TA, Al-Zawi S. Understanding of a Convolutional Neural Network. IEEE; 2017


[39] Medsker LR, Jain LC. Recurrent Neural Networks: Design and Applications. CRC Press; 2001

[40] Pereira S et al. Brain tumor segmentation using convolutional neural networks in MRI images. IEEE Transactions on Medical Imaging. 2016;**35**(5):1240-1251

[41] Creswell A et al. Generative adversarial networks: An overview. IEEE Signal Processing Magazine. 2018;**35**(1):53-65

[42] Yin X-X et al. U-net-based medical image segmentation. Journal of Healthcare Engineering. 2022;**2022**:1-16

[43] Singh J, Banerjee R. A Study on Single and Multi-Layer Perceptron Neural Network. IEEE; 2019

[44] Wu Z et al. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems. 2020;**32**(1):4-24

[45] LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;**521**(7553): 436-444

[46] Liu J et al. Applications of deep learning to MRI images: A survey. Big Data Mining and Analytics. 2018;**1**(1):1-18

[47] Weiss K, Khoshgoftaar TM, Wang D. A survey of transfer learning. Journal of Big Data. 2016;**3**(1):1-40

[48] Tschannen M, Bachem OF, Lučić M. Recent advances in autoencoder-based representation learning. In: Bayesian Deep Learning Workshop, NeurIPS; 2018

[49] Niu Z, Zhong G, Yu H. A review on the attention mechanism of deep learning. Neurocomputing. 2021;**452**:48-62

[50] Klaiber M et al. A Systematic Literature Review on Transfer Learning for 3d-CNNs. IEEE; 2021

[51] Sagi O, Rokach L. Ensemble learning: A survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery. 2018;**8**(4):e1249

[52] Gunning D et al. XAI—Explainable artificial intelligence. Science Robotics. 2019;**4**(37):eaay7120

[53] Mathiak K, Posse S. Evaluation of motion and realignment for functional magnetic resonance imaging in real time. Magnetic Resonance in Medicine: An Official Journal of the International Society for Magnetic Resonance in Medicine. 2001;**45**(1):167-171

[54] Shah M et al. Evaluating intensity normalization on MRIs of human brain with multiple sclerosis. Medical Image Analysis. 2011;**15**(2):267-282

[55] Stefano A et al. Robustness of pet radiomics features: Impact of co-registration with mri. Applied Sciences. 2021;**11**(21):10170

[56] Turkbey B, Haider MA. Deep learning-based artificial intelligence applications in prostate MRI: Brief summary. The British Journal of Radiology. 2022;**95**(1131):20210563

[57] Visa S et al. Confusion matrix-based feature selection. MAICS. 2011;**710**(1):120-127

[58] Krstinić D et al. Multi-label classifier performance evaluation with confusion matrix. Computer Science & Information Technology. 2020;**1**:1-14

[59] Hossin M, Sulaiman MN. A review on evaluation metrics for data classification evaluations. International Journal of Data Mining & Knowledge Management Process. 2015;**5**(2):1

[60] Dalianis H, Dalianis H. Evaluation metrics and evaluation. In: Clinical Text Mining: Secondary Use of Electronic Patient Records. 2018. pp. 45-53

[61] Prabhu AM, Choksi TS. Data-driven methods to predict the stability metrics of catalytic nanoparticles. Current Opinion in Chemical Engineering. 2022;**36**:100797

[62] Farahani FV et al. Explainable AI: A review of applications to neuroimaging data. Frontiers in Neuroscience. 2022;**16**:906290

[63] McCradden MD et al. Ethical limitations of algorithmic fairness solutions in health care machine learning. The Lancet Digital Health. 2020;**2**(5):e221-e223

[64] Chlap P et al. A review of medical image data augmentation techniques for deep learning applications. Journal of Medical Imaging and Radiation Oncology. 2021;**65**(5):545-563

[65] Monarch RM. Human-in-the-Loop Machine Learning: Active Learning and Annotation for Human-Centered AI. Simon and Schuster; 2021

[66] Grolinger K, Capretz MAM, Seewald L. Energy Consumption Prediction with Big Data: Balancing Prediction Accuracy and Computational Resources. IEEE; 2016

[67] Chen C et al. Deep learning on computational-resource-limited platforms: A survey. Mobile Information Systems. 2020;**2020**:1-19

### **Chapter 5**

## Resting-State fMRI Advances for Functional Brain Dynamics

*Denis Larrivee*

### **Abstract**

The development of functional magnetic resonance imaging (fMRI) in quiescent brain imaging has revealed that even at rest, brain activity is highly structured, with voxel-to-voxel comparisons consistently demonstrating a suite of resting-state networks (RSNs). Since its initial use, resting-state fMRI (RS-fMRI) has undergone a renaissance in methodological and interpretive advances that have expanded this functional connectivity understanding of brain RSNs. RS-fMRI has benefitted from the technical developments in MRI such as parallel imaging, high-strength magnetic fields, and big data handling capacity, which have enhanced data acquisition speed, spatial resolution, and whole-brain data retrieval, respectively. It has also benefitted from analytical approaches that have yielded insight into RSN causal connectivity and topological features, now being applied to normal and disease states. Increasingly, these new interpretive methods seek to advance understanding of dynamic network changes that give rise to whole brain states and behavior. This review explores the technical outgrowth of RS-fMRI from fMRI and the use of these technical advances to underwrite the current analytical evolution directed toward understanding the role of RSN dynamics in brain functioning.

**Keywords:** resting-state networks, resting-state fMRI, big data analysis, high-strength magnetic imaging, effective connectivity, parallel imaging, independent components analysis

### **1. Introduction**

Resting-state functional magnetic resonance imaging (RS-fMRI) focuses on spontaneous low-frequency fluctuations (<0.1 Hz) in the BOLD signal that occur in the absence of task-related activities. The functional significance of these fluctuations was first recognized by Biswal et al. [1] in a study in which subjects were told not to perform any cognitive, language, or motor tasks. After determining the correlation between the BOLD time course of a seed region identified by bilateral finger tapping and that of all other areas in the brain, the authors found that fluctuations in the left somatosensory cortex were highly correlated with homologous areas in the contralateral hemisphere. This observed correlation led to their conclusion that such "resting networks" manifested the functional connectivity of the brain.

The observation of spontaneous, synchronous fluctuations occurring between brain regions has since stimulated studies that have identified as many as 7 to 17 other stable networks [2–5], although seven are consistently agreed upon. The visual network, for example, is highly consistent across various studies and spans much of the occipital cortex. The importance of this network structure is reflected in the amount of bodily energy devoted toward brain and, presumably, network maintenance. On a relative basis, the energy consumed by the brain is approximately 20% of the total bodily energy consumption, despite a relative mass of only 2%. Of the brain's consumption, some 60 to 80% of the energy is used while "resting," that is, for internal communication and support alone. By contrast, elicited activity consumes less than 1% of the brain's energy resources. Resting networks thus appear to constitute a fundamental organizational architecture for the functional properties of the brain [5].

Because characterization of resting-state networks (RSNs) in the human brain relies on the analysis of temporal fluctuations in the blood oxygenation level-dependent (BOLD) signal, the delineation of RSNs has been directly linked to the ability of fMRI to detect neural activity [6]. Using T2\*-weighted signal intensity and blood oxygenation as the contrast agent [7], fMRI offers a relatively facile procedure for the acquisition of brain activity data [8, 9], one that has been exploited in numerous studies.

Early investigations [10] confirmed fMRI suitability for RSN determinations. The advantages of RS-fMRI in its own right have since become apparent [8], including ease of signal acquisition, minimal requisite effort from patients, and proficiency in identifying functional areas in different patient populations. Recent studies have demonstrated that difficult-to-monitor patients, such as pediatric subjects and patients with disorders of consciousness (coma, vegetative, and minimally conscious states), are able to undergo RS-fMRI. The procedure also offers the capability for functional differentiation when patients perform specific tasks designed to target a single network, such as the motor, language, memory, vision, attention, and sensory networks.

Despite limitations in use of the BOLD signal, especially the dichotomy between the temporal resolution and the temporal scale of the neural activity measured, RS-fMRI studies have continued to expand, propelled not only by technical improvements at the level of signal acquisition—e.g., parallel MRI imaging, data acquisition [11], and computational advances for preprocessing and feature extraction [12]—but also by theoretical and mathematical tools that have amplified the functional interpretations of quiescent and task-based brain activity [13, 14]. One outcome of these developments has been a more precise view of how RSNs are functionally organized and how this in turn modulates communication within the brain, that is, a more dynamic view of information exchange and regulation [15].

The need to address cognitive dysfunction in the light of these more precise and advanced models of brain operation has also benefitted from this work. The default mode network (DMN) has been an early and continuing focus of study for the exploration of alterations during Alzheimer's and other degenerative diseases, which tend to adapt to the structural profile of the network [16]. There is also increasing interest in examining the neurological changes that occur as a result of traumatic, vascular, or oncological influences, which, because of their focal impact, can affect multiple network domains [17, 18]. Stroke, especially, is a leading cause of disability and dependency in adults: in 2010, there were about 11.6 million incident ischemic stroke events worldwide, and by 2030, an additional 3.4 million US adults are projected to have had a stroke.

In light of RSN discoveries, the understanding of how these focal effects influence brain functioning has also evolved. Stroke lesions are now understood not only to result in focal, location-dependent neurological symptoms but also to induce widespread effects in remote regions of both the affected and unaffected hemispheres. Consistent with this, while baseline measures of stroke severity represent the current level of diagnostic and prognostic capability, patients' neurological impairment sometimes exceeds what would be expected from stroke magnitude; growing evidence emphasizes the role of distributed neural networks in the generation of brain states and the control of behavior, which could account for stroke outcomes affecting behavior [18, 19]. Such possibilities implicate a need for still more comprehensive RSN tools that can explore the relationship between whole-scale RSN dynamics and behavior in clinical settings.

This review discusses the evolution in the study of brain RSNs as an outgrowth of the methodological principles that have advanced fMRI imaging of neural brain activity. It covers the advances in technical approaches for data retrieval and processing that have provided the basis for improved network analysis and that build on conceptual insights into functional network associations based on connectivity associations. It also considers both the frequently used data-driven approaches and their contribution to larger-scale explorations of brain dynamics based on causal connectivities and topological variation, now being applied in more global models. Improvements in these latter are likely to offer the prospect of clinical insights that can relate network operation to disease states, such as stroke.

### **2. Modern resting state network methodology**

#### **2.1 Resting-state network detection as an outgrowth of fMRI**

RS-fMRI relies on spontaneous low-frequency fluctuations (<0.1 Hz) in the BOLD signal, which measures the contrast between the diamagnetic effect of oxyhemoglobin and the paramagnetic effect of deoxyhemoglobin [7]. The dependence on the BOLD signal means that RS-fMRI shares the advantages that accrue to fMRI (the ability to monitor neural activity, albeit indirectly) but also the disadvantages that characterize its use. Chief among these limitations is fMRI's temporal resolution, which depends on the hemodynamic response time. Since the hemodynamic response is much slower than the underlying neural processes, the temporal information of spiking events is heavily blurred and typically requires mathematical processing, such as the general linear model [9], or experimental block protocols to infer event-related signal activity. With processing, temporal resolution in the 100 ms range can be achieved, which is still roughly tenfold slower than the neural events being monitored. By contrast, the spatial resolution of fMRI is considerably better, and much superior to that of electrical and magnetic recording techniques, though slightly reduced from that of MRI. Due to the need for fast acquisition of time series information, the spatial resolution of fMRI is limited somewhat by the signal-to-noise ratio (SNR). With single-shot imaging, for example, the acquisition time for fMRI is reduced and the pixel size must be increased to obtain a satisfactory SNR. With a suitable increase in magnetic field strength [20], however, SNR is sufficiently enhanced to yield a pixel size slightly under 1 mm.
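The temporal blurring described above can be made concrete with a small simulation: two brief neural events, convolved with a slow hemodynamic response function (HRF), merge into a single smooth BOLD bump. The double-gamma HRF parameters below are illustrative, not the exact canonical values used by analysis packages.

```python
import numpy as np
from math import gamma

dt = 0.1
t = np.arange(0, 30, dt)          # 30 s window, 0.1 s resolution

def gamma_pdf(t, shape, scale=1.0):
    return (t ** (shape - 1) * np.exp(-t / scale)
            / (gamma(shape) * scale ** shape))

# Illustrative double-gamma HRF: a positive peak near 5 s and a small
# undershoot near 15 s (assumed parameters, not SPM's canonical ones).
hrf = gamma_pdf(t, 6) - 0.1 * gamma_pdf(t, 16)
hrf /= hrf.max()

# Two brief "neural events" at 2 s and 4 s.
neural = np.zeros_like(t)
neural[[int(2 / dt), int(4 / dt)]] = 1.0

# The BOLD signal is approximated as the event train convolved with
# the slow HRF: the two sharp spikes blur into one smooth response.
bold = np.convolve(neural, hrf)[: len(t)] * dt

peak_time = t[np.argmax(bold)]
print(f"BOLD response peaks ~{peak_time:.1f} s after the first event")
```

Events separated by 2 s become indistinguishable in the simulated BOLD trace, which is the blurring that the general linear model or block protocols are used to work around.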

A key factor in the use of RS-fMRI is the measurement of neural activity fluctuations rather than spiking events per se. Neural activity fluctuations (low-frequency and indirectly measured using the BOLD signal) exhibit substantially different time courses from those of neural firing (high-frequency and direct). Accordingly, while the representation of individual, high-frequency spiking events is itself heavily blurred, the slow neural activity fluctuations detected by the BOLD signal display a well-resolved temporal pattern. Measurements of these fluctuations thus provide for accurate functional inferences obtained from voxel-to-voxel comparisons. Together with the high spatial resolution that is an inherent feature of fMRI, RS-fMRI currently constitutes the most powerful tool available for assessing the functional connectivity properties of brain networks.

#### **2.2 Technical advances in RS-fMRI**

#### *2.2.1 General acquisition*

The early detection of RSNs by Biswal et al. [10] used a standard 1.5 T clinical scanner equipped with a three-axis head gradient coil and a shielded birdcage radio frequency coil. A time course of 512 echo-planar images (EPI) from a 10 mm axial slice (flip angle 34°) was obtained every 250 ms, and the respective data sets were band-pass filtered at <0.08 Hz. Using these moderate parameters, the study demonstrated a high degree of temporal correlation in the sensorimotor cortex and in several other regions associated with motor function. Departing from this early protocol, most RS-fMRI scanning now employs 3 Tesla (3 T) field strength to obtain clinically reliable data, together with gradient-echo echo-planar imaging (GE-EPI) sequences [21, 22]. Because RSN acquisition is T2\*-weighted, GE sequencing is typically used in preference to T2-weighted spin echo sequences [23]. Whole-brain coverage is required, with high in-plane resolution (about 2 to 3 mm) and a repetition time (TR) of 2 to 3 s [24], to capture the distributed configuration of RSNs.
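The band-pass step in the Biswal protocol can be sketched with a standard digital filter. The simulated time series, noise components, and filter order below are assumptions for illustration; only the 250 ms sampling interval and the <0.08 Hz cutoff come from the text.

```python
import numpy as np
from scipy.signal import butter, filtfilt

tr = 0.25                    # 250 ms sampling interval (Biswal et al.)
fs = 1.0 / tr                # sampling frequency: 4 Hz
t = np.arange(0, 128, tr)

# Simulated voxel time series: a slow resting-state fluctuation
# (0.03 Hz) plus faster physiological noise and white noise.
rng = np.random.default_rng(1)
slow = np.sin(2 * np.pi * 0.03 * t)
noise = 0.8 * np.sin(2 * np.pi * 0.3 * t) + 0.3 * rng.normal(size=t.size)

# Low-pass Butterworth filter at 0.08 Hz, applied forward and
# backward (filtfilt) so no phase shift is introduced.
b, a = butter(4, 0.08 / (fs / 2), btype="low")
filtered = filtfilt(b, a, slow + noise)

err_raw = np.mean(noise ** 2)
err_filt = np.mean((filtered - slow) ** 2)
print(f"MSE vs. slow component, before: {err_raw:.3f}, after: {err_filt:.3f}")
```

The filtered trace tracks the sub-0.08 Hz component while suppressing the faster noise, which is the purpose of the band-pass step before any connectivity analysis.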

While most RS-fMRI imaging studies rely on these or comparable protocols, current resting-state procedures also have available an arsenal of advances that can supplement the current standard conditions. Among other developments, these include procedures for increasing data acquisition speed [22], enhancing spatial resolution by improving SNR capabilities with high-strength magnetic fields [20], preprocessing corrections for motion artifacts [25], and big data acquisition capability [26].

#### *2.2.2 Rapid data acquisition*

The advent of parallel imaging has stimulated an increasing number of studies seeking to harness the speed of data acquisition made possible by its development [11]. Fast RS-fMRI has been motivated by various objectives. First, increasing data acquisition speed can assist multivariate approaches while retaining a comparable level of sensitivity. For clinical groups for whom RS-fMRI is an increasingly used diagnostic approach, this affords greater interpretive power [27]. The use of rapid data approaches also enables better discretization of the dynamical changes associated with connectivity changes, which are posited to reflect distinct brain states [28–30]. Additionally, rapid RS-fMRI data acquisition can help to identify artifactual contributions, such as cardiac and respiratory rhythms [31, 32]. With low sampling rates, these sources of physiological noise often alias into lower, functionally associated frequency bands [33], making them difficult to resolve, since task time series are unavailable in the resting state [34].
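The aliasing problem can be checked with simple arithmetic: at a typical TR of 2 s the Nyquist limit is 0.25 Hz, so faster cardiac and respiratory rhythms fold back into low frequencies. The specific physiological rates below are illustrative assumptions.

```python
def aliased_frequency(f_signal, f_sample):
    # Frequency at which a component above the Nyquist limit
    # (f_sample / 2) reappears after sampling.
    n = round(f_signal / f_sample)
    return abs(f_signal - n * f_sample)

tr = 2.0                 # assumed RS-fMRI repetition time, in seconds
fs = 1.0 / tr            # sampling rate: 0.5 Hz, Nyquist limit 0.25 Hz

cardiac = aliased_frequency(1.1, fs)      # ~66 bpm heartbeat
respiratory = aliased_frequency(0.3, fs)  # ~18 breaths per minute

print(f"1.1 Hz cardiac signal aliases to {cardiac:.2f} Hz")
print(f"0.3 Hz respiratory signal aliases to {respiratory:.2f} Hz")
```

With these assumed rates, the cardiac rhythm lands right at the edge of the <0.1 Hz resting-state band, showing why slow sampling makes physiological noise hard to separate from RSN fluctuations and why faster acquisition helps.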

#### *Resting-State fMRI Advances for Functional Brain Dynamics DOI: http://dx.doi.org/10.5772/intechopen.113802*

Parallel MR imaging employs multiple receiver coils for fast data acquisition. These capture spatially distinct data sets due to the differential spatial profiles of the receivers. The most widely used configurations are multiband (MB) and 3D echo planar imaging (EPI) [35]. Multiband pulses excite a set number of slices simultaneously, ranging from MB2–4 up to MB8, which are then unfolded. Faster sampling rates can be achieved by reducing the overlap between slices with techniques like GRAPPA or CAIPIRINHA [36–39]. Both of these techniques operate in the frequency domain and are based on the principle that the k-space information at a given point is partially retained in neighboring points of k-space, which can be retrieved during scanning. The CAIPIRINHA technique is an evolution of the GRAPPA technique, in which an acceleration is applied along the ky and kz directions together with an additional phase offset (slice shift) along the kz direction. These modifications yield unique frequency patterns and therefore simpler aliasing to solve. In 3D EPI, the slice direction is embedded with a phase encoding gradient. Each repetition excites the whole imaging volume, requiring a smaller flip angle. The use of the encoding gradient also accelerates data acquisition, which, when used in conjunction with the CAIPIRINHA approach, can achieve still faster retrieval [40].

Another approach used for rapid data retrieval is Magnetic Resonance Encephalography (MREG). This approach derives its speed from traversing k-space with a stack of spiral trajectories [41], which significantly reduces sampling time, enabling whole-brain scans in less than 100 ms. A drawback is the relatively low spatial resolution of about 3 mm. However, the method offers the significant advantage of greatly facilitating the dynamic functional connectivity analyses [42] that require large data sets.

#### *2.2.3 High strength fields in RS-fMRI*

Although most RS-fMRI studies are conducted at 3 T, higher field strengths offer advantages not provided by standard 3 T field strength. Higher field strengths yield correlation coefficients that are consistently higher for resting networks, due to the linear dependency of the SNR on the magnetic field [43, 44]. The higher correlation and enhanced signal combine to improve signal detection and lessen the amount of mathematical processing needed for signal resolution, which means that the spatial characteristics of resting networks can be measured with greater precision than at lower field strengths. The chief advantage of higher fields thus is an improved spatial resolution, which enables a better spatial delineation of network maps.

Additionally, due to the higher SNR, the temporal reliability of mapping is also improved, lending the technique a broader clinical range. For example, RS-fMRI at 7 T has been shown to enhance the temporal reliability of sensorimotor and language network detection in preoperative planning [45] and for mapping habenula resting-state networks involved in anxiety and addiction disorders [46].

On the other hand, the use of higher field strengths has several drawbacks, including longer sampling intervals, inhomogeneous magnetic field properties, and the roughly quadratic growth in specific absorption rate (SAR) with increasing field strength [22]. In particular, the higher spatial resolution requires long repetition times, due to the need to acquire data from the whole brain to accommodate the brain-wide distribution of major RSNs. Additionally, inhomogeneities in the magnetic field affect receive and transmit RF coil sensitivity [47], which requires correction for accurate connective mapping, while SAR constraints on echo planar imaging affect multiband pulses [22].

#### *2.2.4 Big data*

Current increases in study size are generating exceptional amounts of data as explorations of RSNs in brain operation grow ever larger. The Human Connectome Project [48] and the 1000 Functional Connectomes Project [49] have released in excess of 1000 RS-fMRI data sets, for example. Traditional data-driven methods for handling RS-fMRI data, such as independent components analysis and graph theoretic approaches, become unwieldy and lose descriptive power at elevated data levels. The need for suitable techniques to address big data handling is thus currently stimulating the development of new preprocessing methods and analytical adaptations that can accurately reflect network structure and dynamics [50].

Large data sets are typically characterized in three ways: the amount of data, termed Big Volume; the diversity of information, termed Big Variety; and the reliability of the data as a representation of brain functional architecture, termed Big Veracity. Big Volume RSN data sets are characterized by an informational mass exceeding the processing capacity of a single very large computer [50], though they are not so large as whole-genome data sets. Big Variety reflects the diversity of information within a single data set but can also extend to comparisons between two or more data sets, as occurs with multiple imaging data sets or with other information modes such as behavior; an example is the Open Access Series of Imaging Studies (OASIS) project, with more than 500 subjects' worth of data [51]. Big Veracity considers the various data sources that can lessen the ability to extract meaningful network data, including noise, resolution artifacts, data inconsistencies, and acquisition errors.

The initial steps in big data handling entail preprocessing to remove the effects of sources that diminish the ability to assess meaningful data. Several preprocessing steps are becoming more widely accepted, but these can also greatly increase the computational load. The most widely used is the minimal preprocessing pipeline [50]. Its goal is to provide RS-fMRI data for analysis with a minimum level of quality while minimizing the loss of meaningful data. This can be of substantial benefit to researchers lacking access to the high-powered computing needed to preprocess Big Volume data sets. Currently, preprocessing software tools tend to adopt a parallelization approach, with functions running in parallel for tools such as statistical parametric mapping (SPM) [50].
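The parallelization strategy mentioned above can be sketched as follows. The demean-and-normalize step is a hypothetical stand-in for real pipeline stages (motion correction, registration, filtering), and threads are used here only so the example is self-contained; production tools typically parallelize across processes or cluster nodes.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def minimal_preprocess(subject_data):
    # Hypothetical stand-in for a minimal preprocessing step: demean
    # and variance-normalize each voxel time series.
    demeaned = subject_data - subject_data.mean(axis=0)
    return demeaned / (demeaned.std(axis=0) + 1e-8)

rng = np.random.default_rng(4)
# Eight toy "subjects", each 120 timepoints x 1000 voxels.
subjects = [rng.normal(size=(120, 1000)) for _ in range(8)]

# Big Volume handling: run the same pipeline over many subjects in
# parallel workers instead of one at a time.
with ThreadPoolExecutor(max_workers=4) as pool:
    cleaned = list(pool.map(minimal_preprocess, subjects))

print(f"preprocessed {len(cleaned)} subjects in parallel")
```

The design point is that each subject is independent, so the pipeline scales out trivially: adding workers (or cluster nodes) divides the wall-clock preprocessing time without changing any per-subject result.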

Analytical procedures have tended to emphasize graph-theoretic tools that are amenable to statistical mechanical methods. One of the most widely used topological tools is Mapper, developed by Singh et al. [52], which adopts a persistent homology approach. Mapper lends itself to big data analysis because the global organizational structure is divided into a series of overlapping slices. These are reconstructed *via* the use of common points located in the overlapping zones, which serve to orient the reconstructed topology.

### **3. Assessing functional connectivity in RSN data**

Several approaches have been developed to analyze imaging data after preprocessing and band-pass filtering. These include approaches driven by a prior research focus as well as those dictated by the data itself, the so-called data-driven or model-free approaches. Each can be used to delineate the distribution of functional connections that characterize major networks of the brain.

#### **3.1 Regions of interest seed-based analyses**

Functional connectivity determinations extend fMRI measurements of brain activity by providing likelihood estimates of functional associations between neural activity zones [1]. In practice, seed-based analyses identify deviations from independence between distributed and often distant sources of neural activity and a region of interest; that is, statistically significant deviations from independence reveal dependent relationships that functionally connect activity zones. Extending these relationships to multiple zones enables the construction of connectivity maps that become identified with unique networks. Exploiting a seed-based ROI strategy, for instance, one comprehensive study of resting-state fMRI sequences from 1000 healthy adults [53] revealed seven functionally connected networks at coarse resolution and 17 at fine resolution. The simplicity and interpretability of the ROI technique make it procedurally facile and a frequently adopted approach. However, the method relies entirely on user-defined ROIs and so is limited for network discovery by its a priori, selected criteria.
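The logic of a seed-based map can be sketched in a few lines: correlate the seed's time series against every other voxel (synthetic data stands in for preprocessed BOLD signals here; the voxel counts and noise levels are arbitrary assumptions).

```python
import numpy as np

def seed_correlation_map(data, seed_idx):
    """Correlate a seed voxel's time series with every voxel.

    data : (n_voxels, n_timepoints) array of band-passed BOLD signals.
    Returns an (n_voxels,) array of Pearson correlations with the seed.
    """
    seed = data[seed_idx]
    dc = data - data.mean(axis=1, keepdims=True)   # demean each voxel
    sc = seed - seed.mean()
    num = dc @ sc
    den = np.sqrt((dc ** 2).sum(axis=1) * (sc ** 2).sum())
    return num / den

rng = np.random.default_rng(0)
shared = rng.standard_normal(200)                       # common fluctuation
net = shared + 0.3 * rng.standard_normal((10, 200))     # 10 "connected" voxels
noise = rng.standard_normal((10, 200))                  # 10 unrelated voxels
data = np.vstack([net, noise])

rmap = seed_correlation_map(data, seed_idx=0)
```

Voxels sharing the seed's fluctuation show high correlations, while unrelated voxels hover near zero; thresholding `rmap` for statistical significance would yield the connectivity map described above.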

#### **3.2 Independent components analysis (ICA)**

In light of this caveat, coupled with the evolution of mathematical models and improved computational capabilities, there has been a paradigm shift from that of imposing initial conditions, that is, seed-based ROIs, on the data to that of extracting patterns of brain activity directly from the raw time series. The main example of this approach is independent components analysis. In this approach, the time series signal is assumed to be due to multiple spatio-temporal processes that are statistically independent of each other. By extracting the independent signals, various time courses of specific brain regions can be constructed and grouped into maps representative of their spatial distribution.

Independent components analysis (ICA) aims at overcoming the selective bias toward priors contained in seed-based approaches by relying on direct data-driven interrogation for assessment of functional connectivity [54]. To do so, ICA posits an inherent representation of independent factors in the captured time series data. Its goal is to decompose the vector representation of these factors, *Z,* as a product of a combinatorial matrix and the spatially independent components where:

$$Z = NC + E = \sum_{j=1}^{J} n_j c_j + E, \tag{1}$$

Here, *N* is a *T* × *J* combinatorial matrix with columns *n*j, and *C* is the *J* × *Nv* matrix of independent components with rows *c*j, where each *c*j corresponds to component *j*, for a cumulative total of *J* independent components. These components represent networks subserving various functions. The elements of the matrix *E* are independent, normally distributed noise contributions. It is presumed that the component maps *c*j, *j* = 1, ..., *J*, contain overlapping and statistically dependent signals, but that the individual component map distributions are independent. Each independent component *c*j is a vector of size *Nv* whose entries represent the relative amount by which a given voxel is modulated by the activation of that component. Due to the large volumes of data retrieved during acquisition, various algorithms have been developed to estimate the components, for example, the independent components analysis with a reconstruction cost (RICA) algorithm [55].

#### **3.3 Graph theory analysis**

Another approach to the interpretation of RS-fMRI datasets employs graph theory, where activity sources comprise nodes and connectivity defines the edges that link these nodes [56]. Unlike ICA, which focuses chiefly on the strength of correlation between different domains, graph theory characterizes the features of network topology. The graph theory approach describes the interaction between nodes by means of such graph parameters as average path length, clustering coefficients, node degree, centrality measures, and level of modularity. Graph theory is thus a promising technique for exploring the integration and segregation of networks in the brain. Graph metrics like average path length, for example, reveal the extent of integration of brain networks. Centrality, on the other hand, examines whether a particular node has a central or leading role in information segregation *via* its propagation to other nodes in a network.
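These parameters are readily computed with standard graph libraries. The sketch below builds a toy two-module network (an illustrative graph, not empirical connectivity data) and extracts the metrics named above with NetworkX:

```python
import networkx as nx

# Toy functional network: two tightly connected modules plus a bridge node
G = nx.Graph()
G.add_edges_from([(0, 1), (0, 2), (1, 2),          # module A
                  (3, 4), (3, 5), (4, 5),          # module B
                  (2, 6), (6, 3)])                 # bridge via node 6

avg_path = nx.average_shortest_path_length(G)      # integration
clustering = nx.average_clustering(G)              # local segregation
degree = dict(G.degree())                          # node connectivity
centrality = nx.betweenness_centrality(G)          # hub-like roles
```

In this toy graph, the bridge node 6 carries all inter-module shortest paths and so has the highest betweenness centrality, the kind of "central role in information propagation" described above.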

Increasingly, modularity assessments have been used to characterize functional adjustments occurring during behavior, network perturbations, or pathologies that affect network function, and the observed values have been shown to undergo significant alteration in pathologies such as stroke [57] and psychiatric disease [58–60]. Modularity assesses the presence of functionally independent units, or modules, that compose resting-state networks. These are defined as clusters of nodes displaying greater functional connectivity within the group than with the rest of the brain. During task-specific activity, such clusters are reallocated, implying that the networks themselves are reorganized topologically [61, 62]. Their flexibility suggests that they operate as independent functional entities [63–65], inducing specific behaviors *via* their reallocation [66, 67].

In practice, modularity analysis [63] describes the difference between the network configuration at rest and its reconfiguration during behaviorally altered conditions by means of a quality function (Q) [68] that is maximized over candidate modular decompositions. As expressed by Q, the modularity index provides a measure of the degree of modular segregation [69]: Q is close to one when there are few edges between modules and high density inside modules (that is, module segregation is present), and Q is close to zero when the number of connections between modules is comparable to that of a random network (indicating an absence of segregation).
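The behavior of Q can be checked on toy graphs: a network of two dense modules joined by a single edge should yield Q well above zero, while a fully connected graph with no modular structure should yield Q near zero. NetworkX's community utilities sketch this; the graphs are illustrative assumptions.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Segregated network: two dense modules joined by a single edge
G_seg = nx.Graph()
G_seg.add_edges_from([(0, 1), (0, 2), (1, 2),
                      (3, 4), (3, 5), (4, 5),
                      (2, 3)])
comms_seg = greedy_modularity_communities(G_seg)
Q_seg = modularity(G_seg, comms_seg)       # high: few inter-module edges

# Unsegregated network: every node connected to every other
G_rand = nx.complete_graph(6)
comms_rand = greedy_modularity_communities(G_rand)
Q_rand = modularity(G_rand, comms_rand)    # near zero: no segregation
```

Greedy modularity maximization is only one of several heuristics for optimizing Q; Louvain-style methods are common alternatives in the connectivity literature.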

#### **3.4 RSN functional connectivity maps**

The first demonstration of correlated spontaneous fluctuations explored the motor cortex [1]. Since this initial demonstration, multiple other resting networks have been discovered. Functional connectivity determinations have shown that these networks can be reliably reproduced [53], although much variation in network identification depends on the degree of resolution achieved during scanning. Major resting networks, according to Yeo's seven-network parcellation atlas [4, 53], are listed in **Table 1** and classed broadly as belonging to either sensorimotor or association groups. While greater numbers of networks can be detected at finer resolution, e.g., the 17-network estimate of Yeo et al. [53], the 17-network determination generally fractionates the seven major networks into smaller components.


#### **Table 1.**

*Major resting state networks of the human brain classified according to association or sensory-motor functions. Network identification follows that of Yeo et al. [53].*

### **4. RSN dynamics and brain states**

#### **4.1 Assessing sources of connectivity modulation**

While methodological advances in RS-fMRI have made significant strides in unveiling a macro-scale, network-based architecture for the brain, how brain functions emerge from network connectivity remains uncertain. Brain states like those of sleep or altered states of consciousness undergo continually changing dynamics involving whole-brain networks. These dynamics are regularly modulated by internal fluctuations in activity that can affect sensory afferent or motor efferent activity [70, 71] and alter spatiotemporal patterning [72]. The ubiquity of these influences reveals that brain dynamics involve causal influences affecting network connectivity, which can be detected with BOLD fMRI [73]. Accordingly, recent developments in RS-fMRI seek to build on functional connectivity determinations by relating causal sources of connectivity changes to brain states and behavior. Network descriptions of these causal relations have been termed effective connectivity.

#### *4.1.1 Effective connectivity*

Effective connectivity presumes that efficient causes precede their effects and that these are revealed in the time domain. Because the functional coupling among neuronal populations changes as a function of processing demands [74], it is inherently context-dependent and dynamic. Accordingly, effective connectivity has been used to clarify sources of brain activity and the directionality of their influence. Inferences of causality are used to interpret the mechanisms that underlie neuronal dynamics and assist studies of how neuronal populations are functionally integrated [75]. In practice, models of effective connectivity rely on fMRI data to assess whether functional coupling is modulated under task-based manipulations. The most common analytical methods include structural equation modeling (SEM), multivariate autoregressive (MAR) models, Granger causal analysis, and dynamic causal modeling (DCM).

DCM is perhaps the most widely employed approach for assessing effective connectivity and is based on an input-output model for a system of n interacting brain regions [76]. In this method, the activity of a neuronal population from each region is represented by a single state variable, which is perturbed by controlled inputs. DCM models report serial activity changes relative to the system's resting state, represented by the system state vector (mathematical approximations of the system typically employ a Taylor series expansion to describe its non-linear functions). Using these models, it is possible to explore the dynamic character of brain activity under normal and pathological conditions. Unlike other approaches, DCM does not utilize time series data directly but combines a proposed model of the unknown neuronal dynamics with a forward model that translates neuronal states into output measurements. The description of the neuronal population activity employs a bilinear differential equation, which is combined with the forward model.
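A minimal numerical sketch of the bilinear state equation at the heart of DCM is given below, with illustrative coupling matrices and a simple Euler integrator; it deliberately omits DCM's hemodynamic forward model and Bayesian model inversion, which a real analysis requires.

```python
import numpy as np

def dcm_bilinear(x0, A, B, C, u, dt=0.01):
    """Integrate the bilinear neuronal state equation used in DCM:

        dx/dt = (A + sum_j u_j(t) B_j) x + C u(t)

    A : intrinsic coupling, B : list of input-dependent modulations,
    C : direct input weights, u : (n_steps, n_inputs) input series.
    """
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for ut in u:
        Jt = A + sum(ut[j] * B[j] for j in range(len(B)))
        x = x + dt * (Jt @ x + C @ ut)     # forward Euler step
        traj.append(x.copy())
    return np.array(traj)

# Two-region toy system with one sustained input (illustrative values)
A = np.array([[-1.0, 0.2], [0.0, -1.0]])   # stable intrinsic dynamics
B = [np.array([[0.0, 0.5], [0.0, 0.0]])]   # input strengthens 2 -> 1 coupling
C = np.array([[0.0], [1.0]])               # input drives region 2 directly
u = np.ones((500, 1))
traj = dcm_bilinear([0.0, 0.0], A, B, C, u)
```

With the sustained input switched on, the system settles toward a fixed point in which region 2 is driven directly and region 1 responds through the input-enhanced coupling.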

Since the inception of DCM, various methodological changes have extended the approach [77, 78]. Recent, and more complex, models have included simulations from various prominent neuron classes, such as deep pyramidal cells and spiny stellate excitatory interneurons, that contribute to the neuronal state [79]. Because of the complexity of these neuronal models, more general models have attempted to overcome their perceived difficulties in data fitting. One approach premises neural activity on generalized spiking described by Wilson-Cowan spiking equations to satisfy a wider range of applications. In this adaptation, the Wilson-Cowan equations are used to describe the evolution of excitatory and inhibitory activity in a population of neurons, instead of the bilinear equations used for both single- and two-state DCM [80].
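The Wilson-Cowan equations themselves are compact enough to sketch directly; the weights, time constants, and drive below are illustrative textbook-style values, not parameters from the DCM variant cited above.

```python
import numpy as np

def wilson_cowan(steps=5000, dt=0.001, P=1.25, Q=0.0):
    """Euler integration of a standard Wilson-Cowan E/I population pair:

        tau_e dE/dt = -E + S(w_ee*E - w_ei*I + P)
        tau_i dI/dt = -I + S(w_ie*E - w_ii*I + Q)
    """
    S = lambda x: 1.0 / (1.0 + np.exp(-x))           # sigmoid response
    w_ee, w_ei, w_ie, w_ii = 12.0, 10.0, 10.0, 2.0   # illustrative weights
    tau_e, tau_i = 0.01, 0.02                        # time constants (s)
    E, I = 0.1, 0.1
    Es, Is = [E], [I]
    for _ in range(steps):
        E += dt / tau_e * (-E + S(w_ee * E - w_ei * I + P))
        I += dt / tau_i * (-I + S(w_ie * E - w_ii * I + Q))
        Es.append(E)
        Is.append(I)
    return np.array(Es), np.array(Is)

Es, Is = wilson_cowan()
```

Depending on the weights and external drive P, this pair of populations can settle to a steady state or oscillate, which is what makes the formulation attractive for a wider range of dynamical regimes than the bilinear equations.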

In a novel variant of DCM, effective connectivity analyses are conducted for large-scale or even whole-brain networks [81, 82]. This approach modifies the original DCM procedure in several ways: (i) translation of the equations of state from the time to the frequency domain using the Fourier transform, (ii) application of a mean field approximation across regions, and (iii) specification of conjugate priors on neuronal input. Choosing appropriate priors yields a generative model that can be used for making inferences about changes in directed connection strengths and inputs.

#### *4.1.2 Granger causal analysis*

Like DCM, Granger causal analysis provides a statistical tool for assessing directed functional connections from time series data, based on the concept that causes precede and induce their outcomes [13]. The method employs linear vector autoregressive models of time series neural data, in which a variable at a specific time point is modeled as a linearly weighted sum of its own past and that of a set of other variables, each represented by a vector. Minimizing estimation errors yields the set of optimal connection weights. Variable Y is said to be caused by variable X if the time series of X provides unique information, not present in the prior Y series, that helps to predict the future Y series [83].
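A minimal version of this test can be written directly from the definition, comparing the residual variance of an autoregressive model of Y with and without the past of X; the synthetic data and fixed lag are illustrative, and a real analysis would add lag selection and significance testing (e.g., an F-test).

```python
import numpy as np

def granger_improvement(x, y, lag=2):
    """Ratio var(restricted) / var(full) for predicting y[t].

    The restricted model uses only y's own past; the full model adds x's
    past. Ratios well above 1 suggest x Granger-causes y at this lag.
    """
    rows_r, rows_f, target = [], [], []
    for t in range(lag, len(y)):
        rows_r.append(y[t - lag:t])
        rows_f.append(np.concatenate([y[t - lag:t], x[t - lag:t]]))
        target.append(y[t])
    target = np.array(target)
    Xr, Xf = np.array(rows_r), np.array(rows_f)
    beta_r, *_ = np.linalg.lstsq(Xr, target, rcond=None)
    beta_f, *_ = np.linalg.lstsq(Xf, target, rcond=None)
    return np.var(target - Xr @ beta_r) / np.var(target - Xf @ beta_f)

rng = np.random.default_rng(2)
x = rng.standard_normal(1000)
y = np.zeros(1000)
for t in range(1, 1000):                 # y is driven by x one step earlier
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

ratio_xy = granger_improvement(x, y)     # x -> y: large variance reduction
ratio_yx = granger_improvement(y, x)     # y -> x: essentially none
```

The asymmetry between the two ratios is the directed signature Granger analysis exploits: knowing x's past sharply improves prediction of y, but not the reverse.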

#### **4.2 Macroscale brain organization and RSN dynamics**

In principle, inferences of causality from directional connectivity determinations can be extended to brain-wide neuronal dynamics. Empirical studies from RS-fMRI, for example, show that RSNs are differentiated on the basis of their metastability and synchrony [84]. These and similar observations have stimulated models of brain function and behavior that predict that the human brain at rest operates at maximum metastability, that is, in a state of maximal network switching. Under such conditions, information flow can be said to be guided by temporally ordered sequences of metastable states [85, 86]. The existence of RSN properties like metastability thus implicates directed connectivity changes in the construction of brain states, which emerge from whole-brain RSN dynamics and effective connectivity [87] in health, disease, or trauma. The methodological question that arises is that of generating a descriptive approach relating functional neuroimaging data to whole-brain dynamics. Recent attempts to address this question have adopted two approaches.

#### *4.2.1 Recurrence structure analysis*

The first employs a BOLD, data-driven, computational method that leverages *recurrence structure analysis* (RSA), a mathematical procedure derived from Poincaré's recurrence theorem [15]. The Poincaré theorem states that trajectories of a complex dynamical system visit certain regions of their available state space more frequently over the course of time than others. This "recurrent" behavior can be described by the *recurrence plot method* (RP), which allows a matrix-based visualization of recurrent states. The latter are mapped into state-space trajectories described by symbolic sequences [88]. Combining the structure-function modules of a brain hierarchical atlas with the optimized recurrent structures yields resting-state networks presumed to reflect time-dependent, recurrent cognitive states.
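At its core, a recurrence plot is a thresholded pairwise-distance matrix. The sketch below computes one for a periodic one-dimensional trajectory, a stand-in for a projected BOLD state-space trajectory; the signal and threshold are illustrative assumptions.

```python
import numpy as np

def recurrence_matrix(ts, eps):
    """Binary recurrence plot: R[i, j] = 1 when states i and j lie within eps."""
    d = np.abs(ts[:, None] - ts[None, :])   # pairwise distances (1-D state)
    return (d < eps).astype(int)

t = np.linspace(0, 8 * np.pi, 400)
ts = np.sin(t)                              # periodic "state" trajectory
R = recurrence_matrix(ts, eps=0.1)
```

For a periodic trajectory, R shows diagonal stripes spaced by the period; in RSA these recurrent regions are the raw material from which symbolic sequences of metastable states are built.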

#### *4.2.2 Landscape of informational structures*

The second approach posits the governance of RSN dynamics by a ground-state global attractor. This global ground state is mathematically described as a stable stationary solution representing a point of maximal stability in a landscape of stationary points (nodes) that information flows toward or away from [89]. Similar to whole-brain models, the description of this landscape consists of coupling local dynamics with anatomical brain connectivity. The stability and instability directions of each stationary point are characterized by non-stationary solutions entering or leaving these points, respectively. This provides a framework in which coupled systems of differential equations describe individual brain regions (nodes) in terms of other brain regions and with respect to the global ground state; hence, there exists a global structure linking all stationary points. Accordingly, such points can be ordered by their level of attraction or stability and characterized by various topological measures, for example, number of energy levels (NoEL) or sensitivity to perturbations (criticality) [90], based on connectivity data. This theoretical framework has been shown to successfully account for the highly structured dynamics arising from spontaneous brain activity in RSNs [91].

### **5. Resting state networks in disease**

#### **5.1 RS-fMRI studies in clinical diagnosis**

Given the utility of RSNs for understanding the brain's functional organization in healthy individuals, RS-fMRI has also been exploited for determining how the brain's organization is modified as the result of trauma, degeneration, or disease [92]. A majority of RS-fMRI studies have consisted of comparisons of resting-state functional connectivity patterns between groups of normal subjects and those with neurological or psychiatric impairments [93], in part due to the relative ease with which these studies can be conducted. While changes in the correlation patterns of spontaneous activity have been reported in many cases, the consistency of the correlations has varied significantly with the disease type. Studies of the default mode network in Alzheimer's disease, for example, generally yield consistent patterning, whereas network patterns in other diseases, for example, schizophrenia, exhibit wide variation.

Underlying mechanisms and even diagnostic markers of these dysfunctions are in many cases unknown; this is, moreover, a hindrance to assessing how functional network changes modify behavior. The obstacle could be partially surmounted by knowing how focal perturbations impact functional and task-based connectivity. Supporting this, neuroimaging studies show that localized changes in neural activity result in distinct activity and functional connectivity changes within and between networks [93, 94]. Mapping of whole-brain effects on RSNs due to local trauma may therefore reveal how RSNs are globally reorganized following these insults. For example, the characterization of large-scale deregulations in functional connectivity may emerge from studies of selective trauma in highly interconnected core regions [95].

#### **5.2 RS-fMRI tools for stroke-induced changes in brain organization**

With this as an objective, RS-fMRI technical and analytical procedures have been exploited to interrogate RSN-based changes that occur in stroke. By definition, stroke is a clinical syndrome characterized as an acute, focal neurological deficit resulting from vascular injury (e.g., infarction, hemorrhage) within the central nervous system [96]. It is itself a major cause of death and disability across the globe: in adults worldwide, stroke is the chief cause of acquired physical disability and the second leading cause of mortality in middle- to high-income countries. Because the disruption is usually sudden, stroke's effects on neural networks can be directly attributed to the focal impairment, rather than to more widely extended and long-term processes, such as degeneration. Stroke frequently results from ischemia, for instance, which deprives adjacent cerebral tissue of its blood supply [17].

Assessing the spatial locus of a stroke-based lesion requires knowledge of the brain vasculature, which assists in co-localizing fiber pathways and structural connectivity. The anterior circulation, for example, includes regions supplied by the anterior and middle cerebral arteries as well as the ophthalmic artery. Strokes occurring within the ophthalmic artery territory lead to monocular loss of vision. Proximal occlusion of the middle cerebral artery, on the other hand, can cause contralateral hemiparesis and hemi-sensory loss, visual field defects, and/or hemineglect [96].

#### **5.3 Connectivity determinations in stroke diagnosis**

As mentioned, stroke outcomes involve not only focal disturbances at affected sites, that is, the set of regions directly damaged or indirectly affected by the stroke, but also disturbances at more distally located regions that are embedded within the larger functional network, which is in dynamic balance with other networks of the brain. Hence, resting-state measures of connectivity can be expected to reflect a network organization more distributed than the lesion site alone, seen in correspondingly spatially extended connectivity changes.

Consistent with this, global studies of focal infarcts affecting motor behaviors characteristically display a decrease in functional connectivity involving interhemispheric homologous sensory and motor areas, which is correlated with the degree of behavioral impairment. Reduced functional connectivity between hemispheres is also seen in rodent models of stroke [97], corresponding with decreases in motor proficiency [98]. In the first few days after stroke, this involves the connectivity between the ipsilesional primary sensorimotor cortex and its contralateral homologs [99]. Similarly, RS-fMRI of the sensorimotor network in humans, including the M1, SMA, secondary somatosensory cortex, cerebellum, putamen, and thalamus regions, reveals a direct correlation between motor performance and the degree of M1 interhemispheric connectivity [100]. Structural observations are consistent with this and show that the integrity of corticospinal fibers correlates with the reduction in interhemispheric M1 resting-state connectivity [99, 101]; RSN studies of effective connectivity with DCM further show that post-stroke excitatory, ipsilesional influences from premotor areas to M1 are also reduced, decreasing M1 output for paretic hand movements [17]. Ipsilesional inhibitory influences from M1 to the contralesional M1 are also attenuated. Together, these results implicate a reduction in inhibitory interhemispheric control of M1 homologs in paretic motor movements and excitatory intrahemispheric effects from premotor areas to M1. Importantly, they also reveal the interpretive utility of combining RS-fMRI effective and functional connectivity determinations in network assessments.

#### **5.4 Assessing topological changes in stroke**

Functional determinations assist in the identification of resting networks based on characterization of connectivity number, direction, and weight. Changes in such parameters help to assess the degree to which a network has retained its functional associations; that is, the degree to which it is intact. On the other hand, they do not assess connectivity topography, which reflects how the organization of the network influences information flow and must instead be assessed with graph theoretical parameters like centrality or modularity. Recent evidence in animal models notably indicates that network topology is likely to change following stroke [98]. In a mouse model, total functional connectivity increased in comparison with normal controls. Since interhemispheric connectivity is reduced in most stroke subtypes, this suggests that intrahemispheric functional connectivity is cumulatively increased, generating a new organizational network structure within the affected hemisphere; that is, a transference of interhemispheric callosal connections to intrahemispheric targets.

Assessments of network reorganization in stroke patients have accordingly been pursued, typically employing graph-theoretic modular analysis. Modular analysis of task-based studies in normal subjects, for example, shows a high level of reorganization of nodes in the frontal and temporal cortices relative to the resting state.

Moreover, as mentioned, complex dynamics occur between networks during task performance, which involves the reallocation of network modules. Graph theoretic analysis shows that this entails the switching of network topologies between the frontoparietal, ventral attention, and the dorsal attention areas [63–65]. In like manner, modularity determinations can be expected to show stroke-induced reorganization.

Existing studies reveal, in fact, a low-dimensional architecture following stroke [57]. The significance of this network reorganization is as yet undetermined. One possibility is that decreased modularity reflects a default strategy for efficient behavioral responses in a complex environment, which is needed to reduce the degrees of freedom in movement [101]. In healthy individuals, a higher modularity provides for exploration of varied trajectories, that is, there is a maximizing of degrees of freedom, which needs to be reduced to provide stability for tasking. In stroke, this exploratory ability is lost, together with a corresponding loss in modularity. The reduction in modularity would thus imply a reduced ability to process information effectively [57].

Methodologically, assessing this possibility would require RS-fMRI procedures capable of whole-brain modeling to determine whether and which topographical adjustments occur on a global scale [90]. This is likely to require a synergy of ongoing developments that merge enhanced signal recognition and data acquisition, big data processing pipelines, and whole brain reconstruction [22, 50, 90], suggesting that advanced clinical analysis with RS-fMRI remains at an early, but promising stage.

### **6. Conclusion**

Resting-state fMRI has enabled the identification of brain networks critical to how humans interact with, perceive, and process environmental and internal stimuli. Much of the success of this discovery can be attributed to the synergy between the technical capabilities of fMRI and the low-frequency activity characterizing RSNs. RS-fMRI has benefitted from a spectrum of technical advances in fMRI that have occurred since the initial discovery of RSNs, including improved data-gathering capacity, processing, and handling. The enhanced reliability of RSN detection made possible by these advances has underwritten increasingly powerful interpretive tools that are clarifying the role and structure of brain networks in organizing and executing global brain function. These insights into global brain events have in turn revealed areas where new technical advances, like big data processing and whole-brain modeling, are needed to interrogate not only resting-state connectivity associations but also the dynamic variations in these associations that occur during brain behavior. While the use of these tools is currently limited to the research laboratory, their future potential for clinical use warrants the current expansion in technical development that will make possible the diagnosis of brain states.

*Resting-State fMRI Advances for Functional Brain Dynamics DOI: http://dx.doi.org/10.5772/intechopen.113802*

## **Author details**

Denis Larrivee1,2

1 Mind and Brain Institute, University of Navarra Medical School, Spain

2 Loyola University Chicago, USA

\*Address all correspondence to: sallar1@aol.com

© 2023 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### **References**

[1] Biswal B, Yetkin FZ, Haughton VM, Hyde JS. Functional connectivity in the motor cortex of resting human brain using echo-planar MRI. Magnetic Resonance Medicine. 1995;**34**(4):537-541. DOI: 10.1002/mrm.1910340409

[2] van den Heuvel MP, Hilleke E, Hulshoff P. Exploring the brain network: A review on resting-state fMRI functional connectivity. European Neuropsychopharmacology. 2010;**20**:519-534

[3] Damoiseaux SA, Rombouts RB, Barkhof F, Beckman CF. Consistent resting-state networks across healthy subjects. National Academy of Sciences of the United States of America. 2006;**103**(37):13848-13853. DOI: 10.1073

[4] Seitzman BA, Snyder AZ, Leuthardt EC, Shimony JS. The state of resting state networks. Topics in Magnetic Resonance Imaging. 2019;**28**(4):189-196. DOI: 10.1097/ RMR.0000000000000214

[5] Smitha KA, Akhil RK, Arun KM, et al. Resting state fMRI: A review on methods in resting state connectivity analysis and resting state networks. Neuroradiology Journal. 2017;**30**(4):305-317. DOI: 10.1177/1971400917697342

[6] Ogawa S, Tank DW, Menon R, et al. Intrinsic signal changes accompanying sensory stimulation: Functional brain mapping with magnetic resonance imaging. Proceedings of the National Academy of Science USA. 1992;**89**:5951-5955

[7] Bandettini P. The spatial, temporal, and interpretive limits of functional MRI. In: Davis K, Charney D, Coyle JT, Nemeroff C, editors. Neuropsychopharmacology: The Fifth Generation of Progress. Philadelphia: Lippincott, Williams, and Wilkins; 2002

[8] Kazan SM, Weiskopf N. fMRI Methods. Encyclopedia of Spectroscopy and Spectrometry (Third Edition). 2017:670-677. DOI: 10.1016/ B978-0-12-409547-2.12109-2

[9] Loued-Khenissi L, Doll O, Preuschoff K. An overview of functional magnetic resonance imaging techniques for organizational research. Organizational Research Methods. 2019;**22**(1):17-45

[10] Biswal B. Resting state fMRI: A personal history. NeuroImage. 2012;**62**(2):938-944

[11] Deshmane ME, Gulani V, Griswold MA, Seiberlich N. Parallel MR imaging. Journal of Magnetic Resonance Imaging. 2012;**36**(1):55-72. DOI: 10.1002/ jmri.23639

[12] Vadmal V, Junno G, Badye C, et al. MRI image analysis methods and applications. Neuro-Oncology Advances. 2020;**2**(1):1-13

[13] Seth AK, Barrett AB, Barnett L. Granger causality analysis in neuroscience and neuroimaging. The Journal of Neuroscience. 2015;**35**(8):3293-3297

[14] Friston KJ, Harrison L, Penny W. Dynamic causal modelling. NeuroImage. 2003;**19**:1273-1302

[15] Beim Graben P, Jimenez-Marin A, Diez I, Cortes JM, et al. Metastable resting state brain dynamics. Frontiers in Computational Neuroscience. 2019;**13**:62. DOI: 10.3389/fncom.2019.00062


[16] Fox MD, Greicius M. Clinical applications of resting state functional connectivity. Frontiers in Systems Neuroscience. 2010;**4**(19):1

[17] Rehme AK, Grefkes C. Cerebral network disorders after stroke: Evidence from imaging-based connectivity analyses of active and resting brain states in humans. Journal of Physiology. 2013;**591**(1):17-31

[18] Siegel JS, Ramsey LE, Snyder AZ, et al. Disruptions of network connectivity predict impairment in multiple behavioral domains after stroke. Proceedings of the National Academy of Sciences. 2016:E4367-E4376. DOI: 10.1073/pnas.1521083113

[19] Voss MW, Soto C, Yoo S, et al. Exercise and hippocampal memory systems. Trends in Cognitive Science. 2019;**23**(4):318-333. DOI: 10.1016/j. tics.2019.01.006

[20] Vizioli L, Moeller S, Dowdle L, et al. Lowering the thermal noise barrier in functional brain mapping with magnetic resonance imaging. Nature Communications. 2021;**12**:5181. DOI: 10.1038/s41467-021-25431-8

[21] Smith SM, Vidaurre D, Beckmann CF, et al. Functional connectomics from resting-state fMRI. Trends in Cognitive Science. 2013;**17**:666-682. DOI: 10.1016/j.tics.2013.09.016

[22] Raimondo L, Ĺcaro AF, Jurjen HO, et al. Advances in resting state fMRI acquisitions for functional connectomics. NeuroImage. 2021;**243**:118503

[23] Yacoub E, Van De Moortele PF, Shmuel A, et al. Signal and noise characteristics of Hahn SE and GE BOLD fMRI at 7 T in humans. NeuroImage. 2005;**24**:738-750. DOI: 10.1016/j.neuroimage

[24] Van Dijk KRA, Hedden T, Venkataraman A, et al. Intrinsic functional connectivity as a tool for human connectomics: Theory, properties, and optimization. Journal of Neurophysiology. 2010;**103**:297-321. DOI: 10.1152/jn.00783.2009

[25] Maknojia S, Churchill NW, Schweizer TA, Graham SJ. Resting state fMRI: Going through the motions. Frontiers in Neuroscience. 2019;**13**:825. DOI: 10.3389/fnins.2019.00825

[26] Yan CG, Craddock RC, Zuo XN, et al. Standardizing the intrinsic brain: Towards robust measurement of inter-individual variation in 1000 functional connectomes. NeuroImage. 2013;**80**:246-262

[27] Demetriou L, Kowalczyk OS, Tyson G, et al. A comprehensive evaluation of increasing temporal resolution with multiband-accelerated protocols and effects on statistical outcome measures in fMRI. NeuroImage. 2018;**176**:404-416. DOI: 10.1016/j.neuroimage.2018.05.011

[28] Preti MG, Bolton TA, Van De Ville D. The dynamic functional connectome: State-of-the-art and perspectives. NeuroImage. 2016:41-54. DOI: 10.1016/j.neuroimage.2016.12.061

[29] Zalesky A, Fornito A, Cocchi L, et al. Time-resolved resting-state brain networks. Proceedings of the National Academy of Sciences USA. 2014;**111**:10341-10346. DOI: 10.1073/pnas.1400181111

[30] Aedo-Jury F, Schwalm M, Hamzehpour L, Stroh A. Brain states govern the spatio-temporal dynamics of resting-state functional connectivity. eLife. 2020;**9**:e53186. DOI: 10.7554/eLife.53186

[31] Jacobs HI, Priovoulos N, Poser BA, et al. Dynamic behavior of the locus coeruleus during arousal-related memory processing in a multi-modal 7T fMRI paradigm. eLife. 2020;**9**:e52059. DOI: 10.7554/eLife.52059

[32] Wu GR, Marinazzo D. Sensitivity of the resting-state haemodynamic response function estimation to autonomic nervous system fluctuations. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 2016;**374**. DOI: 10.1098/rsta.2015.0190

[33] Chen JE, Polimeni JR, Bollmann S, Glover GH. On the analysis of rapidly sampled fMRI data. NeuroImage. 2019;**188**:807-820. DOI: 10.1016/j.neuroimage.2019.02.008

[34] Huotari N, Raitamaa L, Helakari H, et al. Sampling rate effects on resting state fMRI metrics. Frontiers in Neuroscience. 2019;**13**:279. DOI: 10.3389/fnins.2019.00279

[35] Barth M, Breuer F, Koopmans PJ, et al. Simultaneous multislice (SMS) imaging techniques. Magnetic Resonance in Medicine. 2016;**75**:63-81. DOI: 10.1002/mrm.25897

[36] Breuer FA, Blaimer M, Heidemann RM, et al. Controlled aliasing in parallel imaging results in higher acceleration (CAIPIRINHA) for multi-slice imaging. Magnetic Resonance in Medicine. 2005;**53**:684-691. DOI: 10.1002/mrm.20401

[37] Setsompop K, Gagoski BA, Polimeni JR, et al. Blipped-controlled aliasing in parallel imaging for simultaneous multislice echo planar imaging with reduced g-factor penalty. Magnetic Resonance in Medicine. 2012;**67**:1210-1224. DOI: 10.1002/mrm.23097

[38] Hamilton J, Franson D, Seiberlich N. Recent advances in parallel imaging for MRI. Progress in Nuclear Magnetic Resonance Spectroscopy. 2017;**101**:71-95. DOI: 10.1016/j.pnmrs.2017.04.002

[39] Calogero C. Recent advances in parallel imaging for MRI: WAVE-CAIPI technique. Journal of Advanced Health Care. 2022;**4**(1)

[40] Batson MA, Petridou N, Klomp DW, et al. Single session imaging of cerebellum at 7 tesla: Obtaining structure and function of multiple motor subsystems in individual subjects. PLoS One. 2015;**10**:e0134933. DOI: 10.1371/journal.pone.0134933

[41] Zahneisen B, Hugger T, Lee KJ, et al. Single shot concentric shells trajectories for ultra fast fMRI. Magnetic Resonance in Medicine. 2012;**68**:484-494. DOI: 10.1002/mrm.23256

[42] Akin B, Lee HL, Hennig J, et al. Enhanced subject-specific resting-state network detection and extraction with fast fMRI. Human Brain Mapping. 2017;**38**:817-830. DOI: 10.1002/hbm.23420

[43] Pohmann R, Speck O, Scheffler K. Signal-to-noise ratio and MR tissue parameters in human brain imaging at 3, 7, and 9.4 tesla using current receive coil arrays. Magnetic Resonance in Medicine. 2016;**75**:801-809. DOI: 10.1002/mrm.25677

[44] Vaughan JT, Garwood M, Collins CM, et al. 7T vs. 4T: RF power, homogeneity, and signal-to-noise comparison in head images. Magnetic Resonance in Medicine. 2001;**46**:24-30. DOI: 10.1002/mrm.1156

[45] Branco P, Seixas D, Castro SL. Temporal reliability of ultra-high field resting-state MRI for single-subject sensorimotor and language mapping. NeuroImage. 2018;**168**:499-508. DOI: 10.1016/j.neuroimage.2016.11.029

*Resting-State fMRI Advances for Functional Brain Dynamics DOI: http://dx.doi.org/10.5772/intechopen.113802*

[46] Torrisi S, Nord CL, Balderston NL, et al. Resting state connectivity of the human habenula at ultra-high field. NeuroImage. 2017;**147**:872-879. DOI: 10.1016/j.neuroimage.2016.10.034

[47] Van de Moortele PF, Auerbach EJ, Olman C, et al. T1 weighted brain images at 7 tesla unbiased for proton density, T2<sup>∗</sup> contrast and RF coil receive B1 sensitivity with simultaneous vessel visualization. NeuroImage. 2009;**46**:432-446. DOI: 10.1016/j.neuroimage.2009.02.009

[48] Van Essen DC, Ugurbil K, Auerbach E, et al. The human connectome project: A data acquisition perspective. NeuroImage. 2012;**62**(4):2222-2231

[49] Biswal BB, Mennes M, Zuo XN, et al. Toward discovery science of human brain function. Proceedings of the National Academy of Sciences of the United States of America. 2010;**107**(10):4734-4739

[50] Phinyomark A, Ibanez-Marcelo E, Petri G. Resting-state fMRI functional connectivity: Big data preprocessing pipelines and topological data analysis. IEEE Transactions on Big Data. 2017;**3**(4):415-428

[51] Churchill NW et al. Optimizing preprocessing and analysis pipelines for single-subject fMRI. I. Standard temporal motion and physiological noise correction methods. Human Brain Mapping. 2012;**33**(3):609-627

[52] Ghrist B. Barcodes: The persistent topology of data. Bulletin of the American Mathematical Society. 2008;**45**(1):61-75

[53] Yeo BTT, Krienen FM, Sepulcre J, et al. The organization of the human cerebral cortex estimated by intrinsic functional connectivity. Journal of Neurophysiology. 2011;**106**:1125-1165. DOI: 10.1152/jn.00338.2011

[54] Shahhosseini Y, Miranda MF. Functional connectivity methods and their applications in fMRI data. Entropy. 2022;**24**:390. DOI: 10.3390/e24030390

[55] Le Q, Karpenko A, Ngiam J, Ng A. ICA with reconstruction cost for efficient overcomplete feature learning. In: Shawe-Taylor J, Zemel R, Bartlett P, Pereira F, Weinberger KQ, editors. Advances in Neural Information Processing Systems. Vol. 24. New York: Curran Associates, Inc.; 2011

[56] Yang J, Gohel S, Vachha B. Current methods and new directions in resting state fMRI. Clinical Imaging. 2020;**65**:47-53. DOI: 10.1016/j.clinimag.2020.04.004

[57] Corbetta M, Siegel JS, Schulman GL. On the low dimensionality of behavioral deficits and alterations of brain network connectivity after focal injury. Cortex. 2018;**107**:229-237

[58] Crossley NA, Mechelli A, Vertes PE, et al. Cognitive relevance of the community structure of the human brain functional coactivation network. Proceedings of the National Academy of Sciences USA. 2013;**110**:11583-11588

[59] Lerman-Sinkoff DB, Barch DM. Network community structure alterations in adult schizophrenia: Identification and localization of alterations. NeuroImage: Clinical. 2016;**10**:96-106. DOI: 10.1016/j.nicl.2015.11.011

[60] Bordier C, Nicolini C, Forcellini G, Bifone A. Disrupted modular organization of primary sensory brain areas in schizophrenia. NeuroImage: Clinical. 2018;**18**:682-693. DOI: 10.1016/j.nicl.2018.02.035

[61] Bullmore ET, Sporns O. Complex brain networks: Graph theoretical analysis of structural and functional systems. Nature Reviews Neuroscience. 2009;**10**:186-198. DOI: 10.1038/nrn2575

[62] Liang X, Zou QH, He Y, Yang YH. Topologically reorganized connectivity architecture of default-mode, executive-control, and salience networks across working memory task loads. Cerebral Cortex. 2016;**26**:1501-1511. DOI: 10.1093/cercor/bhu316

[63] Fornito A, Harrison BJ, Zalesky A, Simons JS. Competitive and cooperative dynamics of large-scale brain functional networks supporting recollection. Proceedings of the National Academy of Sciences of the United States of America. 2012;**109**(31):12788-12793

[64] Bray S, Arnold AEGF, Levy RM, Iaria G. Spatial and temporal functional connectivity changes between resting and attentive states. Human Brain Mapping. 2015;**36**:549-565. DOI: 10.1002/hbm.22646

[65] Vatansever D, Menon DK, Manktelow AE, et al. Default mode network connectivity during task execution. NeuroImage. 2015;**122**:96-104. DOI: 10.1016/j.neuroimage.2015.07.053

[66] Braun U, Schafer A, Walter H, et al. Dynamic reconfiguration of frontal brain networks during executive cognition in humans. Proceedings of the National Academy of Sciences USA. 2015;**112**:11678-11683. DOI: 10.1073/pnas.1422487112

[67] Leech R, Kamourieh S, Beckmann CF, Sharp DJ. Fractionating the default mode network: Distinct contributions of the ventral and dorsal posterior cingulate cortex to cognitive control. Journal of Neuroscience. 2011;**31**:3217-3224. DOI: 10.1523/jneurosci.5626-10.2011

[68] Rubinov M, Sporns O. Weight-conserving characterization of complex functional brain networks. NeuroImage. 2011;**56**:2068-2079. DOI: 10.1016/j.neuroimage.2011.03.069

[69] Lebedev AV, Nilsson J, Lövdén M. Working memory and reasoning benefit from different modes of large-scale brain dynamics in healthy older adults. Journal of Cognitive Neuroscience. 2018;**30**:1033-1046. DOI: 10.1162/jocn_a_01260

[70] Pachitariu M, Lyamzin DR, Lesica SM. State-dependent population coding in primary auditory cortex. Journal of Neuroscience. 2015;**35**:2058-2073. DOI: 10.1523/jneurosci.3318-14.2015

[71] Schwalm M, Schmid F, Wachsmuth L, et al. Cortex-wide BOLD fMRI activity reflects locally recorded slow oscillation-associated calcium waves. eLife. 2017;**6**:e27602

[72] Pais-Roldan P, Takahashi K, Sobczak F, et al. Indexing brain state-dependent pupil dynamics with simultaneous fMRI and optical fiber calcium recording. Proceedings of the National Academy of Sciences of the United States of America. 2020;**117**:6875-6882

[73] Staresina BP, Alink A, Kriegeskorte N, Henson RN. Awake reactivation predicts memory in humans. Proceedings of the National Academy of Sciences of the United States of America. 2013;**110**:21159-21164

[74] Stephan KE, Friston KJ. Analyzing effective connectivity with functional magnetic resonance imaging. Wiley Interdisciplinary Reviews: Cognitive Science. 2010;**1**(3):446-459. DOI: 10.1002/wcs.58

[75] Friston KJ. Functional and effective connectivity: A review. Brain Connectivity. 2011;**1**(1):13-36. DOI: 10.1089/brain.2011.0008

[76] Kiebel SJ, Garrido MI, Moran RJ, et al. Dynamic causal modelling for EEG and MEG. Cognitive Neurodynamics. 2008;**2**:121-136. DOI: 10.1007/s11571-008-9038-0

[77] Moran R, Pinotsis DA, Friston K. Neural masses and fields in dynamic causal modeling. Frontiers in Computational Neuroscience. 2013;**7**(57):1-12. DOI: 10.3389/fncom.2013.00057

[78] Wei H, Jafarian A, Zeidman P, et al. Bayesian fusion and multimodal DCM for EEG and fMRI. NeuroImage. 2020;**211**:116595

[79] Hass J, Hertäg L, Durstewitz D. A detailed data-driven network model of prefrontal cortex reproduces key features of in vivo activity. PLoS Computational Biology. 2016;**12**:e1004930. DOI: 10.1371/journal.pcbi.1004930

[80] Frässle S, Lomakina EI, Kasper L, et al. A generative model of whole-brain effective connectivity. NeuroImage. 2018;**179**:505-529. DOI: 10.1016/j.neuroimage.2018.05.058

[81] Frässle S, Lomakina EI, Razi A, et al. Regression DCM for fMRI. NeuroImage. 2017;**155**:406-421. DOI: 10.1016/j.neuroimage.2017.02.090

[82] Barnett L, Barrett AB, Seth AK. Granger causality and transfer entropy are equivalent for Gaussian variables. Physical Review Letters. 2009;**103**:238701

[83] Lee WH, Frangou S. Linking functional connectivity and dynamic properties of resting-state networks. Scientific Reports. 2017;**7**:16610. DOI: 10.1038/s41598-017-16789-1

[84] Rabinovich MI, Huerta R, Varona P, Afraimovich VS. Transient cognitive dynamics, metastability, and decision making. PLoS Computational Biology. 2008;**4**(5):e1000072. DOI: 10.1371/journal.pcbi.1000072

[85] Tognoli E, Kelso JA. The metastable brain. Neuron. 2014;**81**(1):35-48. DOI: 10.1016/j.neuron.2013.12.022

[86] Tagliazucchi E, Laufs H. Decoding wakefulness levels from typical fMRI resting-state data reveals reliable drifts between wakefulness and sleep. Neuron. 2014;**82**(3):695-708. DOI: 10.1016/j.neuron.2014.03.020

[87] Beim Graben P, Sellers KK, Fröhlich F, Hutt A. Optimal estimation of recurrence structures from time series. Europhysics Letters. 2016;**114**:38003. DOI: 10.1209/0295-5075/114/38003

[88] Carvalho A, Langa J, Robinson J. Attractors for infinite-dimensional nonautonomous dynamical systems. In: Applied Mathematical Sciences. New York: Springer; 2012

[89] Soler-Toscano F, Galadí JA, Escrichs A, et al. What lies underneath: Precise classification of brain states using time-dependent topological structure of dynamics. PLoS Computational Biology. 2022;**18**(9):e1010412. DOI: 10.1371/journal.pcbi.1010412

[90] López-González A, Panda R, Ponce-Alvarez A, et al. Loss of consciousness reduces the stability of brain hubs and the heterogeneity of brain dynamics. Communications Biology. 2021;**4**(1):1037-1052

[91] Greicius M. Resting-state functional connectivity in neuropsychiatric disorders. Current Opinion in Neurology. 2008;**21**:424-430

[92] Andoh J, Matsushita R, Zatorre RJ. Asymmetric interhemispheric transfer in the auditory network: Evidence from TMS, resting-state fMRI, and diffusion imaging. Journal of Neuroscience. 2015;**35**(43):14602-14611. DOI: 10.1523/jneurosci.2333-15.2015

[93] Watanabe T, Hirose S, Wada H, et al. A pairwise maximum entropy model accurately describes resting-state human brain networks. Nature Communications. 2013;**4**:1370

[94] Aerts H, Fias W, Caeyenberghs K, et al. Brain networks under attack: Robustness properties and the impact of lesions. Brain. 2016;**139**(12):3063-3083. DOI: 10.1093/brain/aww194

[95] Murphy SJX, Werring DJ. Stroke: Causes and clinical features. Medicine. 2020;**48**(9):561-566

[96] Rehme AK, Eickhoff SB, Rottschy C, et al. Activation likelihood estimation meta-analysis of motor-related neural activity after stroke. NeuroImage. 2012;**59**:2771-2782

[97] Hall GR, Kaiser M, Farr TD. Functional connectivity change in response to stroke is comparable across species from mouse to man. Stroke. 2021;**52**:2961-2963

[98] van Meer MP, Otte WM, van der Marel K, et al. Extent of bilateral neuronal network reorganization and functional recovery in relation to stroke severity. The Journal of Neuroscience. 2012;**32**:4495-4507. DOI: 10.1523/jneurosci.3662-11.2012

[99] Carter AR, Astafiev SV, Lang CE, et al. Resting interhemispheric functional magnetic resonance imaging connectivity predicts performance after stroke. Annals of Neurology. 2010;**67**:365-375

[100] Carter AR, Patel KR, Astafiev SV, et al. Upstream dysfunction of somatomotor functional connectivity after corticospinal damage in stroke. Neurorehabilitation and Neural Repair. 2012;**26**:7-19

[101] Santello M. Getting a grasp of theories of sensorimotor control of the hand: Identification of underlying neural mechanisms. Motor Control. 2015;**19**(2):149-153. DOI: 10.1123/mc.2014-0057

Section 4
