*Interactive Multimedia - Multimedia Production and Digital Storytelling*

**8. Artificial intelligence and classroom**

AI may enable teachers to identify students who need additional help, or individuals with special needs who may struggle in a typical classroom. AI algorithms have been designed to increase the productivity, efficiency, and effectiveness of learning. AI has several paramount roles in education. These include automation of grading, with an approach tailored explicitly to short-answer questions rather than only multiple-choice questions; teacher support, using chatbots able to communicate directly with students; and student aid, with future students having an AI lifelong learning companion from high school to university, and postgraduate education adopting a new model of AI-driven continuing professional development. Moreover, AI may be able to identify each student's strengths and weaknesses in a way that may be more standardized than conventional teaching, which may depend on the current motivation of the teacher. A personalized learning curriculum with an AI machine may help students with special needs by adapting teaching material to lead them to success without their being held back by mental or physical barriers. AI will allow teachers to act as learning motivators and to mentor undergraduate and postgraduate students toward the path best suited for them. As AI takes on more of an educational role by providing students with the necessary information, this process will change the position of teachers in the classroom. Educators will move into the role of classroom supervisor, facilitator, or learning motivator and adopt a previously unimaginable relationship with their students. Some examples of classroom-based AI include Thinkster Math, Brainly, and Content Technologies, Inc.
Thinkster Math (http://get.hellothinkster.com/why-tabtor-is-now-thinkster/) is a math tutor able to identify the level of each student, helping each student improve their logical process by providing video assistance for stuck students and immediate, personalized feedback. Brainly (https://brainly.com/) is a social media site for classroom questions that allows users to ask questions and receive verified answers from fellow students. Content Technologies, Inc. (http://contenttechnologiesinc.com/) is an AI company using deep learning to create customized textbooks. Teachers import curricula (syllabi) into a CTI engine; the CTI system then masters the content and uses specific algorithms to create tailored books and coursework based on core concepts. Mika (https://www.carnegielearning.com/products/software-platform/mika-learning-software/) is another math-focused AI; like Thinkster Math, Carnegie Learning's Mika harbors AI-based tutoring tools for learners who may be too busy for after-school tutors. This solution has also been promoted for students who require personalized attention. Finally, Netex Learning (http://www.netexlearning.com/en/) lets teachers design curricula across a variety of digital platforms and devices (iPad, Android, or Surface devices). Netex allows teachers to create customized materials to be published on any digital platform while providing tools for video sessions, adapted assignments, and learning analytics (https://www.thetechedvocate.org/5-examples-artificialintelligence-classroom/). There will be plenty of apps in the future able to target pathology residents in their curriculum preparation, and the proposed limitation of pathology education to core competencies only is a tragic evolution. The identification and implementation of these technologies should form the basis for venture companies able to shape the transforming platform of pathologists' work.

**9. Challenges of digital pathology education**

An application to improve pathology teaching is the use of eye-tracking technology [73, 74]. During the teaching of histopathology skills to medical students and postgraduates, learners who took advantage of eye tracking achieved better final scores than learners who did not.

DP is far from the niche described a few years ago; it is a stable platform in many universities and colleges. Radiology images are chiefly acquired as digital data and saved in robust picture archiving systems [75]. Hartman et al. [75] describe the challenges of using digital pathology for second-opinion intraoperative consultations over more than 10 years, implementing an incremental rollout of digitization on subspecialty benches. They began with cases that contained small amounts of tissue (biopsy specimens). The authors successfully scanned over 40,000 slides through their digital pathology system and emphasized that a successful conversion to digital pathology requires pre-imaging adjustments, integrated software, and post-imaging evaluations. The limitations in the implementation of digital pathology include: (1) infrastructure and resource support, although the costs of acquiring and maintaining DP equipment, networking equipment, and staff are lower than a few years ago; (2) integration into an existing laboratory information system (LIS) or Provincial Health Network (PHN) portal, such as the upcoming Epic software implementation in several regions (e.g., Alberta, Canada) [76–79], rather than a stand-alone DP education system, which may attract investment from the government or the private sector or lead to public-private partnerships; (3) acceptance of digital pathology images for diagnosis; and (4) engagement of all pathologists in practice or training.

## **10. Artificial neural networks in medicine**

Artificial neural networks (ANNs) are an increasingly desirable technique for solving machine learning and AI problems. The variety of neural network types and their uses in diagnosis and therapy in medicine require skilled knowledge to choose the most appropriate approach. ANNs may be considered simplified models of the neuronal networks of the human brain. In both natural and artificial systems, the essential requirement is that the system should attempt to capture the necessary information for further processing. The simplest ANN that may be listed here is the threshold logic unit, or TLU. The TLU is a processing unit for numbers with n inputs x<sub>1</sub>, x<sub>2</sub>, …, x<sub>n</sub> and one output y. In the TLU, there is a threshold θ, and each input x<sub>i</sub> is associated with a weight w<sub>i</sub>. A TLU computes the weighted sum of its inputs, w<sub>1</sub>x<sub>1</sub> + w<sub>2</sub>x<sub>2</sub> + … + w<sub>n</sub>x<sub>n</sub>, and outputs a "1" if this sum exceeds the threshold θ, and a "0" otherwise. TLUs mimic the thresholding behavior of biological neurons *in vivo*. This simple logical unit may become more complicated and apply to various areas of medicine, such as diagnostic systems, biochemical analysis, image analysis, and drug development. ANNs are very useful in medicine, and applications have been described in the literature dealing with problems in cardiology and oncology. ANNs are an AI technique that uses a set of nonlinear equations to mimic the neuronal connections of biological systems. ANNs are useful for pattern recognition and outcome prediction and have the potential to bring AI techniques to the personal computers of practicing pathologists, assisting them with a variety of diagnostic tasks, such as the diagnosis of hepatocellular carcinoma [80–83]. A benefit of utilizing ANNs is that they are not affected by external factors such as fatigue, working conditions, and emotional or mood state.
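The thresholding rule described above can be sketched in a few lines of code; the weights, inputs, and threshold below are illustrative values, not taken from any cited study.

```python
# A minimal sketch of a threshold logic unit (TLU): n inputs x_i, weights w_i,
# and a threshold theta. The unit outputs 1 when the weighted sum of its
# inputs exceeds the threshold, and 0 otherwise.
def tlu(inputs, weights, theta):
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return 1 if weighted_sum > theta else 0

# Example: with weights (1, 1) and threshold 1.5, the TLU behaves like a
# logical AND gate, firing only when both inputs are active.
print(tlu([1, 1], [1, 1], 1.5))  # -> 1
print(tlu([1, 0], [1, 1], 1.5))  # -> 0
```

Training an ANN amounts to adjusting the weights w<sub>i</sub> and threshold θ of many such interconnected units so that the network's outputs match the desired ones.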
ANNs may represent a useful AI companion in routine diagnostic pathology, as they have been used in several other fields of medicine, for example, to analyze blood and urine samples, track glucose levels in diabetics, and determine ion levels in body fluids. There are numerous applications, including tumor detection in ultrasonograms, classification of chest X-rays, blood vessel classification in MRI, determination of skeletal age from X-ray images, and determination of brain maturation, among others. ANNs are also useful in the development of drugs for treating cancer and AIDS and in the modeling of biomolecules. ANNs can also provide sensor fusion, the combining of values from several different sensors. Sensor fusion enables ANNs to acquire complex relationships among the individual sensor values, which would otherwise be lost if the values were analyzed independently.

Pathology is an imaging-based discipline of medicine that, like radiology, deals with the nature of disease. Pattern recognition starts with the idea of classifying input data into identifiable classes by use of significant feature attributes of the data, where the feature attributes are extracted from a background of irrelevant detail. Pattern recognition has been used primarily in radiology [84–92]. ANNs are used in pattern recognition because of their ability to learn and to store knowledge, and they can achieve the very high computation rates that are vital in applications such as telemedicine. Another approach for image-driven machine learning is a "deep learning" architecture known as the convolutional neural network (CNN). CNNs are a deep learning architecture constituted by a set of layers of individual modules able to extract progressively and sequentially higher levels of abstraction from input images. This procedure is far more sophisticated than the human eye and can immediately discern features that are important for a classification task. AlexNet [93–98] and GoogLeNet [95, 97–107] have recently become quite popular. Their uptake has been sped up by the availability of open-source software such as Caffe, Theano, and TensorFlow. These deep learning frameworks interface efficiently with graphics processing units (GPUs) to improve the speed at which models can be developed and tested. Neural networks, together with random forests (RFs) and support vector machines (SVMs), are machine learning algorithms. Esteva et al. were able to create and train a CNN to differentiate between benign and malignant skin lesions, obtaining an accuracy comparable to that of dermatologists on a test set of cases verified by follow-up biopsies [108]. In **Table 1**, some relevant terms of machine learning in pathology are grouped.
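The core building block of a CNN can be illustrated with a plain 2D convolution: a small filter slides over the image patch by patch, producing a feature map, and stacked layers of such filters (with nonlinearities between them) yield the progressively higher levels of abstraction described above. The toy image and edge-detecting kernel below are illustrative, not trained values.

```python
# A minimal sketch of a single convolution: slide a kernel over an image and
# record the weighted sum at each position, producing a feature map.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# Toy image: bright left half, dark right half (a vertical edge in the middle).
image = [[1, 1, 0, 0],
         [1, 1, 0, 0],
         [1, 1, 0, 0],
         [1, 1, 0, 0]]

# Vertical-edge kernel: responds where brightness changes from left to right.
kernel = [[1, -1],
          [1, -1]]

feature_map = conv2d(image, kernel)
print(feature_map)  # strongest responses line up with the vertical edge
```

In a real CNN (as in AlexNet or GoogLeNet), the kernel values are not hand-crafted like this but learned from training examples, which is precisely what distinguishes deep learning from feature engineering.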


*Digital Pathology: The Time Is Now to Bridge the Gap between Medicine and Technological…*
*DOI: http://dx.doi.org/10.5772/intechopen.84329*



Image recognition in pathology has used a discrete number of hand-crafted features, which are time-consuming to develop and limited in scope, while deep learning identifies its own features from a large number of training examples and is able to identify patterns that may go unrecognized by humans. There are three tasks in "deep learning" that need to be differentiated: detection, segmentation, and classification. Litjens et al. trained a CNN on prostate and breast biopsies to improve the objectivity and efficiency of histologic (microscopic) slide analysis. All slides containing prostate cancer and micro- and macro-metastases of breast cancer could be recognized automatically. Moreover, 30–40% of the slides containing benign and normal tissue could be excluded without the use of any additional immunohistochemical marker or human intervention [109]. Murthy et al. investigated the automated classification of the nuclear shapes and visual attributes of cells of glioma, a tumor of the central nervous system, using CNNs on pathology images of automatically segmented nuclei, proposing three methods that improve the performance of a previously developed semi-supervised CNN. On a dataset of 2078 samples, the combination of the proposed approaches was able to cut the error rates of visual attribute and shape classification by 21.54% and 15.07%, respectively [110]. It is not inconceivable that computers in the future will exceed human decision making, demonstrating their superiority over humans in identifying new categories [60, 111].

#### **11. Artificial intelligence and basic research**

Image-based recognition of developmental pathways has been a pillar in identifying several milestones in developmental biology [112–119]. In systems biology, networks and network-based methods are enabling a new analysis of the functional organization of gene networks [120]. Translational bioinformatics is the union of translational medicine and bioinformatics. In this setting, translational medicine moves fundamental discoveries of biology from the research bench into the patient-care setting and iteratively uses clinical observations to inform basic biologists. Translational medicine focuses on patient care, including the creation of new diagnostic procedures, prognostic markers, prevention strategies, and therapeutic protocols based on biological discoveries, with the explicit goal of profoundly affecting clinical care [121]. AI is helping to decipher non-coding genes, 17 years after the sequencing of the human genome was achieved. Currently, one in eight of the 22,210 coding genes listed by the Ensembl/GENCODE, RefSeq, and UniProtKB reference databases is annotated differently across the three sets [122]. Mapping tumor-infiltrating lymphocytes (TILs) in histological images through computational staining, using a convolutional neural network trained to classify patches of images, will be important in identifying the interaction of cancer with its surrounding environment [123]. The fabrication of functional DNA nanostructures operating at a cellular level could be crucial in determining how much more natural-like orchestration is present at the cellular level compared to the rigid and restrictive conventional approaches adopted so far [124]. Currently, we are witnessing a renewed interest in adapting ANNs for pharmaceutical research, and computer-assisted drug discovery and design will be a daily task in the future [125].
We especially emphasize deep neural networks, restricted Boltzmann machines, and convolutional networks. The Virtual Physiological Human and research into nanotechnology will confidently produce yet more unpredictable opportunities, leading to substantial changes in biomedical research and practice [126].


**Table 1.**

| Term | Definition |
| --- | --- |
| Artificial intelligence (AI) | A context in which a machine executes cognitive tasks. |
| Artificial neural networks (ANNs) | Computing structures with several stacked layers that analyze information from the input to the desired output, with mathematical optimization at the basis of a process driving knowledge extraction and learning from the data (input) concerning the production (output). |
| Convolutional neural networks (CNNs) | An ANN-like architecture, but devoid of the constraint of full connectivity between every stacked layer, and applicable for image recognition tasks. |
| Machine learning (ML) | An AI field, which stresses the use of algorithmic approaches to train machines to perform tasks such as classification, prediction, and pattern recognition. |
| Deep learning (DL) | An AI and ML subfield that controls large-scale datasets and consecutively complex mathematical architectures to fulfill a machine learning task. |
| ImageNet | A large-scale dataset (10 million images) annotated by nouns in the photos with several degrees of granularity. |
| Technological singularity | An event showing a singular technological advance, or a sum of innumerable technological advances, that in aggregate could lead to a break in the psychologic and somatic evolution of humans with entirely unpredictable results. |

*The most useful definitions of frameworks of machine learning and beyond.*

## **12. Quantum computing and pathology imaging**

The language called "R" is a free, open-source programming language mainly used for data analytics and statistical analysis. Compared with commercial software, open-source software allows the operator to become a programmer and change the code. R enables users to develop custom AI applications to deploy within their organizations, with uses in predictive modeling, deep learning, extracting mission-critical information from reams of text, and several other areas. A revolutionary concept in digital data processing is quantum computing, which is based on the fundamental principles by which nature operates, i.e., quantum mechanics [127]. A classic computer works with bits, which at any given time can be in one of two states, 0 or 1. Conversely, quantum computers use qubits. These units can exist in any superposition of the states 0 and 1 and are represented by a complex number, i.e., a number that can be expressed in the form a + bi, where a and b are real numbers, while i is a solution of the equation x<sup>2</sup> = −1, called the imaginary unit because no real number satisfies this equation. When N qubits are in superposition, a combination of 2<sup>N</sup> states is created. While a traditional computer can hold only one of these states at a time, quantum computers can perform significant operations on superpositions of states. The most basic operations performed on qubits are defined by quantum gates, which are analogous to the logic gates used in standard computers operating on bits. The state of a quantum computer, a set of qubits called a quantum register, can be visualized in several ways, typically as a 2D or 3D graph on which points or bars represent superpositions of qubits, while their color or bar height represents the amplitude and phase of a given superposition. Founded in 1999, D-Wave Systems is considered the world's first quantum computing company.
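The qubit description above can be made concrete with a toy classical simulation; everything in it is an assumption built only from the text: a one-qubit state is a pair of complex amplitudes for the states 0 and 1, and a quantum gate is a 2 × 2 matrix applied to that pair.

```python
import math

# Toy simulation of a single qubit as two complex amplitudes (a, b) for the
# basis states |0> and |1>, with |a|^2 + |b|^2 = 1. A gate is a 2x2 matrix.
def apply_gate(gate, state):
    (g00, g01), (g10, g11) = gate
    a, b = state
    return (g00 * a + g01 * b, g10 * a + g11 * b)

# Hadamard gate: sends |0> into an equal superposition of |0> and |1>.
H = ((1 / math.sqrt(2), 1 / math.sqrt(2)),
     (1 / math.sqrt(2), -1 / math.sqrt(2)))

state = (1 + 0j, 0 + 0j)       # start in state |0>
state = apply_gate(H, state)   # now a superposition of |0> and |1>

prob0 = abs(state[0]) ** 2     # measurement probabilities
prob1 = abs(state[1]) ** 2
print(prob0, prob1)            # each close to 0.5
```

A real quantum computer holds such superpositions physically rather than simulating them, which is where the exponential advantage over classical hardware comes from.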
D-Wave is the leader in the development and distribution of quantum computing systems and software, and a few applications have recently been reported [128–131]. Quantum computing users have already developed over 100 early applications in areas including image analysis, optimization, machine learning, pattern recognition, anomaly detection, cybersecurity, financial analysis, software/hardware verification and validation, bioinformatics/cancer research, traffic flow, manufacturing processes, and internet advertising. However, quantum computing is a work in progress, because D-Wave quantum computers do not currently perform arbitrary quantum gate operations on sequences of qubits. Quantum Computing Playground (http://www.quantumplayground.net/#/home) is a browser-based WebGL Chrome platform. It features a graphics processing unit (GPU)-accelerated quantum computer simulator with a simple integrated development environment (IDE) interface and its own scripting language with debugging and 3D quantum state visualization features. Quantum Computing Playground can efficiently simulate quantum registers of up to 22 qubits, run some algorithms (e.g., Grover's and Shor's algorithms), and has a variety of quantum gates built into the scripting language itself. All currently known useful quantum algorithms that can run on quantum computers are based on the ability of the quantum system, upon specific arrangement, to behave in unison. Large chunks of data can be processed at once while operating primarily on only a few particles, that is, in a massively parallel manner. This will allow tasks that would require centuries of computing on a standard computer to require only a few minutes on a quantum computer. A key challenge for quantum computers is to provide and maintain isolation of the individual qubits involved in the computation. Extreme and stable cooling is required to make wire circuits behave in a quantum fashion.
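A back-of-the-envelope calculation shows why a classical simulator tops out around 22 qubits: an N-qubit register holds 2<sup>N</sup> complex amplitudes, so memory doubles with every added qubit. The 16-bytes-per-amplitude figure below (two 64-bit floats) is an assumption about the representation, not taken from the Playground's documentation.

```python
# Estimate the memory needed to store the full state vector of an N-qubit
# register: 2**N complex amplitudes at an assumed 16 bytes per amplitude.
def register_memory_bytes(n_qubits, bytes_per_amplitude=16):
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (10, 22, 40):
    mib = register_memory_bytes(n) / 2**20
    print(f"{n} qubits -> {mib:,.3f} MiB")
# 22 qubits fit in 64 MiB, while 40 qubits would already need ~16 TiB.
```

This exponential blow-up is exactly why tasks that take centuries on a classical machine are expected to take minutes on a quantum computer that holds the superposition physically.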
Operated by an electrical signal from a classical computer, these systems must be maintained at these extremely low temperatures by a vast refrigeration apparatus involving a rare helium-3 isotope. Standard encryption methods rely on

a code for the operator to access encrypted data. However, the key must be shared and can be decoded by unauthorized persons seeking to 'hack' a system using several software programs (most of them open source) available on the internet. With quantum computing, the key and the data can be secured indefinitely with guaranteed unbreakable encryption. Such strong security is possible because quantum encryption relies on the laws of nature (quantum mechanics) to furnish it. Thus, cryptography is expected to be the first application of quantum computing to enter medical practice, securing medical records and communication. "Big data" research and machine learning are likely to be among the fields that advance most quickly with the advent of real-world functional quantum computers. A statistical model requires rational decisions about variable definitions and their inter-relationships. In machine learning, there are few assumptions, and algorithms are derived from computer programs that evaluate millions of data elements and all their potential directions of effect and interactions. The more an algorithm is derived from raw data with less human input, the more it fits into machine learning. Machine learning that informs clinical practice in real time depends on growing databases containing regularly updated medical record information linked to other sources of data (e.g., wearable technology). To deal with this complexity, future machine learning programs will require the computational power of quantum computing to deliver results in real time. It is expected that a quantum MRI machine will generate extremely precise imaging, allowing even the visualization of single molecules. Using artificial intelligence, quantum computing can be applied to interpreting diagnostic images, histology images as well as radiology images.

Not only will image detail be exponentially improved, but the physician can be aided in understanding results, because active machine learning can train a quantum computer to identify abnormal findings with a precision better than the human eye. Combining "big data" (i.e., data that are too complicated to work on using traditional data-processing application software) with quantum computing will provide access to the current evidence and enable meaningful use of the electronic data continuously generated in the delivery of care. The realization of personalized medicine will need to draw on the analysis of mega-data, bringing together measures of physiology, imaging, genomics, wearable technology, screening measures, patient records, environmental measures, and more. Currently, we realize that we may be at the dawn of a revolution in computing. We have numerous examples of machine learning algorithms and artificial intelligence that may leverage the power of quantum computing to deliver real-time results.

**13. Technological singularity**

The technological singularity may be considered an event marking a single technological advance, or the sum of many technological advances, that in aggregate could lead to a break in the psychologic and physical evolution of humans, with entirely unpredictable and unfathomable results [60]. In this hypothesis, there is the concept that artificial superintelligence will abruptly prompt runaway technological growth, resulting in unfathomable changes to human civilization. Currently, it is not inconceivable that AI may generate software-based AI learning, with "deep learning" on "big data", entering a phase of self-improvement cycles in which each new and more intelligent generation of algorithms is installed and becomes operative. It may swiftly create an intelligence burst resulting in a powerful superintelligence that would, qualitatively, far surpass human intelligence. Such a time may already have started and may coincide with the progress of quantum computing (**Figure 1**).

