**3. Artificial Intelligence in academic medicine**

*Introductory Chapter: Artificial Intelligence in Healthcare – Where Do We Go from Here? DOI: http://dx.doi.org/10.5772/intechopen.111823*

The topic of AI in academic medicine is certainly a heated one. It is becoming evident that the introduction of AI into medical education will prompt significant rethinking, and likely rebuilding, of our medical curricula. This will help ensure that both our medical schools and the new generation of medical trainees are sufficiently prepared to optimize the positive aspects of AI while minimizing any potentially negative aspects and considerations [5]. For the current, fairly traditional medical school curricula, the introduction of ML and AI applications will be both transformational and hugely challenging. Similarly, the increasing presence of ML/AI in clinical medicine will force many changes in clinical information management, patient care workflows, the broad range of diagnostics, and many other related areas [50, 51]. The optimal end-product will be the advent of true "precision medicine," where each patient can be treated using highly individualized and much better optimized approaches.

For all of the above to happen seamlessly and without undue disruption, the incorporation of AI applications into medical education will require unique curricular modifications. It is likely that current evidence-based medicine (EBM) guidelines will quickly become obsolete, replaced instead by dynamically updated AI-based recommendations (AIBRs). Consequently, the way we train our next generation of physicians and other healthcare professionals will likely become unrecognizable over the next 10–20 years. Moreover, the issues of "black box" interpretability, data security, and decision liability are bound to present problems not addressed by traditional curricula [52].

It is reassuring to know that research in this area has been ongoing and that a significant amount of expertise is available and continues to grow [5, 52]. Our collective perception of AI is also likely to evolve over time. According to recent data, a large proportion of medical students perceive AI as an assistive technology that could facilitate physicians' access to information and patients' access to healthcare, all while reducing the number and impact of medical errors [53]. In parallel, more and more medical students are calling for updates to the current medical school curriculum to accommodate the AI-driven transformation of the healthcare industry [54, 55]. Curricular updates should revolve around equipping future physicians with the knowledge and skills to effectively harness the power of AI-based applications, minimize potential harms related to the misuse of AI, and ensure that their professional values and rights are protected.

At the same time, implementing the right plan and appropriately re-setting professional requirements and boundaries is not an easy task. Clinicians, students, and AI professionals alike should understand the social, ethical, legal, and regulatory issues that will determine whether AI-based tools narrow or widen health disparities, affect professional independence, and influence existing healthcare gaps. A multi-pronged approach should involve the development of novel teaching models, the recruitment of qualified and experienced content specialists to design and teach ML/AI curricula, and, subsequently, efforts to bridge communication challenges arising from existing and/or perceived knowledge gaps between physicians and engineers [53].

Parallel to the issue of medical education reform, another set of critical issues will arise pertaining to intellectual property, content attribution, and content originality (e.g., plagiarism) [56]. Within this context, we must remember that ML, AI, and other advanced tools such as chatbots (e.g., ChatGPT) are not inherently "good" or "bad," and that any inappropriate uses of these technologies will stem from misuse by individuals whose intentions lack ethical and/or moral grounding. The educational setting in general, and higher education in particular, relies on academic integrity as an essential component of the system. While AI-based technologies have the potential to greatly enhance our lives and improve our efficiency across various areas of society [6], it is not unreasonable to speculate that such highly sophisticated tools could easily "fool an expert" into giving credit for effort that should never have been attributed to a particular individual, in effect propagating intellectual fraud [57, 58].

In addition to the potential for difficult-to-detect plagiarism, AI-based technologies can also be used for other nefarious purposes, such as cheating on assignments, employing "deepfakes" or other unethical practices to gain an unfair advantage, and even assisting unscrupulous individuals in actively lying on their resumes and job applications [59, 60]. As modern technology continues to advance relentlessly, it becomes increasingly difficult to determine whether a piece of writing is truly original or has been generated by a machine [61]. This raises questions about the value of originality and the importance of properly crediting sources in the digital age. It also highlights the need for individuals to be more critical of the information they consume and follow, and to carefully consider the sources of information being actively shared, especially in the context of omnipresent social media [19]. Finally, we must always remember that AI-generated content will be inherently limited by the quality of the data inputs used during the generative process.
