**5.8 Bioethics**

By facilitating the acquisition and analysis of images, AI could improve equity in healthcare, enabling access to optimal radiological assessment in settings where specialists are unavailable, such as developing countries or rural areas [50].

AI-based systems have been proposed to facilitate the entry of clinical information and thereby reduce the time and cognitive load this task demands. From an ethical and human standpoint, however, this is controversial: the doctor–patient relationship is founded on trust and close personal contact, qualities a computer cannot assume [51]. Clinical care remains a human process and should never be reduced to the application of more or less complex diagnostic or treatment algorithms. A patient's health should not be limited to a mere statistical concept [52]. It seems unreasonable and unethical to make clinical decisions based solely on computerized processes.

#### **Figure 2.**

*Current applications of artificial intelligence (AI) in musculoskeletal medicine.*

Another aspect of ethical interest concerns the costs associated with surgical procedures, where the use of algorithms to adjust per-procedure payment models has been evaluated. Although the price of an intervention is usually fixed, patient comorbidities are known to increase perioperative complications and worsen outcomes [53]. This could lead some centers to select lower-risk patients in order to obtain a higher financial return, an ethical issue that must be resolved before the widespread use of these algorithms can be recommended [54].

When interpreting radiological evidence, the physician not only classifies and analyzes the images but also interprets them within a broad clinical context. This clinical reasoning ability is acquired through professional experience and begins to develop even during the undergraduate years [55]. Indeed, not all clinical decisions rest on objective criteria: an experienced clinician may sometimes decide on the basis of experience or intuition, unable to explain why, and yet in many instances such decisions prove accurate. An AI, devoid of feelings and emotions, can hardly compensate for this [52]. It remains controversial what should happen when an algorithm recommends one course of action and the clinician believes another should be taken.

On the other hand, clinical decision-making based on AI algorithms, and the diagnostic and treatment errors it may cause, raises an important liability issue. It is not clear who should assume this responsibility: the clinician, the health center, or the company that designed the algorithm. **Figure 2** summarizes current applications of AI in musculoskeletal medicine.
