**9. Limitations and concerns**

Though AI shows great promise in transforming many aspects of medical and surgical care, it is important to highlight the limitations of this technology. The construction of ML algorithms relies on large amounts of data to create generalizable models while limiting unnecessary data within the data set [145]. Classification ML algorithms can identify tumors from imaging, but both training and test data sets still require annotation, manpower, and time [12, 146]. These factors limit how quickly such algorithms can be generated. Additionally, ML algorithms identify patterns in input data without interpretation or critical analysis and may be prone to biases within the data set. Biases often exist in who participates in clinical trials, and this can lead to outputs that disproportionately disadvantage minorities and other groups that are not as well represented in the training data for the ML model [147, 148].

In some cases, minute changes or fluctuations in the input data can drastically affect the model's output [146]. In the same vein, poor data, such as poor video or image quality, can have deleterious effects on the quality of the model [149]. Because of this, standardization of imaging techniques and video characteristics is vital for model efficacy [146]. Verifying the integrity of these models is also integral to maintaining patient autonomy: faulty or biased recommendations made by AI models can compromise a patient's ability to provide informed consent for their care [150]. Finally, there is a risk of "adversarial attacks," defined as data introduced into the training set with the intention of biasing outputs [151]. Notably, potential methods for adversarial attack have been identified for every type of machine learning model and may be as overt as modifying input data or as seemingly innocuous as rotating an image slightly [151, 152]. Motives for adversarial data input range from fraudulent reimbursement to altering research outcomes, so it is vital that methods are implemented to prevent both intentional and unintentional biases in these models.
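The effect of underrepresentation on model outputs can be illustrated with a toy example. The sketch below uses entirely synthetic data: the two groups, the shifted feature distribution, and the 20:1 imbalance are all hypothetical, chosen only to show the mechanism. A single decision threshold fit to pooled data dominated by one group ends up tuned to that group and misclassifies the underrepresented group far more often.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, hypothetical data: group A is well represented; group B is
# scarce (20:1) and its disease signal shifts the feature less strongly.
n_a, n_b = 1000, 50
healthy_a = rng.normal(0.0, 1.0, n_a)
disease_a = rng.normal(2.0, 1.0, n_a)   # disease shifts feature by +2 in A
healthy_b = rng.normal(0.0, 1.0, n_b)
disease_b = rng.normal(1.0, 1.0, n_b)   # ...but only by +1 in B

# "Training": pick the threshold that maximizes accuracy on the pooled data.
features = np.concatenate([healthy_a, disease_a, healthy_b, disease_b])
labels = np.concatenate([np.zeros(n_a), np.ones(n_a),
                         np.zeros(n_b), np.ones(n_b)])
thresholds = np.linspace(-2.0, 4.0, 601)
accuracies = [np.mean((features > t) == labels) for t in thresholds]
t_star = thresholds[int(np.argmax(accuracies))]

def error_rate(healthy, disease, t):
    """Average of false-positive and false-negative rates at threshold t."""
    fp = np.mean(healthy > t)
    fn = np.mean(disease <= t)
    return (fp + fn) / 2

err_a = error_rate(healthy_a, disease_a, t_star)
err_b = error_rate(healthy_b, disease_b, t_star)
print(f"threshold={t_star:.2f}  error A={err_a:.1%}  error B={err_b:.1%}")
```

Because group A contributes twenty times more samples, the pooled-optimal threshold sits near group A's optimum, and group B's error rate is roughly double, despite the model being "accurate" overall.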
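The sensitivity to minute input changes can be made concrete with a toy sketch. The "model" below is a hypothetical logistic classifier with made-up weights, not any clinical system; the perturbation follows the general FGSM idea of nudging each feature by a small amount in the direction that most lowers the model's score, which is enough to flip a borderline prediction.

```python
import numpy as np

# Hypothetical logistic classifier with fixed, made-up weights.
w = np.array([1.0, -2.0, 0.5])
b = 0.0

def predict_prob(x):
    """Probability that input x is assigned to the positive class."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A borderline input: classified positive, but only just.
x = np.array([0.20, 0.10, 0.10])
p_clean = predict_prob(x)          # slightly above 0.5

# FGSM-style step: move each feature by epsilon in the direction that most
# decreases the logit (the sign of its gradient, which here is just w).
epsilon = 0.02                      # a "minute change" per feature
x_adv = x - epsilon * np.sign(w)
p_adv = predict_prob(x_adv)         # now below 0.5: the predicted class flips

print(f"clean p={p_clean:.3f}, adversarial p={p_adv:.3f}")
```

A per-feature change of 0.02, imperceptible in most imaging pipelines, is enough to cross the decision boundary here; real deep networks exhibit the same behavior in far higher dimensions.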

Ethical concerns surrounding the use of AI center on oversight and liability. It is important that AI be tested and verified before actual clinical use, but there is currently no governing body and no approval process for reviewing ML algorithms in clinical care, let alone for autonomous surgery [12]. This is especially important because of the "black-box" effect, which is most prevalent in deep learning algorithms. Due to the "hidden" layers in deep neural networks, it is often not entirely clear how an AI model arrives at its output, and this can limit how much trust physicians and patients place in the recommendations made by these algorithms [153]. Without entities to review these algorithms, AI will remain primarily experimental.

There are also many legal concerns regarding the use of AI in surgery. One of the most prominent among physicians is liability [154, 155]. Currently, there is essentially no case law on the legality of AI in clinical settings [155]. Legal entities must therefore establish how malpractice and liability are handled if complications occur because of the use of AI; without answers to these complex legal questions, the use of AI in surgery will be severely limited. According to Price et al., physicians are incentivized to minimize the use of AI under current law: a physician's actions are normally privileged under tort law if the standard of care is followed [155], but a physician who follows an AI recommendation that goes against the current standard of care, even if the recommendation is correct, could face litigation over any resulting poor outcome [155]. Thus, under current law, the clinical use of AI will mostly be limited to confirming clinical decisions, greatly reducing its potential value.

Finally, in cases where data are stored in the cloud or are crowd-sourced, there may be data privacy concerns [149], as well as questions about the ownership of uploaded data [149]. With each application of AI, agreements must therefore clearly delineate medicolegal responsibilities, who owns uploaded data, and how models may be monetized.
