**1. Introduction**

Pain management is increasingly recognised as a formal medical subspecialty worldwide. Pain is the most common reason that patients present to the emergency department, and emergency nurses play an indispensable role in its management [1]. Sedation in the intensive care unit (ICU) is challenging, as both over- and under-sedation are detrimental [2]. Optimal sedation and analgesic strategies, combined with delirium management, are difficult to achieve when caring for critically ill patients. The most widely used sedation monitoring tools are the Richmond Agitation and Sedation Scale (RASS) [3] and the Sedation Agitation Scale (SAS) [4]. Assessments using RASS and SAS are undertaken intermittently and traditionally rely on the patient's behavioural response to stimulation, which perturbs rest and sleep [5–8]. Various studies have suggested that a non-stimulating method for "continuous" sedation monitoring may be beneficial and would allow more frequent assessment.

Earlier, Rudge, Chase, and Shaw [9] discussed target-controlled infusion (TCI) systems, which deliver drugs to maintain a target plasma concentration using a pharmacokinetic model; these were shown to be feasible when anaesthesia is given over short periods of reduced consciousness and the drug's pharmacology is well understood. Infusion systems that regulate the infusion rate to maintain a target agitation level, the primary metric for long-term sedation, are one approach to improving ICU care. The data analysed in this chapter pertain to the scenario and data type studied earlier by Hudson, Rudge and colleagues [9–14].

These authors have suggested that assessing the severity of agitation is a challenging clinical problem, as drug metabolism varies between individuals and assessment is often subjective. A multitude of previous studies likewise suggest that nurses' assessments of sedation quality tend to suffer from subjectivity and can lead to sub-optimal sedation [9–14]. For example, some authors recommend using lighter rather than deeper levels of sedation, and that sedation should be reviewed and adjusted regularly [5–8]. Agitation management methods frequently rely on subjective agitation assessment: carers select an infusion rate based upon their evaluation of these scales, their experience, and their intuition [3–5]. This approach usually leads to largely continuous infusions that lack a bolus-focused component, commonly resulting in over- or under-sedation.

The work of [9–14] aimed to enhance feedback protocols for medical decision support systems and, eventually, automated sedation administration. A minimal differential equation model to predict or simulate each patient's agitation-sedation (A-S) status over time was presented in [9] for our ICU patients and was shown to capture patient A-S dynamics. Quantitative modelling to enhance understanding of the A-S system, together with the provision of an A-S simulation platform, is one of the key tools in this area of patient critical care. A more refined A-S model, which utilised kernel regression with an Epanechnikov kernel, was formulated in [9]. A Bayesian approach using densities and wavelet shrinkage methods was later suggested in [10] to assess a previously derived deterministic, parametric A-S model [11], thus successfully challenging the practice of sedating ICU patients using continuous infusions.

Wavelet approaches [10, 11] were shown to provide reliable diagnostics and visualisation tools for assessing A-S models, giving alternative metrics of A-S control with which to assess the validity of the earlier deterministic A-S models (**Table 3** in [10]). This suite of wavelet metrics, based on the discrete wavelet transform (DWT), established the value of the earlier deterministic A-S models against empirical (recorded) dynamic A-S infusion profiles, providing robust performance metrics of A-S control and classifying patients into poor and good trackers via Wavelet Probability Bands (WPBs). Importantly, the WPBs were shown to be a useful patient-specific method for identifying and detecting regions in a patient's A-S profile, i.e., times whilst in ICU, where the simulated infusion rate performs poorly, thereby providing visual and quantified ways to help improve and distil the deterministic A-S model and, in practice, to act as a gauge to alert carers.

The aim of this chapter is to identify regions of poor and good control using copulas. Copulas are functions that join, or connect, multivariate distribution functions to their one-dimensional marginal distribution functions. They have found applications in fields such as finance [15, 16], public health and medicine [17], actuarial science [18, 19], quantitative risk management, econometrics, environmental studies, and hydrology. Empirical distributions of nurses' ratings of a patient's pain and/or agitation levels, and of the administered sedative dose, are often positively skewed; if the joint distribution is non-elliptical, high nurses' ratings of a patient's agitation levels may not correspond to large infusion doses in the patient's A-S profile. Copulas are used here because they capture nonlinear dependence between skewed distributions.

Advantages of using copulas in modelling are: (i) the capacity to model both linear and non-linear dependence; (ii) the freedom to choose arbitrary marginal distributions; and (iii) the ability to model extreme (tail) behaviour. Copulas are functions that "couple together" the marginal cumulative distribution functions (CDFs) of a random vector to form its joint CDF. When used in statistical modelling, copulas can estimate multivariate distributions of data involving two or more outcome variables of mixed type in complex data. We determine the best-fit copula type for all patients, with a focus on differences between poor and good trackers, where classification of patients into poor and good trackers was based on Wavelet Probability Bands (WPBs) [10, 11].
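To make the "coupling" idea concrete, the following minimal Python sketch samples a Clayton copula, a family with lower-tail dependence, and maps its uniform margins to positively skewed exponential marginals. The copula family, parameter value, and marginal choices are illustrative assumptions for synthetic data only, not the chapter's fitted model:

```python
import numpy as np
from scipy.stats import expon, kendalltau

rng = np.random.default_rng(42)
theta = 2.0  # Clayton parameter; implied Kendall's tau = theta / (theta + 2) = 0.5
n = 20_000

# Conditional-inversion sampler for the Clayton copula: given U1 = u1 and an
# independent uniform t, invert the conditional copula CDF to obtain U2.
u1 = rng.uniform(size=n)
t = rng.uniform(size=n)
u2 = (u1 ** (-theta) * (t ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)

# Couple the uniform margins to skewed marginals (illustrative choices):
# a right-skewed "agitation score" and "sedative dose".
score = expon.ppf(u1, scale=1.0)
dose = expon.ppf(u2, scale=2.0)

# Kendall's tau is invariant under monotone marginal transforms,
# so the dependence imposed by the copula survives the skewing.
tau, _ = kendalltau(score, dose)
print(f"empirical Kendall's tau: {tau:.3f}")  # close to 0.5
```

Because the dependence lives entirely in the copula, the marginals can be swapped for any distributions (e.g., the empirical CDFs of the recorded nurses' scores and doses) without changing the rank dependence.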

This chapter builds on the earlier pilot work of Tursunalieva et al. [20, 21] to address a methodological gap by integrating the non-elliptical dependence structure between nurses' ratings of a patient's agitation level and the automated sedation dose. In earlier pilot work discussed by Hudson [22], the tail thresholds of two test patients were determined manually, whereas in [21] the dynamic programming algorithm of Bai and Perron [23] was used to establish the lower and upper tail thresholds. Copula mathematics allows us to determine and identify lower and/or upper tail thresholds, when they exist, for the agitation-sedation profiles of all 36 intensive care unit patients collected at Christchurch Hospital, School of Medicine and Health Sciences, NZ, and analysed earlier in [9–14]. Infusion data were recorded using an electronic drug infusion device for all admitted ICU patients who required more than 24 hours of sedation during a nine-month observation period.
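As a minimal illustration of threshold estimation by segmentation (a simplified stand-in for the Bai-Perron procedure [23]; the synthetic series, the fixed two-break setting, and the piecewise-constant cost are assumptions of this sketch), the following Python code locates two change points by exhaustively minimising the within-segment sum of squared errors, the same optimality criterion the dynamic program solves efficiently:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic series with level shifts at indices 100 and 200.
y = np.concatenate([
    rng.normal(1.0, 0.3, 100),   # low regime
    rng.normal(3.0, 0.3, 100),   # mid regime
    rng.normal(6.0, 0.3, 100),   # high regime
])
n = len(y)

# Prefix sums give O(1) segment cost: the SSE of y[i:j] about its own mean
# is sum(y^2) - (sum(y))^2 / (j - i).
s1 = np.concatenate([[0.0], np.cumsum(y)])
s2 = np.concatenate([[0.0], np.cumsum(y ** 2)])

def sse(i, j):
    """Sum of squared errors of y[i:j] around its segment mean."""
    return s2[j] - s2[i] - (s1[j] - s1[i]) ** 2 / (j - i)

# Exhaustive search over all breakpoint pairs 0 < b1 < b2 < n.
best_cost, best_bkps = np.inf, None
for b1 in range(1, n - 1):
    left = sse(0, b1)
    for b2 in range(b1 + 1, n):
        cost = left + sse(b1, b2) + sse(b2, n)
        if cost < best_cost:
            best_cost, best_bkps = cost, (b1, b2)

print("estimated breakpoints:", best_bkps)  # near (100, 200)
```

A production analysis would use a proper dynamic program (or a change-point library such as `ruptures`) rather than this O(n²) brute-force search, and in the tail-threshold setting the breaks are sought on the dose axis rather than on a raw time index.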

In this chapter, our novel and general formulation of the equation relating each patient's nurses' score to the automated infusion dose is

nurses' score = intercept + α·Dose − β·Dose·LT region + γ·Dose·UT region,

where LT region and UT region indicate whether the dose lies in the lower- or upper-tail region, respectively. This formulation accounts for the non-linear relationship between the nurses' A-S rating and the automated sedation dose, and permits identification of thresholds and regions of mismatch between the nurses' scores and the sedation dose, thereby suggesting a possible way forward for an improved alerting system for over-/under-sedation.
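This formulation can be fitted by ordinary least squares once the tail regions are coded as indicator variables. The sketch below uses synthetic data; the threshold values, coefficients, and 0/1 indicator coding are illustrative assumptions rather than fitted patient results:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
dose = rng.uniform(0, 10, n)

# Assumed tail thresholds for this toy example (patient-specific in practice).
lt_thr, ut_thr = 2.0, 8.0
lt_region = (dose < lt_thr).astype(float)   # lower-tail indicator
ut_region = (dose > ut_thr).astype(float)   # upper-tail indicator

# Simulate scores from the stated model:
# score = intercept + alpha*Dose - beta*Dose*LT + gamma*Dose*UT + noise.
intercept, alpha, beta, gamma = 1.0, 0.8, 0.5, 0.4
score = (intercept + alpha * dose - beta * dose * lt_region
         + gamma * dose * ut_region + rng.normal(0, 0.2, n))

# Design matrix matching the formulation; the coefficient on the
# Dose*LT column estimates -beta, so we negate it when reporting.
X = np.column_stack([np.ones(n), dose, dose * lt_region, dose * ut_region])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)
est_intercept, est_alpha, est_neg_beta, est_gamma = coef
print(np.round([est_intercept, est_alpha, -est_neg_beta, est_gamma], 2))
```

In practice the thresholds themselves are unknown and are estimated first (e.g., via the Bai-Perron procedure or from the fitted copula's tail behaviour), after which a fit of this kind quantifies how the score-dose slope attenuates or steepens in each tail region.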

Establishing the presence of tail dependence, and patient-specific thresholds for areas of different agitation intensity, has significant implications for the effective administration of sedatives. Better management of A-S states will allow clinicians to improve the efficacy of care and reduce healthcare costs. Our approach lends credence to augmenting conventional RASS and SAS agitation measures with semi-automated systems [24–26], identifying thresholds and regions of deviance to alert carers to increased risk.
