**2.2. Health tracking design challenges**

Health tracking technologies could help answer these questions, but many practical challenges remain [17]. Health tracking priorities among clinicians and the patient's role in self-reporting are each often under-specified in the literature. There is considerable interest in behavioral surveillance [21] as input for both assessing chronic conditions and evaluating self-management during treatment [22].

These advancements stand to greatly inform traditional diagnosis and treatment, but current health tracking measurements only touch on a small subset of health indicators that are relevant for patient care. The Chronic Care Model [23] is helpful in describing the role of clinical systems, healthcare communications, and self-management in patient care, but it is not instructive in terms of describing what clinical, self-management, and electronic health record (EHR) information is most important to keep track of for achieving positive long-term outcomes.

Current developments can be leveraged for greatly enhancing the capabilities of existing systems. For example, inertial-based seizure detection wristbands are increasingly capable of detecting convulsive seizures [24]. Most patients have access to smartphones with increasingly powerful sensing capabilities [9, 10]. Well-designed health tracking [25] and health reporting tools [26] have the potential to greatly reduce the burden placed on patients to collect clinically significant health information [27]. It is, therefore, important for researchers to establish an understanding of clinical information needs and health tracking performance for developing appropriate and effective health tracking tools.

**3. Methods**

In this section, we present a multiphase, sequential mixed method study design. The study included a total of 16 clinicians who specialized in pediatric and adult epilepsies.

Our study included two main parts. The first part investigated the **types, priorities,** and **characteristics** of useful clinical indicators during epilepsy diagnosis and treatment, while the second part investigated the **performance of current seizure detection technologies** as compared with current patient self-reporting.

#### **3.1. Part 1: establishing self-reporting types, priorities, and characteristics**

The first step was establishing the **types, priorities,** and **characteristics** of useful patient indicators that clinicians need during diagnosis and treatment. This included:

• Interviews and a literature review to identify symptoms and triggers.

• Interviews with subject matter experts to identify five **characteristics of** self-reporting.

The complete list of symptoms and triggers is available upon request. The resulting findings are intended to provide technology developers with insights for anticipating clinical patient self-reporting needs.

#### *3.1.1. Investigating self-reporting needs*


We conducted interviews over a 2-month period. The interviews included one-on-one meetings with one nurse practitioner specializing in pediatric epilepsy at Children's Healthcare of Atlanta (CHOA), Georgia, and two attendings specializing in adult epilepsy at Emory University Hospital, Georgia. These meetings highlighted important patient self-reporting characteristics that we would later include in our online survey.

Next, we conducted a literature review to generate a list of patient symptoms and triggers that clinicians might find useful as feedback during diagnosis and treatment. Inclusion criteria for the symptoms included any health indicators that described a specific aspect of the condition, such as duration and quality, while triggers included any factors that were known to impact the likelihood of symptoms, such as physical activity, sleep quality [28], and self-management behaviors [29, 30]. The literature review resulted in a list of 48 symptoms and 11 triggers that may be of interest during either diagnosis or treatment.

#### *3.1.2. Investigating self-reporting priorities*

The next step was to establish the clinical priority of these symptoms and triggers during diagnosis and treatment. A one-hour card sorting session was conducted with six pediatric epilepsy care specialists at CHOA. Informed consent was obtained from all participants. The participants included four nurse practitioners and two epileptology attendings.

The card sorting was conducted as follows. First, we printed the list of symptoms and triggers from the literature review on two separate stacks of notecards. The same card sorting exercise was conducted twice, with one stack of cards being sorted in terms of usefulness to prioritize data needs during diagnosis, and the second stack of cards being sorted to prioritize the same data during treatment. Each card contained a single symptom or trigger, and each set of notecards was shuffled beforehand.

The clinicians were asked to order the notecards in terms of "most-to-least" useful patient-reported symptoms and triggers during diagnosis and treatment, respectively. Notecards of equal importance were stacked on top of one another. New notecards were added to both piles if the clinicians believed we had overlooked any important symptoms and triggers in our literature review. Likewise, irrelevant or difficult-to-understand notecards were discarded from both piles, as shown in **Figure 1**.

The priority ranking for each card was then computed by transcribing the notecards into a three-columned Excel spreadsheet that contained: (1) symptom/trigger names, (2) notecard positions during diagnosis, and (3) notecard positions during treatment, respectively, and then summing the two sorted card indices for diagnosis and treatment as shown in Eq. (1).

$$\text{Priority ranking} = \text{diagnosis card index} + \text{treatment card index} \tag{1}$$
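As a minimal sketch, the ranking in Eq. (1) and the subsequent least-to-greatest sort can be expressed in a few lines of Python. The card names and index values here are hypothetical placeholders, not data from our study:

```python
# Sketch of the priority ranking in Eq. (1), using hypothetical cards.
# Each card's position (index) in the "most-to-least useful" ordering is
# recorded once for the diagnosis sort and once for the treatment sort;
# lower summed indices indicate higher combined priority.

# Hypothetical sorted-card positions (0 = most useful).
diagnosis_index = {"seizure duration": 0, "sleep quality": 2, "mood": 1}
treatment_index = {"seizure duration": 1, "sleep quality": 0, "mood": 2}

# Priority ranking = diagnosis card index + treatment card index  (Eq. 1)
priority = {
    card: diagnosis_index[card] + treatment_index[card]
    for card in diagnosis_index
}

# Sort from least to greatest summed index, mirroring the spreadsheet
# sort, so the highest combined-priority indicators come first.
ranked = sorted(priority, key=priority.get)
print(ranked)  # ['seizure duration', 'sleep quality', 'mood']
```

Summing the two indices weights diagnosis and treatment equally; ties in the summed ranking correspond to cards that trade places between the two sorting passes.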

The spreadsheet was then sorted by our resulting priority ranking column from least to greatest for establishing a list of clinical data indicators that were considered important during both diagnosis and treatment. It should be noted that the exercise could have been accomplished


by using a single set of notecards; however, we opted to use two sets of cards to avoid having to document the cards before moving onto the second sorting session.

**Figure 1.** Expert panel card sorting exercise with four nurse practitioners and two epileptology attendings who specialized in diagnosing and treating pediatric epilepsy.

#### *3.1.3. Establishing self-reporting consensus*

Next, we conducted an online survey with the aim of further understanding several practical characteristics of these clinical data collection needs. The survey was administered to 6 clinicians over a 5-week period and included the following 5 questions for each of the "top 20" highest ranked symptoms and triggers:

1. Availability: Is this information available?

2. Reliability: Is this information reliable in your opinion?

3. Usefulness: Is this information useful?

4. Difficulty: Is it easy or hard for patients to report?

5. Frequency: How frequently would you ideally like this information to be collected?

The survey had two pages and was designed to take less than 15 min. The first page included demographics questions. The second page contained a 20-row by 5-column table of multiple-choice questions, with symptoms and triggers as rows and questions as columns.
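The second survey page described above can be sketched as a simple grid structure. This is only an illustration of the layout; the indicator row labels are hypothetical placeholders, not the actual "top 20" list:

```python
# Sketch: the second survey page as a 20-row x 5-column grid, with
# symptoms/triggers as rows and question categories as columns.
# Row labels are hypothetical placeholders.

questions = ["Availability", "Reliability", "Usefulness", "Difficulty", "Frequency"]
top_20 = [f"indicator_{i}" for i in range(1, 21)]  # placeholder row labels

# One multiple-choice response cell per (indicator, question) pair.
grid = {row: {q: None for q in questions} for row in top_20}

print(len(grid), len(grid["indicator_1"]))  # 20 5
```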

#### **3.2. Part 2: investigating seizure reporting performance and capabilities**

The second part of our study specifically investigated clinical patient self-reporting needs surrounding patient seizure reporting. This included:

• Interviews and a literature review to identify aspects of seizure reporting.

• A technology review and a meta-analysis to present common performance statistics.

Moreover, we discussed our findings with clinicians. This feedback highlighted several important yet underexplored data collection opportunities for supporting diagnosis, treatment, and self-management.

The findings are intended to help providers to assess the extent that current seizure detection devices may be suitable for complementing patient self-reporting capabilities.

#### *3.2.1. Investigating seizure reporting needs*


Interviews, a literature review, and an online survey were conducted as background for establishing clinical seizure reporting needs during diagnosis and treatment.

The interviews included two fellows and one attending at the Emory School of Medicine and provided us with an opportunity to discuss seizure reporting practices among current patients and caregivers. The literature review included 27 papers and focused on identifying seizure reporting needs for informing clinical decision-making. The most common clinical information needs were seizure frequency, duration, type, and ability to observe seizure progression over time.

Next, we administered an online survey to an additional group of clinicians to further assess the perceived importance and accuracy of these seizure reporting measures. The survey was administered to 10 epileptologists at Emory (5 residents, 1 fellow, and 4 attendings) and included 23 Likert scale ratings. The Likert ratings were presented on a scale from 1 to 5, with 1 being "not important" and 5 being "most important," while ratings of self-reporting accuracy ranged from <20 to >80% across 5 even intervals. The respondents were also asked which type of patient reporting error would be the most detrimental during treatment and were given three choices: (a) patient overreporting, (b) patient underreporting, or (c) both errors are equally detrimental.
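The accuracy response options can be encoded as a simple binning function. This is a minimal sketch, assuming the five even intervals are <20%, 20-40%, 40-60%, 60-80%, and >80%; the exact boundary handling is our assumption for illustration:

```python
# Sketch: map a self-reporting accuracy percentage onto the survey's
# five even response intervals (<20%, 20-40%, 40-60%, 60-80%, >80%).
# Boundary handling (lower bound inclusive) is an assumption.

def accuracy_interval(percent: float) -> str:
    if percent < 20:
        return "<20%"
    if percent < 40:
        return "20-40%"
    if percent < 60:
        return "40-60%"
    if percent < 80:
        return "60-80%"
    return ">80%"

print(accuracy_interval(68.0))  # 60-80%
```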

The online survey results highlighted the need for a follow-on technology review. Most notably, while survey respondents indicated a strong need for more accurate patient- and caregiver-reported seizure movements and seizure counts, limited research was available regarding the applicability of current technologies for addressing these needs. In addition, the literature did not specifically compare patient self-reporting to system performance [37].

#### *3.2.2. Evaluating seizure reporting technologies*

The technology review addressed these shortcomings by evaluating the performance of current systems for detecting and counting seizures, characterizing patient seizure motion, and comparing performance against that of current patient self-reporting capabilities.

Inclusion criteria included all systems that had been evaluated within a home or clinical setting. Exclusion criteria included vagus nerve [31] and brain stimulation [32] systems that required surgical implantation and electroencephalogram (EEG) systems that can be burdensome for patients during long-term use [33, 34].

The first step was to choose performance measures that would address two sets of findings from our earlier research. First, our survey respondents showed no consensus regarding the relative impact of overreporting or underreporting seizures. Second, our interviews with clinicians indicated that most patients and caregivers report seizures themselves without the help of seizure detection devices [3, 5]. It was, therefore, important for us to choose performance metrics that would both quantify overreporting and underreporting and support comparison between seizure reporting systems and patient self-reporting rates from the literature [9].

To address these requirements, we evaluated each system in terms of three statistics: precision, recall, and F-score. Recall, or sensitivity, is the fraction of all seizures that were detected. High recall values reflect a low chance of underreporting or missing a seizure. Missed seizure events are problematic as untreated seizures can have serious long-term health consequences.


$$\text{Recall} = \frac{\text{true positives}}{\text{true positives} + \text{false negatives}} \tag{2}$$
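As a minimal sketch, recall in Eq. (2), together with the precision and F-score statistics defined below in Eqs. (3) and (4), can be computed directly from raw event counts. The counts here are illustrative, chosen to match a daytime self-reporting scenario with perfect precision and 68% recall:

```python
# Sketch: compute recall (Eq. 2), precision (Eq. 3), and F-score (Eq. 4)
# from raw true positive / false positive / false negative counts.

def recall(tp: int, fn: int) -> float:
    # Fraction of all actual seizures that were detected.
    return tp / (tp + fn)

def precision(tp: int, fp: int) -> float:
    # Fraction of detections that correspond to actual seizures.
    return tp / (tp + fp)

def f_score(p: float, r: float) -> float:
    # Harmonic mean of precision and recall.
    return 2 * p * r / (p + r)

# Illustrative counts: 68 of 100 seizures detected, no false alarms.
p = precision(68, 0)   # 1.0
r = recall(68, 32)     # 0.68
print(round(f_score(p, r), 2))  # 0.81
```

Because the F-score is a harmonic mean, a perfect precision of 1.0 cannot compensate for low recall, which is why nighttime self-reporting (with most seizures missed) scores far lower than daytime reporting.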

Precision is the fraction of detected seizure events that correspond to actual seizures. High precision values reflect a low chance of overreporting seizures or triggering false alarms. Low false alarm rates are important to avoid changing already effective medication.

$$\text{Precision} = \frac{\text{true positives}}{\text{true positives} + \text{false positives}} \tag{3}$$

The F-score balances overreporting and underreporting and is expressed as:

$$\text{F} = \frac{2 \cdot \text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}} \tag{4}$$

In practice, notable inconsistencies between studies required making several assumptions. Many systems did not report precision and recall directly. In some cases, these rates had to be calculated based on information in the papers. Next, several studies presented statistics in terms of only those patients with seizures (PWS) [38–40], while other studies reported statistics for all patients in a study [41–43]. Including all patients meant that some patients without seizures might also contribute false positives. To address this discrepancy, we recomputed precision to include only those false positives from patients with seizures. For example, Poh et al. [41] reported performance for all patients, and precision subsequently increased 24.54% when calculated among only those patients with seizures.

Next, we calculated patient self-reporting performance based on previous studies [18]. In this case, we assumed perfect self-reporting precision. Blum et al. [7, 9] evaluated seizure awareness among 31 patients with partial and generalized type epilepsies and observed that patients never falsely reported seizures. We then calculated recall based on observations from a similar study by Hoppe et al. [9], in which 91 patients with focal type epilepsies failed to report 32.0% of seizures during the day and 85.8% of seizures while asleep at night. This resulted in a precision of 100% for both day and night time reporting, recall values of 68.0 and 14.5%, and F-scores of 0.81 and 0.25 for day and night time reporting, respectively.

**4. Results**

This section summarizes our key research findings. **Figure 2** presents the type, priority, and characteristics of important information that clinicians need patients to report along with notable perceived patient self-reporting challenges and agreement between participants.

#### **4.1. Part 1: self-reporting types, priorities, and characteristics**

#### *4.1.1. Self-reporting types*

The first step for our analysis was establishing the types of patient self-reported data that clinicians need from patients. The bottom row of **Figure 2** shows a sorted list with highest to lowest priority clinical information needs.

**Figure 2.** "Top 20" types, priorities, and characteristics of neurocognitive self-reporting needs (top row) and specific self-reporting challenges (sorted from greatest to least importance) (bottom row).

Self-Reporting Technologies for Supporting Epilepsy Treatment

http://dx.doi.org/10.5772/intechopen.70283
