### **5.2 Graphical user interface of the system**

The graphical user interface enables the user to control the scanning of the patient's head and neck and to annotate the captured data. For annotation, a digital version of a standardized sleep questionnaire is part of the application; it is described later. The web-based design of the interface allows any portable device in the network to be used for scanning and annotating the data. The interface can also be accessed locally on the PC where the server runs. The main window of the interface is shown in **Figure 7**. The menu on the left side contains several settings, described below.


The number of frames taken by "one shot" controls the temporal filtering of the data. Following our study [25], the resulting depth image from a given view is produced by averaging the stack of images in the buffer. This averaging helps to eliminate noise artifacts in the 3D reconstruction.
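A minimal sketch of this averaging step is shown below. It assumes the buffered frames are available as NumPy arrays and that invalid depth pixels are encoded as zeros (common for depth sensors); the function name `average_depth_stack` is hypothetical and not taken from the chapter.

```python
import numpy as np

def average_depth_stack(frames):
    """Temporal filtering: average a stack of depth frames from one view.

    frames -- list of 2-D uint16 depth maps captured by "one shot";
    pixels with value 0 are treated as invalid (no depth reading)
    and are excluded from the average.
    """
    stack = np.stack(frames).astype(np.float64)   # shape: (N, H, W)
    valid = stack > 0                             # mask of valid readings
    counts = valid.sum(axis=0)                    # valid samples per pixel
    sums = np.where(valid, stack, 0.0).sum(axis=0)
    # Average only over valid samples; keep 0 where no frame had a reading.
    averaged = np.divide(sums, counts,
                         out=np.zeros_like(sums), where=counts > 0)
    return averaged.astype(np.uint16)
```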

**Figure 7.** *Graphical user interface – Main window.*

Facial expression capture is a functionality prepared for further research. It could be interesting to correlate OSAS detection with normalized facial expressions (e.g., smile, neutral expression).

After a capture request, the data are zipped and sent to the acquisition server. If an error occurs, the data are saved locally and resent to the acquisition server with the next capture request. If the web server is unavailable, a warning message is displayed. The data saved on the server are identified by the patient's personal ID. The user can switch between the color and depth views.
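The capture-and-upload logic with a local fallback could look like the following sketch. The endpoint URL, directory layout, and all names are assumptions for illustration; the chapter does not specify the actual transfer protocol.

```python
import zipfile
from pathlib import Path

import requests

PENDING_DIR = Path("pending_uploads")            # local fallback storage (assumed)
SERVER_URL = "http://acquisition-server/upload"  # hypothetical endpoint

def send_capture(capture_dir: Path, patient_id: str) -> bool:
    """Zip one capture and send it to the acquisition server.

    On failure the archive is kept locally and retried with the next
    capture request; the archive name carries the patient ID so the
    data can be identified on the server.
    """
    PENDING_DIR.mkdir(exist_ok=True)
    archive = PENDING_DIR / f"{patient_id}_{capture_dir.name}.zip"
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in capture_dir.iterdir():
            zf.write(f, arcname=f.name)

    # Try to upload every pending archive, oldest first.
    ok = True
    for pending in sorted(PENDING_DIR.glob("*.zip")):
        try:
            with open(pending, "rb") as fh:
                r = requests.post(SERVER_URL, files={"data": fh}, timeout=10)
            r.raise_for_status()
            pending.unlink()           # delivered, remove the local copy
        except (requests.RequestException, OSError):
            ok = False                 # keep the file for the next attempt
    return ok  # False -> caller shows the "server unavailable" warning
```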

### **5.3 Normalized head position assistance algorithm**

To obtain the most precise, accurate, and normalized 3D scans, we need to set the patient's head to a defined position (as similar as possible across all patients). This task may be very difficult and stressful for young patients. For this reason, we created a head position assistance tool based on an eye and head detection algorithm. The detected eye position is used to compute the difference from the ideal eye position. Since the depth map is also available, the algorithm can obtain the difference in eye positions along the Z-axis. This information is obtained only from the images of the central sensor. **Figure 8** shows the angle offsets of the detected eyes from the ideal position. The angles α and β determine how much the head is rotated or tilted. The limits were chosen empirically and can be refined in further research. Based on the depth map, we are also able to compute the distance of the head from the sensors (whether it is too close or too far). In other words, the head positioning assistance tool helps to keep the head inside the red highlighted area shown in **Figure 6b** [35].
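The offset angles can be estimated from the pixel offsets of the detected eyes via a pinhole camera model, for example as in the sketch below. This is a minimal illustration, not the authors' implementation: the ideal eye midpoint, the assumed focal length, and the tolerance limits are all hypothetical placeholders (the chapter states only that the limits were chosen empirically).

```python
import math

# Hypothetical ideal eye midpoint (pixels) and empirical tolerances (degrees).
IDEAL_X, IDEAL_Y = 320, 240
FOCAL_PX = 580            # assumed focal length of the central sensor, pixels
MAX_ALPHA, MAX_BETA = 5.0, 5.0

def head_offset_angles(eye_left, eye_right):
    """Angular offset of the eye midpoint from the ideal position.

    eye_left, eye_right -- (x, y) pixel coordinates of the detected eyes.
    Returns (alpha, beta): horizontal and vertical offset angles in degrees.
    """
    mid_x = (eye_left[0] + eye_right[0]) / 2.0
    mid_y = (eye_left[1] + eye_right[1]) / 2.0
    # Pinhole model: pixel offset -> angle via the arctangent.
    alpha = math.degrees(math.atan2(mid_x - IDEAL_X, FOCAL_PX))
    beta = math.degrees(math.atan2(mid_y - IDEAL_Y, FOCAL_PX))
    return alpha, beta

def position_ok(alpha, beta):
    """True when the head is within the empirically chosen limits."""
    return abs(alpha) <= MAX_ALPHA and abs(beta) <= MAX_BETA
```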

**Figure 8.** *Difference of eye positions: (a) X-axis offset. (b) Y-axis offset.*

When the head position is inside the optimal range, imaging can proceed. If the head position is outside the tolerance, movement, rotation, and tilting commands are shown to the user on the screen so that the head position can be adjusted. The Viola-Jones algorithm is used to detect the face in the current sensor view. This algorithm is a frequently used tool for object detection; the original algorithm detects and classifies objects into several classes, and in our case it was trained for human faces. Compared with other algorithms, its training time is relatively long, but detection is very fast. The algorithm uses Haar basis feature filters and does not use multiplications [36]. The computation time is minimized by placing the classifiers with the fewest features at the beginning of the cascade. The features are most commonly trained using the AdaBoost algorithm, which selects only those features that improve detection accuracy and potentially decrease execution time.
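Since Viola-Jones cascades are available in common libraries, the detection step can be sketched as follows. This minimal example uses OpenCV's pre-trained frontal-face and eye Haar cascades rather than the authors' own trained classifier, and the frame source is assumed.

```python
import cv2

# Pre-trained Haar cascades shipped with OpenCV (frontal face and eyes).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_face_and_eyes(frame_bgr):
    """Detect the largest face in the frame and the eyes inside it.

    Returns (face_rect, eye_rects) or (None, []) when no face is found.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                          minNeighbors=5)
    if len(faces) == 0:
        return None, []
    # Keep the largest detection; the patient's face dominates the view.
    x, y, w, h = max(faces, key=lambda r: r[2] * r[3])
    roi = gray[y:y + h, x:x + w]
    eyes = eye_cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5)
    # Map eye coordinates back to full-frame pixels.
    eyes = [(x + ex, y + ey, ew, eh) for (ex, ey, ew, eh) in eyes]
    return (x, y, w, h), eyes
```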

### **5.4 The annotation questionnaire**

In addition to the functions mentioned above, the application includes an online questionnaire, which is a digital version of the EU questionnaire [5]. Besides the 3D imaging of the patient's head, the specialist (user) is able to insert additional information about the patient (age, weight, subjective rating of intraoral anatomy, …) [35]. This additional information can be used to extend the feature set for machine learning methods and for automated diagnostics based on artificial intelligence. The implemented electronic questionnaire is shown in **Figure 9**.
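The annotated record pairing a scan with the questionnaire answers could be represented as in the sketch below. The data layout and all field names are hypothetical; the real form follows the EU questionnaire [5], whose items are not reproduced here.

```python
from dataclasses import dataclass, field

@dataclass
class PatientAnnotation:
    """One annotated capture: scan reference plus questionnaire data."""
    patient_id: str            # links the record to the scans on the server
    age: int
    weight_kg: float
    intraoral_rating: int      # subjective rating of intraoral anatomy
    questionnaire: dict = field(default_factory=dict)  # question -> answer

    def as_feature_row(self):
        """Flatten the record into one row for machine learning methods."""
        row = {"age": self.age, "weight_kg": self.weight_kg,
               "intraoral_rating": self.intraoral_rating}
        row.update(self.questionnaire)
        return row
```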
