**1. Introduction**

Sepsis is a life-threatening clinical condition in which the body's dysregulated response to infection leads to organ dysfunction. Sepsis can affect virtually every organ system; however, the organs involved and the degree of dysfunction vary markedly among patients, and in severe cases the condition is fatal [1, 2]. In the early stages of the disease, sepsis is relatively easy to treat given the availability of broad-spectrum antibiotics, yet difficult to recognize [3]; in the later stages, diagnosis becomes much easier but treatment becomes extremely hard. Early diagnosis of sepsis is therefore essential for better clinical management [4].

Current manual assessment of sepsis using screening tools, such as the Sequential Organ Failure Assessment (SOFA) score for ICU patients, is complex in terms of the clinical signs that must be measured and lacks adequate sensitivity [5, 6]. In contrast, AI and machine learning-based automated clinical decision support systems that use easily accessible clinical data have shown significant improvement in adherence to treatment protocols in ICUs by guiding physicians through predefined workflows [7–11]. The abundant availability of electronic medical records (EMRs) in the current era has made such automated systems far more feasible [12]. However, most machine learning (ML)-based models and automated decision support systems lack proper explainability because of their uninterpretable black-box nature [13, 14]. Explainable Artificial Intelligence (XAI) addresses some of these limitations of black-box AI systems by adding explainability, thereby assisting clinicians in interpreting a diagnosis, recommending future actions, and improving the quality of predictions [15–17]. The development of such an explainable ML framework for sepsis onset prediction is an important and active area of investigation.

This work presents a novel clinical application: an explainable ML framework for sepsis onset prediction among ICU patients, built on the physiological medical knowledge of the given clinical signs, obtained via extensive analysis, and on popular gradient boosting ML techniques. The framework's design comprises an optimal explainable gradient boosting architecture for clinical decision making, and we investigate the generalizability and interpretability of the proposed system.
