**8.2 Operators must have confidence when delegating tasks to an autonomous system**

Armed forces will not use equipment or tools they do not trust. A leader must therefore have confidence in how a machine behaves, or could behave. To that end, military engineers should develop autonomous systems capable of explaining their decisions.

*New Technologies and Decision-Making for the Military. DOI: http://dx.doi.org/10.5772/intechopen.98849*

Automatic systems are predictable; one can easily anticipate how they will perform the tasks entrusted to them. This becomes more complex with autonomous systems, especially self-learning systems, where one may well know the objective of the task to be performed by the machine but have no idea how it will operate. This raises a serious question of trust in such a system. As an example, when I ask an autonomous mowing robot to mow my lawn, I know my lawn will be mowed, but I do not know exactly how the robot will proceed.
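The mowing example can be made concrete with a toy sketch. The contrast below is purely illustrative and all names are hypothetical: an *automatic* mower follows a fixed plan the operator can predict step by step, while an *autonomous* mower covers the same ground in an order driven by internal state the operator cannot inspect in advance (here, a shuffle standing in for an opaque learned policy).

```python
import random

def automatic_plan(rows: int, cols: int) -> list:
    """Deterministic boustrophedon sweep: the operator can predict every step."""
    path = []
    for r in range(rows):
        # Alternate sweep direction on each row, like a classic mowing pattern.
        cells = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        path.extend((r, c) for c in cells)
    return path

def autonomous_plan(rows: int, cols: int, seed=None) -> list:
    """Covers the same cells, but in an order the operator cannot foresee.

    The shuffle is a stand-in for an opaque learned policy: the *outcome*
    (full coverage) is known, the *behaviour* (visit order) is not.
    """
    path = [(r, c) for r in range(rows) for c in range(cols)]
    random.Random(seed).shuffle(path)
    return path
```

Both plans visit exactly the same set of cells, so the lawn is mowed either way; only the first lets the operator anticipate the robot's behaviour, which is precisely the gap that erodes trust.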

A good illustration is the set of expectations soldiers have of the Artificial Intelligence embedded in autonomous systems:

- AI should be trustworthy. Adaptive and self-learning systems must be able to explain their reasoning and decisions to human operators in a transparent and understandable manner.
- AI should be explainable and predictable. One must understand the different steps of reasoning carried out by a machine that delivers a solution to a problem or an answer to a complex question. For this, a human-machine interface (HMI) that explains the system's decision-making mechanism is needed.
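One minimal way to meet the explainability expectation is to have the system pair every decision with an ordered reasoning trace that an HMI can display verbatim. The sketch below is a hypothetical toy, not a real military system: the rules, thresholds, and sensor fields are all invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    action: str
    trace: list = field(default_factory=list)  # ordered, human-readable reasoning steps

def decide(sensor: dict) -> Decision:
    """Toy rule-based decision aid: every branch records *why* it was taken."""
    trace = []
    if sensor.get("obstacle_distance_m", float("inf")) < 2.0:
        trace.append("Rule 1 fired: obstacle closer than 2 m -> stop.")
        return Decision("STOP", trace)
    trace.append("Rule 1 passed: no obstacle within 2 m.")
    if sensor.get("battery_pct", 100) < 15:
        trace.append("Rule 2 fired: battery below 15% -> return to base.")
        return Decision("RETURN_TO_BASE", trace)
    trace.append("Rule 2 passed: battery sufficient.")
    trace.append("Default: continue mission.")
    return Decision("CONTINUE", trace)
```

An operator-facing HMI could print `decision.trace` next to `decision.action`, so the human sees not only *what* the machine chose but *how* it got there. Real self-learning systems make this far harder, since the "rules" are implicit in learned parameters, which is exactly why explainability is a research problem rather than a formatting exercise.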

One must therefore focus on more transparent and personalised human-machine interfaces for the operator and the leader [6].
