**1. Introduction**

Fog computing is an alternative to the cloud computing model, since this paradigm reduces the amount of data transmitted over the network and the computational load required in the cloud. However, some approaches in the field try to take advantage of both paradigms simultaneously. The flexibility offered by this new branch is aimed mainly at the Internet of Things landscape, which needs an infrastructure that covers all of its requirements; fog computing fits this need, as it allows decision-making and data management to be performed locally [1, 2].

In fog computing, part of the processing that would otherwise be sent to the cloud can take place on nearby personal devices situated at the network edge. The latency problem is thereby mitigated, as part of the processing happens close to the users' devices. In this model, edge devices can be set up as small local data centers supporting multi-tenancy and elasticity [3]. Fog computing therefore reduces the amount of data sent to the cloud and, consequently, both the communication latency and the amount of data the cloud must process. Although fog computing is a good solution for the problems arising from cloud computing, the paradigm presents several challenges of its own.

Fog computing is a solution designed mainly for Internet of Things (IoT) applications [3], and this type of application typically processes information collected from one or more sources in real time. From that information, decisions must be made to satisfy users' needs [3] while maintaining Quality of Service (QoS) and, consequently, Quality of Experience (QoE). However, relying exclusively on edge resources is not always possible, as some computing and data storage requirements may exceed the capacity of edge devices. In addition, a user's resource configuration may not have enough capacity to meet that user's request due to availability constraints or memory and processing limitations.

In addition, the technological diversity of edge devices and the growth in user demand make it difficult to allocate resources in a way that benefits both the environment as a whole and individual applications. Edge devices impose a very high level of heterogeneity, complicating resource allocation and the adoption of technologies capable of dealing with many different types of devices. When allocating resources in any data center, it is important to meet the user's demands; it is equally important, however, to maintain as much load balancing as possible so that the resources can be shared with other users. Resource scheduling in a fog environment must therefore find the best fit between QoE and load balancing, as sketched below.
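To make the load-balancing side of this trade-off concrete, the following is a minimal sketch of a greedy least-loaded assignment over heterogeneous edge devices. The device names, capacities, and job sizes are hypothetical illustrations and this is not the scheduling model proposed in this paper.

```python
# Minimal sketch: greedy least-loaded assignment of jobs to heterogeneous edge devices.
# Device capacities and job sizes are hypothetical values for illustration only.
def least_loaded_assignment(jobs, capacities):
    """Assign each job to the device whose relative load would stay lowest.

    jobs:       dict mapping job id -> workload
    capacities: dict mapping device id -> processing capacity
    Returns a dict mapping job id -> device id.
    """
    load = {device: 0.0 for device in capacities}
    plan = {}
    # Placing the largest jobs first tends to keep the final loads more even.
    for job, size in sorted(jobs.items(), key=lambda item: -item[1]):
        device = min(load, key=lambda d: (load[d] + size) / capacities[d])
        load[device] += size
        plan[job] = device
    return plan

print(least_loaded_assignment({"j1": 400.0, "j2": 250.0, "j3": 600.0},
                              {"edge-a": 100.0, "edge-b": 50.0}))
```

The heuristic favors spreading work relative to each device's capacity; a QoE-aware scheduler would additionally weigh how each placement affects the requesting user's response time.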

The related works considered in this paper aim to establish techniques for managing computational resources in distributed systems, focusing either on system performance or on user satisfaction. Given the difficulty of establishing the trade-off between performance and satisfaction, these works tend to concentrate on only one of the two. Among them, knowledge models based on artificial intelligence and ontologies are applied. Our proposal addresses this gap by considering the performance/satisfaction trade-off through the development and application of context and quality-of-experience parameters. In this sense, this paper proposes a Quality of Context (QoC)-based approach aimed at the user's QoE, measured from the job attendance time (makespan).
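As an illustration of the metric, the sketch below shows how the makespan of a set of jobs assigned to fog nodes can be computed, assuming that each node processes its jobs sequentially. The node names, job workloads, and processing rates are hypothetical values, not data from the experiments in this paper.

```python
# Minimal sketch: makespan of a set of jobs assigned to fog nodes.
# Workloads and processing rates are hypothetical values for illustration only.
from collections import defaultdict

def makespan(assignments, job_length, node_speed):
    """Return the completion time of the last node to finish its queue.

    assignments: dict mapping job id -> node id
    job_length:  dict mapping job id -> workload (e.g., million instructions)
    node_speed:  dict mapping node id -> processing rate (workload units per second)
    """
    finish_time = defaultdict(float)
    for job, node in assignments.items():
        finish_time[node] += job_length[job] / node_speed[node]
    return max(finish_time.values())

# Example: three jobs scheduled on two heterogeneous fog nodes.
jobs = {"j1": 400.0, "j2": 250.0, "j3": 600.0}
nodes = {"edge-a": 100.0, "edge-b": 50.0}
plan = {"j1": "edge-a", "j2": "edge-b", "j3": "edge-a"}
print(makespan(plan, jobs, nodes))  # 10.0: edge-a finishes last, after 4 + 6 seconds
```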

### **1.1 Paper organization**

The remainder of this paper is organized as follows: Section 2 addresses the basic concepts used by the model proposed in Sections 3 and 4; Section 5 discusses the experiment conducted and presents the results; Section 6 addresses related works, while final considerations are presented in Section 7.
