Many intended applications of Wireless Sensor Networks involve having the network monitor a region or a set of targets. To ensure that the area or targets of interest can be covered, sensors are usually deployed in large numbers by randomly dropping them over the region, typically by flying an aircraft over it and air-dropping the sensors. Since the cost of deployment far exceeds the cost of the individual sensors, many more sensors are dropped than are needed to minimally cover the region. This leads to a very dense network and gives rise to an overlap in the monitoring regions of individual sensors.

A simplistic approach to meeting the coverage objective would be to turn on all sensors after deployment. But this needlessly reduces the *lifetime* of the network, since the overlap between monitoring regions implies that not all sensors need to be on at the same time. It can also lead to a very lossy network, with many collisions at the medium access control (MAC) layer due to the density of nodes. In order to extend the lifetime of a sensor network while maintaining coverage, a minimal subset of the deployed sensors is kept active while the other sensors sleep. Through some form of scheduling, this active subset changes over time until there are no more such subsets available to satisfy the coverage goal. In using such a scheme to extend the lifetime, the problem is twofold. First, we need to select these minimal subsets of sensors. Then there is the problem of *scheduling* them, wherein we need to determine how long to use a given set and which set to use next. For an arbitrarily large network, there is an exponential number of possible subsets, making the problem intractable, and it has been shown to be NP-complete in [6, 20].

Centralized solutions like those in [6, 41] assume that the entire network structure is known at one node (typically the gateway node), which then computes the schedule for the network using *linear programming* based algorithms. Like any centralized scheme, this suffers from poor scalability, a single point of failure and a lack of robustness. The latter is particularly relevant in the context of sensor networks, since sensor nodes are deployed in hostile environments and are prone to frequent failures.

Existing distributed solutions in [4, 5, 42] work by having each sensor exchange information with its neighbors (limited to *k* hops). These algorithms use information such as the targets covered and the battery available at each sensor to greedily decide which sensors remain on. The algorithms are organized into rounds, so that the set of active sensors is periodically reshuffled at the beginning of each round. The problem with these algorithms is that they use simple greedy criteria to decide which sensors become active at each round and thus do not efficiently take the problem structure into account.

A key application of wireless sensor networks is the collection of data for reporting. There are two types of data reporting scenarios: event-driven and on-demand [10]. Event-driven reporting occurs when one or more sensor nodes detect an event and report it to the sink. In on-demand reporting, the sink initiates a query and the nodes respond with data to this query.

**2. Coverage problems**

Coverage problems essentially state how well a network observes its physical space. As pointed out in [32], coverage is a measure of the quality of service (QoS) of the WSN. The goal is to have each point of interest monitored by at least one sensor at all times. In some applications, it may be a requirement to have more than one sensor monitor a target in order to achieve fault tolerance. Typically, nodes are randomly deployed in the region of interest because deterministic sensor placement is infeasible. This means that more sensors are deployed than needed, to compensate for the lack of exact positioning and to improve fault tolerance in harsh environments. The question of placing an optimal number of sensors in a deterministic deployment has been examined in [17, 26, 34]. However, in this chapter we focus on networks with a very dense deployment of sensors, so that there is significant overlap in the targets each sensor monitors. This overlap will be exploited to schedule sensors into a low-power sleep state so as to improve the lifetime of these networks. Note that this definition of the network lifetime differs from some other definitions, which measure lifetime in terms of the number of operations the network can perform [22].

**3. Problem statement**

The lifetime problem can be stated as follows. Given a monitored region *R*, a set of sensors *S* and a set of targets *T*, find a monitoring schedule for these sensors such that

• the total time of the schedule is maximized,
• all targets are constantly monitored, and
• no sensor is in the schedule for longer than its initial battery allows.

**4. Related work**

In this section, we briefly survey existing approaches to maximizing the lifetime of sensor networks while meeting certain coverage objectives. [9] gives a more detailed survey of the various coverage problems and the scheduling mechanisms they use. [38] also surveys the coverage problem along with other algorithmic problems relevant to sensor networks. We end this section by focusing on two algorithms, LBP [4] and DEEPS [5], since we use them for comparisons against our algorithms.

The reason for wanting to schedule sensors into sense-sleep cycles, as discussed in Section 1, stems from the fact that sensor nodes have four states: *transmit, receive, idle and sleep*. As shown in [36] for the WINS Rockwell sensor, the transmit, receive and idle states all consume much more power than the sleep state; hence, it is desirable for a sensor to enter the sleep state to conserve its energy. The goal of sensor scheduling algorithms is to select the activity state of each sensor so as to allow the network as a whole to monitor its points of interest for as long as possible. For a more detailed look at power consumption models for ad-hoc and sensor networks we refer the reader to [18, 19, 25]. We now look at coverage problems in more detail.

**Table 1.** Centralized Algorithms

| Name | Area/Target | Disjoint | Main Idea |
|------|-------------|----------|-----------|
| Abrams, Goel [1] | Area | Yes | Greedy: Max uncovered area |
| Meguerdichian [33] | Area | No | Integer Linear Program |
| Cardei [6] | Target | Yes | Mixed Integer Programming |
| Shah [3] | Area | Yes | LP, Garg-Könemann |
| Cardei [8] | Target | No | Integer Linear Program |

The maximum lifetime coverage problem has been shown to be NP-complete in [1, 6]. Initial approaches to the problem in [1, 6, 41] considered the problem of finding the maximum number of *disjoint* cover sets of sensors. This allowed each cover to be used independently of others. However, [3, 8] and others showed that using non-disjoint covers allows the lifetime to be extended further and this approach has been adopted since.
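Most of the cover-based formulations surveyed below can be written as one linear program. As a sketch in our own notation (not the exact program from any one of these papers), let *C*<sub>1</sub>, …, *C*<sub>*m*</sub> be the candidate covers (each monitoring all targets), *t*<sub>*j*</sub> the time cover *C*<sub>*j*</sub> is active, and *b*<sub>*s*</sub> the initial battery of sensor *s*:

```latex
\begin{aligned}
\text{maximize} \quad & \sum_{j=1}^{m} t_j \\
\text{subject to} \quad & \sum_{j \,:\, s \in C_j} t_j \le b_s \qquad \forall\, s \in S, \\
& t_j \ge 0 \qquad \forall\, j.
\end{aligned}
\end{aligned} % each cover monitors every target, so the objective equals the lifetime
```

Since every cover monitors all targets, any feasible assignment of times is a valid schedule and the objective is exactly the network lifetime; forcing each sensor into a single cover recovers the disjoint variant.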

Broadly speaking, the existing work in this category can be classified into two groups: Centralized Algorithms and Distributed Algorithms. For centralized approaches, the assumption is that a single node (usually the base station) has access to the entire network information and can use it to compute a schedule that is then uploaded to the individual nodes. Distributed algorithms work on the premise that a sensor can exchange information with its neighbors within a fixed number of hops and use this to make scheduling decisions. We now look at the individual algorithms in both these areas.

A common approach taken with centralized algorithms is to formulate the problem as an optimization problem and use linear programming (LP) to solve it [3, 6, 16, 33]. In [41], the authors develop a most-constrained least-constraining heuristic and demonstrate its effectiveness on a variety of simulated scenarios. The main idea of this heuristic is to minimize the coverage of sparsely covered areas within one cover. Such areas are identified using the notion of the critical element, defined as the element that is a member of the smallest number of sets. Their heuristic favors sets that cover a high number of uncovered elements, that cover more sparsely covered elements, that do not cover the area redundantly, and that redundantly cover only the elements that do not belong to sparsely covered areas. [33] is a follow-up work by the same authors in which they formulate the area coverage problem as an Integer LP and relax it to obtain a solution. They also present several ILP-based formulations and strategies to reduce overall energy consumption while maintaining guaranteed sensor coverage levels, demonstrate the practicality and effectiveness of these formulations on a variety of examples, and provide comparisons with several alternative strategies. They also show that the ILP-based technique can scale to large and dense networks with hundreds of sensor nodes.
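As a rough sketch in our own notation (the actual formulations in [33] differ in their treatment of energy and coverage levels), the core set-selection step can be written as a covering ILP: with indicator variable *x*<sub>*i*</sub> for sensor *i* and *F*<sub>*i*</sub> the set of fields it covers,

```latex
\begin{aligned}
\text{minimize} \quad & \sum_{i} x_i \\
\text{subject to} \quad & \sum_{i \,:\, f \in F_i} x_i \ge 1 \qquad \forall\, \text{fields } f, \\
& x_i \in \{0, 1\}.
\end{aligned}
```

The LP relaxation replaces *x*<sub>*i*</sub> ∈ {0, 1} with 0 ≤ *x*<sub>*i*</sub> ≤ 1, and an integral cover is then recovered by rounding the fractional solution.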

In order to solve the target coverage problem, [6] considers the disjoint cover set approach; modeling the solution as a Mixed Integer Program shows an improvement over [41]. The authors define the disjoint set covers (DSC) problem and prove its NP-completeness. They also prove that any polynomial-time approximation algorithm for the DSC problem has a lower bound of 2. They first transform DSC into a maximum-flow problem (MFP), which is then formulated as a mixed integer program (MIP). Based on the solution of the MIP, the authors design a heuristic to compute the number of covers. They evaluate the performance by simulation against the most-constrained minimally-constraining heuristic proposed in [41] and find that their heuristic yields a larger number of covers (a larger lifetime) at the cost of a greater running time.




[3] formulates a packing LP for the coverage problem. Using the (1 + *ε*) Garg-Könemann approximation algorithm [21], they provide a (1 + *ε*)(1 + 2 ln *n*) approximation of the problem. They also present an efficient data structure to represent the monitored area with at most *n*<sup>2</sup> points while guaranteeing full coverage, which is superior to the previously used approach based on grid points in [41]. They further present distributed algorithms that trade off monitoring and power consumption, but these are improved upon by the authors in LBP and DEEPS.

We solve a similar problem for sensors with *adjustable* ranges in [16]. We present a linear programming based formulation that also uses the (1 + *ε*) Garg-Könemann approximation algorithm [21]. The main difference is the introduction of an adjustable range model that allows sensors to vary their sensing and communication ranges smoothly. This was the first model to allow sensors to vary their range to any value up to a maximum. The model is an accurate representation of physical sensors and allows significant power savings over the discretely varying adjustable-range model.

A different algorithm to work with disjoint sets is given in [7]. Disjoint cover sets are constructed using a graph coloring based algorithm that has area coverage lapses of about 5%. The goal of their heuristic is to achieve energy savings by organizing the network into a maximum number of disjoint dominating sets that are activated successively. The heuristic to compute the disjoint dominating sets is based on graph coloring. Simulation studies are carried out for networks of large sizes.

[1] also gives a centralized greedy algorithm that picks sensors based on the largest uncovered area. The authors design three approximation algorithms for a variation of the SET K-COVER problem, where the objective is to partition the sensors into covers such that the number of covers that include an area, summed over all areas, is maximized. The first algorithm is randomized and partitions the sensors to within a provable fraction of the optimum. The other two are a distributed greedy algorithm and a centralized greedy algorithm, and approximation ratios are presented for each of them.

[8] also deals with the target coverage problem. Like similar algorithms, it extends the sensor network lifetime by organizing the sensors into a maximal number of set covers that are activated successively, but it allows non-disjoint set covers. The authors model the solution as the maximum set covers problem and design two heuristics that efficiently compute the sets, using linear programming and a greedy approach. The greedy algorithm selects a critical target at each step: the least covered target. For the greedy selection step, the sensor with the greatest contribution to the critical target is selected.
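The greedy cover construction just described can be sketched as follows. This is a simplified, self-contained version under our own naming (the LP-based heuristic of [8] is not shown): each round builds one cover by repeatedly serving the critical (least-covered) target with the sensor that contributes most to the still-uncovered targets.

```python
# Sketch of a critical-target greedy cover heuristic (names are ours).

def greedy_set_covers(coverage, battery, slot=1.0):
    """coverage: sensor -> set of targets; battery: sensor -> energy units.
    Returns a schedule as a list of (active sensor set, duration) pairs."""
    battery = dict(battery)                  # do not mutate the caller's dict
    targets = set().union(*coverage.values())
    schedule = []
    if not targets:
        return schedule                      # nothing to monitor
    while True:
        alive = {s for s, b in battery.items() if b >= slot}
        reachable = set().union(*(coverage[s] for s in alive))
        if not targets <= reachable:         # some target can no longer be covered
            break
        uncovered, cover = set(targets), set()
        while uncovered:
            # critical target: covered by the fewest remaining alive sensors
            crit = min(uncovered,
                       key=lambda t: sum(t in coverage[s] for s in alive))
            # greedy step: the sensor covering crit with the largest contribution
            best = max((s for s in alive if crit in coverage[s]),
                       key=lambda s: len(coverage[s] & uncovered))
            cover.add(best)
            alive.discard(best)
            uncovered -= coverage[best]
        for s in cover:
            battery[s] -= slot
        schedule.append((cover, slot))
    return schedule
```

With three sensors `{'a': {1, 2}, 'b': {2, 3}, 'c': {1, 3}}` holding one unit of battery each, one two-sensor cover monitors all three targets for one slot, after which no full cover remains.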

The distributed algorithms in the literature can be further classified into greedy, randomized and other techniques. The greedy algorithms [1, 4, 5, 8, 29, 41] all share the common property of picking the set of active sensors greedily based on some criteria. [41] considers the area coverage problem and introduces the notion of a *field* as the set of points that are covered by the same set of sensors. The basic approach behind the picking of a sensor is to first pick the one that covers the largest number of previously uncovered fields and to then avoid including more than one sensor that covers a sparsely covered field.

**Table 2.** Distributed Algorithms

| Name | Area/Target | Disjoint | Main Idea |
|------|-------------|----------|-----------|
| Sliepcivic [41] | Area | Yes | Greedy: Max uncovered fields |
| Tian [42] | Area | No | Geometric calculation of sponsored area |
| PEAS [45] | Area | No | Probing based determination of sponsored area |
| CCP [44] | Area | No | Random timers to evaluate coverage requirements |
| OGDC [46] | Area | No | Random back off node volunteering |
| Lu [29] | Area | No | Highest overall denomination sensor picks |
| Abrams [1] | Area | Yes | Randomized, Greedy picks max uncovered area |
| Cardei et al. [8] | Target | No | Sensor with highest contribution to bottleneck target |
| LBP [4] | Target | No | Targets are covered by higher energy nodes |
| DEEPS [5] | Target | No | Minimize energy consumption for bottleneck |

[1] builds on this work and presents three algorithms that solve variations of the set k-cover problem. The greedy heuristic they propose works by selecting the sensor that covers the largest uncovered area. [29] defines the sensing denomination (SD) of a sensor as its contribution, i.e., the area left uncovered when the sensor is removed. The authors assume that each sensor can probabilistically detect a nearby event, and build a probabilistic model of network coverage by considering the data correlation among neighboring sensors. The greater the contribution of a sensor to the network coverage, the higher its SD. Based on the location information of neighboring sensors, each sensor can calculate its SD value in a distributed manner. Sensors with higher sensing denomination have a higher probability of remaining active.

[3] gives a distributed algorithm based on using the faces of the graph. If all the faces that a sensor covers are covered by other sensors with higher battery that are in an active or deciding state, then a sensor can switch off (sleep). Their work has been extended to target coverage in the load balancing protocol (LBP).

Some distributed algorithms use randomized techniques. Both OGDC [46] and CCP [44] deal with the problem of integrating coverage and connectivity. They show that if the communication range is at least twice the sensing range, a covered network is also connected. [46] uses a random back-off at each node so that nodes volunteer to be the starting node. OGDC addresses the issues of maintaining sensing coverage and connectivity by keeping a minimum number of sensor nodes in the active mode. The authors investigate the relationship between coverage and connectivity and derive, under the ideal case in which node density is sufficiently high, a set of optimality conditions under which a subset of working sensor nodes can be chosen for complete coverage. The OGDC algorithm is fully localized and can maintain coverage as well as connectivity, regardless of the relationship between the radio range and the sensing range. OGDC achieves similar coverage with up to a 50% improvement in the lifetime of the network. A drawback of OGDC is that it requires each node to know its own location.




In [31] the authors combine computational geometry with graph-theoretic techniques. They use Voronoi diagrams together with graph search to design a polynomial-time (worst and average case) algorithm for coverage calculation with homogeneous isotropic sensors. They also analyze and experiment with using these techniques as heuristics to improve coverage.

[44] sets a random timer for each node, after which the node evaluates its current state based on the coverage provided by its neighbors. The authors present a Coverage Configuration Protocol (CCP) that can provide the different degrees of coverage requested by applications. This flexibility allows the network to self-configure for a wide range of applications. They also integrate CCP with SPAN [11, 12] to provide both coverage and connectivity guarantees. [1] also presents a randomized algorithm that assigns each sensor to a cover chosen uniformly at random.

A different approach is taken in PEAS [42, 45]. PEAS [45] is a localized, distributed algorithm with a probing-based off-duty rule, and it has high resilience to node failures and topology changes. Every waking sensor broadcasts a probe (PRB) packet with a probing range *γ*, and any working node that hears this probe responds. If the probing sensor receives at least one reply, it can go back to sleep. The range *γ* can be chosen based on several criteria. Note that this algorithm does not preserve coverage over the original area. The results for PEAS showed an increase in the network lifetime in linear proportion to the number of deployed nodes. In [42] the authors give a distributed and localized algorithm in which every sensor has an off-duty eligibility rule, and they give an algorithm for a node to compute its sponsored area. To prevent the occurrence of blind points caused by two sensors switching off at the same time, a random back-off is used. They show improved performance over PEAS.
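A minimal sketch of the probing rule just described, under our own naming (PEAS also adapts the probing rate over time, which we omit): a waking node probes within range *γ* and sleeps again only if some working node is close enough to reply.

```python
import math

# Sketch of the PEAS off-duty rule (identifiers are ours): a node that
# wakes up probes within range gamma; if any working node lies within
# that range, a reply is heard and the prober goes back to sleep,
# otherwise it starts working itself.

def peas_decision(node, working_nodes, gamma):
    """node: (x, y) position of the probing sensor.
    working_nodes: positions of currently active sensors."""
    hears_reply = any(math.dist(node, w) <= gamma for w in working_nodes)
    return "sleep" if hears_reply else "work"
```

Choosing a larger *γ* thins out the working set but widens potential coverage gaps, which matches the remark above that PEAS does not preserve coverage over the original area.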

To our knowledge, [44] was the first work to consider the *k*-coverage problem. [27] also addresses the *k*-coverage problem, from the perspective of choosing enough sensors to ensure coverage. The authors consider different deployments in which each sensor is active with a given probability, and obtain bounds for the deployment. [47] solves the problem of picking minimum-size connected *k*-covers. The authors state this as an optimization problem and design a centralized approximation algorithm that delivers a near-optimal solution. They also present a communication-efficient localized distributed algorithm for the problem.

Now we look at the two protocols that we compare our heuristics against. The load balancing protocol (LBP) [4] is a simple 1-hop protocol that works by attempting to balance the load between sensors. Sensors can be in one of three states: sense/on, sleep/off, or vulnerable/undecided. Initially all sensors are vulnerable and broadcast their battery levels along with information on which targets they cover. Based on this, a sensor decides to switch to the off state if its targets are covered by a higher-energy sensor in either the on or the vulnerable state. On the other hand, it remains on if it is the sole sensor covering a target. This is an extension of the work in [3]. LBP is simplistic in that it attempts to share the load evenly between sensors instead of balancing the energy among the sensors covering a specific target.
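The LBP off-duty rule can be sketched as follows; identifiers are ours, and the protocol's 1-hop message exchange is abstracted into dictionaries that each node is assumed to have filled from its neighbors' broadcasts.

```python
# Sketch of the LBP decision rule (names are ours): a vulnerable sensor
# stays on if it is the sole sensor covering some target, and switches
# off only when every target it covers is also covered by a
# higher-energy sensor that is in the on or vulnerable state.

def lbp_decide(me, state, covers, battery):
    """state: sensor -> 'on' | 'vulnerable' | 'off';
    covers: sensor -> set of targets; battery: sensor -> energy."""
    for target in covers[me]:
        rivals = [s for s in covers
                  if s != me and target in covers[s]
                  and state[s] in ("on", "vulnerable")]
        if not rivals:
            return "on"      # sole sensor covering this target
        if not any(battery[s] > battery[me] for s in rivals):
            return "on"      # no higher-energy sensor covers this target
    return "off"
```

For two vulnerable sensors covering the same target, the lower-energy one switches off while the higher-energy one stays on, which is exactly the load-sharing behavior described above.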

The other protocol we consider is DEEPS [5]. The maximum duration that a target can be covered is the sum of the batteries of all the nearby sensors that can cover it; this is known as the life of a target. The main intuition behind DEEPS is to minimize the energy consumption rate around those targets with smaller lives. A sensor thus has several targets with varying lives. A target is defined as a *sink* if it is the shortest-life target for at least one sensor covering that target; otherwise, it is a *hill*. To guard against leaving a target uncovered during a shuffle, each target is assigned an in-charge sensor. For each sink, its in-charge sensor is the one with the largest battery for which this is the shortest-life target. For a hill target, its in-charge is the neighboring sensor whose shortest-life target has the longest life. An in-charge sensor does not switch off unless its targets are covered by another sensor. Apart from this, the rules are identical to those in the LBP protocol. DEEPS relies on two-hop information to make these decisions.
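The sink/hill classification can be sketched as follows (names are ours; the in-charge assignment and the two-hop information exchange are omitted): target life is the total battery of the sensors covering it, and a target is a sink exactly when it is the shortest-life target of some sensor covering it.

```python
# Sketch of the DEEPS target classification (identifiers are ours).

def classify_targets(covers, battery):
    """covers: sensor -> set of targets; battery: sensor -> energy.
    Returns target -> 'sink' or 'hill'."""
    targets = set().union(*covers.values())
    # life of a target: total battery of the sensors that can cover it
    life = {t: sum(battery[s] for s in covers if t in covers[s])
            for t in targets}
    # a target is a sink if it is the shortest-life target of some sensor
    sinks = {min(covers[s], key=lambda t: life[t])
             for s in covers if covers[s]}
    return {t: "sink" if t in sinks else "hill" for t in targets}
```

With `covers = {'a': {1, 2}, 'b': {2, 3}}` and batteries `a: 1, b: 3`, target 2 has the longest life (4) and is a hill, while targets 1 and 3 are the shortest-life targets of sensors a and b respectively, so both are sinks.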
