**12.1. Local bottleneck target based generation of local cover sets**

Having established the underlying equivalence class structure, we now present one possible way of generating a subset of the local cover sets. Our approach is centered around the bottleneck target. For any target *ti*, the total amount of time this target can be monitored by any schedule is given by:

$$lt(t_i) = \sum_{\{s \mid t_i \in T(s)\}} b(s)$$

Clearly, there is one such target with the smallest *lt*(*ti*) value, which is hence a bottleneck for the entire network [40]. Without global information, it is not possible for any sensor to determine whether the global bottleneck is a target in its vicinity. However, for any sensor *s*, there is a least covered target in *T*(*s*) that is the *local* bottleneck. A key observation is that the global bottleneck target is also the local bottleneck target for the sensors in its neighborhood. Hence, if every sensor optimizes for its local bottleneck target, then one of these local optimizations also optimizes for the global bottleneck target. We use *tbot* to denote this local bottleneck target. Let *Cbot* be the set of sensors that can cover this local bottleneck target. That is,

$$\mathsf{C}_{bot} = \{ s \mid t_{bot} \in T(s) \}$$
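The definitions above can be sketched directly in code. The following is a minimal illustration (not the authors' C++ implementation), assuming a dict-based representation in which `coverage` maps each sensor to its target set *T*(*s*) and `battery` maps each sensor to its remaining lifetime *b*(*s*); all names are illustrative.

```python
def lifetime_bound(target, coverage, battery):
    """lt(t_i): sum of b(s) over all sensors s with t_i in T(s)."""
    return sum(battery[s] for s, targets in coverage.items() if target in targets)

def local_bottleneck(sensor, coverage, battery):
    """The least-covered target in T(sensor), i.e., the local bottleneck t_bot."""
    return min(coverage[sensor], key=lambda t: lifetime_bound(t, coverage, battery))

def bottleneck_cover_set(t_bot, coverage):
    """C_bot = { s | t_bot in T(s) }."""
    return {s for s, targets in coverage.items() if t_bot in targets}

# Tiny example: t2 is covered only by s1, so it is s1's local bottleneck.
coverage = {"s1": {"t1", "t2"}, "s2": {"t1"}, "s3": {"t1"}}
battery = {"s1": 1.0, "s2": 2.0, "s3": 1.5}
t_bot = local_bottleneck("s1", coverage, battery)
c_bot = bottleneck_cover_set(t_bot, coverage)
```

In the example, *lt*(*t1*) = 1.0 + 2.0 + 1.5 = 4.5 while *lt*(*t2*) = 1.0, so *t2* is the local bottleneck for *s1* and *Cbot* = {*s1*}.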

**Implementation:** This understanding of bottleneck targets, along with our definition of equivalence classes, now gives us a simple means to generate local covers. Since no coverage schedule can do any better than the total amount of time that the global bottleneck can be covered, instead of trying to generate all local covers, what we really need are covers in the equivalence classes corresponding to each sensor *si* ∈ *Cbot*, such that each class can be completely exhausted. Also, to only select covers that conserve the battery of the sensors in *Cbot*, we want to ensure that the covers we generate are disjoint in *Cbot*. In terms of equivalence classes, for any two classes [*si*] and [*sj*] such that *si*,*sj* ∈ *Cbot*, we want to generate cover sets that are in these classes but do not include *both si* and *sj*.

To generate such cover sets, we can start by picking only one sensor *s′bot* in *Cbot*. This ensures that the local bottleneck target is covered. For each target *ti* in the one/two-hop neighborhood being considered, we can then randomly pick a sensor *s*, giving preference to any *s* ∉ *Cbot*. Note that this does not necessarily create a sensor cover in the class [*s′bot*], since any one of our randomly picked sensors could be the bottleneck for the cover generated. However, replacing that sensor with another randomly picked sensor that covers the same target ensures that we finish by using a cover in [*s′bot*]. Such a selection essentially ensures that we burn the entire battery of this sensor *s′bot* in *Cbot* through different covers, while trying to avoid using other sensors in *Cbot*. This process is then repeated for every sensor in *Cbot*. Hence, instead of generating all local covers, we only generate a small sample (a constant number) of these, corresponding to the equivalence class of each sensor covering the bottleneck target, plus some related randomly picked covers. We already showed that there can be at most *n* equivalence classes for the network. Thus, the sampled graph generated has *O*(*n*) nodes. If we consider the maximum number of sensors covering any target to be a constant for the network, sampling only takes cumulative time *O*(*nτ*), where *τ* = max*s*∈*S* |*T*(*s*)|, since we do this for *n* sensors, each of which has a maximum of *τ* targets to cover, which are in turn covered by a constant number of sensors (as per our assumption). Even if this assumption is removed, in the worst case all *n* sensors could be covering the same target, making the time complexity *O*(*n*²*τ*). Next, we run our basic heuristic from [35] on this sampled LD graph.
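The cover-generation step above can be sketched as follows. This is a simplified illustration under the same dict-based representation as before (`coverage` maps sensors to target sets); the replacement loop for the case where a randomly picked sensor turns out to be the cover's bottleneck is omitted for brevity, and all names are illustrative.

```python
import random

def sample_local_cover(s_bot, local_targets, coverage, c_bot, rng=random):
    """Build one local cover containing s_bot, preferring sensors outside
    C_bot for the remaining targets, as described in the text."""
    cover = {s_bot}
    covered = set(coverage[s_bot])  # s_bot covers the local bottleneck target
    for t in local_targets:
        if t in covered:
            continue
        # All sensors that can cover target t ...
        candidates = [s for s in coverage if t in coverage[s]]
        # ... but prefer those outside C_bot to conserve C_bot batteries.
        preferred = [s for s in candidates if s not in c_bot]
        choice = rng.choice(preferred or candidates)
        cover.add(choice)
        covered |= coverage[choice]
    return cover
```

Repeating this for each sensor in *Cbot* (and for a few random replacements per cover) yields the constant-size sample per equivalence class described above.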

Recall that even though the number of global sensor covers is exponential in the number of sensors, our heuristics presented in [15, 35] worked by constructing *local* covers. After exchanging one- or two-hop coverage information with neighboring sensors, a sensor can exhaustively construct all possible local covers. A local cover here is a sensor cover that covers all the local targets. The number of local covers is also exponential but is determined by the maximum degree of the graph and the number of local targets, typically much smaller values than the number of all sensors or targets. The heuristics then construct the LD graph over these local covers. The choice of which cover to use is determined by looking at properties of the LD graph, such as the degree of each cover in the LD graph.

By making use of the idea of related covers in the same equivalence class, our goal is to use our existing heuristics from [15, 35] but to modify them to run over a subset of the local covers as opposed to all local covers. This should give considerable speedup and, if the subset is selected carefully, may result in only a slight reduction of the overall lifetime. We present such a local cover sampling scheme in Section 12.1, present the modified basic algorithm of [15, 35] operating on this sample in Section 8, and finally evaluate the effectiveness of sampling in Section 13.

## **13. Performance evaluation**

In this section, we evaluate the performance of the proposed sampling scheme and compare it against our degree based heuristics of [35]. By not constructing all local covers and instead constructing a few covers for key equivalence classes, we should achieve considerable speedup. But the effectiveness of sampling can only be evaluated by analyzing the tradeoff between faster running time and possibly reduced performance. The objective of our simulations was to study this tradeoff. For completeness, we create both one-hop and two-hop versions of our sampling heuristic and also compare their performance to two other algorithms in the literature, the 1-hop algorithm LBP [4] and the 2-hop algorithm DEEPS [5].

In order to compare the equivalence class based sampling against our previous degree based heuristics, LBP, and DEEPS, we use the same experimental setup and parameters as employed in [4]. We carry out all the simulations in C++. For the simulation environment, we consider a static wireless network of sensors and targets scattered randomly in a 100*m* × 100*m* area. We conduct the simulation with 25 targets randomly deployed, and vary the number of sensors between 40 and 120 in increments of 20, with each sensor having a fixed sensing range of 60*m*. The communication range of each sensor is assumed to be two times the sensing range [44, 46]. For these simulations, we use the linear energy model, wherein the power required to sense a target at distance *d* is proportional to *d*. We also experimented with the quadratic energy model (power proportional to *d*²). The results showed similar trends to those obtained for the linear model.
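The two energy models can be stated concretely as follows. This is a minimal sketch: the proportionality constant `k` and the battery value in the example are illustrative, not parameters from the experimental setup above.

```python
def sensing_power(distance, model="linear", k=1.0):
    """Power required to sense a target at the given distance:
    proportional to d (linear model) or to d^2 (quadratic model)."""
    if model == "linear":
        return k * distance
    if model == "quadratic":
        return k * distance ** 2
    raise ValueError(f"unknown energy model: {model}")

def time_to_deplete(battery, distance, model="linear"):
    """How long a sensor can monitor one target before its battery drains."""
    return battery / sensing_power(distance, model)

# Example: under the linear model, a battery of 50 units monitoring a
# target 10m away lasts 50 / 10 = 5 time units.
```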

Figure 8 shows the network lifetime for the different algorithms. As can be seen from the figure, the sampling heuristic is only 7-9% worse than the degree based heuristic. Sampling also outperforms the 1-hop LBP algorithm by about 10%. It is interesting to observe that for smaller network sizes, sampling is actually much closer to the degree based heuristics in terms of performance.

**Figure 8.** Comparison of Network Lifetime with 25 Targets

**Figure 9.** Comparison of Running Time with 25 Targets


**Table 3.** Comparison of Network Lifetime for 1-hop algorithms

Now that we have seen that sampling works well compared to the degree based heuristic, the question that remains is: how much faster is the sampling algorithm? Figure 9 compares head-to-head the running times of the degree based heuristic (potentially exponential in *m*) and the linear time sampling algorithm. As can be seen from the figure, the running time of the sampling algorithm is about half that of the degree based heuristic.

Finally, we individually study the 1-hop (Table 3) and 2-hop (Table 4) sampling heuristics against comparable algorithms. For the 1-hop algorithms, we also include a randomized sampling algorithm that makes completely random picks for each target, without considering properties of the equivalence classes. The intention is to ensure that the performance of our sampling heuristic can be attributed to the selection algorithm. For the 2-hop versions of


**Table 4.** Comparison of Network Lifetime of 2-hop algorithms

our proposed sampling heuristic, the target set *T*(*s*) of each sensor is expanded to include ∪*s′*∈*N*(*s*,1) *T*(*s′*) and the neighbor set is expanded to all 2-hop neighbors, i.e., *N*(*s*, 2). Covers are now constructed over this set using the same process as before. As can be seen from the tables, both the 1-hop and 2-hop versions are under 10% worse than the comparable degree based heuristics. Also, the 2-hop sampling slightly outperforms DEEPS, with a 5% improvement in network lifetime.
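The 2-hop expansion can be sketched as follows, again assuming the dict-based representation used earlier (`coverage` maps sensors to target sets, and a hypothetical `neighbors` maps each sensor to its 1-hop neighbor set); names are illustrative.

```python
def expand_targets(sensor, coverage, neighbors):
    """Expanded target set: T(s) together with T(s') for every s' in N(s, 1)."""
    expanded = set(coverage[sensor])
    for nb in neighbors[sensor]:
        expanded |= coverage[nb]
    return expanded

def two_hop_neighbors(sensor, neighbors):
    """N(s, 2): all sensors reachable in at most two hops, excluding s itself."""
    one_hop = set(neighbors[sensor])
    two_hop = set(one_hop)
    for nb in one_hop:
        two_hop |= neighbors[nb]
    two_hop.discard(sensor)
    return two_hop
```

Running the same cover-construction process over these expanded sets yields the 2-hop variant of the sampling heuristic.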
