**Figure 6.** Experimental results with regard to the number of estimation locations.

*3.3.3. Estimation by different numbers of reference nodes*

We conducted another experiment to investigate the influence of the number of reference nodes on the estimation accuracy. All the experiments above used the 5 reference nodes illustrated in Fig. 4. In this experiment, however, we changed the total number of reference nodes in two ways: one was to subtract two reference nodes from the existing nodes, and the other was to add three nodes to the existing nodes. In the former case, we only used the reference nodes installed at the kitchen cabinet, the TV shelf, and the sofa, while in the latter case we attached reference nodes to the bed, the shoebox, and the desk.

**Figure 7.** Experimental results with regard to the number of reference nodes.

The graph of the results shown in Fig. 7 suggests two things in particular. The first is that there is little difference in the best estimation accuracy across the different numbers of reference nodes. The other is that KNN and DKNN perform strongly with fewer reference nodes, whereas the 3-layered NN has trouble estimating object locations with fewer reference nodes.

232 Radio Frequency Identification from System to Applications

**4. Object localization under human presence conditions**

The conditions of the previous experiments are far from realistic. In a living environment, humans are present and handle the tagged objects. The presence of a human degrades the localization performance because the human body disturbs the radio waves. Our system can measure not only the environmental sensor data but also the sensor data on the target nodes. We extended our previous method [7] to handle the sensor data on the target nodes. Just as the previous method limits the location candidates based on estimated human behavior, the new method also limits the location candidates based on the sensor data on the target nodes.

#### **4.1. Object localization using sensor data on target node**

In the following algorithm, we use DKNN for RSSI-based localization. Because the sensor data on the target nodes indicate the node location well, the algorithm merges the sensor data on the target nodes into the RSSI-based localization results before combining them with the sensor data on the reference nodes.
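The chapter's DKNN variant is defined earlier in the text; as a rough illustration of what RSSI-fingerprint matching with a distance-weighted k-nearest-neighbor rule looks like, consider the following sketch. The function name, the inverse-distance weighting, and the sample vectors are assumptions for illustration, not the chapter's exact algorithm.

```python
import math
from collections import defaultdict

def weighted_knn_locate(rssi_vector, fingerprints, k=5):
    """Estimate a location label from an RSSI vector by distance-weighted
    k-nearest neighbors over a fingerprint database.

    fingerprints: list of (rssi_vector, location_label) training samples,
    one RSSI reading per reference node in each vector.
    """
    # Take the k fingerprints closest to the query in Euclidean distance.
    nearest = sorted(
        fingerprints,
        key=lambda fp: math.dist(rssi_vector, fp[0]),
    )[:k]
    # Each neighbor votes for its label, weighted by inverse distance,
    # so closer fingerprints dominate the decision.
    votes = defaultdict(float)
    for vec, label in nearest:
        d = math.dist(rssi_vector, vec)
        votes[label] += 1.0 / (d + 1e-9)
    return max(votes, key=votes.get)
```

Because closer fingerprints carry larger votes, a scheme like this degrades gracefully as the number of reference nodes (vector dimensions) shrinks, which is consistent with the behavior reported for KNN and DKNN above.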

#### *4.1.1. Integration of target-attached sensor data and RSSI-based estimation results*

In our system, each target node contains a humidity sensor, a temperature sensor, and a luminous intensity sensor. The humidity and temperature sensors show changes only at specific locations, whereas the luminous intensity sensor is highly sensitive to the environment. This is why the system sets the estimation priority according to which sensors have reacted. First, the system integrates the estimation based on the humidity or temperature sensors into the RSSI-based estimation. Then, the system integrates the estimation based on the luminous intensity sensor into the results.

**•** Integration of Humidity and Temperature Sensor Data

Because both the humidity sensor and the temperature sensor change dramatically only at specific locations, the system gives top priority to estimations based on these sensors. For example, because the system can detect object motion through the acceleration sensor, if the humidity rises around the time when an object is set down, it probably indicates that the object has been placed near the sink, because the sink is the only place that can cause a dramatic change in humidity. In the same way, if the temperature drops around the time when an object is set down, it suggests that the object has probably been placed inside the refrigerator, because the preliminary experiments indicate that the temperature only changes dramatically in the refrigerator. The system places its highest level of trust in these sensor reactions because they limit the object location candidates to one in each case. A localization example based on this policy is shown in Fig. 8a).

**•** Integration of Luminous Intensity Sensor Data

The luminous intensity sensor does not limit the object location candidates to only one. This sensor can provide the system with several candidates for the object location. For example, if the luminous intensity drops dramatically around the time when an object is set down, it suggests to the system that the object has been placed in a dark place, such as the inside of a drawer or underneath the bed. Because the luminous intensity changes sensitively depending on the location, the system may even be able to tell the difference between the inside of a drawer and underneath the bed by comparing the sensor's outputs. A localization example based on this policy is illustrated in Fig. 8b).
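The two-stage priority described in this subsection can be sketched as follows. The thresholds, the location names, and the dark-place candidate set are illustrative assumptions, not values from the chapter.

```python
# Illustrative two-stage integration of target-attached sensor data into an
# RSSI-based ranking of candidate locations. All thresholds and the mapping
# from sensor reactions to locations are assumptions for this sketch.
HUMIDITY_RISE = 10.0      # assumed %RH rise that indicates the sink
TEMP_DROP = -5.0          # assumed temperature drop that indicates the fridge
LIGHT_DROP = -200.0       # assumed lux drop that indicates a dark place
DARK_PLACES = {"InDeskCabinet", "InKitchenCabinet", "BedBottom"}

def integrate_target_sensors(rssi_ranking, d_humidity, d_temp, d_light):
    """rssi_ranking: candidate locations ordered best-first by RSSI.
    d_*: sensor changes observed around the moment the object is set down."""
    # Humidity and temperature reactions each pin the location to a single
    # candidate, so they override the RSSI-based result entirely.
    if d_humidity >= HUMIDITY_RISE:
        return ["Sink"]
    if d_temp <= TEMP_DROP:
        return ["Fridge"]
    # A drop in luminous intensity only narrows the candidates to dark
    # places; the RSSI ordering is kept among the survivors.
    if d_light <= LIGHT_DROP:
        narrowed = [c for c in rssi_ranking if c in DARK_PLACES]
        if narrowed:
            return narrowed
    return rssi_ranking
```

The ordering of the checks encodes the priority policy: single-candidate sensors first, the multi-candidate light sensor second, and the plain RSSI ranking as the fallback.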

**Figure 8.** Typical examples of sensor data integration on target nodes.

#### *4.1.2. Integration of sensor data on reference nodes into the results*

Because the reference-attached sensor data provide the system with information about human behavior and locations, the system can limit the object location candidates. For example, if a sensor embedded on a sofa continuously reacts around the time when an object is set down, it is easy for the system to guess that the object location is not far from the sofa. In our experimental room, the reference-attached sensors consist of pressure-type switch sensors and microswitch sensors. Pressure-type sensors are installed in the chairs, the sofa, and the bed, whereas the microswitch sensor is installed in the drawer of a cabinet. Each time that an object is set down, the system refers to the reactions of all types of reference-attached sensors around that moment, and keeps track of them. The pressure-type switch sensors, such as those in the chair modules, usually continue to react, not only at the moment when the object is placed, but also during the periods before and after placement, so there is little possibility that the system will fail to detect them. For the microswitch sensors such as the drawer modules, however, the sensor reactions usually occur ahead of the moment when the object is placed. If the system only refers to the sensor data within a particular period, it might fail to detect them. However, by tracking the sensor reactions over longer periods, the possibility of missed detection decreases. Thus, the system can use the reference-attached sensors to provide several location candidates, and with the following integration algorithm, shown in Fig. 9, the system integrates reference-based estimation into target-based estimation.

**Figure 9.** Typical example of sensor data integration on reference nodes.

Object and Human Localization with ZigBee-Based Sensor Devices in a Living Environment

http://dx.doi.org/10.5772/53366

#### **4.2. Experiment**

To evaluate the performance of our system, an experiment was conducted. The experimental room, the sensors for the reference nodes, and the deployment locations are the same as those in Fig. 1. The target locations are illustrated in Fig. 10. The total number of target locations is 19. For the training data sets, we collected 400 samples per target location in advance under human-absent conditions. For the evaluation, the subject put down and picked up the object at all of the target locations 5 times, which means that 19×5=95 location test data were collected. Strictly speaking, in a single trial, one subject conveyed a target node from location to location in the following order: OnDeskCabinet, InDeskCabinet, StereoShelf, Sofa, Shelf, BedHead, BedBottom, OnKitchenCabinet, InKitchenCabinet, InCabinet, Table, Desk, Chair1, Chair2, TVShelf, ShoeBox, OnCabinet, Fridge, and Sink. For the performance evaluation, we calculated the estimation accuracy in the same way as in the previous experiments. We compared the following five conditions.

**1. RSSI Only:** Estimation based on RSSI data between target node and reference nodes only;

**2. RSSI & Target Sensors:** Estimation directly based on RSSI and target sensor data;

**3. Integration of RSSI and Target Sensor Data:** Estimation based on proposed integration algorithm using the RSSI and target sensor data;


**4. Integration of RSSI and Reference Sensor Data:** Estimation based on proposed integration algorithm using the RSSI and reference sensor data;


**5. Integration of RSSI and All Sensor Data:** Estimation based on proposed integration algorithm using the RSSI and all sensor data (our system performance).
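The reference-sensor integration used in conditions 4 and 5 (described in Section 4.1.2) can be sketched roughly as follows. The sensor-to-location mapping and the window length are assumptions for illustration; the chapter's Fig. 9 gives the actual algorithm.

```python
# Illustrative sketch of limiting object location candidates with
# reference-attached switch sensors. The mapping and window are assumed.
WINDOW = 30.0  # seconds around the placement moment to scan for reactions

# Assumed mapping from a reference-attached switch sensor to the locations
# it vouches for when it reacts.
SENSOR_LOCATIONS = {
    "sofa_pressure": {"Sofa"},
    "chair1_pressure": {"Chair1"},
    "drawer_microswitch": {"InCabinet"},
}

def limit_by_reference_sensors(ranking, reactions, t_placed, window=WINDOW):
    """ranking: candidate locations ordered best-first (target-based result).
    reactions: list of (sensor_name, timestamp) switch-sensor reactions.
    Keeps only candidates vouched for by a sensor that reacted within the
    window; falls back to the original ranking if none reacted."""
    candidates = set()
    for sensor, t in reactions:
        # Microswitches often fire before the object is actually placed,
        # so the window extends both before and after t_placed.
        if abs(t - t_placed) <= window:
            candidates |= SENSOR_LOCATIONS.get(sensor, set())
    narrowed = [loc for loc in ranking if loc in candidates]
    return narrowed if narrowed else ranking
```

Widening the window reflects the observation above that tracking reactions over longer periods reduces missed detections for the drawer-type microswitches.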

**Figure 10.** Experimental conditions for object localization with human presence.

The estimation results are shown in Fig. 11. Dynamic interference sources such as a human being had a serious effect on the RSSI-based estimation results. When we used the datasets in our learning database to conduct cross-validation, the estimation accuracy was more than 90%. In this experiment, however, estimation based only on RSSI performed poorly.

Estimation based on the RSSI and target sensor data shows lower performance than that of RSSI-only based estimation. In this evaluation, we added another two dimensions (humidity and luminous intensity) to the original RSSI datasets. Because the luminous intensity changes are quite sensitive to the surroundings and to how the target node is placed, they might mislead the estimation to the wrong locations. However, this approach has one notable strength. In the RSSI-based approach, the sink is one of the most difficult places to estimate because it is surrounded by metal. By introducing the humidity data, however, the system estimated the sink correctly through all the scenario tests. This fact indicates that if we integrate target sensor data into RSSI more effectively, then the performance will become higher than that of this simple combination of RSSI and target sensor data.

**Figure 11.** Results for the object localization algorithm at each K in DKNN.


Our proposed integration algorithm based on the RSSI and target sensor data shows much higher performance than the previous two approaches. This clearly shows the effectiveness of our integration algorithm, which can correct the estimation even if the RSSI-based estimation provides a wrong result. Our proposed integration algorithm based on the RSSI and reference sensor data also shows high performance, similar to that of the RSSI and target sensor data approach. Examining the results in more detail, the reference sensor data and the target sensor data contribute corrections at different locations, which suggests that a system combining both should produce higher performance than either integration algorithm alone. The system that integrates RSSI with all kinds of sensor data indeed shows the highest performance.

The details of the estimation based on each approach are shown in Fig. 12. These results demonstrate that the use of sensors and the limitation of the candidates improve the object localization. The locations where the performance improved are the sink, the drawers, the bed, and the sofa, i.e., locations where the sensors can easily localize the object. These improved locations indicate the effectiveness of the sensor data use. The results also showed that there are several locations that could not be correctly estimated by any of the five algorithms. All five algorithms estimate an object location starting from the results of RSSI-based estimation, but if the RSSIs are heavily distorted by the presence of a human being, even the integration algorithms can hardly correct the mistaken estimation.

**Figure 12.** Performance results for each location.

…cations illustrated. Data for the human absence case were also collected. This problem is regarded as a 5-class classification. Direct human interference means that the RSSIs between two particular reference nodes are frequently missed, i.e., the RF signals could not be received successfully. This data deficit may lead to human location estimation failure. We therefore compensated for the missing values using the average of the successfully collected RSSIs. We evaluated the ratio of true positives over all the data. Each pattern recognition method adopted the most suitable parameters for the estimation. Ten-fold cross-validation was also used for the evaluation.

**Figure 13.** Conditions for experiments on human localization in four areas.

The estimation results are shown in Table 1. These results indicate the possibility of estimating the four assumed human locations using the RSSIs among the reference nodes. Estimation with the 3-layered NN algorithm appears to be a little difficult, but estimations based on KNN and DKNN showed high accuracies.

The estimation results suggest two points in particular. The first is that the 3-layered NN algorithm is poor at distinguishing the human presence case from the human absence case, and is also weak at human location estimation compared with the other two pattern recognition methods. The other is that KNN and DKNN can not only tell the difference between the human presence case and the human absence case, but can also estimate human locations with high accuracy even under the condition where the human presence is unknown.
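The compensation for missing RSSIs described above (replacing a data deficit with the average of the successfully collected RSSIs on that link) can be sketched as a simple per-column mean imputation. The function name and the sample values are assumptions for illustration.

```python
def impute_missing_rssi(samples):
    """Replace missing RSSI readings (None) with the per-link average of the
    successfully collected readings, column by column.

    samples: list of RSSI vectors; each position is one reference-node link.
    """
    n_links = len(samples[0])
    means = []
    for j in range(n_links):
        # Average only the successfully received RSSIs on this link.
        collected = [s[j] for s in samples if s[j] is not None]
        means.append(sum(collected) / len(collected))
    # Fill each deficit with the link mean, leaving real readings untouched.
    return [
        [means[j] if s[j] is None else s[j] for j in range(n_links)]
        for s in samples
    ]
```

After this step every sample has a complete feature vector, so KNN, DKNN, and the 3-layered NN can all be trained and evaluated with ten-fold cross-validation on the same data.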
