**2. ZigBee-based sensor device**

Traditional active RFID systems can capture large segments of human behavior, even without the use of a complicated system like MITes. Environmental information can be measured using sensors in the tags. With the addition of this environmental information, the RFID system can easily identify the object with the attached tag. However, while identification alone is suitable for object management, information about object handling and object locations is required to measure human behavior. For object handling, the work of Philipose et al. [2] indicates that the object handling sequence assists with the estimation of human behavior. This information can be captured easily with sensors included in the tags. For location, as an example of the use of location information for human behavior recognition, the information that a cup is on a sink indicates that someone is washing the cup, while the presence of the cup on a table suggests that someone is drinking from it. Also, if it is known that one specific person uses the cup, this information also identifies the person who is drinking. Although direct information about humans is desirable for behavior recognition, direct measurement is difficult with active RFID systems. If the inhabitants wear tags, some information can be captured; however, wearing tags constrains natural behavior. Intille et al. [3] suggested that a rough human location is useful for human behavior recognition. Based on their work, we decided that our measurement target for humans using active RFID systems is sub-room-level human localization without the humans wearing tags. Therefore, our research goal is object and human localization using an active RFID system.

Popular approaches for tag position estimation use radio signal strength indicators (RSSIs) for communication between tags and readers, because the RSSI depends on the distance between the tag and the reader [4-6]. The simplest approach uses a triangulation algorithm. However, in the home environment, which contains many obstacles for RFID systems such as furniture and electrical appliances, localization is more difficult because the strength of the radio wave can change easily with the room situation. One solution is the deployment of multiple reference tags, which indicate the true position [5, 6]. However, this approach is impractical in a living environment because of its cost and difficulty. Distortion of the radio waves by the occupant's presence also decreases the localization performance. When we consider the above applications, accurate position (i.e., x-y-z position) estimation is not necessary, but a rough location (e.g., on a table, in a drawer, or in a cabinet) is required. Based on this idea, we have already proposed a method for localization of tag-attached objects [7]. The method uses a machine learning technique and a rule-based algorithm to combine RSSI data with sensor data captured by sensors distributed across the room. This combination improves the performance in the presence of humans. However, this method has some disadvantages, including the cost of a commercial RFID system, the necessity for the tag readers to have a local area network (LAN) connection, the additional introduction of distributed sensors, and the limitations of the estimation locations (e.g., the system cannot distinguish any drawers that do not contain switch sensors).

To overcome these problems, we must use a new active RFID system instead of the current commercial active RFID systems. We have focused on ZigBee technology for wireless communication.


224 Radio Frequency Identification from System to Applications


To avoid limitations in the sensor variety and the communication protocols, we developed a new ZigBee-based prototype system. The system consists of target nodes, which correspond to tags in an RFID system, and reference nodes, which correspond to readers. The difference between this system and a traditional RFID system is that our system enables communication among the readers and can gather RSSI data between them, because the reference nodes are also regarded as a kind of target node. The devices consist of the XBee, a commercial ZigBee communication module, and the Arduino or Arduino Fio microcontroller, which is commonly used in prototype device construction because of its compactness and ease of programming. The antenna used for wireless transmission and reception is non-directional, to reduce the dependence of system performance on device orientation. The developed sensors and deployment examples are shown in Fig. 1.
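The key architectural difference described above (reference nodes also acting as a kind of target node, so reference-to-reference RSSI links are collected alongside tag-to-reader links) can be illustrated with a minimal data model. This is a sketch only; all names and values here are illustrative, not taken from the actual implementation.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Node:
    """A ZigBee device in the system (illustrative model)."""
    node_id: str
    is_reference: bool  # reference nodes are fixed; target nodes are attached to objects


@dataclass(frozen=True)
class RssiSample:
    """One RSSI measurement for a sender-to-receiver link."""
    sender: Node
    receiver: Node
    rssi_dbm: float


def usable_links(samples):
    """Unlike a conventional RFID system, reference-to-reference links are
    kept as well: they carry information about human presence in the room."""
    tag_links = [s for s in samples if not s.sender.is_reference]
    ref_links = [s for s in samples
                 if s.sender.is_reference and s.receiver.is_reference]
    return tag_links, ref_links


# Hypothetical deployment: one tagged cup, two fixed reference nodes.
cup = Node("target-cup", is_reference=False)
shelf = Node("ref-shelf", is_reference=True)
desk = Node("ref-desk", is_reference=True)
samples = [RssiSample(cup, shelf, -71.0), RssiSample(shelf, desk, -58.5)]
tag_links, ref_links = usable_links(samples)
```

In a conventional RFID system only `tag_links` would exist; the extra `ref_links` are what enable the human-localization algorithm discussed later.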

Object and Human Localization with ZigBee-Based Sensor Devices in a Living Environment

http://dx.doi.org/10.5772/53366

**Figure 1.** Developed devices and deployment examples.

#### **2.1. Target node**

The target node is used for object identification and localization. The node is attached to an object in a room and consists of an Arduino Fio and an XBee. The node contains an acceleration sensor (ADXL355) for detection of object handling, along with a luminosity sensor (CdS cell), a humidity sensor (HIH-4030), and a temperature sensor (TEMP6000) for measurement of the environmental status near the object. The node is battery powered; however, the current device has a battery life of only 3 days, and provision of a longer battery life will be part of our future work.

The target node detects the object handling state by using the acceleration sensor, which acts as a trigger to localize the object position. In our research, we estimate the following five motion states by analyzing the acceleration changes:

**i.** Stable: the object is in a stable state;

**ii.** Start Moving: the object begins to move;

**iii.** Keep Moving: the object continues to move;

**iv.** Ambiguous: the object is either in the "Moving" state or in the "Stable" state;

**v.** Stop Moving: the object stops moving.


To be specific, when a node shows noticeable changes in acceleration beyond a set threshold after a long time in the "Stable" state, our system judges this change to be a transition to the "Start Moving" state. Then, as long as the acceleration sensor continues to respond, the state is regarded as "Keep Moving". In practice, however, even if an object is moving, the acceleration sensor attached to the node sometimes shows no noticeable response because of the way the object moves. To avoid mistaken estimation in such cases, where changes in acceleration cannot be detected, the system does not instantly determine the state to be "Stop Moving". Instead, the system regards such a state as "Ambiguous", which means that the node is either in the "Keep Moving" state or the "Stop Moving" state. If the acceleration sensor does not output any noticeable changes within a fixed period of time, the system decides that the first moment at which the acceleration sensor's response disappeared is the "Stop Moving" state, and that the subsequent moments are the "Stable" state. Typical detection results using this algorithm are shown in Fig. 2.
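The transitions described above can be sketched as a small state machine. This is a minimal illustration, not the actual firmware: the threshold and timeout values are arbitrary placeholders, and the retroactive relabeling of an "Ambiguous" run as "Keep Moving" when the sensor responds again is our own reading of the algorithm.

```python
THRESHOLD = 0.5  # placeholder: minimum |acceleration change| counted as a response
TIMEOUT = 3      # placeholder: quiet samples before deciding "Stop Moving"


def classify_motion(deltas):
    """Label each acceleration-change sample with one of the five states."""
    labels = []
    moving = False
    quiet = 0  # length of the current run without a noticeable response
    for d in deltas:
        responding = abs(d) > THRESHOLD
        if not moving:
            labels.append("Start Moving" if responding else "Stable")
            moving = responding
        elif responding:
            if quiet:
                # The sensor responded again, so the "Ambiguous" run
                # turns out to have been "Keep Moving" all along.
                labels[-quiet:] = ["Keep Moving"] * quiet
                quiet = 0
            labels.append("Keep Moving")
        else:
            quiet += 1
            labels.append("Ambiguous")
            if quiet >= TIMEOUT:
                # Retroactive decision: the first quiet sample becomes
                # "Stop Moving" and the subsequent ones become "Stable".
                labels[len(labels) - quiet] = "Stop Moving"
                for i in range(len(labels) - quiet + 1, len(labels)):
                    labels[i] = "Stable"
                moving, quiet = False, 0
    return labels
```

For a pick-up/put-down trace such as `[0.1, 0.9, 0.8, 0.2, 0.1, 0.1, 0.0]`, the sketch yields Stable, Start Moving, Keep Moving, Stop Moving, then Stable, matching the behavior described in the text.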

To examine the validity of this algorithm, we performed some preliminary experiments. Because it is difficult to generalize all possible patterns of object motion, in the preliminary experiments we simply raised an object with an attached node, moved it for a time, and then set it down somewhere. Despite the simplicity of the algorithm, the system distinguished the moving state from the other states quite well, with a success rate of more than 90% in our experimental results.

**Figure 2.** Motion sensing example results with acceleration sensor.


#### **2.2. Reference nodes**

A reference node is used for communication with the target nodes and for collection of environmental sensor data. The node consists of an Arduino and an XBee and is capable of connecting to various sensors for environmental data collection. In our experiment, each reference node contains the same sensors as the target nodes, plus switch sensors to detect human behavior such as sitting and sleeping. Because the reference nodes must not move if they are to provide localization reference data, they are attached to fixed objects such as furniture and electrical appliances. Since these nodes do not move, their electric power is supplied through a power line.

#### **2.3. Communication protocol**

The computer for object and human localization collects and controls all of the sensor data and the RSSI values. For synchronization and simultaneous data collection, the computer controls the target nodes and the reference nodes separately through two gateway nodes, which are called client nodes. A typical communication example is shown in Fig. 3. In this example, the target nodes and reference nodes transmit sensor data periodically. The reference nodes also regularly gather the RSSI values between the reference nodes to estimate human presence and human location, based on the algorithm given in section 5 of this paper. When a target node detects object handling using the acceleration sensor, the node transmits a signal to indicate that the occupant is handling the object. After this transmission, the node sends its state periodically. When the node detects that the object has been put down somewhere, the node broadcasts the putting-down action to all reference nodes. Finally, the target node receives each reference node's data with the RSSI values and transmits all of the data to the client node. The computer calculates the object location from the collected RSSIs.

**Figure 3.** Communication protocol overview.

The Bayes error rate is the minimum achievable error rate given the distribution of the data, and KNN is guaranteed to approach the Bayes error rate for some value of K. DKNN is an extension of KNN that weights the contributions of the neighbors, so that nearer neighbors contribute more to the average than more distant neighbors; we use the inverse of the squared Euclidean distance as the weight function. NN is a classification technique that is known to demonstrate high discrimination ability for data that has multiple dimensions and is linearly inseparable. We therefore adopted these three methods in our work for object location estimation with RSSIs.

To investigate the basic object localization performance of the ZigBee-based RFID system, we conducted three experiments. Generally speaking, the classification performance depends heavily on the parameters used in the pattern recognition algorithm. For example, the performance of KNN or DKNN depends on parameters such as the value of k, whereas the performance of the NN depends on parameters such as the number of nodes in the hidden layer. In our experiments, we tried various cases by varying the parameter values and chose the best combination of parameters according to the estimation performance.

#### **3.2. Experimental conditions**

The experimental environment and conditions are shown in Fig. 4. The room contains various articles of furniture. Generally speaking, the largest contributors to reduced localization accuracy are environmental obstacles such as furniture made of metal. This environment therefore provides extreme conditions for localization. However, the difficulty in
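The distance-weighted KNN (DKNN) classification used for object localization can be sketched as follows. This is an illustrative implementation only: the RSSI fingerprints, location labels, and value of k are invented for the example, while the weight function is the inverse squared Euclidean distance mentioned in the text.

```python
from collections import defaultdict


def dknn_predict(train, query, k=3):
    """Distance-weighted KNN: each of the k nearest RSSI fingerprints votes
    for its location label with weight 1/d^2 (inverse squared Euclidean
    distance), so nearer neighbors contribute more to the decision."""
    eps = 1e-9  # avoid division by zero on an exact fingerprint match
    dists = []
    for features, label in train:
        d2 = sum((a - b) ** 2 for a, b in zip(features, query))
        dists.append((d2, label))
    dists.sort(key=lambda t: t[0])
    votes = defaultdict(float)
    for d2, label in dists[:k]:
        votes[label] += 1.0 / (d2 + eps)
    return max(votes, key=votes.get)


# Hypothetical RSSI fingerprints (dBm) seen at three reference nodes.
train = [
    ((-60, -75, -80), "table"),
    ((-62, -73, -82), "table"),
    ((-80, -61, -70), "drawer"),
    ((-79, -63, -72), "drawer"),
]
print(dknn_predict(train, (-61, -74, -81)))  # prints "table"
```

With unweighted KNN and k=3, a query lying near a class boundary can be outvoted by two slightly more distant neighbors; the 1/d² weighting lets the closest fingerprints dominate, which is the motivation for adopting DKNN alongside plain KNN.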
