The chapter is organized as follows. Section 2 reviews concurrency control protocols proposed for real-time database systems (RTDBSs) and describes our choice of concurrency control algorithms for accessing temporal data. Section 3 analyzes the validity of the active real-time system model and the effects of real-time data on concurrency control. Section 4 gives the relevant definitions and an effective examination mechanism for transaction data reading. Section 5 presents a concurrency control algorithm based on data temporal characteristics, RTCC-DD (Real-time Concurrency Control Algorithm Based on Data-Deadline), and proves its correctness in theory. Section 6 compares the proposed method with existing methods through experiments. Finally, Section 7 gives the summary and conclusions of this chapter and future directions for the work.

**2. Concurrency control in Real-Time Database Systems**

A Real-Time Database System (RTDBS) processes transactions with timing constraints such as deadlines [Ramamritham]. Its primary performance criterion is timeliness, not average response time or throughput, and the scheduling of transactions is driven by priority order. Given these characteristics, considerable research has been devoted to designing concurrency control methods for RTDBSs and to evaluating their performance (e.g. [Kuo, Fishwick, Haritsa]). Most of these methods are based on one of the two basic concurrency control mechanisms: locking [Lam] or optimistic concurrency control (OCC) [Kung].

In real-time systems transactions are scheduled according to their priorities, so high-priority transactions are executed before lower-priority ones. This holds, however, only if a high-priority transaction has some database operation ready for execution. If no operation from a higher-priority transaction is ready, an operation from a lower-priority transaction is allowed to execute, and a later operation of the higher-priority transaction may then conflict with the already executed operation of the lower-priority transaction. In traditional methods the higher-priority transaction must wait for the release of the resource; this is the priority inversion problem presented earlier. Data conflicts in concurrency control should therefore also be resolved on the basis of transaction priorities, criticalness, or both. Hence, numerous traditional concurrency control methods have been extended to real-time database systems. In the following sections recent and related work in this area is presented.

### **2.1 Locking-based algorithms**

In the classical two-phase locking (2PL) protocol, transactions set read locks on objects that they read, and these locks are later upgraded to write locks for the data objects that are updated. If a requested lock is denied, the requesting transaction is blocked until the lock is released. Read locks can be shared, while write locks are exclusive.

For real-time database systems, two-phase locking needs to be augmented with a priority-based conflict resolution scheme to ensure that higher-priority transactions are not delayed by lower-priority transactions. In the High Priority scheme, all data conflicts are resolved in favour of the transaction with the higher priority. When a transaction requests a lock on an object held by other transactions in a conflicting lock mode, and the requester's priority is higher than that of all the lock holders, the holders are restarted and the requester is granted the lock; if the requester's priority is lower, it waits for the lock holders to release the lock. In addition, a new read-lock requester can join a group of read-lock holders only if its priority is higher than that of all waiting write-lock operations. This protocol is referred to as 2PL-HP. Note that 2PL-HP loses some of the blocking behaviour of the basic 2PL algorithm due to the partially restart-based nature of the High Priority scheme.
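The High Priority rule can be sketched as a simple lock-request decision. This is a minimal illustration with hypothetical names, assuming larger numbers mean higher priority; a real scheduler would also handle lock-mode compatibility and wait queues:

```python
# Sketch of the 2PL-HP conflict resolution rule: on a conflicting lock
# request, either restart all lower-priority holders or wait.

def resolve_conflict(requester_priority, holder_priorities):
    """Return 'restart_holders' if the requester outranks every current
    lock holder, otherwise 'wait'."""
    if all(requester_priority > p for p in holder_priorities):
        return "restart_holders"
    return "wait"

# A high-priority requester (10) preempts holders with priorities 3 and 7.
print(resolve_conflict(10, [3, 7]))  # restart_holders
# If any holder has equal or higher priority, the requester waits.
print(resolve_conflict(5, [3, 7]))   # wait
```

Because the higher-priority transaction never waits for a lower-priority one in this rule, the priority inversion of plain 2PL cannot arise.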

Note that the High Priority scheme is similar to the Wound-Wait scheme, which is added to two-phase locking for deadlock prevention. The only difference is that High Priority uses a priority order decided by transaction timing constraints for conflict resolution decisions, while Wound-Wait employs a timestamp order usually decided by transaction arrival time. It is obvious that High Priority serves as a deadlock prevention mechanism if the priority assignment mechanism assigns a unique priority value to each transaction and does not dynamically change the relative priority ordering of concurrent transactions. Also note that 2PL-HP is free from the priority inversion problem, because a higher-priority transaction never waits for a lower-priority transaction, but restarts it.

In 2PL-WP (2PL Wait Promote) [Huang] the concurrency control analysis of [Lindstrom] is enhanced. The mechanism uses shared and exclusive locks; shared locks permit multiple concurrent readers. A new notion is introduced, the priority of a data object, defined as the highest priority among all transactions holding a lock on the data object. If the data object is not locked, its priority is undefined.

A transaction can join the read group of an object only if its priority is higher than the maximum priority of all transactions in the object's write group. Thus conflicts arise from the incompatibility of locking modes as usual. Particular care is given to conflicts that lead to priority inversions. A priority inversion occurs when a high-priority transaction requests, and blocks on, an object whose priority is lower, meaning that all the lock holders have lower priority than the requesting transaction. This method is also called 2PL-PI (2PL Priority Inheritance) [Stankovic].
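The object-priority definition and the read-group join condition can be illustrated as follows. This is a sketch with hypothetical names, not the full 2PL-WP protocol:

```python
# Sketch of 2PL-WP's data-object priority: the priority of an object is
# the highest priority among transactions currently holding a lock on it;
# it is undefined (None here) when the object is unlocked.

def object_priority(holder_priorities):
    return max(holder_priorities) if holder_priorities else None

def can_join_read_group(requester_priority, write_group_priorities):
    # A reader may join only if it outranks every transaction in the
    # object's write group (an empty write group never blocks readers).
    return all(requester_priority > p for p in write_group_priorities)

print(object_priority([4, 9, 2]))      # 9
print(object_priority([]))             # None
print(can_join_read_group(8, [5, 7]))  # True
print(can_join_read_group(6, [5, 7]))  # False
```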

Sometimes High Priority may be too strict a policy. If the lock-holding transaction Th can finish within the time that the lock-requesting transaction Tr can afford to wait, that is, within the slack time of Tr, then Th is allowed to proceed to completion while Tr waits for it. This policy is called 2PL-CR (2PL Conditional Restart) or 2PL-CPI (2PL Conditional Priority Inheritance) [Lam, Menasce].
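The conditional-restart test can be expressed directly in terms of the holder's remaining execution time and the requester's slack. The names are hypothetical and the time estimates are assumed to be known:

```python
# Sketch of 2PL-CR: restart the lock holder only if it cannot finish
# within the slack time of the higher-priority requester.

def resolve_cr(holder_remaining_time, requester_slack):
    """Return 'wait' if the holder can finish within the requester's
    slack, otherwise 'restart_holder' (the plain High Priority action)."""
    if holder_remaining_time <= requester_slack:
        return "wait"
    return "restart_holder"

print(resolve_cr(holder_remaining_time=3, requester_slack=10))   # wait
print(resolve_cr(holder_remaining_time=12, requester_slack=10))  # restart_holder
```

Compared with plain 2PL-HP, this trades a bounded wait by the high-priority transaction for the work saved by not restarting a nearly finished holder.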

In the Priority Ceiling Protocol [Sha] the aim is to minimize the duration of blocking to at most one elementary lower-priority task and to prevent the formation of deadlocks. A real-time database can often be decomposed into sets of database objects that can be modelled as atomic data sets. For example, two radar stations tracking an aircraft represent their local views in data objects O1 and O2, which might include, e.g., the current location and velocity. Each of these objects forms an atomic data set, because its consistency constraints can be checked and validated locally. The notion of atomic data sets is especially useful for tracking multiple targets.
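The ceiling-based admission test at the heart of the protocol can be sketched as follows. This is a minimal illustration with hypothetical names; the full protocol also implements priority inheritance for the suspended case, which is omitted here:

```python
# Sketch of the Priority Ceiling Protocol admission test: a new transaction
# may preempt a lock-holding transaction only if its priority exceeds the
# priority ceilings of all data objects currently locked by that holder.

def may_preempt(requester_priority, locked_object_ceilings):
    return all(requester_priority > c for c in locked_object_ceilings)

# Objects locked by the running transaction have ceilings 4 and 6.
print(may_preempt(8, [4, 6]))  # True: requester outranks every ceiling
print(may_preempt(5, [4, 6]))  # False: requester is suspended instead
```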

A simple locking method for elementary transactions is the two-phase locking method: a transaction cannot release any lock on an atomic data set unless it has obtained all the locks on that atomic data set, and once it has released its locks it cannot obtain new locks on the same atomic data set; it can, however, obtain new locks on different data sets. The theory of modular concurrency control permits an elementary transaction to hold locks across atomic data sets. This increases the duration of locking and decreases preemptibility. In this study transactions do not hold locks across atomic data sets.

The Priority Ceiling Protocol minimizes the duration of blocking to at most one elementary lower-priority task and prevents the formation of deadlocks. The idea is that when a new higher-priority transaction preempts a running transaction, its priority must exceed the priorities of all preempted transactions, taking the priority inheritance protocol into consideration. If this condition cannot be met, the new transaction is suspended and the blocking transaction inherits the priority of the highest-priority transaction it blocks.

The priority ceiling of a data object is the priority of the highest-priority transaction that may lock this object [Sha, Rajkumar, Son]. A new transaction can preempt a lock-holding transaction only if its priority is higher than the priority ceilings of all the data objects locked by the lock-holding transaction. If this condition is not satisfied, the new transaction waits and the lock-holding transaction inherits the priority of the highest-priority transaction that it blocks. The lock holder continues its execution, and when it releases its locks, its original priority is resumed. All blocked transactions are then alerted, and the one with the highest priority starts its execution.

The fact that the priority of a new lock-requesting transaction must be higher than the priority ceilings of all the data objects it accesses prevents the formation of a potential deadlock. The fact that the lock-requesting transaction is blocked for at most the execution time of one lower-priority transaction guarantees that blocking chains cannot form [Sha, Rajkumar, Son].

The Priority Ceiling Protocol is further developed in [Sha, Rajkumar, Son], where the Read/Write Priority Ceiling Protocol is introduced. It contains two basic ideas. The first is the notion of priority inheritance. The second is a total priority ordering of active transactions. A transaction is said to be active if it has started but not completed its execution; thus, a transaction can be executing, or waiting because of preemption, in the middle of its execution. Total priority ordering requires that each active transaction execute at a higher priority level than active lower-priority transactions, taking priority inheritance and read/write semantics into consideration.

### **2.2 Optimistic Concurrency Control-based algorithms**

Optimistic Concurrency Control (OCC) is based on the assumption that conflicts are rare, and that it is more efficient to let transactions proceed without the delays imposed to ensure serializability. When a transaction wishes to commit, a check is performed to determine whether a conflict has occurred. There are three phases to an optimistic concurrency control method:

• Read phase: The transaction reads the values of all data items it needs from the database and stores them in local variables. In some methods updates are applied to a local copy of the data and announced to the database system by an operation named pre-write.

• Validation phase: The validation phase ensures that all the committed transactions have executed in a serializable fashion. For a read-only transaction, this consists of checking that the data values read are still the current values for the corresponding data items. For a transaction that has updates, the validation consists of determining whether the current transaction leaves the database in a consistent state, with serializability maintained.

• Write phase: This follows the successful validation phase for update transactions. During the write phase, all changes made by the transaction are permanently stored in the database.

In optimistic concurrency control, transactions are allowed to execute unhindered until they reach their commit point, at which time they are validated. Thus, the execution of a transaction consists of three phases: read, validation, and write. The key component among these is the validation phase, where a transaction's destiny is decided. Validation comes in several flavours, but it can be carried out in basically either of two ways: backward validation and forward validation. In the backward scheme, the validation process is done against committed transactions; in forward validation, a validating transaction is checked against currently running transactions.

As explained above, in RTDBSs data conflicts should be resolved in favour of higher-priority transactions. In backward validation, however, there is no way to take transaction priority into account in the serialization process, since validation is carried out against already committed transactions; thus the backward scheme is not applicable to real-time database systems. Forward validation provides flexibility for conflict resolution, in that either the validating transaction or the conflicting active transactions may be chosen to restart, so it is preferable for real-time database systems. In addition, the forward scheme generally detects and resolves data conflicts earlier than backward validation, and hence wastes less resources and time. All the optimistic algorithms used in the previous studies of real-time concurrency control in [Haritsa] are based on forward validation; the broadcast mechanism in the algorithm OPT-BC used in [Haritsa] is an implementation variant of forward validation. We refer to the optimistic algorithm using forward validation as OCC-FV.

A point to note is that, unlike 2PL-HP, OCC-FV does not use any transaction priority information in resolving data conflicts. Thus, under OCC-FV, a transaction with a higher priority may need to restart due to a committing transaction with a lower priority. Several methods to incorporate priority information into OCC-FV have been proposed and studied.
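Forward validation can be sketched as a read/write-set intersection check against the currently active transactions. This minimal sketch, with hypothetical names, restarts every conflicting active transaction and ignores priorities, as OCC-FV does:

```python
# Sketch of forward validation: the validating transaction's write set is
# checked against the read sets of active (uncommitted) transactions;
# conflicting active transactions are marked for restart.

def forward_validate(write_set, active_read_sets):
    """Return the ids of active transactions that must restart so that
    the validating transaction can commit."""
    return {tid for tid, read_set in active_read_sets.items()
            if write_set & read_set}

active = {"T1": {"x", "y"}, "T2": {"z"}, "T3": {"y"}}
# The validating transaction wrote x and y: T1 and T3 conflict, T2 does not.
print(sorted(forward_validate({"x", "y"}, active)))  # ['T1', 'T3']
```

Note that the set of victims is computed without reference to priorities, which is exactly why a committing low-priority transaction can force a high-priority one to restart under OCC-FV.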

A protocol called Real-Time Locking (RTL), which uses locking and dynamic adjustment of the serialization order for priority-based conflict resolution, has been proposed. The basic idea of serialization order adjustment is to delay the decision on the final serialization order among transactions and to adjust the temporary serialization order dynamically in favour of transactions with high priority. This scheme can thus avoid the blocking and aborts that result from a mismatch between the serialization order and the priority order of transactions. To implement the dynamic adjustment of serialization order, the execution of a transaction in RTL is phase-wise, as in OCC. In the first phase, called the read phase, a transaction reads from the database and writes to its local workspace, as in OCC. However, unlike OCC, where conflicts are resolved only in the validation phase, RTL resolves conflicts in the read phase using transaction priority. In the write phase of RTL, the final serialization order is determined and updates are made permanent in the database. The use of phase-dependent control and a local workspace for transactions also provides the potential for a high degree of concurrency.
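The idea of adjusting the temporary serialization order in favour of a high-priority transaction can be illustrated schematically. This is only a sketch with hypothetical names; the real RTL protocol maintains this order per conflict type and finalizes it in the write phase:

```python
# Sketch of dynamic serialization-order adjustment: on a conflict, instead
# of blocking or aborting, place the higher-priority transaction earlier in
# the temporary serialization order when conflict semantics allow it.

def adjust_order(order, high, low):
    """Reorder so that `high` precedes `low`, keeping other entries fixed."""
    order = [t for t in order if t not in (high, low)]
    # Hypothetical policy: the high-priority transaction serializes first.
    return order + [high, low]

print(adjust_order(["T1", "T2", "T3"], high="T3", low="T1"))
# ['T2', 'T3', 'T1']
```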
