**4. Conclusion**


The evaluation machine has 4 GB of total memory, of which 3 GB is assigned to the OS area and 1 GB to the L3 cache, whereas UBC may use the full 4 GB without any limitation. The L3 cache was designed with a write-back policy combined with read-through: UBC acts as a read buffer cache only when the requested data is already in buffer space, while the L3 cache is dedicated to write events and retains the written data so that later read events can be served from the cache. The results show that the L3 cache roughly doubles UBC performance in the sequential read and write tests. In the read/write 50%/50% test, the L3 cache achieves 13,706.78 IOPS compared with 2,745.08 IOPS for UBC, an approximately five-fold gain over the UBC cache node system. Because the L3 cache is a dedicated block cache for the storage partition, writes are absorbed by its write-back policy and reads are served directly from it, which makes interoperable I/O highly effective. These results demonstrate the performance benefit of the dedicated L3 cache compared with the unified buffer space.

Fig. 8. L3 cache vs. OS UBC cache I/O performance: (a) sequential read, (b) sequential write, (c) read/write 50%/50%.
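To make the cache behaviour described above concrete, the following is a minimal sketch, in Python, of a dedicated block cache with a write-back policy and read-through reads. The class and method names (BlockCache, write_block, read_block, flush) and the backing-store interface are illustrative assumptions, not the chapter's implementation.

```python
# Minimal sketch (assumed names and interfaces, not the chapter's code) of a
# dedicated block cache: writes are absorbed in the cache (write-back) and
# reads are served from the cache, fetching from storage on a miss (read-through).

class BlockCache:
    def __init__(self, backing_store, capacity_blocks):
        self.store = backing_store          # hypothetical object with read(lba) / write(lba, data)
        self.capacity = capacity_blocks     # e.g. the 1 GB partition in the evaluation setup
        self.blocks = {}                    # lba -> data
        self.dirty = set()                  # LBAs written but not yet flushed to storage

    def write_block(self, lba, data):
        """Write-back: absorb the write in cache and defer the storage write."""
        self._evict_if_full()
        self.blocks[lba] = data
        self.dirty.add(lba)                 # acknowledged without touching storage

    def read_block(self, lba):
        """Read-through: serve from cache; on a miss, fetch from storage and cache it."""
        if lba not in self.blocks:
            self._evict_if_full()
            self.blocks[lba] = self.store.read(lba)
        return self.blocks[lba]

    def flush(self):
        """Write dirty blocks back to the storage partition (periodically or on shutdown)."""
        for lba in list(self.dirty):
            self.store.write(lba, self.blocks[lba])
            self.dirty.discard(lba)

    def _evict_if_full(self):
        if len(self.blocks) >= self.capacity:
            lba, data = next(iter(self.blocks.items()))  # naive FIFO eviction; a real cache would use LRU
            if lba in self.dirty:                        # write back before dropping a dirty block
                self.store.write(lba, data)
                self.dirty.discard(lba)
            del self.blocks[lba]
```

The key design point reflected here is that a write completes as soon as it lands in the cache, while the backing storage is updated later, which is what gives the dedicated cache its advantage over a unified buffer for write-heavy workloads.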

Today's web services follow a heterogeneous demand model. User-push services such as Twitter and other SNS platforms are extremely popular, and their demands drive the behaviour of the evolving web service model. Web services therefore need adaptability, online expandability, and data availability. Because the growing demand of user-push services is dynamic, managing the coherency of written data is also a major issue. A proxy server is a read-cache, data-duplication model, but it neither duplicates write data dynamically nor maintains the coherency of written data, and while keeping latency low, a single Data Field model cannot address these issues with a limited amount of write-cache memory.

The proposed system architecture ensures 1) adaptability to user-push web service demands, 2) online node sustainability, 3) low write latency without a write-cache size limit, and 4) autonomous P-Node/C-Node cache contribution, exploiting the benefits of both the L3 and L4 caches through the P-Node/C-Node L3/L4 cache structure. Write-data coherency is handled by the eventual-consistency write process between the P-Node and the C-Nodes: a write is always executed first in the L3 cache of the P-Node, which guarantees the minimum number of network hops for the write event and thus achieves low-latency write I/O for real-time web applications (a sketch of this write path is given at the end of this section). The P-Node also delivers high read performance through its L3 cache.

The evaluation of the L3 cache on a P-Node shows roughly double the performance of the UBC cache in the sequential read and write tests, and the read/write 50%/50% test shows 13,706.78 IOPS compared with 2,745.08 IOPS for UBC, about five times faster than the UBC cache node system. This demonstrates the performance benefit of a dedicated L3 cache for low-latency web services compared with a unified buffer on an autonomous node. The layered cache node can therefore be applied to many massive-I/O applications as an autonomous decentralized node, and the correlated P-Node and C-Node provide a large I/O advantage together with dynamic data availability. The autonomous multi-layer cache system architecture is thus a solution for interoperable, low-latency web services with dynamic data availability. Our next steps are an evaluation at the level of various service applications and the design of technology for expanding and reducing the autonomous node community.
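As a hedged sketch of the write path described above, the following Python fragment models a P-Node that acknowledges a write as soon as it lands in its local L3 cache and then propagates the update to C-Nodes asynchronously (eventual consistency). The class names PNode and CNode, the version counter, and the propagation queue are illustrative assumptions, not the chapter's implementation.

```python
# Illustrative sketch (assumed names, not the chapter's code) of the low-latency
# write path: the P-Node commits a write to its local L3 cache, acknowledges it
# immediately, and replicates the update to C-Nodes in the background
# (eventual consistency), so the write never waits on extra network hops.

import queue
import threading

class CNode:
    def __init__(self, name):
        self.name = name
        self.l4_cache = {}                    # replicated read copies

    def apply_update(self, key, value, version):
        # Apply only newer versions so replicas converge.
        current = self.l4_cache.get(key)
        if current is None or current[1] < version:
            self.l4_cache[key] = (value, version)

class PNode:
    def __init__(self, c_nodes):
        self.l3_cache = {}                    # authoritative write-back cache
        self.version = 0
        self.c_nodes = c_nodes
        self.updates = queue.Queue()
        threading.Thread(target=self._propagate, daemon=True).start()

    def write(self, key, value):
        """Acknowledge after the local L3 cache write; replication is deferred."""
        self.version += 1
        self.l3_cache[key] = (value, self.version)
        self.updates.put((key, value, self.version))
        return "ack"                          # low-latency acknowledgement

    def read(self, key):
        entry = self.l3_cache.get(key)
        return entry[0] if entry else None

    def _propagate(self):
        # Background replication to C-Nodes; replicas become consistent eventually.
        while True:
            key, value, version = self.updates.get()
            for node in self.c_nodes:
                node.apply_update(key, value, version)
```

For example, `p = PNode([CNode("c1"), CNode("c2")]); p.write("post:1", "hello")` returns immediately, while the C-Node replicas converge to the new value shortly afterwards.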
