Preface

The word "ontology" originates from Greek, with its roots in two components: "ontos," meaning being or existence, and "logia," meaning knowledge or study. The first part of this combination refers to the realm of existence, while the second denotes the knowledge that the ancient Greeks sought to attain. Ontology thus emerged as a means for the Greeks to understand the world, hence its name. In the context of artificial intelligence, the term "ontology" describes the explicit meaning of concepts on the semantic web. It involves the comprehensive classification of objects and their connections in a domain. These concepts and ideas have their origins in deeper philosophical notions. Thomas Gruber, an influential figure in the field, provided a notable definition for ontologies. According to him, an ontology is a formal representation of a set of terms and their relationships, expressed in a specialised language and stored in a computer-readable file. It is important to note that although "anthology" and "ontology" sound similar, they come from different roots and have distinct meanings.

"Anthology" refers to a collection of literary works, while "ontology" pertains to the formal representation of knowledge about a domain, particularly in the context of computer science and artificial intelligence.

Ontologies can be employed in several domains, such as global semantic networks, search engines, electronic commerce, natural language processing, knowledge engineering, information extraction and retrieval, multi-agent systems, qualitative modeling of physical systems, database design, information systems, and geographic and digital libraries.

The distinction between ontology in philosophy and ontology in computer science is fundamental. In philosophy, ontology emerges from the inherent order between concepts. However, in computer science, ontology is derived based on the order we assign to concepts. Furthermore, philosophical ontology aims for a comprehensive and universal perspective encompassing all concepts. Conversely, computer science ontology has a narrower focus, excluding elements outside the scope of discussion. It is worth noting that ontologies are not new to the Web. Each "cloud data design" can be seen as an ontology, as it defines a set of conceptual or physical attributes applicable to a specific user group. Typically, ontology is defined as the vocabulary of concepts in a particular field of knowledge organized in a hierarchical structure.

When discussing a topic, it is crucial to understand the subject clearly. Different fields use words with specific meanings, and sometimes even within a clear topic, people may interpret a word differently. The solution is to establish common words that everyone understands. This is where ontology comes into play on the Web.

An ontology lists all the entities in a domain, including their characteristics and connections. The information is organized in a specific format and linked to the Internet documentation. This helps establish standardized meanings. Ontologies are used by people, databases, and applications that need to share information about a particular domain.

The process of generating ontologies can be carried out manually using ontology engineering tools or through (semi-)automated methods of knowledge acquisition and construction. However, manual ontology building can be expensive, time-consuming and error-prone, and often reflects the personal opinions of the designer. Moreover, manually built ontologies are typically inflexible and lack adaptability to changes in the domains for which they were developed. Automating the ontology construction process reduces costs and results in ontologies that are better suited to their intended applications. While ontology engineering tools serve as interfaces for application development, they still require human creators to utilise them effectively. By moving towards automated knowledge acquisition from sources such as texts, databases and existing ontologies, the challenges of ontology engineering can be addressed, leading to reduced costs and easier sharing of ontologies.

This book highlights the latest advancements and novel viewpoints in the field of ontology within information science.

## **Morteza SaberiKamarposhti**

Assistant Professor, Cyber Security Lab (CYBER), Department of Computer Engineering, Universiti Kebangsaan Malaysia (UKM), Bangi, Malaysia

## **Mahdi Sahlabadi**

Post-doc Researcher, Universiti Kebangsaan Malaysia (UKM), Bangi, Malaysia

Section 1 Infrastructures

## **Chapter 1**

## Data Centre Infrastructure: Design and Performance

*Yaseein Soubhi Hussein, Maen Alrashd, Ahmed Saeed Alabed and Saleh Alomar*

## **Abstract**

The tremendous growth of e-commerce requires an increase in data centre capacity and reliability to maintain an appropriate quality of service. Optimisation of data centre design is considered a green technology that shows great promise in decreasing CO2 emissions. However, a large data centre requires huge power consumption, as a higher capacity of racks demands more powerful cooling systems, power supplies, protection and security. These factors can make a data centre costly and infeasible to operate. In this chapter, we provide a tier 4 data centre design to be located in an optimal location in Malaysia, Cyberjaya. The main purpose of this design is to provide e-commerce services, especially food delivery, with high quality of service and feasibility. All data centre components have been carefully designed to provide a range of services, including top-level security, a colocation system, reliable data management and IT infrastructure management. Moreover, recommendations and justifications have been provided to ensure that the proposed design outperforms other data centres in terms of reliability, power efficiency and storage capacity. In conclusion, the analysis, synthesis and evaluation of each component of the proposed data centre will be summarised.

**Keywords:** data centre, storage infrastructure, data centre infrastructure management (DCIM), security, scalability

## **1. Introduction**

Meza is a home-grown data centre company that provides various services, including top-level security, reliable data management and IT infrastructure management. Meza is expected to build several data centres across Malaysia. A Malaysian food delivery application company has more than 5 million users, and the number of users is increasing day by day. The current infrastructure is insufficient to handle the vast amount of data processing, and users may face a poor experience due to longer server response times and slow processing. Therefore, the company has appointed Meza to construct a data centre to cater to its continued growth. The data centre will be required to process online food ordering and online payment, as well as customer relationship management to consolidate communication in one inbox and engage with clients.

This chapter proposes a data centre design with the essential components for this food delivery application company, and analyses, synthesises and evaluates each component of the proposed data centre. Other data centre components, such as power usage effectiveness and efficiency, cooling system and protection, are discussed in the chapter Data Centre Infrastructure: Power Efficiency and Protection (**Figure 1**).

## **2. Analysis**

### **2.1 Customer requirements**

The first basic requirement that comes to mind when building a data centre is the customer requirements, as the customers are an essential entity. This data centre has been designed to accept enormous traffic loads, which means that many customers can order at the same time without the frustration of a system crashing due to overload. Furthermore, a seamless online chat function has been proposed, available from the time of the order until the order has been delivered. This feature will make customers more confident in using the delivery system, as they can raise any issues regarding payment, orders and so on. This is possible due to the newly proposed network infrastructure.

Moreover, due to the newly improved network infrastructure, orders can be grouped more efficiently and delivered quickly. Lastly, the data centre is designed to be transparent, allowing both customers and delivery riders to key in ratings. This compels individuals to maintain a good reputation in order to earn perks, making the system more trustworthy.

**Figure 1.** *APU data centre.*

*Data Centre Infrastructure: Design and Performance DOI: http://dx.doi.org/10.5772/intechopen.109998*

## **2.2 Data centre requirements**

Moving on, there are a few requirements for a data centre to be robust such as:

• Availability/tier selection: to achieve high availability, Meza critically analysed the different types of data centre tiers. There are four tiers in total. After comparing them, the company decided on tier 4, since the data centre will serve a food delivery application with a huge number of customers. As stated by [1], a tier 4 data centre has an uptime of 99.995% per year and a fully redundant '2N + 1' infrastructure. It has an annual downtime of only 26.3 minutes, compared to 1.6 hours per year for tier 3.

Furthermore, tier 4 has been chosen for Meza because of its fault-tolerant design. As expressed by [2], fault tolerance is an essential part of the many benefits offered by a tier 4 data centre: it allows the site to withstand unplanned failures that would otherwise disrupt critical loads in its infrastructure. Additionally, if any distribution or capacity component fails, the computer equipment of a tier 4 data centre is not affected; the system responds automatically to avoid further disruption. Moreover, a tier 4 data centre has several distribution paths that can serve the site's computer equipment simultaneously. All IT equipment is dual-powered and has additional backup. Lastly, it is also supported by [3] that fault tolerance is especially crucial for mission-critical applications and systems. This tier provides the highest level of protection, and tier 4 also supports protection against electricity outages for 96 hours (**Figures 2** and **3**).
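The downtime figures quoted for the tiers follow directly from the availability percentages. A short sketch (in Python, purely illustrative) converts an uptime guarantee into expected annual downtime:

```python
# Convert an availability percentage into expected annual downtime.
def annual_downtime_minutes(availability_pct: float) -> float:
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a non-leap year
    return minutes_per_year * (1 - availability_pct / 100)

print(round(annual_downtime_minutes(99.995), 1))       # tier 4: 26.3 minutes
print(round(annual_downtime_minutes(99.982) / 60, 1))  # tier 3: ~1.6 hours
```

The 99.982% figure for tier 3 is the commonly cited availability for that tier; it reproduces the 1.6-hour downtime mentioned above.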

• Scalability: The food-ordering application company has more than 5 million active users and is increasing with time. The planned data centre will also be able to offer

**Figure 2.** *Some features of different data centre tiers [4].*


#### **Figure 3.**

*Some features of different data centre tiers [5].*

continuous scalability and colocation facilities. This is the most critical aspect in constructing data centres, because the ability to expand and handle additional data or customers is necessary; if scalability is not taken into account, it may impact the architecture of the data centre in the long run. Any future change in the data centre that requires more space, devices or other technical aspects must be managed effectively without affecting the key existing data centre elements.


According to [9], DCIM covers monitoring, measuring, managing and controlling data centre utilisation and energy consumption of all IT-related equipment and facility infrastructure components. These equipment and components include power


#### **Figure 4.** *A commercial DCIM software developed by Intel [8].*

distribution units, servers and network switches, to name a few. A typical data centre has a large workload, which increases immensely with the size of the data centre. For the food delivery company with millions of users, the data centre's workload would be impractical to manage manually. DCIM performs tasks that would otherwise be carried out by data centre personnel. An important feature of DCIM is the real-time central dashboard [10], which displays information about critical systems from sensors and equipment. Data centre personnel are thus better informed about operations and are more likely to predict the next outage and avoid it. In addition, DCIM can handle non-routine tasks such as change management. Therefore, DCIM is a critical piece of software for improving data centre manageability. The use of DCIM by Meza in this data centre will bring huge benefits, resulting in less downtime and more robust manageability (**Figure 5**).
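The dashboard-style monitoring described above can be sketched as a simple threshold check. The sensor names and limits below are illustrative assumptions (the 27°C inlet limit follows common ASHRAE guidance), not values from any particular DCIM product:

```python
# Minimal sketch of a DCIM-style dashboard check: poll sensor readings and
# flag anything outside its limit, as a real DCIM dashboard would surface.
LIMITS = {"inlet-temp-C": 27.0, "pdu-load-pct": 80.0}  # assumed thresholds

def alerts(readings, limits=LIMITS):
    """Return (sensor, value, limit) for every reading above its limit."""
    out = []
    for sensor, value in readings.items():
        metric = sensor.split("/", 1)[1]  # e.g. "rack-07/inlet-temp-C"
        if value > limits[metric]:
            out.append((sensor, value, limits[metric]))
    return out

readings = {
    "rack-07/inlet-temp-C": 31.5,   # too hot -> alert
    "rack-07/pdu-load-pct": 88.0,   # overloaded PDU -> alert
    "rack-12/inlet-temp-C": 24.1,   # within limits
}
for sensor, value, limit in alerts(readings):
    print(f"ALERT {sensor}: {value} exceeds limit {limit}")
```

A production DCIM tool adds trending, prediction and automation on top of checks like this, but the core loop is the same: compare live telemetry against limits and alert.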

Since cabling performance is a major factor in system outages, Meza will be using cabling from providers that ensure their cables can sustain higher performance. Meza hopes that using high-quality data centre fabric can reduce system outages and increase the overall manageability of the data centre.

• Cost: All business organisations strive to achieve the best performance at the lowest possible cost. It is in the interest of both Meza and the food delivery company to bring the cost down while meeting business requirements. Total Cost of Ownership (TCO) is an estimate which includes both building the data centre and operating it. For the food delivery company, it is necessary that the TCO of building and operating a data centre is lower than hosting their application on a public cloud such as Amazon Web Services (AWS). According to [12], the largest driver of cost is the unnecessary unabsorbed costs

resulting from the oversizing of the infrastructure. Meza has decided to deploy an adaptable physical infrastructure system. An adaptable physical infrastructure system reduces the waste due to oversizing substantially. As a result, the total cost of ownership is reduced too (**Figure 6**).

As shown in **Figure 6**, the non-adaptable room capacity design (a) is fully built at the beginning, whereas the adaptable physical infrastructure system (b) grows as the load increases.
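The oversizing penalty behind this comparison can be made concrete with a rough model. The loads, module size and kW figures below are purely illustrative assumptions, not data from the source; the point is that capacity paid for but not yet absorbed by the load is waste:

```python
import math

def unabsorbed(load_by_year, capacity_by_year):
    # Capacity paid for but not absorbed by the load, summed over the years.
    return sum(c - l for l, c in zip(load_by_year, capacity_by_year))

load = [100, 300, 500, 800, 1000]                    # kW demand over 5 years
fixed = [1000] * 5                                   # non-adaptable: full build
modular = [math.ceil(l / 250) * 250 for l in load]   # adaptable: 250 kW steps

print(unabsorbed(load, fixed))    # 2300 kW-years of unused capacity
print(unabsorbed(load, modular))  # 550 kW-years
```

Under these assumed numbers, the adaptable build carries roughly a quarter of the unabsorbed capacity of the day-one full build, which is the TCO effect [12] describes.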

In addition to this, Meza plans that with the use of Data Centre Infrastructure Management (DCIM) software, the operating costs can be reduced. One of the fundamental features of DCIM software is the use of automation across the board. Automation reduces manual labour with situational awareness. For example, resources such as energy can be increased during peak hours automatically instead of having maximum performance all day long regardless of the load. Moreover, use of DCIM software also allows data centre personnel to predict the life cycle of physical

#### **Figure 5.**

*Cabling contributes to large number of system outages [11].*

infrastructure equipment, so they can replace equipment before it becomes faulty, without other equipment being compromised by the failure.
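The peak-hour automation mentioned above can be sketched as a schedule-driven power budget. The hours and wattages are assumptions chosen to suit a food delivery workload, not values from the source:

```python
def power_budget_watts(hour: int, base: float = 6000.0, peak: float = 10000.0) -> float:
    """Raise the power budget only around meal-time peaks; run at base otherwise."""
    lunch_rush = range(11, 14)    # 11:00-13:59
    dinner_rush = range(18, 22)   # 18:00-21:59
    return peak if (hour in lunch_rush or hour in dinner_rush) else base

# Energy saved versus running at peak power all day
daily_kwh = sum(power_budget_watts(h) for h in range(24)) / 1000
print(daily_kwh)           # 172.0 kWh with load-aware budgeting
print(24 * 10000 / 1000)   # 240.0 kWh running flat-out all day
```

A real DCIM tool would drive this from live load telemetry rather than a fixed schedule, but the saving mechanism is the same.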

## **2.3 Environment**

The data centre will be located in Cyberjaya, a specialised information technology district near Kuala Lumpur, Malaysia. The location is around 30 minutes away from the Kuala Lumpur city centre as well as the Kuala Lumpur International Airport.

Geographically, Malaysia is a well-known, stable region where the risk of natural disasters such as tsunamis and earthquakes is extremely low. Cyberjaya in particular sits above sea level throughout the year, so there is nearly no chance of massive flooding. Cyberjaya is the core of the Multimedia Super Corridor (MSC Malaysia); MSC status guarantees world-class infrastructure for the IT industry and 99.9% guaranteed reliability in advanced telecommunication technologies. The rental rate starts from MYR 2.50 per square foot [14].

The environment is highly secure: a state-of-the-art CCTV system is integrated with Malaysia's Emergency Response System, and police personnel monitor the CCTV footage at all times, with a quick response time for any emergency. These measures create a secure environment for the community (**Figures 7**–**9**).

**Figure 7.** *(NTT, ND).*

**Figure 8.** *The environment of Cyberjaya [15].*

**Figure 9.** *Illustrate the proposed data centre floor plan.*

## **3. Data centre design**

## **3.1 Data centre floor plan**

## *3.1.1 Floor plan justification*

The above data centre floor plan design consists of 8 unique components necessary for a data centre, including a surveillance room for monitoring physical security and creating daily reports and analytics. The components that have been included are:



**Figure 10.** *Racks inside a data centre [17].*


Furthermore, the data centre has been equipped with an Uninterruptible Power Supply (UPS) backup battery and a diesel generator to achieve higher availability of the system in case of a power failure. Lastly, state-of-the-art closed-circuit television (CCTV) has also been placed in the data centre cabinets to be monitored remotely by senior officials (**Figure 10**).

## **4. Data centre components**

## **4.1 Racks**

In a data centre, racks can be considered as the building blocks. Traditionally, racks were mostly used for stacking IT equipment and saving floor space. However, racks in data centres today play a vital role in mounting heavy IT equipment, providing an organised environment for power distribution, air flow distribution for better cooling performance and cable management among many features [18]. Data centres demand a rack infrastructure that can mount a variety of equipment such as servers and switches. Therefore, it is important that the rack infrastructure can meet the requirements while offering sustainable performance.

#### *4.1.1 Equipment in racks*

The major equipment inside the rack will be the compute servers, storage servers and networking equipment such as switches. Different racks will have different compositions of this equipment.

#### • Compute servers

The main compute resources in a data centre are the servers. Most of the racks will be utilised for mounting rack servers for compute purposes. These servers are used for compute-intensive tasks such as processing and database hosting. These servers will be using enterprise-level processors such as Intel Xeon or AMD EPYC which have multiple physical cores providing high-level performance.

#### • Storage servers

Similar to compute servers, storage servers are mounted in the racks. Storage servers have a high density of storage capacity, such as hard disks and SSDs. They place less emphasis on processing power than compute servers do, and therefore typically use less RAM and lower-performance processors. Storage infrastructure is discussed in more detail later in this proposal.

### • Switches

Switches act like a hub which connects different equipment such as servers in the rack with other servers or racks in the data centre. They are an integral part of the networking infrastructure.

#### *4.1.2 Rack enclosures*

Selecting a rack for a data centre requires consideration of criteria such as dimensions, design, capacity and material. According to [19], racks are available in three major types: open-frame racks, rack enclosures or cabinets, and wall-mount racks (**Figure 11**).

Rack enclosures or cabinets are a rack with four posts, doors and panels on the side. Depending on the design and manufacturer, the side panels can be removed to offer maximum flexibility. Among the most distinctive features of rack enclosures are airflow management, security, cable management and power distribution. These types of rack are ideal for use cases where the rack needs to store heavier equipment, hotter equipment and higher wattages per rack [19]. Doors that are on the front and back of the rack are ventilated for better airflow. Additionally, doors provide some levels of security. Most rack enclosures come with doors that can be locked which provide an additional layer of security (rack-level). Rack enclosures have a means of


**Figure 11.** *42 U rack enclosure or cabinet [20].*

providing dedicated power distribution units (PDUs) for the rack. The PDUs in rack enclosures are installed at the back or on the side, so they provide power without congesting the space inside the rack.

The size of the rack depends on many attributes. Some of these include:


Most equipment used in racks is standardised with a width of 482.6 mm (19 inches). This 19-inch standard was established by the Electronic Industries Alliance (EIA) [18]. In racks, the usable vertical space is measured in rack units, where one rack unit is equal to 1.75 inches in height. Although deeper equipment and higher cable densities drive the need for bigger racks, the most widely used rack dimension is 42 U tall, 600 mm wide and 1070 mm deep.

Depending on the equipment mounted inside the rack, the rack can be considered a server rack or a networking rack. In comparison with server racks, network racks are much wider as they need additional room for cabling.

### *4.1.3 Justification*

Based on the three types of racks, rack enclosures or cabinets will be used across the data centre. Since the data centre is going to be newly built, wall-mount racks

**Figure 12.** *Top-of-rack vs. end-of-row architecture [23].*

can be avoided because there is enough floor space inside the data centre for the planned capacity. Compared to a wall-mount rack, the other two types provide more equipment capacity for a given floor space. While open-frame racks offer many features at a much lower cost than rack enclosures, features such as better airflow control and better security are too important to overlook. Open-frame racks offer very little control over airflow. In addition, the side panels of rack enclosures prevent unrestricted hot air from flowing inside the rack and heating up the equipment unnecessarily. According to [21], between 30 and 55% of a data centre's energy consumption goes into powering its cooling and ventilation systems. It is important that the racks chosen for the data centre lower the overall cooling cost as much as possible. In general, low-cost racks such as open-frame racks significantly increase the time it takes to complete rack-based work due to inefficiencies in areas such as cable management and mounting [18]. In [22], a decision support model has been proposed for the use of liquid-based cooling to measure and assess the waste heat resource accessible from retrofits within the High Performance Computing (HPC) and data centre (DC) industry (**Figure 12**).

As the data centre will be using top-of-rack switching, the use of networking racks will be limited. The top-of-rack switching architecture is considered for this data centre because it provides better cabling, future-proofing with emerging standards and better support for multi-core servers by offering more bandwidth with low latency [24, 25]. Top-of-rack architecture considerably reduces the number of cables running to the networking racks. Therefore, the size of racks in the data centre will be consistent.

Based on the consideration of these attributes, the data centre will use standard racks of 42 U tall, 600 mm wide and 1070 mm deep. Most servers mounted in the racks will be 2 U. 2 U servers offer more advantages than smaller 1 U servers or oversized 5 U servers. Due to their limited physical size, 1 U servers are affected by heating issues, while 5 U servers, although more powerful, are more expensive and less cost-effective. Therefore, 2 U servers offer a good compromise between performance and cooling [26]. When using standard equipment, oversizing of the data centre is not necessary. 42 U tall racks also provide several additional benefits [18]:


In conclusion, 42 U rack enclosures provide better features and are more suitable to be used in this data centre.
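The rack-unit arithmetic behind this choice is straightforward. The 1.75-inch rack unit and the 42 U/2 U figures come from the text above; the sketch simply makes the capacity explicit:

```python
RACK_UNIT_INCHES = 1.75  # EIA standard rack unit

def usable_height_inches(rack_units: int) -> float:
    """Vertical mounting space offered by a rack of the given U height."""
    return rack_units * RACK_UNIT_INCHES

def servers_per_rack(rack_units: int, server_height_u: int) -> int:
    """Upper bound on how many servers of a given U height fit in the rack."""
    return rack_units // server_height_u

print(usable_height_inches(42))  # 73.5 inches of vertical mounting space
print(servers_per_rack(42, 2))   # up to 21 two-U servers per 42 U rack
```

In practice a few units per rack are usually reserved for PDUs, patch panels or a top-of-rack switch, so the real server count is somewhat lower than this upper bound.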

### **4.2 Storage infrastructure**

In modern data centres, storage is becoming a highly complex component with increasing demands to store more and more data. Storage infrastructure for a data centre includes architectures, hardware equipment such as hard disks, SSDs and so on. Storage infrastructure in a data centre is tightly coupled with the networking for accessibility and delivery. In today's world, there are two challenges to high-performance storage systems: capacity and performance [27].

Capacity: Usage of computers, Internet of things (IoT) devices, mobile phones and other digital equipment has created a high demand for data storage. Data storage is increasing at a rapid pace every day. With the advancements in technologies such as image quality, average file sizes have risen considerably. As a data centre, the facility needs to have a storage infrastructure that has the capacity to meet these demands while offering the best performance possible.

Performance: Data centres need to focus on the storage performance regardless of the capacity requirements. It is vital for the storage infrastructure to be scalable and highly available. While storing hundreds of terabytes of data, unoptimised and poorly

**Figure 13.** *Storage area network [28].*

designed infrastructure could lower the performance of the overall data centre, since the stored data is consumed by other areas such as compute. The storage infrastructure in the data centre must be able to handle these requirements while overcoming the challenges faced. Traditionally, data centres use three popular storage solutions [27] (**Figure 13**).

### *4.2.1 Storage area network*

Storage Area Network (SAN) is a dedicated network consisting of multiple storage devices. A SAN is a pool of block-level storage resources. SAN provides a higher level of management, with multiple servers managing data access and storage [29]. Additionally, a SAN uses high-speed cabling and dedicated networking equipment such as switches. Modern SANs are based on Fibre Channel, which can deliver high bandwidth and throughput with data speeds of up to 16 Gb per second. With the reduction in the cost of Solid State Drives (SSDs), a SAN can consist of SSD arrays, which offer much higher I/O performance than Hard Disk Drives (HDDs). Although a SAN is complex to deploy and manage, it is highly scalable and available. Since a SAN runs on its own dedicated network, it does not face the shared-bandwidth and network-congestion issues of network-attached storage (NAS) solutions (**Figure 14**).
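Since Fibre Channel line rates are quoted in gigabits per second, a quick conversion (ignoring encoding and protocol overhead) gives the raw byte throughput:

```python
def gbps_to_gb_per_s(gigabits_per_second: float) -> float:
    # 8 bits per byte; real FC throughput is lower due to encoding overhead.
    return gigabits_per_second / 8

print(gbps_to_gb_per_s(16))  # 16 Gb/s FC -> at most 2.0 gigabytes per second
```

This is why quoting FC speeds in "GB per second" overstates the link by a factor of eight; the nominal 16 Gb/s rate corresponds to roughly 2 GB/s of raw capacity.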

A SAN consists of various components which can be grouped into 3 main categories [30]. These categories are Host components, Fabric components and Storage components.

• Host components

These components are located in the compute servers or any other type of server accessing the SAN. Compute servers (hosts) use a host bus adapter (HBA), which has a fabric port that enables communication between the server (host) and the SAN switches.

**Figure 14.** *SAN component layers [30].*

• Fabric components

Fabric components include the switches, cables and communication protocols [30]. The switches used in this SAN topology will be Fibre Channel (FC) switches, which provide 64 to 128 ports per switch and have built-in fault tolerance. Since this SAN uses FC, the majority of its cables will be fibre optic, which provides higher bandwidth and data speeds. In addition, the fabric components define the communication protocol: for this SAN, FC is used as the protocol, and based on that, a switched fabric topology is used.

• Storage components

The fundamental parts of any SAN are the storage components. Storage components are the storage arrays. Storage arrays contain storage processors which communicate with disk arrays. In this proposed data centre's storage infrastructure, the SAN will use SSD disk arrays. SSDs are one of the fastest storage mediums available today (**Figure 15**).

The SAN will use a Core-Edge topology, which is based on switched Fibre Channel. The two most important traits of the Core-Edge topology are the resiliency and performance it provides. In this topology, two or more core switches interconnect two or more edge switches; edge switches connect servers or disk arrays to the core switches. In addition, the use of this topology in the SAN encourages a balance between usable ports and dedicated inter-switch communication [31].

**Figure 15.** *Core-edge SAN topology [31].*
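The usable-ports versus inter-switch-link trade-off that Core-Edge balances can be shown with a back-of-envelope port budget. The switch and ISL counts below are illustrative assumptions, not figures from the design:

```python
def usable_edge_ports(edge_switches: int, ports_per_switch: int,
                      isls_per_edge: int) -> int:
    """Ports left for hosts/storage after reserving inter-switch links (ISLs)."""
    return edge_switches * (ports_per_switch - isls_per_edge)

# e.g. eight 64-port edge switches, each dedicating 4 ports as ISLs to the cores
print(usable_edge_ports(8, 64, 4))  # 480 usable host/storage ports
```

Reserving more ISLs per edge switch buys more core bandwidth at the cost of host ports; that is exactly the balance [31] refers to.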

#### *4.2.2 Justification*

Based on the comparisons made above, Meza's new data centre will use a Storage Area Network (SAN). The growing food delivery company's active user base is increasing, and it requires a scalable storage solution; therefore, DAS, which offers no scalability, cannot be chosen. While NAS is cheaper and easier to maintain, SAN offers better performance, and for a large organisation and data centre, SAN is ideal. Another key factor is that SAN works well with virtualisation [32], a technology heavily used in data centres today. Other benefits of SAN include improved storage utilisation, better data protection and recovery, and the elimination of network bottlenecks [33].

A key difference in how data is stored is that SAN uses block-level storage, while NAS uses file-level storage. The biggest advantage of block-level storage is that it offers better access and control privileges. This is critically important since the food delivery company already has 5 million users, and easier management of users' files is a key business requirement.

As for the SAN technology, Meza will choose Fibre Channel (FC). The key factor in this decision is that FC provides significantly better performance and reliability, which are crucial for a growing base of 5 million active users. It is possible to build a storage network of thousands of nodes without affecting throughput and latency. In addition, the SAN will use arrays of SSDs instead of HDDs (**Figure 16**). SSDs provide a significant increase in speed, and the price difference between the two has narrowed over the past few years [34].

The topology used for the SAN infrastructure is Core-Edge. According to [35], SAN designs should always use two isolated fabrics for high availability. Since this data centre is a tier 4 data centre, high availability and resiliency are crucial. One of the reasons why Core-Edge FC is selected is that point-to-point or FC-AL topologies do not offer high availability: if one link fails, the entire storage network becomes

**Figure 16.** *SSDs have higher read and write speeds over HDDs [34].*

unavailable. Finally, Core-Edge supports millions of nodes, offering a high level of scalability [31]. Scalability in storage is imperative, as storage needs continue to grow every day.

## **5. Conclusion**

We can conclude by saying that the world of IT is constantly evolving, and the demand for innovative and better solutions will never stop. The solutions and equipment chosen for this task have been selected with future-proofing in mind. From security to smart execution, the planned data centre has been carefully considered. Scalability, CO2 reduction, system resilience, sustainability, and the application of machine learning and other emerging technologies are important considerations in the data centre design. Moreover, the colocation system allows clients to locate their data by renting space in the data centre and choosing the equipment. If carried out with strong focus, this design can ensure that every requirement of the food ordering system is met.

## **Conflict of interest**

The authors declare no conflict of interest.

## **Author details**

Yaseein Soubhi Hussein1 \*, Maen Alrashd2 , Ahmed Saeed Alabed1 and Saleh Alomar2

1 Computer Science and Information Systems Department, Ahmed Bin Mohammed Military College, Qatar

2 Faculty of Science and Information Technology, Jadara University, Irbid, Jordan

\*Address all correspondence to: dr.yaseein@abmmc.edu.qa

© 2023 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## **References**

[1] Colocation America. Data Center Standards (Tiers I-IV). 2015. Available from: https://www.colocationamerica. com/data-center/tier-standardsoverview.htm

[2] CtrlS. Significance of Tier 4 Data Center. 2014. Available from: https://www.ctrls.in/ blog/significance-tier-4-data-center/

[3] Greengard S. Data Center Tiers: Formulating a Strategy. 2019. Available from: https://www.datamation.com/ data-center/data-center-tiers.html

[4] Impact. Tier IV Data Centers. 2009. Available from: https://www. impactmybiz.com/blog/blog-why-youneed-a-tier-iv-4-data-center/

[5] WHOA.com. Tier IV Data Centers. 2017. Available from: https://www.whoa. com/data-centers/

[6] DCNewsAsia. Manageability Top Concern for Data Center Professionals. 2016. Available from: https:// datacenternews.asia/story/manageabilitytop-concern-data-center-professionals

[7] Sadri AA, Rahmani AM, Saberikamarposhti M, Hosseinzadeh M. Fog data management: A vision, challenges, and future directions. Journal of Network and Computer Applications. 2021;**174**:1-24

[8] Intel. Intel® Data Center Manager. 2020. Available from: https://www.intel. com/content/www/us/en/software/ intel-dcm-product-detail.html

[9] Gartner. Data Center Infrastructure Management (DCIM). 2020. Available from: https://www.gartner.com/en/information-technology/glossary/data-center-infrastructure-management-dcim

[10] Javadzadeh G, Rahmani AM, Kamarposhti MS. Mathematical model for the scheduling of real-time applications in IoT using dew computing. The Journal of Supercomputing. 2022;**78**:7464-7488

[11] CXtec. Just How Manageable is Your Data Center?. 2020. Available from: https://www.cxtec.com/resources/blog/ just-how-manageable-is-your-datacenter/

[12] Rasmussen N. Determining Total Cost of Ownership for Data Center and Network Room Infrastructure. 2015. Available from: https://download. schneider-electric.com/files?p\_File\_ Name=CMRP-5T9PQG\_R4\_EN.pdf

[13] Rasmussen N. Avoiding Costs from Oversizing Data Center and Network Room Infrastructure. 2015. Available from: https://download.schneiderelectric.com/files?p\_File\_Name=SADE-5TNNEP\_R7\_EN.pdf

[14] Malaysia C. Available from: https:// www.cyberjayamalaysia.com.my/ community/overview

[15] Richard. Essential Information about Cyberjaya - Malaysia's Technology and Innovation Hub. 2019

[16] Hussein Y, Alrashdan M. Secure payment with QR technology on university campus. Journal of Computer Science & Computational Mathematics. 2022;**12**:31-34

[17] Facebook. Opening our Newest Data Center in Los Lunas, New Mexico. 2019. Available from: https://engineering. fb.com/data-center-engineering/ los-lunas-data-center/

[18] Pearl H, Wei Z. How to Choose an IT Rack. 2015. Available from: https://download.schneider-electric.com/files?p\_Doc\_Ref=SPD\_VAVR-9G4MYQ\_EN

*Data Centre Infrastructure: Design and Performance DOI: http://dx.doi.org/10.5772/intechopen.109998*

[19] Tripp Lite. Rack Basics: Everything You Need to Know Before You Equip Your Data Center. 2018. Available from: https://www.anixter.com/content/dam/ Suppliers/Tripp%20Lite/White%20 Papers/Rack-Basics-White-Paper-EN.pdf

[20] Tripp Lite. 42U SmartRack Standard-Depth Rack Enclosure Cabinet with Doors, Side Panels & Shock Pallet Shipping. 2020. Available from: https:// www.tripplite.com/42u-smartrackstandard-depth-rack-enclosurecabinet-doors-side-panels-shock-palletshipping~SR42UBSP1

[21] DataSpan. Data Center Cooling Costs. 2019. Available from: https://www.dataspan.com/blog/ data-center-cooling-costs/

[22] Ljungdahl V, Jradi M, Veje C. A decision support model for waste heat recovery systems design in data Center and high-performance computing clusters utilizing liquid cooling and phase change materials. Applied Thermal Engineering. 2022;**201**:1-10

[23] Parés C. Top of the Rack vs End of The Row. 2019. Available from: https://blogs.salleurl.edu/en/ top-rack-vs-end-row

[24] Juniper Networks. Next Steps Toward 10 Gigabit Ethernet Top-of-Rack Networking. 2016. Available: https:// www.juniper.net/us/en/local/pdf/ whitepapers/2000508-en.pdf

[25] Hussein YS. Impact of applying channel estimation with different levels of DC-bias on the performance of visible light communication. Journal of Optoelectronics Laser. 2021;**40**

[26] Thinkmate. 2U Rack Server. 2017. Available from: https://www.thinkmate. com/inside/articles/2u-rack-server

[27] Scala Storage. Scala Storage Scale-Out Clustered Storage White Paper. 2018. Available from: http://www.scalastorage. com/pdf/White\_Paper.pdf

[28] Lee G. Storage Network. 2014. Available from: https:// www.sciencedirect.com/topics/ computer-science/storage-network

[29] RedHat. What is Network-Attached Storage?. 2020. Available from: https:// www.redhat.com/en/topics/data-storage/ network-attached-storage

[30] VMware. SAN Conceptual and Design Basics. 2016. Available from: https://www.vmware.com/pdf/esx\_san\_ cfg\_technote.pdf

[31] Gençay E. Configuration Checking and Design Optimization of Storage Area Networks. 2009. Available from: https://www.researchgate.net/ publication/314245428\_Configuration\_ Checking\_and\_Design\_Optimization\_of\_ Storage\_Area\_Networks

[32] Bauer R. What's the Diff: NAS vs SAN. 2018. Available from: https://www.backblaze.com/blog/ whats-the-diff-nas-vs-san/

[33] Robb D. Storage Area Networks in the Enterprise. 2018. Available from: https://www.enterprisestorageforum. com/storage-networking/storage-areanetworks-in-the-enterprise.html

[34] Rubens P. SSD vs. HDD Speed. 2019. Available from: https://www. enterprisestorageforum.com/storagehardware/ssd-vs-hdd-speed.html

[35] Singh S. Core-Edge and Collapse-Core SAN Topologies. 2017. Available from: https://community.cisco.com/ t5/data-center-documents/core-edgeand-collapse-core-san-topologies/ ta-p/3149001

## **Chapter 2**

## Data Centre Infrastructure: Power Efficiency and Protection

*Yaseein Soubhi Hussein, Maen Alrashd, Ahmed Saeed Alabed and Amjed Zraiqat*

## **Abstract**

The rapid expansion of e-commerce necessitates expanding the capacity and dependability of data centres in order to provide services at the proper level of quality. Optimised data centre design is a green technology with considerable potential to reduce CO2 emissions. However, a large data centre requires a large amount of electricity, because higher-capacity racks demand more potent cooling systems, power supplies, protection and security. These raise the cost of the data centre and can render it uneconomical for services. In this chapter, we provide a design for a Tier 4 data centre to be situated in Cyberjaya, one of Malaysia's best locations. This design's primary goal is to offer highly functional and high-quality e-commerce services, particularly food delivery. Each component of the data centre has been carefully developed to deliver a range of services, including the administration of IT infrastructure, colocation, the cooling system and protection. Additionally, advice and support have been given to guarantee that the suggested design outperforms competing data centres in terms of dependability, power efficiency and storage capacity. The analysis, synthesis and evaluation of each element of the proposed data centre are considered in this chapter.

**Keywords:** data centre, power efficiency, power usage effectiveness (PUE), protection, cooling system, scalability

## **1. Introduction**

Meza is one of the local data centre firms that offer a range of services, including the administration of IT infrastructure, top-notch security and data management. Meza is anticipated to construct a number of data centres throughout Malaysia. More than five million people in Malaysia use the services of a food delivery application company, and that number keeps growing. Because the current infrastructure cannot handle the massive quantity of data processing, users may have a bad experience owing to lengthy server response times and delayed processes. To accommodate the company's continuous growth, the corporation has chosen Meza to build a data centre. To conduct online food orders and payments, the data centre will need customer relationship management, which unifies all client communication into one inbox.

In this chapter, a data centre design with the necessary components is provided, and each component of the proposed data centre is examined, synthesised and evaluated. These components include power usage effectiveness and efficiency, the cooling system and protection. Other data centre components, such as storage infrastructure, networking and the environment, are discussed in the chapter Data Centre Infrastructure: Design and Performance.

## **2. Power system**

#### **2.1 Electrical power system**

Power is an important element that gives life to a data centre and maintains the IT infrastructure even when an interruption takes place. The value of the power system in a data centre cannot be emphasised enough; power is also one of the significant factors in the cost estimation of colocation services. The current used to power the servers, switches, routers and associated IT infrastructure is of two types: DC (Direct Current) and AC (Alternating Current) [1].

Meza has decided to use a 277/480 V AC power supply (277 V single-phase or 480 V three-phase) for the food delivery data centre because it removes the PDU (Power Distribution Unit) and passes power directly to the server cabinet at a higher voltage. It is also energy efficient, places a decreased load on the cooling systems and offers increased consistency [2]. Moreover, in [3] a reinforcement learning method is applied to automate energy efficiency.

Since Tier 4 has been chosen for this data centre, there are a few components that will be discussed which will allow the data centre to be up and running 24/7. The components that Meza has decided to use are:


#### *2.1.1 Automatic transfer switch*

When there is an interruption of the power supply, an Automatic Transfer Switch (ATS) is used. It is an electrical switch that moves the source of the power supply from the main source to a backup source. If the ATS detects a loss on the primary power source, it triggers the substitute power source to provide a continuous supply [4]. There are four types of ATS, and after a copious amount of research, Meza has decided to use a Closed Transition ATS. This switch does not tolerate even a brief pause in the power supply: it detects a blackout or power fluctuation early and gives the data centre a smooth switchover by briefly running the main power supply and the backup supply simultaneously, without any interruption of services in the data centre (**Figure 1**) [4].
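The closed-transition ("make-before-break") behaviour described above can be sketched as a tiny control loop. This is an illustrative model only, not a real ATS controller: the nominal voltage, tolerance band and source names are assumptions.

```python
NOMINAL_V = 480
TOLERANCE = 0.10  # accept +/-10% of nominal (assumed threshold)

def healthy(voltage):
    """A source is usable when its voltage is within tolerance."""
    return abs(voltage - NOMINAL_V) <= NOMINAL_V * TOLERANCE

def transfer(main_v, backup_v, connected):
    """One control step. Closed transition: briefly connect BOTH
    sources (make) before releasing the old one (break), so the
    load never sees a gap."""
    if healthy(main_v):
        target = "main"      # prefer the main utility feed
    elif healthy(backup_v):
        target = "backup"
    else:
        return connected     # no healthy source; hold last state
    if target in connected and len(connected) == 1:
        return connected                 # already on the right source
    if len(connected) == 1:
        return connected | {target}      # make: parallel both sources
    return {target}                      # break: drop the old source

state = {"main"}
state = transfer(430, 480, state)  # main sags -> parallel with backup
state = transfer(430, 480, state)  # next step -> release failing main
print(state)
```

Two consecutive steps are needed precisely because the switch overlaps the sources instead of opening one before closing the other, which is what distinguishes it from a traditional open-transition ATS.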

*Data Centre Infrastructure: Power Efficiency and Protection DOI: http://dx.doi.org/10.5772/intechopen.110014*

### **Figure 1.**

*A comparison between traditional ATS and closed transition ATS [5].*

## *2.1.2 Backup power sources*

Backup power sources are crucial to provide an uninterruptible power supply (UPS) for the data centre in the event that the primary source of power fails. The main types of UPS are:

	- Standby/offline UPS
	- Line-interactive UPS
	- Online/double-conversion UPS

In the event of a power lapse, files can be lost or corrupted and mainframes can malfunction [8].

## *2.1.3 RPDUs (rack power distribution units)*

The RPDU generates no power but rather distributes power from the available supply. In a data centre, the RPDU can monitor, manage and regulate the power usage of many devices. It can supply vast quantities of electricity and can be accessed via the local or a remote network. RPDUs can withstand high power density and tolerate higher temperatures in order to satisfy the ever-changing needs of the data centre [9]. Rack PDUs can be categorised into two groups: non-intelligent PDUs and smart PDUs [10]. There are mainly four types of PDUs:


Since the aim of the proposed data centre is to be robust, highly reliable and highly effective, only a smart PDU will be discussed in this chapter, namely the modern Switched Rack PDU.

A Switched Rack PDU is a power management unit that can be installed in an industry-standard rack and has the capability to power individual outlets on and off remotely. Moreover, the Switched Rack PDU offers outlet control for rebooting locked devices as well as remote access to power and environment information. It also offers current, voltage, power (kW), apparent power and cumulative energy measurements per outlet (**Figure 2**) [11].

**Figure 2.** *A switched rack PDU design [12].*
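To make the per-outlet metering above concrete, here is a minimal sketch of how the quantities relate; the outlet readings, field names and power factor are invented for illustration and do not come from any particular PDU's API.

```python
# Hypothetical per-outlet readings from a switched rack PDU.
outlets = [
    {"id": 1, "volts": 230.0, "amps": 1.8, "on": True},
    {"id": 2, "volts": 230.0, "amps": 0.0, "on": False},
    {"id": 3, "volts": 230.0, "amps": 2.4, "on": True},
]

def real_power_kw(outlet, power_factor=0.95):
    """Approximate real power (kW) as V * I * PF for one outlet;
    V * I alone would be the apparent power (kVA)."""
    return outlet["volts"] * outlet["amps"] * power_factor / 1000.0

# Aggregate load across energised outlets only.
total_kw = sum(real_power_kw(o) for o in outlets if o["on"])
print(round(total_kw, 3), "kW")

# Cumulative energy over an interval: kWh = kW * hours.
print(round(total_kw * 24, 2), "kWh over one day at this load")
```

The same per-outlet figures are what let operators spot an overloaded or idle outlet remotely and switch it off without visiting the rack.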

## *2.1.4 IT space allocation*

**Figure 3** showcases an electrical power system supply arrangement in a data centre as well as how the cabling is handled.

## *2.1.5 Justification*

As the data centre for the food delivery company follows Tier 4 infrastructure, the electrical power supply has been tailored to those needs. Two main utility power grids have been arranged. The data centre has also been equipped with a Closed Transition ATS, because this automatic transfer switch ensures there will not be any power outage when transferring from the main supply to the backup supply, guaranteeing smooth service for the food delivery application. Furthermore, the data centre has been equipped with an online double-conversion UPS because, with this, there is very little chance of electrical load loss. It is also efficient and has a good PUE (Power Usage Effectiveness), as supported by [13]. Moreover, a backup generator has been placed in case both power grids black out.

Additionally, for the rack power distribution unit, a Switched Rack PDU has been chosen because it allows data centre officials to remotely monitor all the data load and electricity consumption, as well as remotely change the voltages for the racks/servers, reducing the energy consumption of the data centre even further. This statement is also supported by [14]. Lastly, the power utilisation of this data centre is highly efficient because of the features that have been added: as supported by [15], by adding features such as power-saving "standby" modes, energy management software and efficient cooling systems, data centres can become more energy-efficient. Such improvements in efficiency will produce significant energy savings and reduce the electricity grid load.
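Since the design is justified partly by its PUE, a quick worked example helps (the energy figures below are hypothetical): PUE is total facility energy divided by the energy delivered to IT equipment, so a value closer to 1.0 means less overhead spent on cooling, power conversion and lighting.

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness = total facility energy / IT energy."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical monthly figures: 1,500 MWh drawn by the whole facility,
# of which 1,000 MWh reached the IT equipment.
print(round(pue(1_500_000, 1_000_000), 2))  # 1.5
```

A PUE of 1.5 means every kWh of useful IT work costs an extra 0.5 kWh of overhead; the measures above (double-conversion UPS efficiency, switched PDUs, standby modes) all act on that overhead term.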

### **2.2 Fire detection system**

Fire detection systems are made for the early identification of fires while time is still available for the safe evacuation of individuals. Early detection also plays an important role in ensuring the health of emergency response workers. Furthermore, fire damage and operational downtime can be minimised, because monitoring measures begin while the fire is still small. Most alarm systems supply emergency responders with information about where the fire started, which speeds up the process of controlling it [16].

**Figure 3.** *Proposed IT space for the electrical power supply [10].*

There are mainly three levels in a data centre that need to be protected:


So, to minimise the downtime and loss of data for the food delivery data centre in case of a fire breakout, the recommended smoke detector and fire alarm system are discussed in this chapter.

## *2.2.1 Smoke detector*

A smoke detector is an electronic fire detection unit that senses the presence of smoke automatically. In building infrastructures, smoke detectors are typically managed by a central fire alarm device and operated by building power with a battery backup [18]. There are typically three types of smoke detectors; Meza recommends the air-aspirating/air-sampling smoke detector.

## *2.2.1.1 Air-aspirating/air-sampling smoke detector (ASD)*

This smoke detector has become very popular because it can detect fires at a very early point, before an open flame develops and before excessive smoke occurs. An ASD can locate fires considerably faster than point or beam detectors, meaning the first signs of smoke can be reacted to quickly. This early detection is important for sensitive and high-risk infrastructure. These detectors can also be set to a traditional point-detector sensitivity level (**Figures 5** and **6**) [21].

## *2.2.2 Fire alarm system*

Once smoke has been detected, the smoke detector notifies the fire alarm system. A fire alarm system is intended to alert people to an emergency so that they can protect themselves. Whatever the detection system, sounders can be used to alert building staff to the risk of fire or the need to evacuate once an alarm is activated. The Fire Alarm Control Panel is the 'brain' of the fire detection system: this central hub gives users a status indicator for all the detector signals. The system can also be programmed to simulate an alarm for regular fire drills and evacuation, so that all workers know what to do in case of a real fire [22].

Generally, fire alarm systems are of three types: conventional, addressable and wireless, as stated by [23]. After a plentiful amount of research, the wireless fire alarm system has been chosen for the food data centre infrastructure, because the data centre already has a great deal of cabling for its IT equipment and resources. The wireless system has therefore been chosen over an addressable system; regarding cost efficiency, the two will cost roughly the same, because although a wireless alarm system is expensive, an addressable fire system incurs a high cost to arrange all the cabling, as stated by [24].

**Figure 5.** *How an air-sampling smoke detector works [19].*


#### **Figure 6.**

*Performance levels for fire detection systems [20].*

Wireless Fire Alarm System: The wireless fire alarm system is the solution of choice for many applications. The huge versatility and endless combinations of wireless alarm devices make it a good choice for sensitive organisations. Each unit in the range communicates with self-optimising amplitude and frequency through sophisticated bidirectional encrypted radio transmission. Additionally, multi-directional integrated antennas virtually eliminate signal corruption. Furthermore, wireless fire alarm systems have shown that they protect premises in a reliable and cost-effective manner (**Figure 7**) [26].

**Figure 7.** *How a fire detection system along with a fire alarm system generally operates [25].*

#### *2.2.3 Recommendation*

After critically analysing the different types of smoke/fire detectors, Meza has decided to use an air-sampling smoke detector: since the data centre will handle a huge load and huge processing power for the food delivery infrastructure, this early smoke detection system should be installed in case an overload or overheating causes a fire. As supported by [20], it can provide relatively early fire warning in rooms containing IT and telecommunications equipment. Additionally, the very early warning of smoke for the staff running the infrastructure is a crucial aspect of air-sampling smoke detectors. The early-warning capability enables managers to investigate smoke long before it reaches an emergency state and activates a fire suppression system.

Lastly, a wireless fire alarm system has been suggested as well because the data centre needs to be explicitly protected from fires in a cost-effective and reliable way. As stated by [27], a wireless fire alarm system is easier to deploy with less downtime and has easier maintenance. It also has higher reliability, is more cost-effective in the long run and can be repositioned easily if necessary, which are the key points Meza keeps in mind to make the data centre more efficient, robust and safe; see **Figure 6**.

## **3. Fire suppression systems**

For a data centre, a fire suppression system is mandatory. The food ordering application holds a great deal of data and computes many processes, and the data centre must be kept safe from a fire outbreak if one happens. A fire suppression system is a collection of engineered units that apply a material to extinguish flames. The system typically has a built-in component that identifies fires by flame, smoke and other alert signals at the beginning stages. These are connected to an alarm device to warn of the presence of fire and to take action against it. Most fire suppression systems apply an exterior material immediately upon identification and/or warning to extinguish the fire. However, certain fire suppression devices are activated manually (**Figure 8**).

**Figure 8.** *Fire suppression system 1 [28].*

#### **3.1 Level of protection**

There are three levels of protection for a data centre, which together ensure the safety of the data centre and the information stored in the system. The first level is building-level fire protection. The key goal is to defend the buildings and their workers from fire. Fire sprinklers and handheld extinguishers are the most widely used types of fire protection. The construction policy for handheld extinguishers requires that, if class (A) combustible materials are in the workplace, there is a portable fire extinguisher for every 3000 square feet. A building may also use passive fire safety, including the construction of firewalls and floor mounts, which dramatically delay the spread of a fire into other areas of the building [29].
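The extinguisher coverage rule above reduces to a simple ceiling division. The sketch below keeps the coverage figure as a parameter rather than hard-coding a unit, since requirements differ by jurisdiction; the example floor area is invented.

```python
import math

def extinguishers_required(floor_area, coverage_per_extinguisher=3000):
    """Minimum number of portable extinguishers for a Class A area,
    given the maximum floor area each extinguisher may cover
    (same units for both arguments)."""
    return math.ceil(floor_area / coverage_per_extinguisher)

# A hypothetical 10,000-unit floor needs ceil(10000/3000) = 4 units.
print(extinguishers_required(10_000))  # 4
```

Rounding up rather than down matters: a floor only slightly larger than one coverage area already requires a second extinguisher.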

The second level of protection is room-level fire protection. The National Fire Protection Association (NFPA) sets the standards for room-level protection. In a wet-pipe network, water is always present in the piping and escapes instantly after the alarm is triggered; the downside of this system is that the pipe can leak and spill onto the room's facilities. The most commonly employed room fire safety is a pre-action system, where triggering the sprinklers requires at least two fire detection points. Other systems divide the space into sprinkler zones, so sprinklers only go off in the quadrant that was triggered. Fire sprinkler systems are a common solution for data centre fire safety. Novec 1230 and FM-200 are the two rising safe-agent gas systems; they contain the fire by absorbing its heat. Such gases have zero ozone depletion, rendering them safe for people and the environment. Their physical footprint is smaller than that of inert gas systems, as no agent is required to occupy a whole space. Electrically non-conductive, non-corrosive, sterile agent gases leave no traces upon evaporation, which makes them an excellent fire suppression choice for data centres. As with fire sprinklers, the space is equipped with a tube network [29].

The third level of protection is rack-level fire protection. Protecting valuable appliances and controlling losses require this level. Although the mandatory fire sprinklers protect the building and the room from fire, the equipment itself, which accounts for around 57 per cent of the cost in the room, is left unprotected. To conserve money, the hardware on each shelf has to be secured from fire. Implementing a pre-engineered automated fire suppression mechanism safeguards the unit by identifying and suppressing the fire within seconds, before the room-wide flooding or sprinkler system is triggered. It avoids the disruption to the facilities caused by a water-based sprinkler and avoids releasing huge amounts of agent in an expensive total-flooding discharge [29].

To ensure the protection of the data stored in the data centre, all three levels of protection should be adopted. This will help the Meza team protect the data centre of the food ordering application system and reduce the cost of damage.

### **3.2 Types of protection**

The data centre for the food ordering application can be protected from fire in a variety of ways. The first is the water-based sprinkler system, which manages the flames, stops them from spreading and avoids structural harm. The sprinkler system is considered an inexpensive and simple approach, using about 25 gallons of water per minute. However, this introduces risks that may be greater than the fire loss itself: data centre appliances are electrically conductive, and the spot where the fire takes place is thoroughly drenched, which can cost the business more.

Another water-based method is the water mist system, a recent entrant to water-based fire protection. Spreading the mist over the specified area requires high-pressure pumps, and the method is advised only for wide areas, as it yields poor suppression when the fire is obstructed, although it avoids flooding the area entirely. In addition, after suppression, mist systems leave residual vapour, and there are cost and maintenance concerns with the equipment.

The clean agent is another way of protecting the data centre from fire accidents: it can extinguish a fire very easily and confine the fire damage to the equipment in its location, without needing water. The main purpose of this type is to protect resources that are important, dynamic and essential. It is distinct from other forms because it does not require washing after extinguishing, so no corrosion or residue is left behind. In complex environments, it can extinguish fires in blocked or three-dimensional areas. In addition, two forms of clean agents are available. Firstly, halocarbon agents contain carbon, hydrogen and halogens such as fluorine; these can create dangerous effects for people near the fire because the air becomes polluted [30], causing breathing problems. Secondly, inert gas agents are based on gases such as nitrogen, argon and carbon dioxide, which are less risky to humans and less damaging to resources than the first type. Both agents are regarded as electrically non-conductive and can be used in typically enclosed areas.

#### **3.3 Recommendation**

The preceding discussion, together with several real-world experiments, including a study by Kidde Fire Systems, a global pioneer in the development and manufacture of fire detection and safety systems, compared clean agent-inert gas systems with water sprinklers. The Meza team suggests installing the clean agent-inert gas system for the food ordering application data centre, given the various benefits and functionality it provides: a fire can easily be extinguished with damage limited to a certain region, and no cleaning is required after the fire. The system is inexpensive to introduce, can eventually be built and managed very effectively, and does not affect people's safety or the atmosphere. Beyond the core of the data centre, the sprinkler network can still assist in certain circumstances, supplemented by positioning several halocarbon agents.

#### **3.4 Network infrastructure**

See (**Figure 9**).

#### *3.4.1 Cabling*

Despite much emphasis on various forms of technology and how organisations actually use their networks, it is easy to overlook the physical framework that makes every data centre networking system feasible. Cabling is a massively critical element of data centre architecture. Weak cable implementation is not just untidy: it can obstruct airflow, hinder the correct removal of hot air and stop cold air from entering. Over time, cable air damming may cause overheating and equipment failure, resulting in costly downtime. Data centre cabling was usually mounted beneath a raised floor. Designs have, though, improved in recent years to allow at least some flexibility for overhead cabling, which also helps to lower electricity costs and minimise refrigeration needs. To maintain continuity of output and ease of use, well-managed facilities use a structured cabling procedure. Unstructured point-to-point cabling is not only harder to install but is also correlated with higher operational expenses and severe maintenance issues. Proper cable control is a successful first move to networking for the food ordering application [32].

**Figure 9.** *Network infrastructure 1 [31].*

## *3.4.2 Connectivity*

The abundance of ISP networking solutions is one of the key benefits of a carrier-neutral data centre. Basically, a data centre links to the internet like anyone else: through a service provider's cable. However, in comparison to a traditional house, data centres provide many links with different vendors, giving the food ordering application a variety of choices. The various networking solutions offer a lot of flexibility, and it is nearly always easy to reach the external internet. Blended networking solutions also offer major protection against DDoS assaults (**Figure 10**) [32].

**Figure 10.** *Network infrastructure 2 [33].*

### *3.4.3 Routers and switches*

Data centre cabling is difficult enough on its own; without routers and switches to guide network traffic through and across the facility, it would approach nightmarish levels of difficulty. These devices act as unified nodes that enable data to move as easily as possible from one location to another. Properly installed, they can handle vast volumes of traffic and form a vital part of the data centre topology without losing efficiency. Incoming public internet data packets first reach the edge routers of the data centre, which evaluate where each packet comes from and where it has to go. The edge routers pass the packets to the core routers, which form an additional layer; these appliances are more aptly described as switches, as they handle traffic within the data centre networking infrastructure. All traffic inside the data centre is guided through this set of core switches: when data must move between computers that are not physically linked, it is transferred through them. If individual servers contacted each other directly, the core would have to handle a huge number of addresses and sacrifice speed; data centre networks prevent this by linking batches of servers to a third layer of switches. These groups, often called pods, encapsulate data packets so that the core only needs to recognise which pod traffic should be guided to, rather than handling individual server requests for the food ordering application [32].
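The pod idea above can be sketched as a toy two-level lookup (all server and pod names below are invented): the core only needs to know which pod a destination lives in, and each pod switch resolves the individual server.

```python
# Toy two-level forwarding: the core routes on pods, pods route on servers.
pod_of = {"srv-a1": "pod-a", "srv-a2": "pod-a", "srv-b1": "pod-b"}
pods = {
    "pod-a": {"srv-a1": "port 1", "srv-a2": "port 2"},
    "pod-b": {"srv-b1": "port 1"},
}

def route(src, dst):
    """Return the switch hops a packet takes from src to dst."""
    if pod_of[src] == pod_of[dst]:
        return [pod_of[src]]              # stays inside one pod switch
    # The core resolves only the destination pod, not the server.
    return [pod_of[src], "core", pod_of[dst]]

print(route("srv-a1", "srv-a2"))  # intra-pod: one hop
print(route("srv-a1", "srv-b1"))  # inter-pod: via the core layer
```

Note that the core's table has one entry per pod, not per server, which is exactly the address-count reduction the paragraph describes.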

#### *3.4.4 Servers*

Deployments with high-density servers tend to have higher cabling, cooling and power usage specifications. The food ordering application wants its equipment in racks with convenient access to direct connections, and individual cross-connections that provide better efficiency, speed and reduced downtime impact [32].

#### *3.4.5 Direct connection*

Sometimes the data centre's internet connectivity is not fast enough to meet a client's demands, and the client cannot tolerate the lag or downtime of a cloud service provider link [34]. In these situations, data centres can offer the benefit of directly linking the client's server to the provider's servers with a single cross-connection. With a direct cable running between servers, customers can work more effectively thanks to reduced latency and downtime. Although data centre networks are complicated structures that must be carefully maintained to guarantee high-quality performance, the basic framework in any facility rests on the same principles. By improving these networks, data centres become an enticing venue for businesses wanting to place their IT systems with a third-party supplier while offering a selection of creative services to their clients [32].

## **4. Cooling system**

In most data centres, the server room is kept cooler than other areas of the same building. In every data centre, as the data stored on servers grows and processor performance continually increases, the servers produce more heat. When a server's temperature reaches a critical point, it may stop working properly, and the processor will throttle its performance to avoid overheating. In the worst-case scenario, extremely high temperatures will burn out the processor, eventually interrupting services and requiring hardware replacement before services can resume. These are among the reasons why a data centre needs a cooling system, and why the cooling system is one of the essential components of a data centre. Hence, selecting a cooling system that can operate continuously and reliably is the top priority.

Cooling systems operate in a variety of ways and offer different levels of performance. Generally, there are two main methods for data centre cooling: air-based cooling and liquid-based cooling [35]. A liquid-based cooling system removes heat from the servers by exploiting the properties of liquids [36]. Air-based cooling commonly comes in three types: traditional cooling, hot aisle containment and cold aisle containment. These systems cool the server room via cold air.

According to the research, air-based cooling has been in common use for years and is simple compared to liquid-based cooling. A liquid-based cooling system carries the risk of leakage across the server rooms, which could damage components if not handled appropriately [37]. Therefore, this project focuses on the air-based cooling system.

#### **4.1 Hot aisle containment system (HACS)**

A hot aisle containment system (HACS) encloses the hot aisle to collect the hot exhaust air from the back of the racks, while cold supply air is drawn in by the equipment at the front. This builds on the traditional hot aisle/cold aisle arrangement, in which data centre racks are placed in alternating rows with cold air intakes facing one way and hot air exhausts the other, keeping hot air in one aisle and cold air in the other. The traditional hot aisle/cold aisle layout, which relies on rack placement alone, worked well in low-density environments but did not completely isolate the aisles or prevent hot and cold air from mixing [38]. The only way to stop the hot and cold air from mixing is to form a physical barrier, and this is where containment systems come in. Typically, the containment forms a physical barrier from the top of the server racks to the drop ceiling; it contains the hot aisle and returns the exhausted hot air directly to the cooling units [39]. The cold air comes from the CRAC unit, and the environment outside the hot aisle becomes one large cold air plenum [40]. This ensures the hot and cold air remain isolated (**Figure 11**).

Pros of HACS


Cons of HACS

• Higher construction cost.

**Figure 11.** *Hot aisle containment system [35].*


#### **4.2 Dynamic cooling management and optimisation**

A cooling management and optimisation system continuously optimises airflow in the data centre, improving reliability and availability. The system uses a dense array of temperature sensors to discover precisely where hot spots are in the data centre, helping to identify potential equipment risks, and it automatically eliminates up to 95% of hot spots. As the IT load changes, integrated machine learning automatically adapts the cooling to varying IT loads to balance the dynamic data centre environment. The system meets the cooling need with the lowest possible energy consumption, achieving immediate cost savings and a suitable amount of cooling in the data centre (**Figure 12**) [42].
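The sense-and-adjust loop described above can be sketched very simply: read the sensor array, flag hot spots, and nudge cooling output toward the lowest setting that clears them. The temperature limit, the step sizes and the minimum-output floor below are illustrative assumptions, not values from the chapter or from any vendor's product.

```python
# Minimal sketch of sensor-driven cooling control: a grid of temperature
# sensors flags hot spots, and the cooling output is raised while any exist,
# otherwise trimmed to save energy. All thresholds are assumed values.

HOT_SPOT_LIMIT_C = 27.0  # assumed upper bound for acceptable intake air

def hot_spots(sensor_temps: dict) -> list:
    """Return the sensor locations currently reading above the limit."""
    return [loc for loc, t in sensor_temps.items() if t > HOT_SPOT_LIMIT_C]

def adjust_cooling(current_output_pct: float, sensor_temps: dict) -> float:
    """Raise cooling while hot spots exist, otherwise trim it gradually."""
    if hot_spots(sensor_temps):
        return min(100.0, current_output_pct + 10.0)
    return max(30.0, current_output_pct - 5.0)  # assumed minimum-output floor

readings = {"rack-3-top": 29.5, "rack-3-mid": 26.0, "rack-7-top": 24.8}
print(hot_spots(readings))             # only rack-3-top exceeds the limit
print(adjust_cooling(60.0, readings))  # cooling stepped up while it persists
```

A production system would replace the fixed step sizes with a learned model of how the room responds to load changes, which is the role the chapter assigns to machine learning.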

**Figure 12.** *Dynamic cooling control [42].*

#### **4.3 Justification**

In order to achieve a tier 4 data centre, all components in the data centre must be fully redundant, including the cooling system. A cooling system design for a tier 4 rated data centre should fulfil the requirements below:

Redundant components—a backup of equipment for a cooling system such as:


After comparing the air-based cooling systems, the hot aisle containment system was chosen for implementation in this project. According to Gavin Banks, HACS is significantly more cost-effective than a cold aisle containment system (CACS): it can save 43% in energy cost, which can translate into a 15% reduction in PUE [43]. This is also supported by Schneider Electric, whose report shows HACS providing 40% more annual energy cost savings than CACS [40]. A legacy/traditional cooling system would be more expensive due to its inefficiency and uneven cooling, and it may also require more, and oversized, equipment to accomplish the task. Looking at the bigger picture, other IT equipment in the same room also needs cooling, and a cold aisle containment system could make the rest of the server room extremely hot. That would be a challenge for anyone who needs to be inside the server room for maintenance or servicing, as well as for other IT equipment operating at high temperatures. Therefore, hot aisle containment is the appropriate choice (**Figure 13**).
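A back-of-the-envelope check of the savings figures cited above: only the percentages come from the text [43]; the annual cooling cost used below is a hypothetical figure for illustration.

```python
# Worked example of the HACS savings claims. The 43% energy saving and 15%
# PUE reduction come from the cited source; the cost figure is invented.

def annual_saving(cooling_cost: float, saving_fraction: float) -> float:
    """Annual cost saved given a baseline cooling cost and a saving rate."""
    return cooling_cost * saving_fraction

cacs_cooling_cost = 100_000.0  # hypothetical annual cooling energy cost (CACS)
print(annual_saving(cacs_cooling_cost, 0.43))  # 43000.0 saved per year with HACS

# A 15% PUE reduction applied to this data centre's estimate of 1.73 would give:
print(round(1.73 * (1 - 0.15), 2))  # ~1.47
```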

#### **4.4 Space allocation**

A proposed layout is as follows (see **Figures 14** and **15**).

#### **4.5 Physical security**

A breach of physical security may cause unimaginable damage to a data centre. Given the growing need to protect valuable information, any loss of data or even the incapability to comply with mandatory regulatory requirements may result in obloquy, loss of customers, fines and loss of revenue. Interoperability is a critical building block for the physical security of a data centre. The entire ecosystem of manufacturers and integrators serving the data centre physical security market needs to ensure that their products work together to provide a scalable, layered physical security solution [46].

**Figure 13.** *Hot aisle containment [39].*

*Data Centre Infrastructure: Power Efficiency and Protection DOI: http://dx.doi.org/10.5772/intechopen.110014*

**Figure 14.** *Proposed layout [44].*

The prime purpose of implementing physical security is to protect the information, devices and IT infrastructure of the data centre from any threat that could disrupt its operation. Such threats may arise from illegal activity, such as theft, leakage of data or damage through physical interference in the data centre. Building a layered approach to data centre security helps to customise the solution to the needs of a data centre. The organisation needs to determine the right layered approach by understanding the current system, the working environment and future needs (**Figure 16**).

A practical, layered approach requires all systems to function coherently. Generally, the security architecture consists of multiple layers of physical security that need to be considered to protect the data centre as a whole and to comply with the data centre protection guidelines (**Figure 17**).

#### *4.5.1 Layer 1: perimeter defence*

The first layer of physical security is perimeter defence: a physical boundary or fence at the property edge that deters external threats and controls and restricts access to the data centre property. Three D's describe the purpose of perimeter security: Deter, Detect and Delay [49]. Usually only two entrances to the data centre are permitted, the front door and the loading bay. The perimeter fence detection system can be integrated with trespassing alarms, a high-definition CCTV system, limited access control points and motion-activated security lighting.

**Figure 16.** *Security map showing the depth of security [47].*


#### *4.5.2 Layer 2: clear zone*

The second layer, called the clear zone, creates a buffer between the perimeter and the data centre to better detect physical intrusion [48]. The clear zone is also a large area containing critical infrastructure such as fuel containment, generators and main power supplies [46]. This zone needs security measures that provide total situational awareness.

#### *4.5.3 Layer 3: facility entrance and reception*

This layer controls visitors' access to the data centre and validates authorised access. All employees and visitors must check in or register at the front desk before entering the data centre, and visitors must obtain a temporary pass in order to access the secured areas.

#### *4.5.4 Layer 4: service corridor (escorted areas and grey space)*

This layer validates the rights of authorised persons to access specific areas within the building. The corridors, grey spaces and escorted areas leading to the data centre floor are often where proper security measures are overlooked [46], which can lead to unauthorised access to critical mechanical and electrical infrastructure.

#### *4.5.5 Layer 5: data centre room*

This layer further restricts access through various forms of authentication and monitors all authorised access. High-security electronics are implemented to prevent general staff or trespassers from accessing sensitive areas. To keep unauthorised persons out of the white space, access control such as dual-factor biometrics is essential for controlling authorised access to the data centre.

#### *4.5.6 Layer 6: data centre cabinet*

This layer establishes protection for the sensitive electronics (servers) that contain crucial data. The security measures used to accomplish this include cabinet locking, audit trails and an intelligent infrastructure strategy. This layer is especially essential and effective in reducing the critical and frequently forgotten insider threat.

Most data centres do an excellent job of implementing the first few layers. Still, the absence of reliable control at the cabinet may result in costly data breaches caused by a malicious or disgruntled employee, or even by unknowing and unintentional access to data.
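The six layers above form a nested model: each deeper zone requires everything the outer zones demand plus its own controls. A minimal sketch, assuming illustrative credential names (the layer names follow the text; the specific credentials are invented):

```python
# Sketch of the layered access model from Sections 4.5.1-4.5.6: each layer
# adds requirements on top of the previous ones. Credential names are
# hypothetical; real systems would check badges, biometrics and escorts.

LAYER_CREDENTIALS = [
    ("perimeter defence", {"gate pass"}),
    ("clear zone", {"gate pass"}),
    ("facility entrance", {"gate pass", "visitor badge"}),
    ("service corridor", {"gate pass", "visitor badge", "escort"}),
    ("data centre room", {"gate pass", "visitor badge", "escort", "biometric"}),
    ("cabinet", {"gate pass", "visitor badge", "escort", "biometric", "cabinet key"}),
]

def deepest_layer(credentials: set) -> str:
    """Return the deepest layer a holder of these credentials may reach."""
    reached = "outside"
    for name, required in LAYER_CREDENTIALS:
        if required <= credentials:  # all required credentials are held
            reached = name
        else:
            break  # cannot pass this layer, so deeper ones are unreachable
    return reached

print(deepest_layer({"gate pass", "visitor badge"}))  # stops at reception
```

The `break` is the important design point: a missing credential at any layer blocks all deeper layers, which is exactly why weak cabinet-level control undermines otherwise strong outer layers.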

## **5. Synthesis**

Power: The data centre for the food ordering application will use grid electricity as the main power source for all the equipment and components in the data centre. In case of a power outage, the equipment will be powered by the UPS backup batteries for a short time, during which the diesel generator will take over as the power source. For every data centre, server uptime is crucial, since clients expect uninterrupted services. A 99.9% uptime will ensure uninterrupted services for all customers and keep the data secure and available.

#### **5.1 Server racks and computing resources**

The server racks will contain all the components the servers need to perform their functions: placing an order, processing the payment and creating the customer's data. These tasks should complete almost instantly while remaining fast and reliable. The servers are responsible for all the backend processes of the application and should run continuously without issues; all components must work properly and be interconnected to carry out the tasks.

#### **5.2 Storage infrastructure**

Every customer's details and data should be kept on a storage device. The storage system consists of many storage components that are connected to the servers at all times and store the necessary data. Hard disk drives, tape drives and other forms of internal and external storage devices process and store the computed data. All data will be backed up in case the storage devices fail, so the system keeps working when needed. Storage utility software continuously monitors [50] the processes to keep them uninterrupted.

## **6. Evaluation**

#### **6.1 PUE and efficiency**

Data centres use significant amounts of power to operate, and the majority of that power is consumed by the cooling systems. A successful data centre must be efficient. One of the metrics for calculating the efficiency of a data centre is Power Usage Effectiveness (PUE). PUE is calculated by dividing the total amount of power consumed by the facility by the energy used by the IT equipment [51, 52].

**Table 1.** *Estimated power usage for different components and equipment of the data centre based on enterprise IT equipment.*

In order to calculate the PUE, values for total power consumption and IT energy use must be determined. **Table 1** shows the estimated power usage for different components and equipment of the data centre based on enterprise IT equipment.

Based on the estimates above, the PUE for the data centre is calculated by Eq. (1),

$$\text{PUE} = 96{,}418/55{,}790 = 1.73. \tag{1}$$
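Eq. (1) can be reproduced directly; the two figures are the chapter's estimated totals (total facility power and IT equipment power, in watts).

```python
# PUE as used in Eq. (1): total facility power divided by IT equipment power.

def pue(total_facility_power: float, it_equipment_power: float) -> float:
    """Power Usage Effectiveness: a value of 1.0 would mean all power
    reaches the IT equipment; higher values mean more overhead."""
    return total_facility_power / it_equipment_power

print(round(pue(96_418, 55_790), 2))  # 1.73, matching Eq. (1)
```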

The PUE value lies between efficient and average according to **Figure 18**. One of the main reasons the efficiency is slightly below the efficient range is that the cooling system requires more energy due to Malaysia's geographical location. In countries such as Iceland, the cooling system does not need as much power because they are closer to the poles and experience a winter season, which Malaysia does not. However, the efficiency is still reasonable for a data centre in this region.

#### **6.2 Expandability**

A data centre is expected to meet future business needs and expand accordingly. The food company already has 5 million users, in the next few years, they estimate their user base will increase given the current popularity. Therefore, the data centre design must be able to take this into consideration and allow future expansions.


**Figure 18.** *Level of efficiency relative to the PUE value [37].*

According to [53], many data centre expansions result in failure. For expandability, this data centre will initially occupy 10 racks, about 25% of the available floor space, to avoid the mistake of oversizing and wasting resources. This keeps the facility from becoming overcrowded when demand rises, and starting with a few racks lowers the cost to build (CapEx). In addition, the use of rack enclosures/cabinets helps with cooling: because enclosures have better airflow and cooling, additional racks in the facility will not overburden the cooling systems in the future. The data centre is designed with a modular approach, and modular, flexible designs are the key to long-term success [53]. For example, increasing storage capacity is trivial, since the storage infrastructure is based on a Storage Area Network (SAN), which offers great scalability and expandability compared to other architectures. Finally, through proper planning using the total cost of ownership (TCO) approach and the flexibility of the facility, the data centre can meet the requirements of recent market demand.
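The headroom implied by the figures above is easy to make explicit: 10 racks occupying about 25% of the floor implies roughly 40 rack positions in total. The calculation below simply restates that arithmetic.

```python
# Quick capacity check for the expansion plan above: initial racks and the
# fraction of floor space they occupy imply the total rack capacity.

def total_rack_capacity(initial_racks: int, occupied_fraction: float) -> int:
    """Total rack positions implied by the initial occupancy fraction."""
    return round(initial_racks / occupied_fraction)

capacity = total_rack_capacity(10, 0.25)
print(capacity)        # 40 rack positions in total
print(capacity - 10)   # 30 positions free for future expansion
```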

## **7. Conclusion**

We may sum up by noting that because the IT industry is continually expanding, there will always be a need for new and improved solutions, and the solutions and tools selected for this work will not necessarily remain the best choices in the future. The intended data centre is meticulously thought out, from security to smart execution. Design considerations for a data centre in terms of power efficiency, cooling systems and protection should include scalability, power effectiveness, CO2 reduction, system resilience, sustainability, the use of machine learning and other cutting-edge technology. Additionally, by renting space in the data centre and selecting their own equipment, clients using the co-location system can house their data there. Finally, through proper planning using the total cost of ownership approach and the flexibility of the facility, the data centre can meet the requirements of today and tomorrow.

## **Conflict of interest**

The authors declare no conflict of interest.


## **Author details**

Yaseein Soubhi Hussein<sup>1</sup> \*, Maen Alrashd<sup>2</sup> , Ahmed Saeed Alabed1 and Amjed Zraiqat<sup>3</sup>

1 Computer Science and Information Systems Department, Ahmed Bin Mohammed Military College, Qatar

2 Faculty of Science and Information Technology, Jadara University, Irbid, Jordan

3 Faculty of Science and Information Technology, Al-Zaytoonah University of Jordan, Amman, Jordan

\*Address all correspondence to: dr.yaseein@abmmc.edu.qa

© 2023 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## **References**

[1] Datacenters.com. Everything You Need to Know About Data Center Power. 2020. [Online]. Available from: https://www.datacenters.com/news/eve rything-you-need-to-know-about-datacenter-power

[2] Kutsmeda K. Data Center Power Strategies. 2013. [Online]. Available from: https://www.csemag.com/article s/data-center-power-strategies/

[3] Shaw R, Howley E, Barrett E. Applying reinforcement learning towards automating energy efficient virtual machine consolidation in cloud data centers. Information Systems. 2022;**107**:1-21

[4] Lorbel. What is power distribution unit & automatic transfer switch?. 2018. [Online]. Available from: https://www. lorbel.com/what-is-power-distrib ution-unit-automatic-transfer-switch

[5] Osemco. Closed Transition Transfer Switch, [Online]. 2016. Available from: http://osemco.co.kr/en/pro/closedtransition-transfer-switch.html

[6] Evanuik S. UPS in Critical Data Centers. 2019. [Online]. Available from: https://www.allaboutcircuits.com/tech nical-articles/uninterruptible-powe r-supply-systems-in-critical-data-cente rs/#::text=The%20uninterruptible% 20power%20supply%20(UPS,disturba nces%20and%20power%20quality% 20issues

[7] Bergum M. Line Interactive vs. Double Conversion UPS—Which One's Best?. 2019. [Online]. Available from: https://www.qpsolutions.net/2019/ 11/line-interactive-vs-double-conve rsion-ups-which-ones-best/

[8] Woodstock Power. Backup generators for data centers. 2019. [Online]. Available from: https://wood stockpower.com/blog/backup-genera tors-for-data-centers/#::text=Backup% 20generators%20for%20data%20centers %20provide%20power%20when%20the %20main,high%20risk%20of%20opera tional%20loss

[9] Vertiv. What is a Rack PDU?, 2020. [Online]. Available from: https://www. vertiv.com/en-emea/about/news-andinsights/articles/educational-articles/whatis-a-rack-pdu/#::text=Rack%20Power% 20Distribution%20Units%20(rPDUs,e quipment%20within%20the%20data% 20center.&text=The%20rPDU%20then% 20distributes%20power,each%20ind

[10] Data Center Power Chain: How It Works. Somerset, New Jersey, NJ, USA: Raritan; 2018

[11] Rundquist R. What Type of Rack PDU is Right for Your Data Center?. 2019. [Online]. Available from: https://www. vertiv.com/en-emea/about/news-andinsights/articles/blog-posts/what-type-ofrack-pdu-is-right-for-your-data-center/

[12] Bord M. On Switched Rack PDUs and Decreasing Energy Consumption in the Data Center. 2014. [Online]. Available from: https://www.raritan.c om/blog/detail/on-switched-rack-pdus-a nd-decreasing-energy-consumption-inthe-data-center

[13] Mitsubishi Electric. Double Online Conversion UPS Technology [Online]. 2020. Available from: https://www. mitsubishicritical.com/technologies/d ouble-conversion-vs-line-interactive/#: :text=Online%20double%20conve rsion%20UPS%20systems,(PUE)%20inc rease%20their%20ROI

[14] DPS. How a rack switched PDU protects your remote sites. [Online].

2022. Available from: https://www.dpste le.com/power-distributionunit/switched-rack-pdu-definition.ph p#::text=The%20Strong%20ROI%20of %20a%20Switched%20Rack%20Power %20Distribution%20Unit&text=With% 20switching%20functionality%2C%20a %20remote,or%20reboot%20a%20re mote%20device

[15] Office of Energy Efficiency & Renewable Energy. Energy 101: Energy Efficient Data Centers, 2020. [Online]. Available from: https://www.energy. gov/eere/videos/energy-101-energyefficient-data-centers#::text=Data% 20centers%20can%20become%20 more,and%20help%20protect%20the% 20nation

[16] Schroll RC. Fire Detection and Alarm Systems: A Brief Guide. 2007. [Online]. Available from: https://ohsonline.com/ Articles/2007/12/Fire-Detection-and-Ala rm-Systems-A-Brief-Guide.aspx

[17] Walker T. Three Level of Data Center Fire Protection. 2019. [Online]. Available from: https://www.firetrace.c om/fire-protection-blog/three-levels-ofdata-center-fire-protection

[18] IFSEC GLOBAL. Smoke Detectors Explained. 2019. [Online]. Available from: https://www.ifsecglobal. com/smoke-detectors/

[19] Menke J. Air Sampling Smoke Detection. 2016. [Online]. Available from: https://blog.a1ssi.com/air-sa mpling-smoke-detection/

[20] Kaiser L. What is an Air Sampling Smoke Detection System?. 2015. [Online]. Available from: https://www. orrprotection.com/mcfp/blog/air-sa mpling-smoke-detection-system#::te xt=Air%20sampling%20detectors%20a re%20chosen,at%20air%20handling%

20return%20grilles.&text=Buildings% 20where%20other%20smoke%20detec tors%20have%20failed

[21] Honeywell Gent. Products Aspirating Smoke Detection. [Online]. 2019. Available from: https://www.gent.co.uk/ products/aspirating-technology/#::te xt=Honeywell%20Gent's%20range% 20of%20Aspirating,and%20before% 20intense%20smoke%20develops

[22] Crimmins D. What is a Fire Alarm System?. 2019. [Online]. Available from: https://realpars.com/fire-alarm-system/

[23] London Fire Brigade. Fire Alarms. Learn about Different Kinds of Fire Alarm Systems, and Get a Better Understanding of What You Might Need to Instal in Your Property. 2020. [Online]. Available from: https:// www.london-fire.gov.uk/safety/prope rty-management/fire-alarms/

[24] Kennedy K. Wired Vs. Wireless Fire Alarms—What's the Best Choice for Your Business?. 2018. [Online]. Available from: https://www.sshfireand security.co.uk/news/wired-vs-wireless-f ire-alarms-whats-the-best-choice-f or-your-business#::text=Addressable% 20systems%20are%20therefore% 20more,control%20panel%20using% 20radio%20waves

[25] BOSCH. 2020. [Online]. Available from: https://www.boschsecurity.com/ gb/en/solutions/fire-alarm-systems/

[26] Discount FireSupplies. Wireless Fire Alarms. [Online]. 2020. Available from: https://www.discountfiresupplies.co.uk/ category/22/Wireless-Fire-Alarms#::te xt=Wireless%20fire%20alarm%20syste ms%20are,existing%20wired%20fire% 20alarm%20systems

[27] FireSystems INC. Why your Business needs a Wireless Fire Alarm System. 2019. [Online]. Available from: https://firesystems.net/2019/03/16/ why-your-business-needs-a-wirelessfire-alarm-system/

[28] PROEN Internet. PROEN Internet, 2019. [Online]. Available from: http:// www.siamidc.com/firesuppression.php

[29] Walker T. Firetrace.com, 2020. [Online]. Available from: https:// www.firetrace.com/fire-protectionblog/three-levels-of-data-center-fireprotection

[30] Ghaffari E, Rahmani AM, SaberiKamarposhti M, Sahafi A. An optimal path-finding algorithm in smart cities by considering traffic congestion and air pollution. IEEE Access. 2022; **2022**:55126-55135

[31] Network Infrastructure. 2019. [Online]. Available from: Trustedne tworksolutions.com

[32] Alan Seal. Vxchnge.com, 2019. [Online]. Available from: https://www. vxchnge.com/blog/data-center-ne tworking-101-everything-to-know

[33] 1Connect SOHO. 1 Connect SOHO, 2020. [Online]. Available from: https:// 1connectsoho.com/upgrade-network-inf rastructure-asap/

[34] Sadri AA, Rahmani AM, Saberikamarposhti M, Hosseinzadeh M. Data reduction in Fog computing and internet of things: A systematic literature survey. Internet of Things. 2022;**2012**:1-31

[35] Isberto M. The Latest Innovations in Data Center Cooling Technology, 2018

[36] Rouse M. What is Liquid Cooling?— Definition from WhatIs.com. [Online]. Available from: https://whatis. techtarget.com/definition/liquid-cooling

[37] Mezzanotte M. How to Calculate the PUE of a Datacenter. 2019. [Online]. Available from: https://submer.com/ blog/how-to-calculate-the-pue-of-adatacenter/

[38] B. N. Technologies. 2018. [Online]. Available from: http://www.bluewavene twork.net/data-centers.html

[39] Shield C. Hot Aisle Containment HAC Systems For Data Centers, 2016

[40] Niemann J, Brown K, Avelar V. Impact of Hot and Cold Aisle Containment on Data Center Temperature and Efficiency, 2011

[41] Ahdoot A. Is cold or hot aisle containment better for your data center? 2014. [Online]. Available from: https:// www.colocationamerica.com/blog/hotvs-cold-aisle-containment

[42] Schneider Electric. StruxureWare data center operation, 2016. [Online]. Available from: www.apc.com/ struxureware

[43] Banks G. Advantages of Hot Aisle vs. Cold Aisle Containment, 2019. [Online]. Available from: https://www.source ups.co.uk/hot-aisle-vs-cold-aislecontainment/

[44] ColocationPLUS. Take the Red or the Blue? How We Chose Hot Aisle Containment for Our Newest Data Center—ColocationPLUS. [Online]. 2019. Available from: https://coloca tionplus.com/2018/10/16/take-redblue-chose-hot-aisle-containment-newe st-data-center/

[45] Moumiadis T. Tier 4 data center cooling system design—My engineering notes. [Online]. 2019. Available from: http://moumiadis.blogspot.com/2019/ 03/tier-4-data-center-cooling-systemdesign.html


[46] Anson S. Data Center Security Best Practices | Security Info Watch. 2017. [Online]. Available from: https://www. securityinfowatch.com/perimetersecurity/physical-hardening/article/ 12336001/datacenter-security-bestpractices

[47] Niles S. 2011. [Online]. Available from: https://securitytoday.com/-/med ia/1837DD6DB5F441D69D C863B6D7EEF94A.pdf

[48] ICD. How to Secure Your Data Center—ICD Security Solutions, ICD Security Solution. [Online]. Available from: https://www.icdsecurity.com/ 2019/08/28/how-to-secure-your-datacenter/

[49] Anixter. The Four Layers of Data Center Physical Security for a Comprehensive and Integrated Approach. 2012

[50] Fathi A, Hussein YS, Sabri NA. Leverage networking tasks using network programmability. TEST Engineering & Management. 2020;**83**: 905-910

[51] Felter B. What is Power Usage Effectiveness and How it Impacts Your Costs. 2020. [Online]. Available from: https://www.vxchnge.com/blog/powerusage-effectiveness

[52] Manaserh YM, Tradat MI, Bani-Hani D, Alfallah A, Sammakia BG, Nemati K, et al. Machine learning assisted development of IT equipment compact models for data centers energy planning. Applied Energy. 2022;**2022**:1-17

[53] Hagan MM, Lusky J, Hoang T, Walsh S. 2016. [Online]. Available from: https://download.schneider-electric. com/files?p\_File\_Name=VAVR-8K4U25\_R1\_EN.pdf

