#### *4.2.1 Value and status from condition monitoring systems*

Field data collection is the beginning of the CBM process. Based on the single-line diagram of the data center's power distribution system (PDS), 12 equipment installations were assigned to StruxureWare, with set-point values as specified in **Table 1** and status monitoring as specified in **Table 3**.

The initial maintenance set-point values are derived from IEEE 493 MTBF data together with the P-F interval condition, while device status conditions come mostly from the suppliers' maintenance data sheets. Both data sources feed into StruxureWare, which handles the subsequent stages: condition monitoring and data collection, followed by data processing and signal processing. The DCIM then executes function selection according to the operator's requirements and builds statistical models for fault diagnostics and prognostics, from which the remaining useful life (RUL) is calculated. All collected data pass through the predictive maintenance function, which sets new values and statuses for the next round of condition monitoring in a PDCA cycle, as represented in **Table 3**.

After almost 12 months of data collection with StruxureWare and the PPM model, no blackout occurred in the Tier IV data center's PDS. The absence of blackouts does not imply that no device or system failed: the Tier IV topology is designed with full 2(N + 1) redundancy, so some devices or systems can fail while the others continue to perform without system interruption. (A Tier III data center is designed as a 2N topology and a Tier IV as 2(N + 1), which tolerates more device and system failures.) StruxureWare can detect and report such failures to the administrator team so they can be repaired within the MTTR condition. System warnings occurred a few times, but the data center administrators could fix the problems by following the warning instructions in the StruxureWare monitoring guides. StruxureWare is designed to make device and system failures easy to understand and predict, and to resolve them before they occur. CBM thereby helps decrease planned and unplanned downtime, labor hours, and spare-part inventory, while increasing system throughput and productivity.

Moreover, because CBM provides an early-warning system for device and system failures, StruxureWare can control inventory levels much more effectively, with no need for as many emergency spare parts [26].
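The monitoring loop above, in which each reading is compared against a maintenance set-point derived from IEEE 493 MTBF data and the P-F interval, can be sketched as follows. This is a minimal illustration with assumed names and threshold values (`SetPoint`, `check_condition`, the battery-temperature numbers), not the StruxureWare implementation.

```python
# Minimal sketch of a CBM set-point check (assumed names and thresholds,
# not the actual StruxureWare API). A reading between the warning and alarm
# levels falls inside the P-F interval: maintenance can still be planned
# before functional failure.

from dataclasses import dataclass

@dataclass
class SetPoint:
    warn: float   # early-warning threshold (potential failure, "P")
    alarm: float  # action threshold before functional failure ("F")

def check_condition(reading: float, sp: SetPoint) -> str:
    """Classify a sensor reading against its CBM set-points."""
    if reading >= sp.alarm:
        return "alarm"    # repair within the MTTR condition
    if reading >= sp.warn:
        return "warning"  # plan maintenance inside the P-F interval
    return "normal"

# Example: UPS battery temperature in degrees C (assumed set-points)
battery_temp_sp = SetPoint(warn=35.0, alarm=45.0)
print(check_condition(38.2, battery_temp_sp))  # warning
```

The predictive maintenance function would then feed such classifications back into new set-point values for the next PDCA cycle.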

#### *4.2.2 Value and status from idle servers*

An idle server is a physical server that is still running but performs no computing work or transaction processing: it consumes power while serving no useful purpose. An Uptime Institute survey reports that around 30 percent of global data center servers are either underutilized or completely idle. Such a server can consume an impressive 175 watts in idle mode. A survey of server power supply units (PSUs) [27] reports the range of PSU efficiency as a function of load, as illustrated in **Figure 12**.

In the red zone, below 20 percent load, PSU efficiency drops off precipitously. In the yellow zone, 20–40 percent load, efficiency begins to drop but typically still exceeds 70 percent. In the green zone, above 40 percent load, efficiency is at or above 80 percent. In idle mode, current servers still draw about 60 percent of their peak-load electricity, yet in normal data center operations average server utilization is only 20–30 percent [27]. As data center operators face growing cost constraints and energy-efficiency goals, identifying and eliminating idle servers promptly has become a primary objective. **Table 4** shows the cost savings available from the idle power draw of each server per year, compared across a range of electricity costs per kWh.

**Figure 12.**
*Power supply efficiency.*

| | | | | | |
|---|---|---|---|---|---|
| Power supply size (Watts) | 400 | 400 | 400 | 400 | 400 |
| Idle power draw (fraction of capacity) | 0.6 | 0.6 | 0.6 | 0.6 | 0.6 |
| Power waste (Watts) | 240 | 240 | 240 | 240 | 240 |
| Hours per year | 8760 | 8760 | 8760 | 8760 | 8760 |
| Cost of electricity per kWh ($) | 0.08 | 0.10 | 0.12 | 0.14 | 0.15 |
| **Savings ($)** | **168.19** | **210.24** | **252.29** | **294.34** | **315.36** |

**Table 4.**
*Idle server and electricity costs.*
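The zone boundaries and the Table 4 arithmetic can be verified with a short calculation. The function names (`efficiency_zone`, `annual_idle_cost`) are illustrative only; the numbers reproduce the table above.

```python
# Reproduces the Table 4 arithmetic: a 400 W power supply drawing 60 percent
# of peak at idle wastes 240 W continuously, and the annual cost scales
# linearly with the electricity tariff. Also classifies PSU load into the
# red/yellow/green efficiency zones described in the text.

HOURS_PER_YEAR = 8760

def efficiency_zone(load_fraction: float) -> str:
    """Map PSU load (0.0-1.0) to the efficiency zone from Figure 12."""
    if load_fraction < 0.20:
        return "red"     # efficiency drops off precipitously
    if load_fraction < 0.40:
        return "yellow"  # efficiency typically still above 70 percent
    return "green"       # efficiency at or above 80 percent

def annual_idle_cost(psu_watts: float, idle_fraction: float,
                     tariff_per_kwh: float) -> float:
    """Yearly cost (USD) of the power wasted by one idle server."""
    waste_watts = psu_watts * idle_fraction          # e.g. 400 * 0.6 = 240 W
    waste_kwh = waste_watts * HOURS_PER_YEAR / 1000  # 2102.4 kWh per year
    return round(waste_kwh * tariff_per_kwh, 2)

for tariff in (0.08, 0.10, 0.12, 0.14, 0.15):
    print(tariff, annual_idle_cost(400, 0.6, tariff))
# savings range from 168.19 at $0.08/kWh to 315.36 at $0.15/kWh,
# matching the Table 4 row
```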

*Condition-Based Maintenance for Data Center Operations Management*

*DOI: http://dx.doi.org/10.5772/intechopen.93945*

Locating and identifying an idle server is performed through the DCIM solution. The DCIM applies the database of field data collected at the device level of PSUs and PDUs, the beginning of the CBM process. The DCIM and intelligent PDUs give the data center operator the insight needed to gain complete control of power usage, server load profiles and utilization, and the cost-efficiency of the IT environment.
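A minimal sketch of such idle-server flagging from a utilization feed follows. The thresholds use the 20–30 percent utilization band from the survey cited above plus an assumed 5 percent idle cutoff; the function and fleet names are hypothetical, not a real DCIM API.

```python
# Hypothetical sketch of idle-server detection from DCIM / intelligent-PDU
# utilization data (assumed names and thresholds, not a real DCIM API).

def classify_server(avg_cpu_util: float, idle_threshold: float = 0.05) -> str:
    """Label a server from its average utilization over the sample window."""
    if avg_cpu_util < idle_threshold:
        return "idle"            # candidate for decommissioning
    if avg_cpu_util < 0.30:
        return "underutilized"   # candidate for workload consolidation
    return "active"

# Example fleet with assumed average utilization figures
fleet = {"srv-01": 0.02, "srv-02": 0.22, "srv-03": 0.65}
print({name: classify_server(u) for name, u in fleet.items()})
```

In practice the DCIM would correlate this with the per-outlet power readings from intelligent PDUs before recommending decommissioning.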





## **5. Results and discussion**

After designing the single-line diagram of the PDS in **Figure 10**, all main devices and systems were monitored through IT sensing devices: transformer, entrance switchgear, automatic transfer switch (ATS), diesel generator, uninterruptible power supply (UPS), lead-acid batteries, distribution switchgear, power distribution unit (PDU), and rack power supply unit (PSU). These sensors measure the instantaneous and trending values of all electrical status (voltage, amperage, phase, total harmonic distortion (THD)) and mechanical status (alarm, vibration, noise, temperature, leakage, oil level, and other conditions). All collected data were recorded through the DCIM system to define the set-point, or condition-based maintenance (CBM) threshold, of each critical device and system so as to prevent potential failure along the P-F curve. The results from the installed and operating data center with StruxureWare software show that the DCIM's system warnings reduce the time data center operators spend day to day finding the root causes of problems: the location of the affected devices or systems, the operating history of each device, and which device broke first and into which system the failure cascaded. They also make it easier for operators to take decisions with complete information for future provisioning.

**Table 3.**
*Values and status of data collection from condition monitoring systems* (check-mark matrix mapping the nine PDS components above to the applicable condition-monitoring techniques: infrared thermography, precise timing, visual inspection, insulation resistance, motor circuit analysis, polarization index/dissipation factor, cable condition, oil analysis and gas, vibration monitoring and analysis, lubricant analysis, wear particle analysis, bearing temperature monitoring, leakage levels, performance trending, and ultrasonic detection).

*Operations Management - Emerging Trend in the Digital Era*
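The set-point warning flow described in this section can be sketched as below. The set-point ranges, device names, and the `scan` function are assumptions for illustration, not StruxureWare internals.

```python
# Hypothetical sketch of the DCIM recording/warning flow: each device reports
# electrical and mechanical status, and out-of-range readings raise a warning
# tagged with the device location to speed up root-cause analysis.
# All ranges below are assumed example set-points, not vendor values.

limits = {
    "voltage": (207.0, 253.0),    # V, +/-10 percent of 230 V nominal
    "thd": (0.0, 0.08),           # total harmonic distortion, fraction
    "temperature": (10.0, 40.0),  # degrees C
}

def scan(device: str, location: str, readings: dict) -> list:
    """Return warning messages for readings outside their set-point range."""
    warnings = []
    for key, value in readings.items():
        lo, hi = limits[key]
        if not lo <= value <= hi:
            warnings.append(
                f"{device}@{location}: {key}={value} outside [{lo}, {hi}]")
    return warnings

# Example: a PDU whose THD reading has drifted out of range
print(scan("PDU-3", "row B, rack 12",
           {"voltage": 228.0, "thd": 0.11, "temperature": 36.0}))
```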

**References**

[1] Ponemon Institute, Cost of Data Center Outages: Data Center Performance Benchmark Series, Sponsored by Vertiv, January 2016.

[2] Vitucci F, Predictive maintenance in a mission critical environment, Emerson: White Paper CSI 2130 Machinery Health Analyzer, 2017.

[3] Keizer O, Teunter H, Veldman J, Babai Z, Condition-based maintenance for systems with economic dependence and load sharing, In: Int. J. of Production Economics, vol.195, 2018.

[4] Lee J, Teleservice engineering in manufacturing: challenges and opportunities, In: International Journal of Machine Tools & Manufacture, 38 (8), 1998, p.901-910.

[5] Sobral J, Soares G, Preventive maintenance of critical assets based on degradation mechanisms and failure forecast, IFAC-Papers Online, 49-28, 2016, p.97-102.

[6] Blann R, Maximizing the P-F Interval Through Condition-Based Maintenance, http://www.maintworld.com/Applications/Maximizing-the-P-F-Interval-Through-Condition-Based-Maintenance, 2018.

[7] Emerson, Addressing the Leading Root Causes of Downtime, White Paper: SL-24656-R10-10, Liebert Corporation, 2010.

[8] Shin H, Jun B, On condition based maintenance policy, In: J. of Computational Design and Engineering, vol.2, 2015, p.119-127.

[9] Compare M, Bellani L, Zio E, Reliability model of a device equipped with PHM capabilities, Reliability Engineering and System Safety, vol.168, 2017, p.4-11.

[10] Jonge B, Teunter R, Tinga T, The influence of practical factors on the benefits of condition-based maintenance over time-based maintenance, Reliability Engineering and System Safety, vol.158, 2017, p.21-30.

[11] Uptime Institute, Data Center Site Infrastructure Tier Standard: Topology, Uptime Institute Professional Services, LLC. Uptime Institute, 2014.

[12] BICSI-002, Data Center Design and Implementation Best Practices, BICSI 002-2014, 2014.

[13] Martorell P, Marton I, Sanchez I, Martorell S, Unavailability model for demand-caused failures of safety devices addressing degradation by demand-induced stress, maintenance effectiveness and test efficiency, Reliability Engineering and System Safety, vol.168, 2017, p.18-27.

[14] Keizer O, Teunter H, Veldman J, Joint condition-based maintenance and inventory optimization for systems with multiple devices, In: European J. of Operational Research, vol.257, 2017, p.209-222.

[15] StruxureWare, StruxureWare Data Center Operation 8, Schneider Electric, 2018.

[16] Gregory V, Brian W, Moneer H, A review of diagnostic and prognostic capabilities and best practices for manufacturing, In: J. Intell. Manuf., 2016, p.1-17.

[17] IEEE 493, IEEE Std. 493-2007 (Revision of IEEE 493-1997), Recommended Practice for Design of Reliable Industrial and Commercial Power Systems, Gold Book, 2007.

[18] Wiboonrat M, Transformation of system failure life cycle, In: Int. J. of Management Science and Engineering, p.319-327.
