An Empirical Investigation on the Effect of Airflow Management on the Energy Efficiency of Data Centres

DOI : 10.17577/IJERTV5IS030509


T. T. Alonge1, E. A. Adedokun2, M. B. Muazu3

1,2,3Department of Electrical and Computer Engineering, Faculty of Engineering,

Ahmadu Bello University, Zaria, Kaduna State, Nigeria.

Abstract – The cooling equipment in a data centre consumes the largest share of energy as a result of the heat released by the information technology equipment housed in the data centre. It is therefore pertinent to examine how the heat released by this equipment can be managed. Developing an improved airflow design is one method of achieving this. The improved airflow design introduces separate hot and cold aisles, with the hot aisle isolated to the ceiling return plenum. This is done to avoid recirculation and mixing of air streams which, if allowed, would increase the energy consumption of the data centre because of the additional cooling required. This paper focuses on the effect of an improved airflow design on a data centre, using a case study to evaluate the effect of this design on the energy efficiency of the data centre.

Keywords: Data centre, airflow, Power Usage Effectiveness, Energy efficiency

  1. INTRODUCTION

    The data centre is a facility hosting a large number of servers dedicated to massive computation and storage. Data centres are used for different purposes, including interactive computation (e.g., web browsing), batch computation (e.g., rendering of images and sequences), and real-time transactions (e.g., banking). A data centre can be seen as a composition of information technology (IT) systems and non-information technology systems, also known as the support infrastructure. The information technology systems provide services to the end users, while the non-information technology systems provide power and cooling. Information technology systems include servers, storage and networking devices, middleware and software stacks such as hypervisors, operating systems, and applications. The non-information technology systems include backup power generators, uninterruptible power supplies (UPSs), power distribution units (PDUs), batteries, and power supply units that generate and/or distribute power to the individual IT systems. The cooling technology (CT) systems, including server fans, computer room air conditioners (CRACs), chillers, and cooling towers, generate and deliver the cooling capacity to the IT systems [1].

  2. AIRFLOW MANAGEMENT

    Air management for data centres entails all the design decisions and design details that go into minimizing or eliminating mixing between the cooling air supplied to the equipment and the hot air rejected from the equipment. When designed correctly, an air management system can reduce operating costs, increase the data centre's density capacity, and reduce heat-related processing interruptions or failures [2]. Removing hot air immediately as it exits the equipment allows for higher capacity and much higher efficiency than mixing the hot exhaust air with the cooling air being drawn into the equipment. Poor air management will reduce both the efficiency and the capacity of computer room cooling equipment [3].

    1. Hot and Cold Aisle Separation

      The basic hot aisle/cold aisle arrangement is created when the equipment racks and the cooling system's air supply and return are designed to prevent mixing of the hot rack exhaust air and the cool supply air drawn into the racks. As the name implies, the data centre equipment is laid out in rows of racks with alternating cold (rack air intake side) and hot (rack air heat exhaust side) aisles between them. The aisles are typically wide enough to allow maintenance access to the racks and to meet any code requirements. All equipment is installed in the racks to achieve a front-to-back airflow pattern that draws conditioned air in from the cold aisles, located in front of the equipment, and rejects heat out through the hot aisles behind the racks. Figure 1.1 shows an example of a hot and cold aisle arrangement.

      Figure 1.1: An example of the Cold and Hot Aisle Configuration [3].

      With proper isolation, the temperature of the hot aisles no longer impacts the temperature of the racks or the reliable operation of the data centre; the hot aisle becomes a heat exhaust. The Heating, Ventilation and Air Conditioning (HVAC) system is configured to supply cold air exclusively to the cold aisles and to pull return air only from the hot aisles [2]. Figure 1.2 shows a sealed hot and cold aisle arrangement within the data centre.

      Figure 1.2: Sealed Hot and Cold Aisle Configuration [3].

    2. Flexible Barrier

      Using flexible clear plastic barriers, such as plastic supermarket refrigeration covers or other physical barriers, to seal the space between the tops of the racks and the ceiling or air return location can greatly improve hot aisle/cold aisle isolation while allowing flexibility in accessing, operating, and maintaining the computer equipment below. This design supplies cool air via an underfloor plenum (raised floor) to the racks; the air then passes through the equipment in the rack and enters a separated, semi-sealed area for return to an overhead plenum. This displacement system does not require that air be accurately directed or overcooled. This approach uses a barrier above the top of the rack and at the ends of the cold aisles to eliminate short-circuiting (the mixing of hot and cool air), as shown in Figure 1.2 [2].
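
      As a rough, illustrative aid (not part of the original study), the short Python sketch below uses a simple assumed mixing model to show why short-circuiting is costly: if a fraction of the hot exhaust air recirculates into the rack inlet, the CRAC units must supply progressively colder air to hold the inlet at its target temperature, which increases the cooling energy required. All temperatures and recirculation fractions are assumed values chosen only for illustration.

        # Minimal mixing-model sketch (assumed, illustrative only): a fraction "r" of the
        # hot exhaust air recirculates and mixes with the CRAC supply air at the rack inlet:
        #   T_inlet = (1 - r) * T_supply + r * T_exhaust
        # Solving for the supply temperature needed to hold the inlet at its target:
        #   T_supply = (T_inlet_target - r * T_exhaust) / (1 - r)

        def required_supply_temp(t_inlet_target, t_exhaust, recirc_fraction):
            """Supply air temperature (deg C) needed to keep the rack inlet at its target."""
            return (t_inlet_target - recirc_fraction * t_exhaust) / (1.0 - recirc_fraction)

        T_INLET_TARGET = 25.0  # deg C, desired rack inlet temperature (assumed)
        T_EXHAUST = 37.0       # deg C, hot aisle exhaust temperature (assumed)

        for r in (0.0, 0.1, 0.2, 0.3):  # recirculation fraction; 0 means perfect isolation
            t_supply = required_supply_temp(T_INLET_TARGET, T_EXHAUST, r)
            print(f"recirculation {r:.0%}: CRAC must supply {t_supply:.1f} deg C")

      With perfect isolation the CRAC can supply air at the inlet target itself; every additional 10% of recirculation forces a colder, and therefore more energy-intensive, supply air temperature.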

    3. Ventilated Racks

    The ideal air management system will duct cooling air directly to the intake side of the rack and draw hot air from the exhaust side, without diffusing it through the data centre room space at all [2].

    4. Cable Management

      A data centre should have a cable management strategy to minimise the airflow obstruction caused by cables and wiring. This strategy targets the entire cooling airflow path, including the rack-level information technology equipment air intake and discharge areas as well as under-floor areas [3].

  3. DATA CENTRE EFFICIENCY

    The development and implementation of energy-efficient resource management strategies in data centres has become a prerequisite for implementing an energy-efficient, green and environment-friendly data centre. The actual energy consumed by the data centre does not affect the cost of the infrastructure, but is reflected in the cost of the electricity consumed by the system during the period of operation [4]. Energy efficiency has become a significant metric that is progressively used to evaluate and measure the energy utilization of the devices installed in a data centre. Energy efficiency metrics and benchmarks are used to track the performance of a data centre in power and energy use at different levels [4]. The Green Grid has defined the power usage effectiveness (PUE) and data centre infrastructure efficiency (DCiE) metrics, which have been useful in understanding the energy usage of data centres. These metrics enable data centre operators to estimate the energy efficiency of their data centre, compare the results against other data centres, and determine whether any energy efficiency improvements need to be made [5].

  4. DATA CENTRE INFRASTRUCTURE EFFICIENCY (DCiE)

    The Data Centre Infrastructure Efficiency metric is defined as the ratio of the Information Technology equipment power to the total facility power [6]. The total facility power is defined as the power measured at the incoming utility meter. The Information Technology equipment power is the power consumed by the equipment supported by the data centre, as opposed to the power delivery components, cooling components and other miscellaneous loads [7]:

    DCiE = (IT Equipment Power / Total Facility Power) × 100%          (0.1)

  5. POWER USAGE EFFECTIVENESS (PUE)

    The Power Usage Effectiveness is defined as the ratio of the total data centre energy use to the Information Technology device energy use. The power usage effectiveness is a common metric that accounts for the electricity use of the infrastructure equipment [8]. The Power Usage Effectiveness indicates how much power is used by the facility infrastructure to power and cool the Information Technology equipment and to power the redundant distribution system required for maintaining the expected availability and reliability of the information factory services [9].

    Power Usage Effectiveness is a complex metric. In order to use this metric, it is important to understand, first, the load components and, second, the categories of measurement. The components are [5]:

    1. Information Technology Equipment Power: The Information Technology (IT) equipment power is defined as the power consumed by the equipment that is used to manage, process, store or route data within the raised floor space in the Data Centre [10]. It includes all the load associated with the Information Technology devices, such as servers, storage equipment, network equipment and computers, and the supplementary equipment, such as switches, monitors and laptops/workstations used to monitor or otherwise control the Data Centre.

    2. Total Facility Power: This is defined as the power measured at the utility meter. This power is dedicated solely to the Data Centre [10]. It includes all the Information Technology equipment power plus everything that supports the Information Technology equipment load, such as: power delivery components like Uninterruptible Power Supplies (UPS), switchgear, generators, batteries, Power Distribution Units (PDU) and so on; cooling system equipment such as chillers, computer room air conditioning (CRAC) units, direct expansion air handler (DX) units, pumps and cooling towers; and other miscellaneous components such as data centre lighting.

    There are two methods used to obtain data for calculating the Power Usage Effectiveness of the data centre. They are [11]:

    1. Estimating the power by using available information on the equipment, factoring in operational and ambient properties.

    2. Measuring the actual power consumption of the required components.

    The Power Usage Effectiveness is not a static value; it varies according to server and storage utilization. The best efficiency of the data centre is achieved as the PUE value reduces towards 1. Most data centres have an operational PUE value between 1.2 and 3, where a PUE of 1.2 indicates a very efficient data centre; on the other hand, a data centre with a PUE of 3 is considered very inefficient. The data centre infrastructure efficiency (DCiE) is defined as the inverse of the PUE value multiplied by 100, and its value varies between 0 and 100% [12]:

    PUE = (1 / DCiE) × 100          (0.2)

    where the DCiE is expressed as a percentage.
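
    To make the relationship between the two metrics concrete, the short Python sketch below computes the PUE and DCiE from the Information Technology equipment power and the total facility power, following equations (0.1) and (0.2). The function names and the power readings are illustrative assumptions, not measurements taken from the case study in this paper.

      def compute_pue(total_facility_power_kw, it_equipment_power_kw):
          """PUE = total facility power / IT equipment power (dimensionless, >= 1)."""
          return total_facility_power_kw / it_equipment_power_kw

      def compute_dcie(total_facility_power_kw, it_equipment_power_kw):
          """DCiE = (IT equipment power / total facility power) x 100, i.e. 100 / PUE, in percent."""
          return 100.0 * it_equipment_power_kw / total_facility_power_kw

      # Illustrative (assumed) readings:
      it_load_kw = 40.0        # servers, storage and networking equipment
      facility_load_kw = 85.0  # IT load plus UPS, PDUs, CRAC units, lighting, etc.

      print(f"PUE  = {compute_pue(facility_load_kw, it_load_kw):.2f}")    # about 2.13
      print(f"DCiE = {compute_dcie(facility_load_kw, it_load_kw):.1f} %") # about 47.1 %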

  6. CASE STUDY: RESULTS AND INTERPRETATION

    In carrying out this investigation, a data centre in northern Nigeria was used as a case study. The data centre was modelled and solved using the 6SigmaRoomLite software, which has a computational fluid dynamics (CFD) solver embedded within it. The data centre has an equipment room of 136 m² and a power room of 64 m², with three (3) cooling units of 17 kW each. The data centre was built on a raised floor. It consists of two rows of ten cabinets housing 120 items of information technology equipment, and the room has enough space to accommodate two additional rows of ten cabinets. The data centre was evaluated using the software's CFD solver and the energy efficiency was deduced. Table 1 shows the energy efficiency values of the existing data centre.

      Table 1: Energy Efficiency of the Existing Data Centre

      Room Summary: Energy Efficiency Measures

      Room Efficiency              47.1 %
      Power Usage Effectiveness    2.1233

      Maintaining the same structure and adding a closed hot aisle containment with a ceiling return plenum, the data centre was re-evaluated with the computational fluid dynamics solver of the 6SigmaRoomLite software. The result obtained is shown in Table 2.

      Table 2: Energy Efficiency Measures for the Improved Airflow Design.

      Room Summary: Energy Efficiency Measures

      Room Efficiency              62.3 %
      Power Usage Effectiveness    1.6042

      From the results in Table 2, the improved airflow design has a better efficiency: the energy efficiency in terms of power usage effectiveness was 1.6042, corresponding to a data centre infrastructure efficiency of 62.34%. Comparing Tables 1 and 2, it is clear that there is an improvement in the efficiency of the data centre, amounting to an improvement of 15.24 percentage points over the existing data centre.
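
      As a check on the reported figures, the short Python sketch below applies the DCiE = 100 / PUE relationship to the two simulated PUE values and computes the improvement; it also estimates, under the assumption that the IT load is unchanged between the two designs, the corresponding relative reduction in total facility power.

        pue_existing = 2.1233   # existing data centre (Table 1)
        pue_improved = 1.6042   # improved airflow design (Table 2)

        dcie_existing = 100.0 / pue_existing   # about 47.1 %
        dcie_improved = 100.0 / pue_improved   # about 62.3 %
        improvement_points = dcie_improved - dcie_existing   # about 15.2 percentage points

        # For a fixed IT load, total facility power is proportional to PUE, so the
        # relative reduction in total facility power is:
        facility_power_reduction = 1.0 - pue_improved / pue_existing   # about 24 %

        print(f"DCiE existing: {dcie_existing:.2f} %")
        print(f"DCiE improved: {dcie_improved:.2f} %")
        print(f"Improvement:   {improvement_points:.2f} percentage points")
        print(f"Facility power reduction at constant IT load: {facility_power_reduction:.1%}")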

  7. CONCLUSION

    Based on the evaluation carried out, it was found that improving the airflow design of the data centre improved its energy efficiency by 15.24 percentage points (in terms of data centre infrastructure efficiency). This shows that by eliminating recirculation and mixing of hot and cold air streams, a significant improvement in the energy efficiency of a data centre can be achieved.

  8. REFERENCES

  1. Parolini, L., Sinopoli, B., Krogh, B. H., & Wang, Z. (2011). A Cyber-Physical Systems Approach to Data Center Modeling and Control for Energy Efficiency. Proceedings of the IEEE, 100(1), 254-268.

  2. Mahdavi, R., Mathew, P., Sartor, D., Tschudi, B., Thomas, J., Bruschi, J., . . . Rumsey, P. (2012). Data Centers Best Practices Guide – Energy Efficiency Solutions for High Performance Data Centres (p. 80). Pacific Gas and Electric Company (PG&E).

  3. Bruschi, J., Rumsey, P., Anliker, R., Chu, L., & Gregson, S. (2010). Best Practices Guide for Energy-Efficient Data Center Design. United States Department of Energy, Energy Efficiency & Renewable Energy Information Center, 1-24.

  4. Uddin, M., Alsaqour, R., Shah, A., & Saba, T. (2013). Power Usage Effectiveness Metrics to Measure Efficiency and Performance of Data Centers. Applied Mathematics & Information Sciences: An International Journal, 8(5), 2207-2216.

  5. Belady, C., Rawson, A., Pfleuger, J., & Cader, T. (2008). Green Grid Data Center Power Efficiency Metrics: PUE and DCiE. The Green Grid, White Paper #6, 1-9.

  6. The Green Grid. (2007). The Green Grid Data Center Power Efficiency Metrics: PUE and DCiE. The Green Grid, 1-16.

  7. Newcombe, L. (2009). Data Centre Energy Efficiency Metrics – Existing and Proposed Metrics to Provide Effective Understanding and Reporting of Data Centre Energy. Data Centre Specialist Group, BCS (pp. 1-54). http://dcsg.bcs.org.

  8. Masanet, E., Brown, R. E., Shehabi, A., Koomey, J. G., & Nordman, B. (2011). Estimating the Energy Use and Efficiency Potential of U.S. Data Centers. Proceedings of the IEEE, 99(8), 1440-1452.

  9. Dumitru, I., Stamatescu, G., Făgărăşan, I., & Iliescu, S. S. (2013). Dynamic Management Techniques for Increasing Energy Efficiency within a Data Center. UNITE Doctoral Symposium, 1-5.

  10. Uddin, M., Talha, M., Rahman, A. A., Shah, A., Khader, J. A., & Memon, J. (2012). Green Information Technology (IT) Framework for Energy Efficient Data Centers Using Virtualization. International Journal of Physical Sciences, 7(13), 2052-2065.

  11. Dumitru, I., Hadj Said, Y., Făgărăşan, I., Iliescu, S., & Ploix, S. (2011). Increasing Energy Efficiency in Data Centers Using Energy Management. Paper presented at the 2011 IEEE/ACM International Conference on Green Computing and Communications.

  12. Almoli, A. M. (2013). Air Flow Management Inside Data Centres (Doctoral thesis). University of Leeds.
