AI For Energy Efficiency: How Google Applied Reinforcement Learning to Reduce Energy Spending

By Osama Kheshaifaty

SPE-KSA YP Vice-Chairperson and Reservoir Engineer at Saudi Aramco

The growing demand for cloud services is driving increased use of data centers. For modern data centers, managing excess heat is a critical challenge: cooling the running components is an energy-intensive process that adds a heavy cost burden and increases the carbon footprint. The data center cooling market is predicted to reach $20 billion by 2024, making an efficient solution to this problem a key objective for tech companies (Patrizio, 2017). For end users, reduced cooling costs translate into better services at lower prices.

Figure 1: Data Center Energy Consumption Breakdown for 2016 (Modern Intelligence, 2021)

Several attempts have been made in the past to solve the data center heating problem, such as relocating to cooler climates, or even placing data centers underwater, as Microsoft did in Project Natick (Benveniste, 2020). Google instead took an AI approach rather than undertaking a costly relocation of its data centers. By collaborating with DeepMind, a British AI company, Google implemented an algorithm that successfully reduced its data center cooling energy expenditure by 40% (DeepMind, 2016; Knight, 2018).

Figure 2: Microsoft testing an underwater data center (Alyssa Newcomb, Microsoft, 2016)

How it works

The algorithm:

Reinforcement Learning (RL) is a feedback-based machine learning paradigm, distinct from supervised and unsupervised learning, in which an agent is trained to perform a set of actions and the results are simulated. The agent receives positive feedback, or rewards, for favorable actions and is penalized for bad ones. In other words, such a model is used when a simulation model of the environment is available but no analytical solution exists, and information about the environment is collected by interacting with it (JavaTPoint, 2020).
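To make the reward-and-penalty loop concrete, the short Python sketch below trains a tabular Q-learning agent on a toy cooling task. The environment, the state and action names, and the reward values are hypothetical, chosen only to illustrate learning from feedback; they are not part of DeepMind's system.

import random

# Toy cooling environment (hypothetical): the agent picks a fan speed
# for each temperature band, trading energy cost against overheating.
STATES = ["cool", "warm", "hot"]       # discretized inlet temperature
ACTIONS = ["fan_low", "fan_high"]      # available control actions

def step(state, action):
    """Simulate one control interval; return (next_state, reward)."""
    if action == "fan_high":
        next_state = "cool" if state != "hot" else "warm"
        reward = -2.0                  # high fan speed costs more energy
    else:
        next_state = "hot" if state != "cool" else "warm"
        reward = -1.0                  # low fan speed is cheap...
    if next_state == "hot":
        reward -= 10.0                 # ...but overheating is penalized hard
    return next_state, reward

# Tabular Q-learning: Q[state][action] estimates long-run reward.
Q = {s: {a: 0.0 for a in ACTIONS} for s in STATES}
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

state = "warm"
for _ in range(5000):
    # Explore occasionally; otherwise exploit the best known action.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(Q[state], key=Q[state].get)
    next_state, reward = step(state, action)
    # Move the estimate toward reward plus discounted best next value.
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
    state = next_state

for s in STATES:
    print(s, {a: round(v, 2) for a, v in Q[s].items()})

After training, the learned values favor the high fan speed whenever the machine is warm or hot and the cheaper low speed when it is cool. No labeled examples are involved; the policy emerges entirely from simulated feedback.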

The system:

A system of neural networks was initially trained on historical operating data collected by sensors spread across the data center, including power draw, temperatures, setpoints, and pump speeds. The first network was trained to predict the average future Power Usage Effectiveness (PUE), the ratio of total building energy consumption to IT energy consumption; for example, a facility drawing 1.5 MW in total while its IT equipment consumes 1.2 MW has a PUE of 1.25. Two additional deep networks were then trained to predict the data center's pressure and temperature over the next hour, in order to simulate the effect of the first model's recommendations and ensure they would not exceed operational constraints.
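A rough sketch of this two-stage pipeline is given below, using scikit-learn on synthetic data. The feature set, the model sizes, the 27 °C inlet-temperature limit, and the grid of candidate actions are all illustrative assumptions; DeepMind's actual models and constraints are not public at this level of detail.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic sensor snapshots (hypothetical features): IT load (kW),
# outside air temperature (C), chilled-water setpoint (C), pump speed (%).
X = rng.uniform([200, 5, 10, 40], [800, 35, 20, 100], size=(2000, 4))

# Synthetic stand-ins for historical measurements: PUE worsens with load,
# hot weather, colder setpoints, and pump energy; inlet temperature falls
# with a colder setpoint and faster pumps.
pue = 1.05 + 0.0003 * X[:, 0] + 0.004 * X[:, 1] - 0.01 * X[:, 2] + 0.002 * X[:, 3]
inlet = 12.0 + 0.7 * X[:, 2] + 0.15 * X[:, 1] - 0.06 * X[:, 3]

# One network predicts average future PUE; a second predicts next-hour
# inlet temperature so recommendations can be checked against limits.
pue_model = make_pipeline(StandardScaler(), MLPRegressor(
    hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)).fit(X, pue)
temp_model = make_pipeline(StandardScaler(), MLPRegressor(
    hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)).fit(X, inlet)

# Sweep candidate control actions for the current conditions, keeping
# only those whose predicted temperature respects the assumed 27 C limit.
load, outside = 500.0, 25.0
best = None
for setpoint in np.arange(10.0, 20.5, 0.5):
    for pump in np.arange(40.0, 101.0, 5.0):
        candidate = np.array([[load, outside, setpoint, pump]])
        if temp_model.predict(candidate)[0] > 27.0:
            continue  # would violate the operational constraint
        p = pue_model.predict(candidate)[0]
        if best is None or p < best[0]:
            best = (p, setpoint, pump)

print("Recommended setpoint %.1f C, pump %.0f%% -> predicted PUE %.3f"
      % (best[1], best[2], best[0]))

The design point this sketch tries to mirror is the separation of roles: one model scores candidate actions by predicted PUE, while independent predictive models act as a safety filter so that only recommendations within operational limits are applied.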

The PUE control model was tested on a data center over the course of an operating day, with the model's control switched on and off during the test. The results are shown below:

Figure 3: PUE measured throughout a typical testing day (DeepMind, 2016)

The PUE graph shows the effectiveness of the ML control in lowering the energy consumption of the cooling process, cutting the energy used for cooling by 40%. Moreover, the algorithm produced the lowest PUE ever measured in the center's history. The potential of this approach goes beyond data center operations: other applications include increasing power plant energy conversion efficiency, reducing resource usage in semiconductor manufacturing, and improving manufacturing processes generally by increasing output per unit of input (DeepMind, 2016; Adhikari & Chen, 2021).

Conclusion

Google and DeepMind have demonstrated the remarkable potential of reinforcement learning for increasing operational efficiency in the tech industry. The significance of this application of ML goes beyond reducing cost: in the long run, implemented at larger scale across various industries, reinforcement learning will enable operators to lower energy consumption, reduce their carbon footprint, and, with the help of AI, move a step closer toward a sustainable future (ADG Efficiency, 2017).