A good DCIM, the all-seeing eye of LCL


LCL operates relatively large data centers at three locations. At this scale, good infrastructure management is crucial for reliable, smooth and organized operation. We use sophisticated DCIM software developed specifically for LCL, which includes our exact floor plans and the electrical diagrams for each data room. We work together with Perf-IT on this. Their application brings all our hardware together in one DCIM system: cooling systems from different brands, for example, can be seamlessly integrated.


What does a DCIM mainly watch? It checks all the important parameters in a data center: temperature, power consumption, capacity and efficiency. At LCL, one system monitors all of these factors, so employees can verify their proper functioning day and night at a glance. This central, comprehensive approach makes the information much clearer than having to consult several separate systems. In the event of a warning or problem, the application also immediately indicates the likely cause, and the severity of the alert becomes apparent straight away, even remotely.

Take temperature, for example: we see the values in the various halls, and their impact on each customer, at a glance. Redundancy is also closely monitored; everything must be able to operate redundantly, in terms of both cooling and electricity. Capacity management becomes much simpler as well, both for the data center as a whole and for each customer individually. For example, when a customer consumes 80% of his available capacity, a warning can appear.
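The 80% figure comes from the article, but the function and data below are purely illustrative, not LCL's actual DCIM code. A minimal sketch of such a capacity-threshold check:

```python
# Illustrative capacity-threshold check; names and figures are invented.
WARNING_THRESHOLD = 0.80  # warn once 80% of contracted capacity is in use

def capacity_warnings(customers):
    """Return the names of customers whose power draw meets the threshold.

    `customers` maps a customer name to (current_kw, contracted_kw).
    """
    warnings = []
    for name, (current_kw, contracted_kw) in customers.items():
        if current_kw / contracted_kw >= WARNING_THRESHOLD:
            warnings.append(name)
    return warnings

customers = {
    "alpha": (40.0, 50.0),  # 80% of contracted capacity -> warning
    "beta":  (12.0, 50.0),  # 24% -> fine
}
print(capacity_warnings(customers))  # -> ['alpha']
```

In a real DCIM the readings would come from power meters rather than a dictionary, but the alerting logic is essentially this comparison.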

Thanks to our DCIM system, our customers enjoy extensive and clear reporting, keeping them informed in detail about the status of their servers. Thanks to our extensive analyses, we can also warn them well in advance of developments that are best addressed early. Excessive power consumption can, for example, leave too little redundancy margin. In that case we contact our customers preventively and point out the possible solutions.

A good DCIM system bears fruit not only for our customers, but also for LCL. The Power Usage Effectiveness (PUE) is closely monitored: after all, the ratio between the total power drawn by the facility and the power consumed by the IT equipment should be as close as possible to one. This way we avoid unnecessary costs, and it helps us work more efficiently and sustainably. The cooling, for example, has already been fine-tuned: our energy consumption decreased significantly after we changed the temperature control in the server rooms and adjusted the fan speed of the air-conditioning outdoor units. The DCIM application is also linked to invoicing, so consumption per customer is thoroughly documented and calculations are made automatically.
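PUE is conventionally defined as total facility power divided by IT-equipment power, so 1.0 would mean every watt goes to the IT load. A tiny illustration of the calculation (the figures are invented, not LCL's):

```python
def pue(total_facility_kw, it_load_kw):
    """Power Usage Effectiveness: total facility power / IT load power."""
    return total_facility_kw / it_load_kw

# Invented example: 10 MW total draw, of which 7.5 MW reaches the servers.
print(round(pue(10_000, 7_500), 2))  # -> 1.33
```

The 2.5 MW gap in this example is the overhead (cooling, power distribution, lighting) that efficiency work tries to shrink.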


After almost two years, the DCIM project at LCL is nearly finalised. The project took a lot of time because it was tackled one site at a time. Moreover, many analyses preceded it: every power board and every flow meter was checked, for example. The customized user interface required a large investment, but we think it is more than worth it. In the future we will also have a mobile DCIM app, so that employees can consult all the information via their mobile phone. That is something to look forward to!

By Laurens van Reijen

Datacenters may like it hot(ter)

Electricity used for cooling accounts for one third, and sometimes even half, of a data center's total electricity bill. For a datacenter of about 3,000 m² consuming about 10 MW, the annual electricity cost for cooling can rise to 3 to 6 million euros, according to calculations made less than a year ago by a research team at the University of Toronto.
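The order of magnitude can be reproduced with back-of-the-envelope arithmetic. The electricity price below is an assumption for illustration (the article gives only the 10 MW load and the one-third-to-one-half cooling share); at higher tariffs the range moves toward the study's 3 to 6 million euros:

```python
# Back-of-the-envelope reproduction of the cooling-cost range.
POWER_MW = 10
HOURS_PER_YEAR = 8760
PRICE_EUR_PER_MWH = 100  # assumed price, roughly 0.10 EUR/kWh

annual_bill_eur = POWER_MW * HOURS_PER_YEAR * PRICE_EUR_PER_MWH
cooling_low = annual_bill_eur / 3   # cooling = one third of the bill
cooling_high = annual_bill_eur / 2  # cooling = half of the bill
print(f"{cooling_low/1e6:.1f} to {cooling_high/1e6:.1f} million EUR")
# -> 2.9 to 4.4 million EUR
```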
Research by the Department of Computer Science, University of Toronto

The same research team investigated what the effect on system reliability would be if you raised the temperature by a few degrees, a measure that could lead to considerable savings on the electricity bill.

The results are quite revealing. Without going into too much detail, the effects of turning up the temperature by several degrees are far smaller than generally assumed. There is little or no correlation between higher temperatures and DRAM failures or node outages. And the correlations between higher temperatures on the one hand and latent sector errors and disk failures on the other are far weaker than expected.

What seemed to affect hardware reliability far more, however, were large variations in temperature. So rather than keeping the temperature as low as possible, datacenter operators should strive to keep the datacenter temperature as consistent as possible.
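One way to act on that finding is to watch the spread of temperature readings rather than only their level. A minimal sketch (the stability threshold is invented for illustration, not taken from the study):

```python
import statistics

def temperature_stable(readings_c, max_stddev=1.5):
    """Flag a series of temperature readings as unstable if they swing
    too much, regardless of their average level.

    `max_stddev` (in degrees Celsius) is an illustrative threshold.
    """
    return statistics.stdev(readings_c) <= max_stddev

steady = [24.8, 25.1, 25.0, 24.9, 25.2]    # warm but constant
swinging = [21.0, 27.5, 22.0, 28.0, 23.5]  # cooler on average, but unstable
print(temperature_stable(steady))    # -> True
print(temperature_stable(swinging))  # -> False
```

Note that the steady series passes even though it runs warmer than the swinging one, which matches the study's point about consistency over coldness.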

Does this mean that one can allow the temperature to rise to 50° Celsius or more? Not exactly, because a rising temperature would lead to a dramatic increase in the power consumption of individual servers, if only because of their increasing fan speeds. That way, the positive effect of a lower overall cooling cost would be neutralized by the higher cost of the servers' power consumption.

But the results do indicate that the average datacenter temperature may well be turned up a few degrees, and that monitoring efforts should focus more on keeping the temperature constant than on keeping it low. By the way: temperatures that are too low are harmful to your hardware as well.

The scientists were reluctant to provide specific advice on ideal datacenter temperatures, but they did find the results encouraging and worth some further investigation.

Until recently, the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) recommended an inlet temperature range of 18 to 27 degrees Celsius as safe for the equipment, but if the above research is confirmed, the recommended range may be expanded significantly.

Cooling of datacenter in Belgium