A good DCIM, the all-seeing eye of LCL


LCL has relatively large data centers at three locations. For data centers of this scale, good management of the infrastructure is crucial for reliable, smooth and organized operation. We use sophisticated DCIM (data center infrastructure management) software, developed specifically for LCL, which includes our exact floor plans and the electrical diagrams for each data room. We work together with Perf-IT for this. Their application brings all our hardware together in one DCIM system: cooling systems from different brands, for example, can be integrated seamlessly.


What does a DCIM system mainly monitor? It checks all the important parameters in a data center: temperature, power consumption, capacity and efficiency. At LCL, a single system controls all these factors, so employees can monitor their proper functioning day and night at a glance. This central, comprehensive approach makes the information much clearer than having to consult several separate systems. In the event of a warning or problem, the application also gives an immediate indication of the cause, and the severity of the alert becomes apparent straight away, even remotely.

Regarding temperature, for example, we see the values in the various halls and their impact on each customer at a glance. In addition, the redundancy is closely monitored: everything must remain redundant, both for cooling and for power. Capacity management also becomes much simpler, both for the data center as a whole and for each customer individually. For example, when a customer consumes 80% of their available capacity, a warning may appear.
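The capacity warning mentioned above can be sketched in a few lines. This is a minimal illustration, not LCL's actual implementation: the 80% threshold comes from the example in the text, while all names and figures here are hypothetical.

```python
# Minimal sketch of a per-customer capacity warning. The 80% threshold
# comes from the example in the text; customer names and kW figures
# are hypothetical illustrations.

CAPACITY_WARNING_THRESHOLD = 0.80  # warn at 80% of contracted capacity

def capacity_warnings(customers):
    """Return a warning message for every customer above the threshold.

    `customers` maps a customer name to a dict with the contracted
    capacity and the currently measured consumption, both in kW.
    """
    warnings = []
    for name, usage in customers.items():
        ratio = usage["consumed_kw"] / usage["contracted_kw"]
        if ratio >= CAPACITY_WARNING_THRESHOLD:
            warnings.append(f"{name}: {ratio:.0%} of contracted capacity in use")
    return warnings

customers = {
    "customer-a": {"contracted_kw": 10.0, "consumed_kw": 8.5},
    "customer-b": {"contracted_kw": 20.0, "consumed_kw": 9.0},
}
print(capacity_warnings(customers))  # only customer-a triggers a warning
```

In a real DCIM system the consumption values would of course come from live flow and power meters rather than a static dictionary.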

Thanks to our DCIM system, our customers enjoy extensive and clear reporting, keeping them informed in detail about the status of their servers. Thanks to our extensive analyses, we can also warn them well in advance of developments that should be addressed. Excessive power consumption, for example, can erode the redundancy margin. In that case we contact our customers preventively and indicate the possible solutions.

A good DCIM system bears fruit not only for our customers, but also for LCL. The Power Usage Effectiveness (PUE) is closely monitored: the ratio between the total power drawn by the facility and the power consumed by customer IT equipment should be as close to one as possible. This way we avoid unnecessary costs, and it also helps us to work more efficiently and sustainably. For example, the cooling has already been fine-tuned: our energy consumption decreased significantly by changing the temperature control in the server rooms and by adjusting the rotation speed of the fans of the air-conditioning outdoor units. The DCIM application is also linked to invoicing, which means that the consumption per customer is thoroughly documented and calculations are made automatically.
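PUE itself is a simple ratio, which a short sketch makes concrete. The kW figures below are purely illustrative and not LCL measurements:

```python
# PUE (Power Usage Effectiveness) is the ratio of the total power a
# facility draws to the power delivered to customer IT equipment.
# A value of 1.0 would mean zero overhead for cooling, lighting, etc.
# The figures below are hypothetical illustrations.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Compute PUE; always >= 1.0 for a real facility."""
    return total_facility_kw / it_equipment_kw

# Example: 1300 kW drawn from the grid, 1000 kW reaching IT loads,
# so 300 kW is spent on cooling and other infrastructure overhead.
print(pue(1300.0, 1000.0))  # 1.3
```

Fine-tuning the cooling, as described above, lowers the numerator without touching the denominator, which is exactly what pushes the PUE closer to one.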


After almost two years, the DCIM project at LCL is nearly finalised. The project took a lot of time because it was tackled one site at a time. Moreover, it was preceded by many analyses: every power board and every flow meter was checked, for example. The customized user interface required a large investment, but we think it is more than worth it. In the future we will also have a mobile DCIM app, so that employees can consult all the information via their mobile phone. That is something to look forward to!

By Laurens van Reijen

The shift towards the edge


The data center world is evolving as the amount of data in the world is constantly increasing. New technologies such as the Internet of Things, blockchain, 5G and artificial intelligence require a different approach: they demand rapid response and real-time analysis. Extra data processing and storage capacity is therefore needed very close to the source of the data. That's what edge computing is about: storing, processing and analysing data as close as possible to the point where it is generated.

The shift towards the edge means a shift towards decentralised data centers. Transferring data to a centralised hyperscale cloud data center sometimes simply takes too much time. Pushing computation and analytical capabilities closer to the edge reduces traffic and cuts the round-trip delay of sending data to and from a centralised cloud platform for analysis. This results in better security, improved availability, more privacy and increased resiliency. Every city or region will need its own data center, so this will require a lot of extra data center space.
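To put an order of magnitude on that round-trip delay: light in optical fibre travels at roughly 200,000 km/s, so distance alone sets a hard floor on latency before any processing or queueing. A back-of-the-envelope sketch, with illustrative distances:

```python
# Back-of-the-envelope propagation delay: light in optical fibre travels
# at roughly 200,000 km/s (about two thirds of c), so distance alone
# puts a floor under round-trip latency, before any processing time.

FIBRE_SPEED_KM_PER_MS = 200.0  # ~200,000 km/s expressed per millisecond

def round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip time in milliseconds over `distance_km` of fibre."""
    return 2 * distance_km / FIBRE_SPEED_KM_PER_MS

# A hyperscale data center 1,000 km away vs. a nearby edge site 50 km away:
print(round_trip_ms(1000))  # 10.0 ms before any processing
print(round_trip_ms(50))    # 0.5 ms
```

Real-world latency is higher still (routing, switching, server processing), which only strengthens the case for keeping latency-sensitive workloads close to their users.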

Edge processing can raise network speed, reduce latency and help with capacity issues. Failures or congestion in networks can cause serious problems for machines, devices or the user experience. Think about Pokémon Go: people all over the world were walking around with their smartphones trying to catch 'em all. Who would like it if the connection went down at the exact moment they were catching a rare Pokémon? The same goes for smart watches: the output is needed immediately, so there's no time to send all the data to the cloud to be analysed.

Another example is autonomous cars. These self-driving vehicles will produce an enormous amount of data and will exchange information with each other. If one car detects a pothole in the road, it sends this information to the next car, which will adapt its suspension at the exact location of the pothole. Processing data like this must happen within a few milliseconds, or accidents will happen. That's why the processing needs to happen very close to the point of usage. Availability is key here.

The data center world is evolving, but so is LCL. We are ready for the shift towards the edge. We’re connected in three cities in Belgium: Antwerp, Aalst and Brussels. Our data centers are scalable and flexible and have all the necessary components for security, cooling, energy … already in place. We’re striving for maximum availability and reliability.

Pros and cons of outsourcing your datacenter

The most recent Business Meets IT seminar earlier this month focused on datacenters, so naturally it caught our attention. Keynote speaker of the day was Luc Verbist, CIO of media group De Persgroep. After a presentation of their own datacenters (two fully redundant DCs with four cubes in total), he also shared his thoughts on internal versus external datacenters. Some of the arguments sound very familiar: if you need a 24x7 operation, you are more likely to outsource your datacenter. If you don't have enough critical mass, you will too. But the decision will also depend on other variables, such as the availability of skilled resources, building restrictions and regulations, and whether your company has an opex or capex strategy. There are many variables, but more often than you'd expect, you will be driven towards outsourcing.


Other thoughts worth mentioning: "Experienced project and maintenance teams as valuable as the product itself" (Serge Bogaerts from Cenaero) and "In the year 2000 only Walmart had 200 terabyte worth of data, nowadays any average company with over 1.000 employees already has more than 200 terabyte of data." (William Visterin, Smart business) And this evolution will only accelerate, so data centers and their suppliers can rest assured: there are challenging times ahead. Not challenging as in 'will we have enough business?' but as in: 'how will we manage to keep on growing faster and faster?' A challenge that we gladly accept and that we're already tackling today.