What will the sustainable data center of the future look like?

Data centers are required to become increasingly sustainable as time marches on. Rather than waiting for such requirements to be imposed from outside, the industry wants to be proactive. How do experts in the field envision the data center of the future? We explore this in a round-table discussion with NorthC Datacenters, Eurofiber, Dell Technologies, and Schneider Electric.

Four experts from these companies will be joining us for the discussion. Ronald van den Bosch is Group Director Projects at NorthC Datacenters, Jan Bonke is Business Innovation Officer at Eurofiber Netherlands, Ruud Mulder is Country Pre-sales & Solutions Architecture Manager Netherlands at Dell Technologies, and Loek Wilden is Offer Manager Secure Power BeNe at Schneider Electric.

We have already discussed the current state of affairs regarding data centers and sustainability at length with these specialists. Now we will continue discussing the near future and prospects for this industry. The previous report can be read below:

Tip: Sustainability in data centers: where do things stand?

Passports and PIN transactions

“There is no method that easily describes the energy efficiency of a card transaction or the sale of a loaf of bread,” says Ronald van den Bosch of NorthC. “Even if you are standing on a train platform, you’re using electricity, because a screen tells you when the train is coming.” This realization is of great importance. Energy is lost with every form of consumption. Or, to put it more accurately: energy is partly converted into heat, rather than only into the work it was actually intended for. That waste heat will have to be reused more often, although utilizing it is expensive. So where is the opportunity?

“Liander has suggested that 2,000 megawatts of capacity will be needed for data centers in the coming years,” says Jan Bonke of Eurofiber. “If a comparable demand for that energy exists nearby, you can create a win-win situation.” In other words: killing two birds with one stone. The extra consumption can be organized in such a way that the energy is used optimally and recycled. To do this, data centers must be close to the energy source, says Bonke, and also close to the place where the residual energy is used. This limits transport costs.

The pricing model for customers is also changing. Currently, these clients are charged based on their energy efficiency. However, Bonke states that the next step is to work with green certificates to reward the percentage share of green energy and a measurable amount of residual heat. “Going green is fun, but it should also be good for your wallet,” says Bonke. We will come back to that later.

From left to right: Loek Wilden (Schneider Electric), Ronald van den Bosch (NorthC Datacenters), Sander Almekinders (round table moderator), Jan Bonke (Eurofiber), Ruud Mulder (Dell Technologies)

The energy issue does not only affect operators and suppliers. If a customer uses a lot of energy, they will get a bill to match, says Van den Bosch. “I [as a supplier] have to provide the customer with all kinds of details about my infrastructure. That includes everything from air conditioning to the specific energy consumption of the server racks. The customer is then able to reduce that energy consumption.” This can be done, for example, by replacing servers sooner than before, a departure from the long upgrade cycles of yesteryear.

Ruud Mulder of Dell Technologies calls for the sustainability of equipment to be made measurable in great detail. This can be done by means of a digital passport showing where all the materials come from and how recyclable they are. He thinks there is still much room for improvement in this area: future designs can be made easier to recycle by separating plastic and gold from each other, refurbishing components, and more. Upgrading sooner is often attractive anyway, as more computing power is required for ambitious AI plans and the efficiency of chips increases with each generation. “The transition to AI means that you sometimes have to say goodbye to your equipment sooner,” says Mulder.
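
As an illustration of what such a digital passport could contain, here is a minimal sketch in Python. The structure and field names are our own assumptions for illustration, not an actual Dell specification.

    from dataclasses import dataclass, field

    @dataclass
    class MaterialEntry:
        """One material in the product, with origin and recyclability."""
        name: str              # e.g. "gold" or "ABS plastic"
        mass_grams: float
        origin: str            # country or supplier of origin
        recyclable_pct: float  # share that can be recovered, 0-100

    @dataclass
    class DigitalProductPassport:
        """Hypothetical digital passport for a piece of IT equipment."""
        serial_number: str
        product_type: str
        materials: list[MaterialEntry] = field(default_factory=list)

        def recyclable_share(self) -> float:
            """Mass-weighted recyclable percentage of the whole product."""
            total = sum(m.mass_grams for m in self.materials)
            if total == 0:
                return 0.0
            return sum(m.mass_grams * m.recyclable_pct for m in self.materials) / total

    # Example with made-up figures:
    server = DigitalProductPassport(
        serial_number="SN-0001",
        product_type="1U server",
        materials=[
            MaterialEntry("steel", 8000, "EU", 95.0),
            MaterialEntry("ABS plastic", 1200, "Asia", 60.0),
            MaterialEntry("gold", 2, "recycled", 99.0),
        ],
    )
    print(f"Recyclable share: {server.recyclable_share():.1f}%")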

The AI issue is highly relevant to the future of the modern data center in any case. The technology will require new designs with different focal points, as we will discuss later. But first, it is essential to map out its exact impact on IT infrastructures. Loek Wilden of Schneider Electric points out that we still talk mostly about GenAI training, but that inferencing will dominate actual use. Measuring this impact will become a requirement in all sectors. Wilden anticipates a “significant increase in energy consumption”. This will affect all sectors, although not equally. He states that hospitals will probably go from 50 kW of power consumption for their computing equipment to 150 kW. All of this will proportionally increase cooling requirements.
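
To make the proportionality concrete: virtually all electrical power drawn by IT equipment ends up as heat, so the cooling installation must remove roughly as much as the IT load. Below is a back-of-the-envelope sketch using the hospital figures above; the coefficient of performance (COP) of 4 is an assumed value.

    def cooling_estimate(it_load_kw: float, cop: float = 4.0) -> tuple[float, float]:
        """Estimate the heat to remove (kW) and the electrical power the
        cooling system itself draws, given its coefficient of performance."""
        heat_kw = it_load_kw  # ~100% of IT power is dissipated as heat
        return heat_kw, heat_kw / cop

    for it_kw in (50, 150):  # the hospital example: before and after AI
        heat, cooling = cooling_estimate(it_kw)
        print(f"IT load {it_kw} kW -> remove {heat} kW of heat, "
              f"~{cooling:.1f} kW of cooling power")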

Speaking of AI

There are various preconditions for a successful AI rollout. One of these is simply housing hundreds of GPUs in a data center. Mulder says that the network layer must be better organized for this. GPUs quickly fall out of sync, which wastes power and generates heat. This problem only becomes more apparent as computing power increases and connectivity fails to keep pace.

Operators have already made many efficiency gains, significantly improving overall sustainability. For example, NorthC uses sensors to continuously measure whether the airflow is correct, which results in an efficiency gain of 6 to 8 percent. The system easily pays for itself, says Van den Bosch. Wilden adds that AI can reduce network congestion, letting existing connectivity prevent bottlenecks where possible. Gains that require no hardware replacement are extremely lucrative.
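
NorthC does not disclose how its system works, but the principle is easy to illustrate: compare measured airflow against a setpoint per position and trim the air handling, instead of overprovisioning it. A minimal sketch; all names, setpoints, and thresholds below are hypothetical.

    # Hypothetical airflow readings per aisle, in m^3/h.
    readings = {"aisle-1": 950, "aisle-2": 1210, "aisle-3": 1020}
    SETPOINT = 1000   # assumed target airflow per aisle
    TOLERANCE = 0.05  # 5% deviation allowed before intervening

    for position, flow in readings.items():
        deviation = (flow - SETPOINT) / SETPOINT
        if abs(deviation) > TOLERANCE:
            action = "throttle fans down" if deviation > 0 else "increase airflow"
            print(f"{position}: {deviation:+.1%} off setpoint -> {action}")
        else:
            print(f"{position}: within tolerance")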

Eurofiber’s Bonke suggests that storage will increasingly be separated from computing. In a distributed setup, it is much easier to scale and to deal flexibly with changing IT requirements. It also means that customers would have to use each other’s hardware, so that congestion can be eliminated ‘on paper’. While Bonke sees that Eurofiber locations have enough breathing room for the workloads, capacity has been contracted away, which stands in the way of optimal utilization. “We’re actually going back to the old days,” he says. “Sharing resources, without everyone having their own server rack, is the future.” This idea ties in with the broader perspective that emerges during the conversation: the opportunities for optimization are everywhere and the will is there, but what else is needed to realize the most positive future possible?

“We must improve our measurements, including in the software layer,” Bonke emphasizes. Even with all the work that has already been done, such as the NorthC example mentioned above, some parties need to measure more. As the discussion shows, things must also go beyond an evolutionary policy. “These improved insights sometimes lead to radical choices in data center design,” Bonke continues. One of those choices concerns cooling. Is water cooling the future, especially given the ever-increasing wattages per rack that must be kept in check? Opinions are mixed. “There are different use cases,” explains Van den Bosch. “Water cooling is ideal for some applications, but it is not a universal solution.”

Water cooling will not even be a one-to-one replacement for air cooling. Schneider’s Wilden predicts: “The increase in liquid cooling will actually lead to more sales of air cooling.” Because the overall market is growing so fast, he explains, even the 20-30 percent share that remains air-cooled will translate into more sales in absolute terms for all manufacturers, including those offering traditional cooling. IT components that rely on air cooling will always be needed: GPU-filled racks also contain CPUs, SSDs, and other equipment that are not in the water loop.

Feasibility: Rewarding good behavior?

We must look beyond just the hardware. Customers do what is best for them, which sounds more negative than it needs to be. The experts repeatedly mention energy certificates as a way to reward good behavior among customers. “Everything must be precisely measurable,” says Mulder of Dell. “And the responsibility must be clear. Who does what in the chain of sustainability?” Certificates are available for data center locations themselves, but green IT can be viewed more broadly: customers ask parties such as NorthC, Eurofiber, Dell, and Schneider Electric how sustainable they are, and those customers are in turn asked the same question themselves.

This step towards sustainability involves all sectors. “We are talking about fully circular hardware, smaller data centers, and shifting workloads,” Wilden of Schneider Electric summarizes. The rise of edge computing also plays a role in shifting the focal points of IT consumption. “The complexity is increasing, but we like that,” says Wilden. “It offers opportunities.” That is to say: opportunities for differentiation to tackle new bottlenecks, wherever they may be.

The aforementioned software layer offers insights that form a springboard to other areas of improvement. Mulder of Dell Technologies describes it as “the power of data.” Consider predicting equipment failures before they occur, thanks to the predictive capacity of AI applications. “Hyperscalers already have practical examples of this,” Wilden says. It is a matter of time (although it is not certain how much time) before this becomes democratized.
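
In its simplest form, “the power of data” could look like the rolling statistical check below: telemetry that drifts well outside its recent pattern is flagged before it becomes a hard failure. This is a generic sketch of the idea, not the hyperscaler implementations Wilden refers to.

    import statistics

    def anomaly_alerts(temps: list[float], window: int = 10, z_limit: float = 3.0):
        """Flag readings that deviate strongly from the recent rolling window.
        The three-standard-deviation threshold is an assumed value."""
        alerts = []
        for i in range(window, len(temps)):
            recent = temps[i - window:i]
            mean, stdev = statistics.mean(recent), statistics.stdev(recent)
            if stdev > 0 and abs(temps[i] - mean) / stdev > z_limit:
                alerts.append((i, temps[i]))
        return alerts

    # Stable drive temperatures followed by a jump that merits investigation:
    telemetry = [40.1, 40.3, 40.0, 40.2, 40.1, 40.4, 40.2, 40.0, 40.3, 40.1, 47.8]
    print(anomaly_alerts(telemetry))  # -> [(10, 47.8)]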

Feasibility of innovation

How feasible is the intended innovation? Loek Wilden of Schneider Electric notes that liquid cooling has been on the rise for a long time. This efficiency drive arose out of necessity, but has already shown that the design of data centers can change radically. “But it is no longer the easy things that you can improve. We need structural cooperation between hardware vendors and infrastructure players.”

Bonke adds: “The next step is to look at what TNO is doing. They are working on an orchestration layer that allows you to first load server 1 until its resources are fully utilized, and then switch to a second one. That increases efficiency enormously.” Once again, this requires redesigning what is ‘yours’ in a data center location.
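
The orchestration idea Bonke describes resembles classic first-fit packing: fill one server completely before bringing the next online, so the rest can idle or power down. Below is a minimal sketch of that placement logic, assuming abstract resource units; the TNO layer itself is of course far more sophisticated.

    def place_workloads(demands: list[float], capacity: float = 100.0) -> list[list[float]]:
        """First-fit placement: keep filling the current server until a
        workload no longer fits, then bring an additional server online."""
        servers: list[list[float]] = [[]]
        loads: list[float] = [0.0]
        for demand in demands:
            # Try existing servers in order, so earlier ones fill up first.
            for i, load in enumerate(loads):
                if load + demand <= capacity:
                    servers[i].append(demand)
                    loads[i] += demand
                    break
            else:  # nothing fits: power on a new server
                servers.append([demand])
                loads.append(demand)
        return servers

    print(place_workloads([60, 30, 50, 20, 40]))
    # -> [[60, 30], [50, 20], [40]]: three active servers instead of five half-idle ones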

Five men sit around a table in a meeting room with warm-colored walls and large windows, engaged in a discussion about data center strategies. Laptops, papers, and drinks are spread across the table.

“In colocations and enterprise data centers, we are indeed seeing water cooling for chips more and more often,” says Van den Bosch. “We offer flexible solutions, where customers of a certain size can be offered a separate solution. We can adapt to demand. Ultimately, it is the customers who ask the real questions.” This process continues unabated.

Despite innovative alternatives such as liquid cooling and direct-to-chip, some customers still cling to traditional air cooling. The tendency to cling to the old is understandable, and sometimes the transition is simply not necessary for the customer in question. The experts clearly understand this; not every customer is the same. Mulder does expect that calculation to change over time. “Both the data center and the equipment must be flexible to make the transition.”

PUE and context

PUE (Power Usage Effectiveness) receives a lot of attention and is a demonstrable metric. However, more context is needed for sustainability that is real and not just on paper. Bonke has already mentioned the move to smaller data centers and more edge computing. This should also lead to fewer back-and-forth AC/DC conversions, as discussed earlier, and that alone will yield significant efficiency gains, he believes.
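
PUE is the ratio of total facility energy to the energy that actually reaches the IT equipment; an ideal facility scores 1.0. The sketch below also shows why fewer conversions help: each AC/DC stage loses a few percent, and the losses multiply. All efficiency figures are assumed for illustration.

    def pue(total_facility_kwh: float, it_kwh: float) -> float:
        """Power Usage Effectiveness: total energy over IT energy, ideal = 1.0."""
        return total_facility_kwh / it_kwh

    print(f"PUE: {pue(1_300_000, 1_000_000):.2f}")  # -> 1.30

    # Cascaded AC/DC/AC/DC conversions, four stages at 95% each (assumed):
    chain = 1.0
    for efficiency in [0.95, 0.95, 0.95, 0.95]:
        chain *= efficiency
    print(f"Power surviving the chain: {chain:.1%}")  # -> 81.5%
    # Dropping one conversion stage recovers roughly 4 percentage points.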

Van den Bosch emphasizes the importance of radical choices. He calls Water Usage Effectiveness “also a question of sustainability.” At NorthC, they have noticed that evaporating high-quality drinking water is actually a terrible waste; there is simply no alternative network for water that is not drinkable. “All new NorthC designs are therefore waterless.” He recognizes that efficiency is higher in the summer with water cooling, but, again, he measures it after the fact.
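
For reference, WUE is defined analogously to PUE: liters of water consumed per kilowatt-hour of IT energy, where lower is better and a waterless design scores zero. The figures below are illustrative.

    def wue(water_liters: float, it_kwh: float) -> float:
        """Water Usage Effectiveness: liters of water per kWh of IT energy."""
        return water_liters / it_kwh

    # Evaporative cooling versus a waterless design (illustrative figures):
    print(f"Evaporative: {wue(1_800_000, 1_000_000):.2f} L/kWh")  # -> 1.80
    print(f"Waterless:   {wue(0, 1_000_000):.2f} L/kWh")          # -> 0.00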

If these positions seem hard to reconcile, that is because making a product more sustainable results in a more complex picture than simply switching to a new type of cooling. The dimensions of chips are also relevant here, Bonke notes. After all, if chips are designed optimally for water cooling, you want to switch to direct-to-chip, i.e. without the classic IHS (integrated heat spreader) that is needed for heat transfer with air cooling. This is an area where only a few companies have influence, but even they follow the wishes of customers.

Van den Bosch continues: “In Eindhoven we have gas engines that can run on hydrogen. They are a lot more stable than PEM cells. It is a completely different technology that you have to explain to your customers. Some parties are still looking for diesel and want to see two generators.” In other words: the same old story. We will have to move away from that in the long run, Van den Bosch suggests. “Sometimes you have to be disruptive.” He explains that natural gas can be used as an independent source for the power grid where necessary, even though power outages in the Netherlands are minimal.

Responsibility

A crucial question remains: who will take the initiative for these changes? “The customer has to want it,” says Bonke. Mulder adds that phased upgrading should be possible for the customer.

Bonke argues that the step to high-density server racks should not be taken lightly: “High-density with a high cooling requirement is not high-density for the data center.” A lot of space is still needed for actual cooling, as the cooling tiles only have a certain capacity. Therefore, distribution can still be improved, which also helps with redundancy. “That is the best solution for both parties.”
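
Bonke’s point can be made concrete with a simple capacity check: if every cooling tile handles a fixed heat load, the cooling footprint is set by the total heat, not by how densely the racks are packed. All numbers below are hypothetical.

    import math

    TILE_CAPACITY_KW = 5.0  # assumed cooling capacity per perforated floor tile

    def tiles_needed(rack_kw: float) -> int:
        """Tiles' worth of cooling one rack of a given density requires."""
        return math.ceil(rack_kw / TILE_CAPACITY_KW)

    # The same 100 kW of IT load, packed densely versus spread out:
    dense = [50, 50]    # two 50 kW racks
    spread = [10] * 10  # ten 10 kW racks
    for name, racks in (("dense", dense), ("spread", spread)):
        print(f"{name}: {len(racks)} racks, "
              f"{sum(tiles_needed(r) for r in racks)} tiles of cooling")
    # Both need 20 tiles: high density does not shrink the cooling footprint.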

Van den Bosch emphasizes the importance of looking at third parties. “We need to keep an eye on suppliers. We would like to connect wind farms directly to data centers.” This requires coordination, which the companies represented at the table are clearly willing to work on. He also sees possibilities for heat storage: “Storing heat in the ground and using it throughout the year. This requires clear agreements, but the buyer has the advantage.”

Van den Bosch acknowledges that data center operators would like to transfer their heat, but that this runs into all kinds of problems. “If we exchange the heat in the winter, someone else will save CO2.” NorthC Datacenters is willing to cooperate, “but there has to be a business case for it.”

European cloud and edge computing

The discussion shifts to the European cloud strategy. “There are 10,000 locations [in Europe] that need to be set up and connected,” says Bonke. “This is an opportunity to look again at the best locations and the best set-up.” Mulder adds: “But we don’t always need to go to the data center. The edge trend seems to lead to fragmentation: data is everywhere. The required speed cannot be achieved if data has to be continuously sent to a central location.”

“Edge data centers were once conceived as small locations in neighborhoods,” Van den Bosch recalls. “But generally we see large agglomerations spread across the country, which is already a kind of ‘edge’.” Wilden makes a distinction between ‘regional edge’ and ‘local edge’. Mulder thinks a regional approach could take the pressure off the large data centers. “Ideally, you shift a certain percentage of the workloads to locations where there is a demand for energy,” Bonke concludes.

The role of AI and visions of the future

“It is fiction to think that AI will provide a direct overview in this regard,” warns Mulder. Bonke adds: “Private environments in the medical sector will continue to exist.” Regulations are yet another aspect that should not be forgotten in the considerations of IT decision makers.

Bonke outlines a vision for the future: “We are moving towards a distributed data center concept. Places where energy transition is possible, where surplus can be utilized. If cooling can be done at 50-60 degrees Celsius, the residual heat is much more useful.” He sees possibilities in district heating: “That is still not working because you need heat pumps. It has to be above 30 degrees and we have to be able to get rid of the chillers and refrigerants. This will result in huge efficiency gains, especially in the winter.”

Balance and challenges

In many cases, the balance between temperature, consumption, and utilization has already been found. Pain points remain: allocation, the role of customers, and the ‘every man for himself’ principle. In addition, the question remains: how do you measure it? What is the energy cost of a PIN transaction or of waiting for the train?

Ambitions vary from small to large, from immediate solutions to long-term prospects. Hydrogen-powered generators and liquid cooling anticipate the rise of AI. “The next bubble is already on its way,” says Mulder of Dell. Not all data centers are ready, but the technology is rushing towards us. Facilitating what is needed is becoming a structural task. AI can also provide solutions of its own, such as reducing grid congestion.

Residual heat remains a hot topic, now and in the future. The ideal size of data centers is evolving; they are not yet located in neighborhoods, but data is everywhere. The experts hint at a different setup, from large central data centers to regional ones. One size will never fit all, so flexibility will be all the more important in the future. This also offers room for IT players to approach other organizations.

Sustainable architectural choices point in multiple directions, depending on the decisions organizations face. To cloud or not to cloud? The medical world with its “AI boxes”? And accountability remains crucial: eco-labels and an end-to-end sustainable process. The fact that not everyone measures in the same way remains a challenge, although PUE has long been established as a starting point and yardstick for ambitious sustainability initiatives.

Finally, there is the sharing of computing environments. All experts agree that utilization must increase. The data center of the future must above all be flexible, in both the short and the long term, and measurably sustainable. In the radical reorganization of IT consumption, workloads will be placed where they can run optimally, efficient consumption will be rewarded, and there will be room for new, creative ideas.