Bombellii Ventures

Why Do Data Centers Matter?

We rarely think about the invisible systems that keep our world running. Until they fail. Then, suddenly, they’re all that matters, revealing how modern, reliable infrastructure is foundational to economic opportunity, access to critical services, and the well-being of citizens.

I learned this early in Algeria, where an unreliable internet connection wasn’t just a minor annoyance. As a child, I remember online games freezing mid-match. The same was true of attempts to access bank accounts, book medical appointments, or submit school assignments. Basic civic services, and the peace of mind they bring, became harder to reach.

Today’s AI revolution faces the same blind spot. We marvel at chatbots and LLMs, but rarely ask: where is this computation happening, and at what energy and environmental cost? The real workhorses behind the magic are data centers. These unremarkable buildings are the classrooms where AI learns. Yet these facilities are being pushed beyond their limits by two distinct but converging forces.

– Surging demand: Hundreds of millions of people now use AI daily. A single AI inference query can consume 30 times more energy than a standard web search. Multiply that by 800 million weekly ChatGPT users and the load becomes staggering, even for the roughly 12,000 data centers already in operation (a rough back-of-envelope calculation follows this list). Looking ahead, McKinsey estimates that global demand for data center capacity could more than triple by 2030, a compound annual growth rate of about 22%.

– Exploding compute needs: A single rack packed with AI accelerators now demands up to ten times more power than a standard server rack from just a few years ago. It is no surprise that global power demand from data centers is expected to increase by 165% by the end of the decade, according to Goldman Sachs.
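To make these demand figures concrete, here is a rough back-of-envelope calculation written as a short Python sketch. Only the 30x multiplier and the 800 million weekly users come from the figures above; the energy of a single web search and the number of queries per user are illustrative assumptions, so treat the output as an order of magnitude, not a measurement.

# Back-of-envelope estimate of weekly AI inference energy.
# Only the 30x multiplier and the 800M weekly users come from the text above;
# the per-search energy and queries per user are illustrative assumptions.

SEARCH_ENERGY_WH = 0.3            # assumed energy of one standard web search, in Wh
AI_QUERY_MULTIPLIER = 30          # an AI query ~30x a web search (cited above)
WEEKLY_USERS = 800_000_000        # weekly ChatGPT users (cited above)
QUERIES_PER_USER_PER_WEEK = 10    # assumed average usage

ai_query_energy_wh = SEARCH_ENERGY_WH * AI_QUERY_MULTIPLIER
weekly_energy_gwh = (ai_query_energy_wh * WEEKLY_USERS * QUERIES_PER_USER_PER_WEEK) / 1e9

print(f"Energy per AI query: {ai_query_energy_wh:.1f} Wh")
print(f"Weekly inference energy: {weekly_energy_gwh:.0f} GWh")
# Under these assumptions, inference alone lands around 70 GWh per week,
# before a single training run or any non-AI workload is counted.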

These two forces are driving record investment, and the trend is accelerating. Industry estimates suggest that by 2030, over $6.7 trillion in global investment will be needed to meet AI’s compute demands. The hyperscalers (Alphabet, Amazon, Microsoft, and Meta) are leading the charge with a projected combined $322 billion in investment this year alone, much of it going to next-generation facilities that consume energy at an unprecedented rate.

This isn’t just a technical or energy challenge. It’s about sovereignty, sustainability, and strategic autonomy. The U.S. government sees the stakes clearly. Its AI Action Plan, released in July 2025, emphasizes that AI’s future depends not only on smarter algorithms but also on the strength and readiness of the infrastructure that powers them. And much of that infrastructure still has a long way to go.

So, what does it take to meet the demands of AI-class computing? First, we need to understand the core challenges. Then, we must ask a more provocative question: Can AI help solve the very infrastructure problems it created?

 

Are Data Centers Ready for AI-Class Demand?

Let’s first acknowledge that the “Cloud” is not in the sky; it is deeply anchored in the physical world, made possible by data centers. These facilities are packed with servers that store and process the world’s data. Every text, stream, or app relies on these buildings. They power the essential digital services we all use: social networks, AI chatbots, and cloud platforms.

But data centers are also the final link in a long and complex global value chain, stretching from remote mines extracting copper and lithium, to high-precision factories producing semiconductors, power systems, and cooling units.

Every component is shaped by this chain. That means the reliability, scalability, and sustainability of the digital services we use every day are ultimately constrained by the realities of this physical supply network.

Let’s step behind the digital facade to unpack how this chain is built and why it faces mounting challenges.

– Everything starts in mines and factories. Raw materials such as steel, copper, rare earth magnets, and semiconductor precursors are extracted, processed, then shipped.

– Next comes site selection. The right location isn’t just about land; it’s about connectivity and power. No grid access, no data center. This is the current reality for many operators: site selection hinges on access to massive amounts of electricity, but in major hubs the grid is full. This is a transmission bottleneck. Today, more than 2,600 gigawatts of generation and storage projects are stuck in interconnection queues across the United States. Projects may be ready on paper, with land secured and servers ordered, but they cannot be energized because the grid simply cannot accommodate them. The bottleneck also wastes clean energy: in 2024, California curtailed 3.4 million megawatt-hours of solar energy, enough to power about 300,000 homes for a year.

– After selecting the site and navigating grid complexity, the design and construction phase begins. Builders raise the structure, lay deep foundations (often using modular designs to speed up deployment), and install the critical equipment supplied by OEMs: power systems, electrical distribution components, and the cooling infrastructure that forms the operational core of the facility. This step is crucial because cooling can account for around 40% of a data center’s total energy use. As workloads intensify, these systems are under unprecedented strain. Where traditional server racks once generated 3–5 kW of heat, today’s AI racks can push 30 to 150 kW, overwhelming legacy air-cooling infrastructure (see the back-of-envelope comparison at the end of this walkthrough). Bottom line: without a scalable, energy-efficient way to remove heat, increasing compute density is physically impossible, no matter how many GPUs are installed.

– Once the shell and the non-IT equipment are ready, the “brain” is added. Racks packed with CPUs, GPUs, or custom chips are installed and powered on.

– Connectivity and networking come next. The “brain” requires storage, and storage only works if the network delivers it. Inside the facility, high-speed networks link thousands of servers. Outside, fiber cables tie the facility to the global internet, ensuring low-latency access to users and cloud platforms.

– At this point, the data center is ready to run 24/7. Operators monitor power, cooling, and security, while software tools optimize efficiency, predict failures, and manage workloads in real time. This part of the value chain has become a key innovation battleground because most servers run well below capacity, idling while still consuming power and generating heat. Cooling often runs at full blast regardless of load, and workloads are scheduled without considering energy cost or carbon intensity. The result? Systemic waste, inefficiency, and increased risk of downtime.

That is not all. When the data center is live, operators face another growing source of pressure: finding a way to scale clean energy. In a world where sustainability has evolved from a CSR initiative to a business imperative, most hyperscale data centers still rely on fossil fuel-dominated grids and carbon-intensive diesel generators for backup.

– Finally, after navigating these complex and largely invisible layers, the service is delivered. Hyperscalers, along with colocation providers and SaaS platforms, turn this physical infrastructure into the high-margin services mentioned above.
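Here is the quick back-of-envelope comparison promised in the construction step above, sketched in Python. The rack count is an illustrative assumption, and the per-rack power values are simply picked from within the ranges already cited.

# Rough comparison of facility heat load at legacy vs. AI-class rack densities.
# The rack count is an assumption; per-rack power values sit inside the ranges cited above.

RACKS = 1_000                 # assumed number of racks in a mid-sized facility
LEGACY_KW_PER_RACK = 5        # upper end of the 3-5 kW range
AI_KW_PER_RACK = 100          # within the 30-150 kW range

legacy_heat_mw = RACKS * LEGACY_KW_PER_RACK / 1_000
ai_heat_mw = RACKS * AI_KW_PER_RACK / 1_000

print(f"Legacy IT heat load: {legacy_heat_mw:.0f} MW")   # -> 5 MW
print(f"AI-class IT heat load: {ai_heat_mw:.0f} MW")     # -> 100 MW
# Essentially every watt delivered to the chips comes back out as heat,
# so the same floor space must now reject roughly twenty times more heat,
# which is why air cooling gives way to the liquid cooling discussed below.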

 

How to make data centers great again?

This tells us that today’s data center value chain is global, fragmented, and built for scale. But the future will undoubtedly reshape it. Tomorrow’s data centers will look quite different from the ones built over the last few years. A great deal of innovation is already under way, and in this dynamic space AI can add value wherever more intelligence is required, making AI both the driver of demand and the most powerful tool to manage it.

Let’s see what future data centers will look like:

– AI vs. the Grid: Smarter Integration, Faster Deployment
Startups are turning grid intelligence into a strategic advantage with platforms that model congestion, forecast interconnection timelines, and align compute needs with available capacity. To name a few, Gridmatic uses AI to optimize interconnection strategies, helping developers avoid stranded sites. PowerX provides predictive analytics for transmission planning. Camus Energy enhances grid visibility, enabling operators to foresee congestion and reroute loads cost-effectively.

– Clean Energy: On-Site Generation Meets AI
Hyperscalers are increasingly commissioning clean power directly, building solar, wind, and storage at or near data centers. According to Bloomberg Energy, by 2030, 27% of facilities are expected to have on-site generation, up from just 1% last year. This shift is being powered by startups that use AI to make clean energy smarter and more responsive. For example, RESURETY provides advanced analytics for clean energy PPAs and real-time grid carbon intensity, enabling data centers to align compute with the greenest available power. FLEXIDAO goes further, using AI to orchestrate renewable procurement and dynamically shift workloads based on energy availability.

– Cooling: From Air to Liquid
Liquid cooling is now the standard for AI-class workloads, supporting rack densities exceeding 150 kW. Some startups are at the forefront of that technology. ZutaCore’s HyperCool delivers waterless, direct-on-chip cooling, reducing energy use by 50% and enabling 10x more compute in the same footprint. Submer leads in immersion cooling, submerging servers in dielectric fluid to eliminate fans and AC. CoolestDC designs custom liquid systems that improve performance, reduce energy use, and cut CO2 emissions.

But beyond hardware, AI is also making cooling intelligent, dynamically adjusting airflow and cooling output. One well-known example: Google cut cooling energy in its data centers by 40% using DeepMind’s AI.
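The underlying recipe is easier to grasp with a sketch: learn a model that predicts cooling power from the facility’s state and its controllable setpoints, then search for the setpoint with the lowest predicted power within safe limits. The snippet below is a deliberately simplified illustration of that loop, using synthetic data and assumed sensor names and temperature limits; it is not a description of DeepMind’s actual system.

# Illustrative sketch of model-based cooling optimization: learn a mapping from
# (IT load, outdoor temp, supply-air setpoint) to cooling power, then pick the
# setpoint with the lowest predicted power. Synthetic data and limits are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic telemetry: IT load (kW), outdoor temperature (C), supply-air setpoint (C)
X = np.column_stack([
    rng.uniform(500, 2000, 5000),   # IT load
    rng.uniform(5, 35, 5000),       # outdoor temperature
    rng.uniform(16, 27, 5000),      # supply-air setpoint
])
# Toy ground truth: cooling power rises with load and outdoor temperature,
# and falls as the setpoint is allowed to rise.
y = 0.25 * X[:, 0] + 8 * X[:, 1] - 12 * X[:, 2] + rng.normal(0, 10, 5000)

model = GradientBoostingRegressor().fit(X, y)

def best_setpoint(it_load_kw, outdoor_c, lo=18.0, hi=27.0):
    """Grid-search the supply-air setpoint that minimizes predicted cooling power."""
    candidates = np.linspace(lo, hi, 19)
    features = np.column_stack([
        np.full_like(candidates, it_load_kw),
        np.full_like(candidates, outdoor_c),
        candidates,
    ])
    predicted = model.predict(features)
    i = int(np.argmin(predicted))
    return candidates[i], predicted[i]

setpoint, power = best_setpoint(it_load_kw=1200, outdoor_c=28)
print(f"Recommended supply-air setpoint: {setpoint:.1f} C "
      f"(predicted cooling power ~{power:.0f} kW)")

In production such a loop would also respect hardware thermal limits and be retrained continuously, but the core idea, predict then optimize, stays the same.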

– Waste Management: Heat, Water, and the Circular Economy
Cooling has another side that’s often overlooked: waste. These systems consume vast amounts of water and generate enormous amounts of heat. WestWater Research projects that water consumption from U.S. data centers will increase by 170% by 2030.

The issue is especially sensitive in drought-affected regions, where communities struggle with scarce water supplies. In Mesa, Arizona, residents pushed back strongly when the city council approved a hyperscale facility expected to use more than one million gallons of water per day. For that reason, as data centers grow in scale, managing waste effectively is increasingly important.

Instead of venting heat, companies are repurposing it into usable energy or services. Phasic Energy and NovoPower convert excess heat into electricity, reducing grid reliance. Deep Green takes a circular approach, using waste heat to warm water and cold pool water to cool servers. Similarly, water doesn’t have to be lost, and Epic Cleantec shows how: its on-site recycling systems reuse up to 95% of wastewater for cooling and can process up to one million gallons per day.

– Operational Inertia: From Static to Self-Optimizing
Yet, even with clean power, smart cooling, and waste recovery, inefficiencies remain. Not in the hardware, but in how it’s all managed. That’s where AI is stepping in again, not just to monitor, but to manage and improve operations in real time.

A new generation of intelligent platforms is turning static data centers into responsive systems. These tools act like a central nervous system, watching everything from cooling units to power supplies, learning how they behave, and spotting early warning signs. Instead of waiting for a breakdown, the system can alert a technician, order replacement parts, or switch to backup equipment automatically.
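A minimal sketch of that early-warning logic: keep a rolling baseline per sensor and flag readings that drift far outside recent behavior before they turn into failures. The sensor, window size, and threshold below are illustrative assumptions, not the approach of any specific vendor.

# Minimal anomaly-detection sketch for equipment telemetry: flag readings that
# deviate sharply from a rolling baseline. Sensor names, window size, and
# threshold are illustrative assumptions.
from collections import deque
import statistics

class SensorMonitor:
    def __init__(self, window: int = 60, z_threshold: float = 4.0):
        self.history = deque(maxlen=window)   # recent readings
        self.z_threshold = z_threshold        # how far from normal counts as an alert

    def observe(self, value: float) -> bool:
        """Return True if this reading looks anomalous versus the rolling baseline."""
        anomalous = False
        if len(self.history) >= 10:
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            anomalous = abs(value - mean) / stdev > self.z_threshold
        self.history.append(value)
        return anomalous

# Example: a cooling unit's supply-air temperature drifts slowly, then jumps.
monitor = SensorMonitor()
readings = [18.0 + 0.01 * i for i in range(100)] + [24.5]  # sudden jump at the end
for t, reading in enumerate(readings):
    if monitor.observe(reading):
        print(f"t={t}: reading {reading:.1f} C outside normal range -- open a maintenance ticket")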

Augury, an American unicorn, is a good example; the company has helped customers cut downtime and maintenance costs. More mature players like RiT Technologies, with its XpedITe platform, take it further by automating how equipment is set up and used.

But AI doesn’t stop at maintenance. It’s also transforming how computing tasks are handled. Think of it like smart traffic routing. Instead of clogging the system during rush hour, non-urgent jobs can be delayed until energy is cheaper or greener. Sardina Systems’ FishOS has helped companies boost server usage from just 15% to over 50% by intelligently packing workloads.
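A toy version of that traffic-routing idea, to make it concrete: given a 24-hour carbon-intensity forecast, run flexible batch jobs in the cleanest hours and start only the urgent work immediately. The forecast values, job list, and one-job-per-hour simplification are made-up assumptions for illustration, not how any of the platforms named here actually work.

# Toy carbon-aware scheduler: place deferrable batch jobs into the lowest-carbon
# hours of a 24-hour forecast. Forecast values and jobs are made-up examples.

# Forecast grid carbon intensity (gCO2/kWh) for each of the next 24 hours.
forecast = [520, 510, 480, 450, 420, 390, 300, 240,
            190, 160, 150, 155, 170, 200, 260, 330,
            400, 470, 520, 560, 580, 570, 550, 530]

jobs = [
    {"name": "nightly-backup",  "hours_needed": 2, "deferrable": True},
    {"name": "model-finetune",  "hours_needed": 4, "deferrable": True},
    {"name": "fraud-detection", "hours_needed": 1, "deferrable": False},
]

def schedule(jobs, forecast):
    """Urgent jobs start now; deferrable jobs take the greenest remaining hours."""
    hours_by_cleanliness = sorted(range(len(forecast)), key=lambda h: forecast[h])
    free = set(range(len(forecast)))
    plan = {}
    for job in sorted(jobs, key=lambda j: j["deferrable"]):  # urgent jobs claim hours first
        if not job["deferrable"]:
            chosen = sorted(free)[: job["hours_needed"]]     # run immediately
        else:
            chosen = [h for h in hours_by_cleanliness if h in free][: job["hours_needed"]]
        for h in chosen:
            free.discard(h)
        plan[job["name"]] = sorted(chosen)
    return plan

for name, hours in schedule(jobs, forecast).items():
    avg = sum(forecast[h] for h in hours) / len(hours)
    print(f"{name}: hours {hours} (avg {avg:.0f} gCO2/kWh)")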

All of this is being pulled together by next-generation DCIM platforms, the digital dashboards that give operators full visibility. Modius’ OpenData provides real-time analytics, helping teams find wasted capacity and reduce power use. Hyperview offers a cloud-based interface that predicts problems before they happen, cuts energy use, and keeps systems running smoothly.

Together, these tools are turning data centers from rigid, reactive facilities into flexible, intelligent ecosystems.

 

What’s the opportunity ahead of us, and why does it matter?

The data center industry is facing a pivotal moment. AI workloads are straining the limits of power, cooling, water, and operations, but in that strain lies a rare chance: to build something cleaner, smarter, and more resilient than before.

The opportunity isn’t just to scale data centers. It’s to reimagine them, not as power-hungry warehouses of servers, but as intelligent, integrated engines of progress. Smarter data centers mean less wasted energy, less water, and a smaller carbon footprint. They can drive local investment and support circular economies.

CleanTech startups are shaping this transition. We’re witnessing a wave of innovation unseen since the birth of cloud computing. These companies are proving that sustainability isn’t a constraint, it’s a competitive advantage.
