Why should you even consider AI — something so virtual and abstract — for systems as physical and consequential as critical infrastructure?
For a long time, that skepticism was justified.
For years, AI projects in critical sectors stalled — too expensive, too brittle, too data-hungry. Every project began at zero: collecting data, labeling it for training, and building a model that worked until the moment conditions changed.
AI models were task-specific until 2017, when we entered the age of foundation models. And foundation models changed the math for AI.
Over the last 50 years, advances in compute, storage, and networks have steadily digitized the world — first media, then finance, then social systems. Now, that same wave is reaching physical infrastructure. Sensors, satellites, smart meters — everything is generating data at a scale that simply didn’t exist before.
Infrastructure finally became ready for AI.
It’s worth understanding the problem foundation models solve, because the implications for energy, utilities, and critical infrastructure are significant.
Why AI Projects Used to Fail
Contrary to how it often gets framed, enterprise AI’s historical failures weren’t mainly a modeling problem. The models worked for the specific task and exact conditions they were designed for.
But when something changed (a new asset type, a different operating pattern, or something else), the process had to start over. Learning didn’t transfer from one use case to the next.
Building an outage prediction model didn’t make building a vegetation management model any easier. You were always starting from scratch.
What Foundation Models Change
Foundation models break this pattern at the architectural level.
Instead of training a model to do one thing, you first train it to understand the underlying structure of a domain (language, weather, imagery, or even grid behavior), then adapt that understanding to specific tasks.
While the up-front cost is high, everything after that gets dramatically cheaper.
“That’s the reason so many people are excited. Performance, yes, but also the pathway. It’s the first time there is a real pathway to better scalability of AI,” said Dr. Hendrik F. Hamann during his GenAI for Critical Infrastructure presentation at Irth User Summit.
Demystifying GenAI
At their core, modern foundation models do something surprisingly simple:
They predict what comes next.
In a language model, that’s the next word.
In a weather model, it’s the next state of the atmosphere.
In a satellite model, it’s the missing portion of an image.
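That “predict what comes next” idea can be sketched with a toy bigram model. The corpus and words below are invented purely for illustration; a real language model learns the same kind of follower statistics from vastly more data and a far richer architecture:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# invented corpus, then predict the most frequent follower.
corpus = "the grid holds the load and the grid holds steady".split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the most common word seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict_next("grid"))   # → holds
```

The mechanics scale up, but the objective stays the same: given context, predict what comes next.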
During pre-training, the model is given large datasets — text, satellite imagery, weather measurements, power grid state variables — with portions deliberately removed. The model’s only job is to predict what’s missing. Through that process, it develops a functional understanding of how that data is structured and how its elements relate to one another.
A language model trained this way doesn’t just learn words — it learns that “coffee” and “morning” are contextually close. A weather foundation model trained the same way learns that incomplete wind data implies something about where precipitation will be. A satellite foundation model learns to recognize pivot-irrigation patterns in the Midwest from a partially masked image it has never seen before.
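The masked-prediction objective itself is easy to sketch. Here a synthetic signal stands in for sensor data, and simple interpolation stands in for the learned model; everything is illustrative, not how production foundation models are implemented:

```python
import numpy as np

# Hide ~30% of a synthetic "measurement" series, then fill in the gaps.
# Linear interpolation stands in for the learned model; the point is the
# shape of the pre-training task (mask, then predict), not the method.
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 4 * np.pi, 200))   # pretend sensor readings
mask = rng.random(signal.size) < 0.3              # positions to hide

visible = ~mask
reconstructed = np.interp(
    np.arange(signal.size),                       # all positions
    np.flatnonzero(visible),                      # positions we can see
    signal[visible],                              # values we can see
)

# The training signal: how close are predictions on the hidden points?
error = np.abs(reconstructed[mask] - signal[mask]).mean()
print(f"mean error on masked points: {error:.4f}")
```

A foundation model does the same thing across billions of such puzzles, which is how it absorbs the structure of the domain.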
From Foundation to Application (Where the Economics Shift)
That generalized understanding is the foundation. Everything built on top of it — the specific applications, the industry use cases — is fine-tuning. And fine-tuning is comparatively inexpensive.
Instead of building a model from scratch:
- You start with a pre-trained foundation
- You apply a relatively small amount of labeled data
- You specialize the model for your use case
In practice, that could mean:
- Fine-tuning in under 150 lines of code
- Running inference on a single GPU instead of a supercomputer
- Reconstructing datasets even when a large portion of the data is missing
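The economics behind that list can be sketched in a few lines: freeze a stand-in “foundation” encoder and train only a tiny task head on a small labeled set. The encoder, dataset, and task below are all invented for illustration:

```python
import numpy as np

# "Foundation": a frozen encoder that turns raw input into features.
# In practice this is the expensive pre-trained model you reuse, not retrain.
def foundation_embed(x):
    weights = np.linspace(-1.0, 1.0, 8).reshape(1, 8)   # frozen weights
    return np.tanh(x @ weights)                          # (n, 8) features

rng = np.random.default_rng(1)
x = rng.normal(size=(64, 1))              # small labeled dataset
y = (x[:, 0] > 0).astype(float)           # invented downstream labels

feats = foundation_embed(x)               # reuse the foundation as-is

# Fine-tune: gradient descent on an 8-parameter logistic head only.
w = np.zeros(8)
for _ in range(500):
    p = 1 / (1 + np.exp(-np.clip(feats @ w, -30, 30)))   # sigmoid
    w -= 0.5 * feats.T @ (p - y) / len(y)                # logistic gradient

accuracy = ((feats @ w > 0) == (y > 0.5)).mean()
print(f"head-only accuracy: {accuracy:.2f}")
```

All the learning capacity lives in the frozen encoder; the part you train is trivially small, which is why the marginal cost per use case collapses.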
What This Looks Like for Critical Infrastructure
Infrastructure data is genuinely complex.
Weather, AMI, SCADA, generation schedules, market signals, network topology — it’s a mix of modalities that historically made modeling difficult.
Foundation models trained natively on that kind of data are now practical, because the architecture is designed to learn relationships across mixed inputs rather than requiring a separate model for each one.
Once trained, the applications follow:
- Weather models → forecasting, hurricane tracking, outage prediction
- Satellite models → vegetation management, wildfire detection, asset monitoring
- Grid models → contingency analysis, load flow simulation, planning optimization
The key shift: one foundation, many applications.
The “Fake” Problem and a Better Way to Think About It
There’s a persistent concern about AI-generated outputs in high-stakes domains: that they’re somehow fabricated or not grounded in reality.
When a foundation model fills in missing satellite imagery or reconstructs wind field data, it’s doing the same thing a traditional forecast model does — making a probabilistic prediction based on learned patterns.
We don’t call a seven-day weather forecast “fake” because it won’t match reality perfectly. It’s called a prediction, and we evaluate it by how well it matches reality on average, how its errors are distributed, and whether those errors are acceptable for the situation.
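That evaluation mindset is straightforward to put in code. With illustrative numbers (not real forecast data), you check the average error, how the errors are distributed, and a task-specific tolerance:

```python
import numpy as np

# Illustrative numbers only, not real forecast data.
actual = np.array([21.0, 19.5, 22.3, 24.1, 20.8])     # observed values
forecast = np.array([20.4, 19.9, 23.0, 23.2, 21.5])   # model predictions

abs_errors = np.abs(forecast - actual)
mae = abs_errors.mean()                   # how well it matches on average
p95 = np.percentile(abs_errors, 95)       # how the errors are distributed
acceptable = p95 < 1.5                    # tolerance set by the use case

print(f"MAE: {mae:.2f}, 95th-pct error: {p95:.2f}, acceptable: {acceptable}")
```

Whether the tolerance is 1.5 degrees or something else entirely depends on the decision the prediction feeds, which is exactly the point.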
Calling AI predictions “fake” misses the point. We should be asking whether they are useful, and increasingly the answer is yes.
Getting Started Without a Data Science Team
One practical question comes up consistently: how accessible is any of this to an organization that isn’t a research institution? The honest answer: more accessible than most people expect, and getting more so.
Many of the models discussed in the GenAI for Critical Infrastructure presentation are available as open source, often with working examples.
The process looks like this:
- Identify a high-value use case
- Find a relevant foundation model
- Run an existing example
- Fine-tune with your data
- Evaluate results
That doesn’t mean every organization should build a production AI system this way. But it does mean that the exploration phase — validating whether the technology fits your problem — is within reach for a small team willing to spend time with the available resources.
If your AI roadmap still assumes every use case starts with data collection and model building, you’re already operating on an outdated cost curve.
What Comes Next
For years, physical infrastructure lagged behind other industries in digitization. Not because it mattered less, but because it was harder to measure and model.
That gap is closing quickly. The data-generating capacity of physical assets is growing exponentially.
Foundation models are arriving at exactly the moment when the data they need to be useful is becoming available.
The era of starting from scratch on every AI project is ending. What comes next isn’t easier in every respect — the domain complexity of critical infrastructure is real, and the stakes of getting things wrong are high. But the fundamental economics have changed, and the implications are worth taking seriously.

