Deploying Probabilistic Corrosion Forecasting at Scale: What Unified Data Makes Possible
Sheri Baucom | February 12, 2026

Integrity analytics doesn't scale without a unified inspection history.
To demonstrate what becomes possible once inspection data is aligned, normalized, and historically connected, we applied an analytical probabilistic corrosion depth model to a unified ILI dataset to study the advantages of a probabilistic versus a deterministic approach in estimating future corrosion depth.
Metal loss from corrosion remains one of the dominant integrity threats to transmission pipelines. As systems age and operate under diverse environmental and operational conditions, wall loss becomes a persistent challenge. Without adequate characterization, monitoring, and forecasting, progressive corrosion can lead to loss of containment, environmental impacts, safety incidents, and significant economic consequences.
Reliable corrosion assessment and forward-looking forecasting aren't optional; they're fundamental to effective prevention of pipeline failures.
Many corrosion growth models currently used in the industry implicitly assume that ILI depth measurements are exact. ILI systems report sizing tolerances, yet most deterministic integrity workflows treat each depth call as a single exact value. This creates a fundamental problem: uncertainty exists, but isn't modeled.
When uncertainty is ignored, anomalies just below a repair threshold can be screened out even though their true depths may well exceed it.
The probabilistic corrosion growth model was developed specifically to address this gap. Rather than smoothing or averaging uncertain depth calls, the model explicitly incorporates ILI depth uncertainty by assuming normally distributed measurement error and propagating that uncertainty through time. The output is not one predicted depth, but a depth probability distribution and a corresponding probability of exceedance (PoE).
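The idea can be sketched in a few lines. This is a minimal illustration, not the model itself: the function name, the ±7.8% WT tool sigma, and the growth-rate sigma are assumptions chosen for the example, and both the depth call and the growth rate are treated as independent normal random variables.

```python
from statistics import NormalDist

def exceedance_probability(depth_call_pct, growth_rate_pct_yr, years,
                           sigma_tool_pct=7.8, sigma_growth_pct_yr=1.0,
                           threshold_pct=40.0):
    """Probability that the true depth exceeds the threshold after `years`.

    The ILI call and the growth rate are modeled as independent normals,
    so the forecast depth is also normal:
      mean  = call + rate * years
      sigma = sqrt(sigma_tool^2 + (sigma_growth * years)^2)
    PoE is the upper tail of that forecast distribution at the threshold.
    """
    mean = depth_call_pct + growth_rate_pct_yr * years
    sigma = (sigma_tool_pct ** 2 + (sigma_growth_pct_yr * years) ** 2) ** 0.5
    return 1.0 - NormalDist(mean, sigma).cdf(threshold_pct)
```

Note that the PoE is nonzero even at the time of inspection (`years = 0`): a 38% call already has a meaningful chance of truly exceeding 40%, and the probability grows as the forecast horizon lengthens and uncertainty compounds.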
Is this outlier hiding in your dataset?

Before corrosion growth can be calculated, the same physical anomaly must be consistently identified across multiple inline inspections. This requires spatially aligned data. The automatic alignment process within AIP assigns a common normalized odometer reference to each anomaly measurement within an anomaly chain (a series of matched anomalies) across the ILI history.
Because individual odometer readings can vary slightly between inspections, each anomaly chain is represented by a mean normalized location that characterizes its position along the pipeline. This step is critical.
Only after anomaly histories are spatially aligned and normalized can depth uncertainty be modeled correctly and propagated through time. Alignment is a prerequisite for defensible analytics.
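The chaining step described above can be sketched as a simple greedy matcher. This is a hypothetical simplification of what an alignment engine like AIP does: the input shape (a list of normalized odometer positions per ILI run) and the matching tolerance are assumptions for illustration only.

```python
from statistics import mean

def build_chains(runs, tol=0.0005):
    """Group anomalies from multiple ILI runs into chains by position.

    `runs` is a list of lists of normalized odometer positions (0..1),
    one inner list per inspection. An anomaly joins an existing chain
    if it lies within `tol` of that chain's running mean position;
    otherwise it starts a new chain. Each chain is then summarized by
    its mean normalized location, which absorbs small run-to-run
    odometer drift.
    """
    chains = []  # each chain is a list of matched positions
    for run in runs:
        for pos in run:
            for chain in chains:
                if abs(pos - mean(chain)) <= tol:
                    chain.append(pos)
                    break
            else:
                chains.append([pos])
    return [mean(chain) for chain in chains]
```

Averaging the normalized positions is what gives each chain a single stable reference location, so that depth histories from different tools and different years can be compared point-for-point.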

Traditional anomaly analysis for metal loss uses a fixed depth threshold. For example, "Repair if the metal loss depth exceeds 40% of the wall thickness."
However, ILI measurements carry sizing tolerances. A 38% depth "call" is not meaningfully different from 42% when uncertainty is considered.
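To see why 38% and 42% calls are statistically close, consider a common (assumed here, not taken from the article) tool specification of ±10% WT at 80% confidence. An 80% two-sided interval spans about ±1.28 standard deviations, which implies a sizing sigma near 7.8% WT:

```python
from statistics import NormalDist

# Assumed tool spec: +/-10% WT at 80% confidence.
# The 80% two-sided interval spans +/- inv_cdf(0.90) sigmas (~1.2816),
# so the implied measurement sigma is:
sigma_pct = 10.0 / NormalDist().inv_cdf(0.90)  # ~7.8% WT

def p_exceeds(call_pct, threshold_pct=40.0, sigma=sigma_pct):
    """Probability the TRUE depth exceeds the threshold, given the call."""
    return 1.0 - NormalDist(call_pct, sigma).cdf(threshold_pct)
```

Under this spec, a 38% call still has roughly a 40% chance of truly exceeding the 40% threshold, while a 42% call has roughly a 60% chance. A hard cutoff at 40% treats these two nearly symmetric cases as opposites: one is repaired, the other is screened out.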
Using unified, aligned inspection histories, the model instead generates a depth probability distribution and a probability of exceedance for each anomaly chain.
The model was first evaluated using anomalies with a field-found depth, selected according to a set of screening thresholds. These criteria were applied consistently across thousands of anomaly chains to identify anomalies to include in the study.
Before applying the model to candidate features, it was tested against anomalies already associated with a field-identified depth, i.e., "Repair associated chain."
Key observation: When depths were plotted on unity charts (ILI vs. field depth), the probabilistic approach selected anomalies distributed around the nominal 40% metal loss depth threshold, including anomalies whose ILI calls fell below it.
This demonstrates a limitation of hard cutoff screening: Features just below threshold can still carry a high probability of exceedance.

The model was then applied to candidate anomaly chains that were not repaired. These chains underwent a subsequent ILI run that provided validation data, and therefore the unity charts are ILI vs. ILI depth.
Example: When Ignoring Measurement Uncertainty Fails
Once measurement uncertainty was modeled, however, the anomaly carried a high probability of exceedance. The subsequent inline inspection measured it at 54% WT; the issue wasn't necessarily growth but unmodeled uncertainty.
Across the dataset, the model surfaced multiple under-called anomalies of this kind.
The model did more than identify under-calls. Selected anomalies fell into four categories:
Case A – Under-called severity
Case B – Clear exceedance
Case C – High depth PoE, low growth PoE
Case D – Measurement bias
This differentiation matters because deterministic screening cannot distinguish between these cases. Probabilistic modeling can.
The model requires only aligned ILI depth calls and their reported sizing tolerances. Because inspection histories were unified and aligned, the same statistical framework could be applied consistently across the entire dataset.
Automation in integrity programs is not about dashboards or AI overlays. It begins with unified inspection data, from which data science models can be derived and tested. This is the bridge between unified data and engineering-grade AI.
Two liquid pipeline operators have commissioned controlled studies applying the model to their historical ILI portfolios to quantify performance against existing integrity management screening practices.
These studies require only an operator's existing historical ILI data.
For operators seeking to strengthen statistical defensibility in corrosion decision-making, a scoped analytical study may provide measurable insight before broader implementation. Contact us to learn more.