The Glass Box vs. The Black Box: Why Safran Risk Provides the Only Defensible Path for Mega-Projects

December 11, 2025
4 minute read
Gregor Grzeszczyk

In the world of high-stakes project management, the promise of Artificial Intelligence (AI) often sounds like "real magic". But for complex, multi-billion pound programmes (the true "Everest of project management"), credibility, accountability, and transparency are non-negotiable.

The Black Box: Correlation Without Causality

Major programmes are almost always 'First of a Kind' (FOAK) projects, meaning they are full of things that have never happened before and are, quite literally, "making history".

Whilst major projects offer some elements of repeatability, and therefore correlation, which support AI's base principles, over recent decades they have become much larger in scale and more complex in nature, in many cases growing into mega-projects or large programmes of work. This has led to even more context being required, as many projects and programmes have become 'First of a Kind' undertakings where new technology, innovation, and new delivery methods are adopted to improve constructability and outcomes.

This changing landscape, with its increased interfaces and complexity, renders AI fundamentally less reliable and increases the need for greater transparency and judgement.

AI uses proprietary algorithms and machine learning to find correlations in historical data, but this rarely translates into actionable insight: the findings lack contextualisation, and current and future reality is unbounded in its growth and diversification.

For a greater appreciation of this, please study the concepts of "Emergence" and how they relate to "Complex Adaptive Systems". This is where we get the popular saying, "It's more than the sum of its parts"!

The key features AI lacks, features necessary to avoid an "expensive mistake", include:

1. No Defensible Traceability: When an AI forecast provides a result (e.g. a P90 completion date) and you are challenged by a board or audit committee, the answer often defaults to "the algorithm pretty much recommended it". This results in "indefensible logic" because there is a "computational mystery and very poor traceability". In a courtroom, or even during an ordinary audit, things that are unexplainable are "not going to fly". (A short sketch of what a P90 actually means follows this list.)

2. Inability to Model Causality: AI can note that programmes with poor data quality tend to have delays, but this is merely correlation, earning it the moniker "Captain Obvious". It cannot infer the contextualised logical chain required in major programmes: identifying a specific problem in a given setting and modelling exactly how that problem causes a delay that triggers penalty clauses and impacts cash flow.
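As a concrete aside on the P90 mentioned in point 1: a P90 completion date is simply the 90th percentile of many simulated outcomes. Below is a minimal, hypothetical Monte Carlo sketch in Python; the three activities, their triangular duration ranges, and the sequential logic are all invented for illustration, and are not drawn from Safran Risk or any real programme:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def simulate_once() -> float:
    """One simulated project: three sequential activities with uncertain
    durations in days, each drawn from a triangular distribution whose
    arguments are (low, high, mode) -- all numbers are hypothetical."""
    design    = random.triangular(40, 90, 60)
    procure   = random.triangular(30, 120, 50)
    construct = random.triangular(100, 250, 140)
    return design + procure + construct

# Run many iterations and read percentiles off the sorted outcomes.
runs = sorted(simulate_once() for _ in range(10_000))
p50 = runs[len(runs) // 2 - 1]
p90 = runs[int(0.9 * len(runs)) - 1]  # 90% of runs finish by this duration
print(f"P50 ~ {p50:.0f} days, P90 ~ {p90:.0f} days")
```

The defensibility point is that every number feeding that percentile is an explicit, inspectable input; a black-box forecast hands you the percentile without the inputs.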


With Safran Risk, every single output can be traced back to a specific input, ensuring every decision is properly documented, defensible, and auditable.

Gregor Grzeszczyk, Senior Associate Director of Risk at Amentum


The Glass Box: Actionable Insights Built on First Principles

The core argument for first-principles modelling tools like Safran Risk is that they embody the necessary Glass Box philosophy: a Glass Box provides computational clarity and is built on expert judgement and explicit assumptions.

Safran Risk possesses unique features that provide the defensibility and nuance required for mega-projects, features for which AI currently offers no equivalent:

  • Verifiable Financial and Cost Linkage:
    AI schedules often have "limited cost linkage", assuming everything goes "perfectly money-wise", and algorithms are blind to strategic cost phasing and capital flows. In contrast, Safran Risk allows users to add true cash information into the model. We can explicitly transfer the knowledge that delays, accelerations, and mitigations all cost money directly into the model, overcoming AI's limited understanding of financial management and mitigations.

  • Probabilistic Mitigation Modelling and Conditional Logic:
    AI is limited to modelling only the "very basic characteristics of a mitigation". It cannot handle "what if" scenarios or conditional branching. Safran Risk enables the complex modelling of mitigation strategies, including probabilistic branching and mitigation success-rate simulation. This allows practitioners to analyse crucial nuances, such as the cost of purchasing components in advance, the probability of mitigation outcomes, and the existence of secondary risks, all of which are completely missed by AI (see the sketch after this list for the kind of conditional logic involved).

  • Explicit Customisation via Scripting Capabilities:
    Because major programmes have unique characteristics, generic AI algorithms fall short. Safran is equipped with very strong scripting capabilities that allow practitioners to tailor the tool to a custom context. This ensures the model caters precisely to the specific case and context, a flexibility that "very generic algorithms", which cannot be modified, will never offer.

  • A Complete Audit Trail: 
    With Safran Risk, every single output can be traced back to a specific input, ensuring every decision is properly documented, defensible, and auditable. This audit trail is what allows you to "show your working", proving your decision was "reasonable, balanced, and it was based on the information available at the time".
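To make the conditional-logic point concrete, here is a generic Monte Carlo sketch in Python. This is not Safran Risk's engine or API; every probability, duration, and cost below is a hypothetical assumption. A risk may or may not occur; a mitigation costs money up front and succeeds only some of the time; long delays trigger a secondary penalty; and delay feeds back into cash:

```python
import random

random.seed(7)  # reproducible illustration

DAY_RATE = 25_000          # hypothetical cost of one day of delay (GBP)
MITIGATION_COST = 400_000  # hypothetical up-front spend, e.g. buying components early

def simulate_once(mitigate: bool) -> tuple[float, float]:
    """One Monte Carlo iteration: returns (delay in days, total cost in GBP)."""
    delay, cost = 0.0, 0.0
    if mitigate:
        cost += MITIGATION_COST          # paid whether or not the risk fires
    if random.random() < 0.35:           # the risk occurs in ~35% of runs
        # Conditional branch: the mitigation succeeds ~80% of the time,
        # capping the impact; otherwise the full delay lands.
        if mitigate and random.random() < 0.80:
            delay = random.triangular(5, 30, 10)
        else:
            delay = random.triangular(40, 150, 60)
            if delay > 90:               # secondary risk: a penalty clause
                cost += 1_000_000
    cost += delay * DAY_RATE             # delay feeds back into cash flow
    return delay, cost

def mean_outcome(mitigate: bool, n: int = 20_000) -> tuple[float, float]:
    results = [simulate_once(mitigate) for _ in range(n)]
    return (sum(d for d, _ in results) / n, sum(c for _, c in results) / n)

for strategy in (False, True):
    d, c = mean_outcome(strategy)
    print(f"mitigate={strategy}: mean delay {d:5.1f} days, mean cost GBP {c:,.0f}")
```

Every branch, probability, and cost in that sketch is an explicit assumption a board or auditor can challenge line by line, which is exactly the "show your working" property described above.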


Trust But Verify

The message is clear: we are not Luddites rejecting new technology. We need to integrate automation and human knowledge, e.g. through the ACE approach (Automate, Curate, Execute and Explain).

AI is useful for repetitive tasks or simple data processing endeavours, like scanning documents and flagging anomalies. But after AI has processed the data, human expertise must step in to curate those findings, eliminate false positives, and build the proper causal model.

In recent weeks we've become more familiar with the term "AI Slop" circulating in the news. All of those thousands of processes that AI sped up for you must now be checked for cut corners and hallucinations, to ensure the quality of the work submitted reflects how you wish to be perceived. The mantra of "doing things right the first time" sits at odds with the potential for AI Slop.

For multi-billion pound programmes, we are not buying gadgets. We must prioritise how right over how fast, and use conventional yet powerful tools, like Safran Risk, that are customisable, credible, and explainable.

AI is certainly a great and exciting new tool when used within its appropriate context, but you wouldn't use a hammer to saw timber, and we shouldn't rush to apply it to everything conceivable.

Furthermore, there is currently a lack of discussion and visibility around where the strongest possible gains from AI actually come from: unfettered access to your own consistently structured and contextualised data. The cost of this? Often overlooked or hidden. Is your company going to invest in the entirely new ecosystem and IT infrastructure needed (think servers and data lakes), or is it just going to outsource to a third party in an age of increasing cybersecurity risk?

Major programmes demand expertise, transparency, and accountability. They need causal and contextual understanding, not correlation and pattern matching alone.

Use AI to enhance your capabilities, but never use it to replace your judgment.


Note From the Safran Team

Gregor Grzeszczyk introduced this topic at Project Controls Expo UK (Wembley 2025), where he delivered a powerful "pro-professionalism" argument aimed at separating AI hype from reality. Gregor's crucial takeaway centres on a fundamental choice in risk analysis, the "Glass Box vs. Black Box" approach, questioning whether the opaque nature of the Black Box can meet the unique demands of major programmes, demands that mandate transparent tools like Safran Risk.


We thank Gregor for sharing his insightful perspectives, and we will be following up with a longer article detailing all elements of his presentation.