In the world of high-stakes project management, the promise of Artificial Intelligence (AI) often sounds like "real magic". But for complex, multi-billion pound programmes - the true "Everest of project management" - credibility, accountability, and transparency are non-negotiable.
Major programmes are almost always 'First of a Kind' (FOAK) projects, meaning they are full of things that have never been done before and are "making history".
Whilst major projects offer some elements of repeatability, and therefore correlation, which support AI's base principles, over recent decades they have become much larger in scale and more complex in nature, in many cases growing into mega projects or large programmes of work. This has led to even more context being required, as many projects and programmes have become 'First of a Kind', adopting new technology, innovation and new delivery methods to improve constructability and outcomes.
This changing landscape, with its increased interfaces and complexity, renders AI fundamentally less reliable and increases the need for greater transparency and judgement.
AI uses proprietary algorithms and machine learning to find correlations in historical data, but these rarely translate into actionable insights: the correlations lack contextualisation, and current and future reality is unbounded in its growth and diversification.
For a greater appreciation of this, study the concept of "Emergence" and how it relates to "Complex Adaptive Systems". This is where we get the popular saying that the whole is "more than the sum of its parts"!
1. No Defensible Traceability: When an AI forecast provides a result (e.g. a P90 completion date) and you are challenged by a board or audit committee, the answer often defaults to "the algorithm pretty much recommended it". This is "indefensible logic": there is a "computational mystery and very poor traceability". In a courtroom, or even during an ordinary audit, things that cannot be explained are "not going to fly".
2. Inability to Model Causality: AI can note that programmes with poor data quality tend to have delays, but this is merely correlation, earning it the moniker "Captain Obvious". It cannot infer the contextualised logical chain required in major programmes: identifying a specific problem in a given setting, and modelling exactly how that problem causes a delay that triggers penalty clauses and impacts cash flow.
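To make that logical chain concrete, here is a minimal sketch of an explicit causal model. All figures (grace period, penalty rate, overhead rate) are hypothetical illustrations, not real contract terms: the point is that every assumption is named, visible, and challengeable, unlike a mined correlation.

```python
# A minimal, hypothetical causal chain: a specific problem causes a delay,
# the delay triggers a contractual penalty, and the penalty hits cash flow.
# Every number here is an explicit, challengeable assumption - not a
# correlation mined from someone else's historical data.

def delay_from_problem(weeks_of_rework: float) -> float:
    """Expert judgement: each week of rework delays the critical path 1:1."""
    return weeks_of_rework

def penalty(delay_weeks: float, grace_weeks: float = 2.0,
            rate_per_week: float = 250_000.0) -> float:
    """Liquidated damages accrue only beyond the contractual grace period."""
    return max(0.0, delay_weeks - grace_weeks) * rate_per_week

def cash_flow_impact(delay_weeks: float) -> float:
    """Penalty plus extended site overheads (assumed 100k per week)."""
    return penalty(delay_weeks) + delay_weeks * 100_000.0

impact = cash_flow_impact(delay_from_problem(weeks_of_rework=6.0))
print(f"Cash flow impact: £{impact:,.0f}")
# prints: Cash flow impact: £1,600,000
```

Because each step is an explicit function of named assumptions, the final number can be defended line by line in front of a board or audit committee.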
With Safran Risk, every single output can be traced back to a specific input, ensuring every decision is properly documented, defensible, and auditable.
Gregor Grzeszczyk, Senior Associate Director of Risk at Amentum
The core argument for first-principles modelling tools like Safran Risk is that they embody the necessary Glass Box philosophy. A Glass Box provides computational clarity and is built on expert judgement and explicit assumptions.
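The Glass Box idea can be sketched with a simple first-principles schedule risk simulation. This is not Safran Risk's actual engine, and the activities and duration ranges are invented for illustration; the point is that the P90 traces back to explicit, named input assumptions that an auditor can inspect and rerun.

```python
import random

random.seed(42)  # reproducible - an auditor can rerun the exact analysis

# Explicit, named assumptions: three-point duration estimates in weeks
# (optimistic, most likely, pessimistic), agreed with subject-matter
# experts and documented. These activities are illustrative only.
activities = {
    "groundworks":   (10, 12, 18),
    "structure":     (20, 24, 36),
    "commissioning": (6,  8,  14),
}

def simulate_total(n_iterations: int = 10_000) -> list[float]:
    """Monte Carlo over a simple serial chain of triangular distributions."""
    totals = []
    for _ in range(n_iterations):
        totals.append(sum(random.triangular(lo, hi, ml)
                          for lo, ml, hi in activities.values()))
    return sorted(totals)

totals = simulate_total()
p90 = totals[int(0.9 * len(totals))]
print(f"P90 total duration: {p90:.1f} weeks")
```

When the board asks "where does that P90 come from?", the answer is a walk through the `activities` table and the simulation logic, not an appeal to an opaque algorithm.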
Safran Risk possesses unique features that provide the defensibility and nuance required for mega-projects; AI currently offers no equivalent for any of them.
The message is clear: we are not Luddites rejecting new technology. We need to integrate automation and human knowledge, e.g. via the ACE approach (Automate, Curate, Execute and Explain).
AI is useful for repetitive tasks or simple data processing endeavours, like scanning documents and flagging anomalies. But after AI has processed the data, human expertise must step in to curate those findings, eliminate false positives, and build the proper causal model.
In recent weeks the term "AI Slop" has been circulating in the news. All of those thousands of processes that AI sped up for you must now be checked for cut corners and hallucinations, to ensure the quality of work submitted reflects how you wish to be perceived. The mantra of "doing things right the first time" sits at odds with the potential for AI Slop.
For multi-billion pound programmes, we are not buying gadgets. We must prioritise how right over how fast, and use conventional yet powerful tools, like Safran Risk, that are customisable, credible and explainable.
AI is certainly a great and exciting new tool when used within its appropriate context, but you wouldn't use a hammer to saw timber, and we shouldn't rush to apply it to everything conceivable.
Furthermore, there is currently little discussion of, or visibility into, where the strongest gains of AI actually come from: unfettered access to your own consistently structured and contextualised data. The cost of this is often overlooked or hidden. Is your company going to invest in the entirely new ecosystem and IT infrastructure needed (think servers and data lakes), or will it simply outsource to a third party in an age of increasing cybersecurity risk?
Major programmes demand expertise, transparency, and accountability. They need causal and contextual understanding, not correlation and pattern matching alone.
Use AI to enhance your capabilities, but never use it to replace your judgment.