Software tools and data democratization
The construction industry is abuzz with talk of digitalization and Artificial Intelligence (AI). Certainly, the next decade has the potential to be transformative for engineering and construction, offering the productivity gains that other industries enjoyed during the twentieth century, as we discussed in the introductory post in this series on Capability Improvement. Over the past decade, an unprecedented number of project professionals have gained access to software analysis tools that were previously available only to a limited number of risk practitioners, as referenced in our blog post entitled Avoiding The Risk Of Using The Wrong Risk Tool. In this regard, data democratization is a good thing; however, problems can arise when inadequate attention is given to both risk theory and risk data.
Douglas Hubbard, in his book The Failure of Risk Management: Why It's Broken and How to Fix It, quips that, without proper care or understanding, much risk information can be “worse than useless” by inadvertently triggering misdirection and self-inflicted harm. In a similar vein, Charles Babbage, the father of the computer, is quoted as saying in 1864, “On two occasions I have been asked, ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ ... I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question”. This has become more widely known as the GIGO principle, or garbage in, garbage out.
Avoiding risk management pitfalls
Effective risk management is more than risk analysis. It is about proactive action, learning, and updating plans to ensure predictable outcomes, as we unpack in our blog post on Real-Time Scenario Planning. Problems can arise when teams attempt to generate and process risk data in the absence of sound risk theory. Two common pitfalls are: 1) teams employing risk terminology or processes that generate bias or noise, and 2) teams developing project simulations without a thorough appreciation of the need for ongoing project risk management.
Best-in-class risk software such as Safran Risk offers teams a framework that conforms with widely recognized best practices and provides a structured canvas for capturing and curating risk data at all stages of project delivery. In addition, Safran Risk offers unique features, such as Global Risks, allowing organizations to share empirical or reference class data and account for common, recurring systemic risk that, to some degree, can better account for the impact of unknown unknowns (which some may describe as Black Swan events).
ISO 8000-8:2015 describes fundamental concepts of information and data quality. Modern, supported software can account for what this international standard refers to as syntactic and semantic data quality (think here in terms of metadata / terminology and the reliability of applied formulas or software code). Pragmatic quality, on the other hand, is a function of the organization’s risk culture, available risk processes, experience, and the consistency with which risk processes are applied from one project to the next. Essentially, pragmatic quality addresses the degree to which risk data is worthwhile. Pragmatic quality is directly related to an organization’s ability to comprehend and apply good risk theory. It’s very much a human attribute.
Safran Risk – Fully traceable and reproducible risk simulations
Sound risk theory and credible practice require that critical decisions be made using only objective, quantitative inputs that are “mathematically unambiguous”. Any decision of consequence should be made without reliance on qualitative inputs. Hybrid approaches may combine one or more quantitative models, but the reader is urged to verify, in matters concerning simulation or risk quantification, that hybrid is not used as a euphemism for semi-quantitative. When it comes to the qualitative scoring of risks, Hubbard warns of the “unintended effect of scales”, which includes the relative imprecision of scoring bands, a psychological tendency or bias to focus on a limited number of inputs, and an inability to model correlations or domino effects. Ultimately, risk professionals need quantitative risk analysis tools such as Safran Risk to create risk simulations that are fully traceable and reproducible by independent parties (including, but not limited to, alliance partners, insurers, financiers, or investors). See our blog post Avoiding The Risk Of Using The Wrong Risk Tool to unpack this topic further.
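What “fully traceable and reproducible” means in practice can be sketched with a minimal, hypothetical three-activity cost model (not Safran Risk's implementation): if the inputs and the random seed are recorded, any independent party can regenerate an identical set of simulation results.

```python
import random
import statistics


def simulate_total_cost(n_iterations, seed, activities):
    """Monte Carlo over triangular cost distributions. A fixed seed plus
    recorded inputs make the run reproducible by an independent reviewer.
    Activity figures below are illustrative assumptions only."""
    rng = random.Random(seed)  # fixed seed -> identical draws on re-run
    totals = []
    for _ in range(n_iterations):
        totals.append(sum(rng.triangular(low, high, mode)
                          for (low, mode, high) in activities))
    return totals


# Hypothetical inputs: (optimistic, most likely, pessimistic) cost in $k
activities = [(90, 100, 130), (45, 50, 70), (180, 200, 260)]

run1 = simulate_total_cost(5000, seed=42, activities=activities)
run2 = simulate_total_cost(5000, seed=42, activities=activities)
assert run1 == run2  # reproducible: same seed + same inputs = same outputs

p80 = statistics.quantiles(run1, n=5)[3]  # 80th percentile of total cost
print(f"Mean: {statistics.mean(run1):.1f}k  P80: {p80:.1f}k")
```

The point of the sketch is the audit trail, not the arithmetic: an insurer or alliance partner handed the seed, the distributions, and the iteration count can verify the quoted P80 independently, which is precisely what opaque, qualitative scoring cannot offer.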
Beyond simulation tooling, risk practitioners and experienced project professionals will call upon other tools and techniques to ensure pragmatic data quality. When collecting risk inputs, teams employ a risk metalanguage to describe risk events in a manner that delineates between cause and effect. This not only aids brainstorming and helps minimize the potential for miscommunication, it also facilitates a more accurate quantification of residual risk exposure (i.e. exposure remaining after the team has done all it can to proactively manage risk). Additionally, techniques can be used to elicit team inputs in a manner that avoids problematic heuristics arising during conventional risk workshops. These issues are well documented by Hillson & Murray-Webster and include groupthink, the Moses factor, and cultural conformity. In essence, these phenomena introduce systematic error or bias that can be avoided, for example, by using the Delphi technique or one-on-one interviews before engaging in team-wide review and discussion. Finally, Safran Risk can ensure schedule readiness for analysis by testing for compliance with critical path method (CPM) best practices.
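The cause–event–effect delineation of a risk metalanguage, and the residual exposure it supports, can be made concrete with a small sketch (field names and figures are illustrative assumptions, not a Safran Risk schema): each register entry separates the fact that exists today from the uncertain event and its impact, and residual exposure is the post-mitigation probability multiplied by the post-mitigation impact.

```python
from dataclasses import dataclass


@dataclass
class RiskEntry:
    """Risk metalanguage pattern: 'Because of <cause>, <event> may
    occur, leading to <effect>.' Fields are illustrative only."""
    cause: str           # a fact or condition that exists today
    event: str           # the uncertain occurrence
    effect: str          # the consequence for project objectives
    probability: float   # post-mitigation likelihood (0-1)
    impact_cost: float   # post-mitigation cost impact, $k

    def residual_exposure(self) -> float:
        # Expected cost of the risk after planned responses are applied
        return self.probability * self.impact_cost


entry = RiskEntry(
    cause="Because geotechnical surveys cover only part of the route,",
    event="unexpected rock may be encountered during excavation,",
    effect="leading to re-work and schedule delay.",
    probability=0.25,
    impact_cost=400.0,
)
print(f"Residual exposure: ${entry.residual_exposure():.0f}k")
```

Keeping cause, event, and effect in separate fields prevents the common workshop failure of quantifying a cause (which is certain) or an effect (which is conditional) as if it were the risk event itself.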
If you would like to equip your organization and teams with first-class risk management software, so that uncertainty can be captured and modeled, the experts at Safran are ready to help. Contact us to discuss your current project risk management capabilities with a trusted advisor.