
Avoiding the risk of using the wrong risk tool

September 20, 2022 | 3 minute read

James Arrow

Developing and maintaining risk capability

Risk management is not a binary skill. Project teams do not go from being incapable of managing risk one day to demonstrating expert proficiency the next. Like any skill, competence is developed over time and sustained through ongoing practice. When embarking on this learning journey, project leaders and risk practitioners do well to develop and maintain a level of awareness, so that they successfully navigate the risk of using the wrong risk tools.

In one of our earlier blog posts, entitled Stakeholder and Change Management, we described how a risk management program can be likened to an organizational change management (OCM) program. In this context, capability assessments and capability improvement planning can help identify needs and prompt intervention, kicking off the roll-out of more appropriate or improved risk management tools and techniques.

The pitfalls of relying on scales and qualitative data

At the lower end of the capability spectrum, project teams are often well served by first honing their qualitative risk analysis skills. This helps ensure that risks are consistently and accurately captured before attempts are made to quantitatively analyze them. In earlier posts we emphasized this by underlining the importance of Collecting Data That Matter. Organizations at the beginning of their risk management capability journey are likely to place some degree of overconfidence in the value of their qualitative data. A frequent mistake is to make key decisions of significant consequence to project objectives or business goals by relying largely on ordinal values, risk scores, heatmaps or risk matrices.

In his book The Failure of Risk Management, Douglas Hubbard lists the following “unintended effects of scales” and reasons for avoiding qualitative scoring in support of decision-making:

    • Range compression injects imprecision by grouping a wide range of values into a single score, potentially exaggerating otherwise relatively small changes (or “garbage times garbage is garbage squared” – see Collecting Data That Matters). Additionally, Hubbard cites research conducted in conjunction with IBM revealing that, when scoring with point scales, responses cluster around one or two values, further magnifying the effect of range compression (for example, when using a 5-point scale, 3s and 4s are disproportionately selected by users).
    • Partition dependence describes how inaccuracies arise when risk assessments are required to fall within a fixed range of values (or when the most likely value must equal the estimate in the model), denying the possibility of bias in the model. Because this prevents assessors and team members from specifying more precise, realistic values, it is futile to gather historic qualitative records and attempt to compare, consolidate or normalize responses generated by people using inconsistent scale points (e.g., 3-point, 5-point or 10-point scales).
    • Presumption of regular intervals: when applying an ordinal scale to quantify risk impact, having to assume that incremental scoring values roughly approximate the relative magnitudes of risk exposure can mischaracterize or distort the reality of a situation. Human psychology, and the different ways in which team members may interpret a change in score, further perpetuates inaccuracies.
    • Presumption of independence describes how qualitative scoring mechanisms are invariably unable to model dependencies or correlations between attributes or related risks.
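To see how range compression and the presumption of regular intervals play out in practice, here is a minimal Python sketch. The dollar thresholds, probability bands, and example risks are illustrative assumptions, not figures from Hubbard's book:

```python
# Illustrative only: map probability and impact onto a 5-point ordinal scale,
# as a typical risk matrix does, then compare with expected loss in dollars.

def impact_score(loss_usd):
    """Assumed 5-point impact bands (hypothetical thresholds)."""
    bands = [(100_000, 1), (500_000, 2), (2_000_000, 3), (10_000_000, 4)]
    for upper, score in bands:
        if loss_usd < upper:
            return score
    return 5

def probability_score(p):
    """Assumed 5-point probability bands (hypothetical thresholds)."""
    bands = [(0.05, 1), (0.20, 2), (0.50, 3), (0.80, 4)]
    for upper, score in bands:
        if p < upper:
            return score
    return 5

# Two hypothetical risks with very different expected losses...
risk_a = (0.21, 600_000)    # p = 0.21, impact $0.6M -> expected loss $126k
risk_b = (0.49, 1_900_000)  # p = 0.49, impact $1.9M -> expected loss $931k

for p, loss in (risk_a, risk_b):
    score = probability_score(p) * impact_score(loss)
    print(f"expected loss ${p * loss:,.0f}  ->  matrix score {score}")
# ...yet both collapse into the same 3 x 3 = 9 cell of the matrix.
```

Both risks score 9 and land in the same heatmap cell, even though their expected losses differ by roughly a factor of seven; this is exactly the imprecision that range compression injects.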

With that last bullet point in mind, and borrowing from an earlier blog post on Real-Time Scenario Planning, Christian Smart, in his book Solving for Project Risk Management: Understanding the Critical Role of Uncertainty in Project Management, states that, when making any decision of consequence, “A full probabilistic risk analysis is needed that accounts for multiple sources of uncertainty.”
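Smart's point can be illustrated with a toy Monte Carlo sketch (the distributions, parameters, and correlation value below are assumptions for illustration, not taken from his book): treating two cost risks as independent understates tail exposure when they are in fact correlated.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Two hypothetical cost uncertainties, each lognormal, median around $0.8M.
mu, sigma = 13.6, 0.5

# Case 1: treat the risks as independent (what a scoring matrix implicitly does).
indep = rng.lognormal(mu, sigma, n) + rng.lognormal(mu, sigma, n)

# Case 2: model a positive correlation (rho = 0.8) via correlated normals.
rho = 0.8
cov = [[sigma**2, rho * sigma**2], [rho * sigma**2, sigma**2]]
z = rng.multivariate_normal([mu, mu], cov, n)
corr = np.exp(z[:, 0]) + np.exp(z[:, 1])

# The P80 contingency is noticeably higher once correlation is modeled.
print(f"P80 independent: ${np.percentile(indep, 80):,.0f}")
print(f"P80 correlated:  ${np.percentile(corr, 80):,.0f}")
```

Because correlation widens the distribution of the total, the P80 contingency comes out materially higher in the correlated case; a scoring matrix, which presumes independence, cannot surface this.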


Moving towards risk maturity

Organizations with a well-established risk management capability have likely evolved beyond simple qualitative analyses; they may have relied on an external consultant to perform quantitative risk analysis (QRA), or possibly developed their own in-house expertise. The risk here, however, is one of status quo bias and sludge (or friction) that prevents change for the better. In such cases, teams may be compelled to continue using legacy tools that are no longer supported by the vendor. A good example here is Oracle’s Primavera Risk Analysis (OPRA).

In summary, common issues that teams experience when compelled to employ outdated software tools include:

  • System crashes and unplanned downtime
  • Increased costs associated with emergency software fixes
  • Decreased productivity while teams contend with slow interfaces
  • Security risks associated with otherwise avoidable vulnerabilities and bugs
  • Compliance issues if project and client data become exposed and unprotected

Essentially, the threat of using outdated tools is iatrogenic risk, or self-inflicted harm. If teams are unnecessarily distracted tap-dancing around this list of avoidable issues, they cannot devote their full attention to the primary task of analyzing risk. The ramifications could prove far more costly than the expense of employing modern, best-in-class software tools such as Safran Risk; as Benjamin Franklin once said, “An ounce of prevention is worth a pound of cure.”

Combined cost and schedule risk data in Safran Risk

Finally, there is one other major advantage in using Safran Risk over legacy tools. Since Safran combines all cost and schedule risk data in one repository, critical information is structured in a consistent manner (namely, machine-readable XML), helping organizations position themselves as future-ready and poised to share standardized data with specialized data science teams for machine learning or advanced analytics.

As a senior leader within the construction and engineering industry, have you equipped your project management delivery teams with best-in-class tools? Have you done all you can to set your people up for success? Reach out and contact us if you need a trusted advisor to help stress-test any of your assumptions.
