
Solving the Project Portfolio Paradox

December 07, 2022 | 5 minute read
Mark Turner

How to Optimise Project Risk Management Tools, Processes, and Techniques for Unique Projects  

Monthly Risk Data Reporting Issues

Each project is unique, with different requirements for different customers.  Project managers therefore need project management tools and techniques that are flexible enough to adapt to their own requirements and deliver their specific project needs. 

When projects create risk registers in independent systems, such as separate spreadsheets, it becomes difficult to integrate the different data sets to form a comprehensive picture of all the projects for the organisation.  This becomes particularly troublesome when each project identifies separate risk scoring mechanisms or uses different nomenclature to describe the risk approval states or completion status. 

The obvious solution is to create a single risk system to compile all the project risk registers.  However, this introduces constraints on the projects to follow common approaches which may not fit with their particular project requirements. Projects want to be free to use their own methodologies, whilst the organisation needs to standardise each project so that they can be compared with each other!  

Is it possible to create a common system within which each project can achieve its unique requirements? 

 

Differences within Common Risk Management Frameworks 

Most organisations will have adopted one of the common frameworks for standard risk methodology.  Both the PMI and ISO 31000 frameworks advocate similar approaches to managing risk.  These include contextualising the situation, identifying what could happen, defining what can be done, understanding the effects of action, then monitoring, reviewing and communicating results. 

Whilst such frameworks create sensible structures, the precise way in which each of these several steps is conducted is left open to interpretation by the organisations that adopt them and the projects that implement them.  It is within these implementations that the unique differences for projects are magnified. 

Some limited examples of these differences include: 

  • Deciding how to score a risk.  While using a matrix of likelihood and impact to define a risk score is still common, such matrices are widely open to interpretation.  The number of columns and rows changes.  The values within the cells change.  The qualitative meanings of likelihood and impact change. 
  • Related to the score is the use of either relative or absolute values when determining impacts.  Some projects will want to gauge their impacts against a fixed set of values (absolute) whilst others will determine impact relative to the size and duration of the specific project. 
  • Describing the “Before” and “After” position of the risk.  Some organisations favour a simple two-position state of “Pre-treatment” and “Post-treatment”.  Others use a three-position state of “Inherent Risk”, “Current Risk” and “Target Risk”, whilst others have also included a fourth position of “Residual Risk”. 
  • Application of different methods to threats and opportunities.  Whilst many organisations recognise that risks are composed of threats and opportunities, some projects take a separate approach to scoring and treating them. 
  • Defining a risk status.  Are all risks in a risk register equal and do they have the same level of maturity?  Usually, risk registers will hold a mix of well-articulated and comprehensively analysed risks along with others that are mere placeholders for further identification and review.  How each project handles such maturity issues can vary widely between projects. 
  • Risk Impact Data Entry.  Whilst some projects may only want to use single point values for their impact, others will require three point values for minimum, most likely and maximum.  Other, more mature projects may be looking for more sophisticated distributions such as BetaPert or Normal.  Finding a solution that caters for everyone is critical. 
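The scoring differences in the first two bullets can be made concrete with a minimal Python sketch.  The matrices, dimensions and values below are purely illustrative assumptions for this article, not Safran Risk Manager's actual data model:

```python
# Hypothetical sketch: two projects scoring a "medium likelihood,
# medium impact" risk with differently shaped likelihood x impact
# matrices. All values are illustrative only.

def score(matrix, likelihood, impact):
    """Look up a risk score in a project-specific matrix (1-indexed)."""
    return matrix[likelihood - 1][impact - 1]

# Project A uses a 3x3 matrix with scores from 1 to 9.
project_a = [
    [1, 2, 3],
    [2, 4, 6],
    [3, 6, 9],
]

# Project B uses a 5x5 matrix with scores from 1 to 25.
project_b = [
    [1, 2,  3,  4,  5],
    [2, 4,  6,  8,  10],
    [3, 6,  9,  12, 15],
    [4, 8,  12, 16, 20],
    [5, 10, 15, 20, 25],
]

# The "middle" cell of each matrix yields incomparable numbers:
print(score(project_a, 2, 2))  # 4, out of a maximum of 9
print(score(project_b, 3, 3))  # 9, out of a maximum of 25
```

A portfolio report that naively compared the raw scores 4 and 9 would rank Project B's risk as more severe, even though both sit at the midpoint of their respective scales.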

When stand-alone risk register systems are created to cater for the unique requirements of a particular project, the differences described above have no effect on the way the project performs.  There are no right or wrong answers when deciding how to manage the risks on any given project.  The crucial point is that someone, somewhere is looking at the risks! 

However, these small differences become a significant problem when different projects need to be reported against each other. 

 

The Programme Director’s Focus on Project Performance 

Programme Directors and other Senior Managers should care about individual project performance.  At the same time, they should be concerned about how each of those projects is performing as a part of an organisation, be that a dedicated programme or as a member of a loose project portfolio.  This is where the standard individual approach to risk registers starts to fall apart. 

Some of the common questions that Programme Directors should be asking include: 

  • Which project holds the biggest risk? 
  • Will we finish within budget? 
  • How much contingency does the portfolio carry? 
  • How much overall exposure to threats does the portfolio carry? 
  • How well are each of the projects treating their risks? 

Attempting to pull together answers to such questions is not a simple task when each of the risk registers has been constructed in different ways.  Indeed, obtaining the data in the first place can be tricky.  Ensuring that the data is up-to-date and not likely to change after analysis is another issue.  Once the data has been sourced, aligning it to a common standard can be problematic.  A significant risk on a million-dollar project may well be inconsequential on a billion-dollar project.  Reconciling the meaning of qualitative impact assessments between vastly different project types cannot be achieved easily without a common understanding of the qualitative meaning. 
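The million-dollar versus billion-dollar point can be sketched in a few lines of Python.  The figures are illustrative assumptions, not data from any real project:

```python
# Hypothetical sketch: the same absolute cost exposure means very
# different things relative to each project's budget. Figures are
# illustrative only.

def relative_impact(cost_impact, project_budget):
    """Express a risk's cost impact as a fraction of the project budget."""
    return cost_impact / project_budget

risk_cost = 250_000  # same absolute exposure on both projects

small = relative_impact(risk_cost, 1_000_000)      # 0.25 -> 25% of budget
large = relative_impact(risk_cost, 1_000_000_000)  # 0.00025 -> 0.025%

print(f"Small project: {small:.2%}, large project: {large:.3%}")
```

Without a shared quantitative baseline, both projects might label this risk "High" or "Low" according to their own qualitative scales, leaving the portfolio report unable to tell which label is the more meaningful.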

Pulling together such reports for a medium to large organisation can be a full-time job for a risk analyst. 

 

Bridging the Gap Between Project Risk Management and Portfolio Reporting  

Although at first sight it would appear that project risk management and portfolio reporting have very different requirements of risk registers, the problem may not be as acute as it first appears. 

Software such as Safran Risk Manager bridges the gap between the two needs by allowing sufficient customisation of the set-up to meet the unique needs of the project, whilst maintaining common baselines against which all projects can be reported on. 

These set-up specifications include: 

  • Being able to create an unlimited number of risk-scoring matrices.  Thus, each project can have its own unique scoring method reproduced within the system if so desired, although we advocate transitioning to a limited number of standard risk templates over time. 
  • Being able to utilise both absolute and relative scoring methodologies.  
  • Ensuring that the scoring method is aligned to quantitative values of cost and time, allowing the portfolio reporting to “reverse engineer” the meaning of different qualitative assessments and so present a standard definition when projects are being compared with each other. 
  • Ensuring that minimum standard values are captured within each project risk, so that these standard values can be aggregated across the portfolio.  Thus, metrics such as risk close-out performance and action completion performance can be measured and used to gauge the implementation of the risk management process. 
  • Enabling projects to lock their risk data following its review and allowing portfolio reports to be created from up-to-date data that is unlikely to change. 
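The "reverse engineering" of qualitative assessments described above can be sketched as follows.  The band names and boundaries are hypothetical assumptions for illustration, not Safran Risk Manager's actual configuration:

```python
# Hypothetical sketch: once each project's qualitative impact bands
# are anchored to absolute cost values, any project's rating can be
# restated against a single portfolio-standard scale.
# All band names and thresholds are illustrative assumptions.

def to_portfolio_band(cost_impact, portfolio_bands):
    """Map an absolute cost impact onto the portfolio's standard bands."""
    for label, upper_bound in portfolio_bands:
        if cost_impact <= upper_bound:
            return label
    return portfolio_bands[-1][0]

# A project defines its qualitative bands in absolute cost terms...
project_a_bands = {"Low": 50_000, "Medium": 250_000, "High": 1_000_000}

# ...and the portfolio defines one common scale for every project.
portfolio_bands = [
    ("Minor", 500_000),
    ("Moderate", 5_000_000),
    ("Major", float("inf")),
]

# Project A's "High" (up to $1m) is only "Moderate" at portfolio level.
print(to_portfolio_band(project_a_bands["High"], portfolio_bands))
```

Because every project's qualitative scale is tied back to cost and time values, the portfolio report can present one consistent definition of severity without forcing each project to abandon its own scoring method.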

Once portfolio reporting capability is integrated into the design of the software solution, generating portfolio reports moves from being a full-time task to the press of a button.   
As such, risk data analysts can use their time more effectively to better study the results of the report and aid in interpreting the meaning. 


Safran Risk Manager can: 

  • Empower projects to manage risks in the way that they want; 
  • Enable Programme Directors and Senior Management to receive standardised reports across the portfolio; 
  • Free up Risk Data Analysts from gathering data, enabling them to add more value through interpreting results. 

Are your project managers taking their own approaches to managing project risks, making those risks difficult to track and report on? 

Safran Risk Manager is specifically designed to let project managers meet their own project requirements, whilst still enabling the central coordination of data from within the same operating environment. This not only saves significant time, but also ensures consistent and accurate representation of the data. Get in touch with one of the Safran experts today and request a free trial of Safran Risk Manager.