
Category: Other topics

  • The Estimated Project Cost Distributions and the Final Project Cost


    This entry is part 2 of 2 in the series The Norwegian Governmental Project Risk Assessment Scheme

Everybody believes in the exponential law of errors: the experimenters, because they think it can be proved by mathematics; and the mathematicians, because they believe it has been established by observation. (Whittaker & Robinson, 1967)

The growing use of Cost Risk Assessment models in public projects has raised public concerns about their costs and about the models' ability to reduce cost overruns and correctly predict the projects' final costs. In this article we show, by calculating the probabilities of the projects' final costs, that the models are neither reliable nor valid. The final costs and their probabilities indicate that the estimated cost distributions do not adequately represent the actual cost distributions.

    Introduction

    In the previous post we found that the project cost distributions applied in the uncertainty analysis for 85 Norwegian public works projects were symmetric – and that they could be represented by normal distributions. Their P85/P50 ratios also suggested that they might come from the same normal distribution, since a normal distribution seemed to fit all the observed ratios. The quantile-quantile graph (q-q graph) below depicts this:

[Figure: Q-Q-plot#1]

As the normality test shows, it is not exactly normal ((As the graph shows, the distribution is slightly skewed to the right.)), but near enough normal for all practical purposes ((The corresponding linear regression gives a value of 0.9540 for the coefficient of determination (R²).)). This was not what we would have expected to find.

The question now is whether the use of normal distributions to represent the total project cost is a fruitful approach or not.

We will study this by looking at the S/P50 ratio, that is, the ratio between the final (actual) total project cost, S, and the P50 cost estimate. But first we will take a look at the projects' individual cost distributions.

    The individual cost distributions

By using the fact that the individual projects' costs are normally distributed, and by using the P50 and P85 percentiles, we can estimate the mean and variance of each project's cost distribution (Cook, 2010).
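
For a normal distribution the P50 percentile equals the mean, and the P85 percentile then pins down the standard deviation, so the two parameters can be backed out directly. Below is a minimal Python sketch of this calculation (our own illustration, assuming normality and using SciPy; the function name is ours):

    from scipy.stats import norm

    def normal_from_quantiles(p50, p85):
        """Estimate the mean and standard deviation of a normal distribution
        from its 50th and 85th percentiles (cf. Cook, 2010)."""
        mean = p50                    # for a normal distribution the median equals the mean
        z85 = norm.ppf(0.85)          # ~1.0364, the 85% quantile of the standard normal
        sd = (p85 - p50) / z85        # solve p85 = mean + z85 * sd for sd
        return mean, sd

    # Example: a relative cost distribution with P50 = 1.0 and P85 = 1.1
    mu, sigma = normal_from_quantiles(1.0, 1.1)
    print(mu, sigma)                  # 1.0, ~0.0965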

    In the graph below we have plotted the estimated relative cost distribution (cost/P50) for the projects with the smallest (light green) and the largest (dark green) variance. Between these curves lie the relative cost distributions for all the 85 projects.

    Between the light green and the blue curve we find 72 (85%) of the projects. The area between the blue and the dark green curve contains 13 of the projects – the projects with the highest variance:

[Figure: Relative-cost]

The differences between the individual relative cost distributions are therefore small. The average standard deviation for all 85 projects is 0.1035, with a coefficient of variation of 48%. For the 72 projects the average standard deviation is 0.0882, with a coefficient of variation of 36%. This is consistent with what we could see from the regression of P85 on P50.

It is bewildering that a portfolio of such diverse projects can end up with such a small range of normally distributed costs.

    The S/P50 ratio

    A frequency graph of the 85 observed ratios (S/P50) shows a pretty much symmetric distribution, with a pronounced peak. It is slightly positively skewed, with a mean of 1.05, a maximum value of 1.79, a minimum value of 0.41 and a coefficient of variation of 20.3%:

[Figure: The-S-and-P85-ratio]

At first glance this seems a reasonable result, even if the spread is large, given that the projects' total costs have normal distributions.

If the estimated cost distributions give a good representation of the underlying cost distributions, then S should also belong to that distribution. Keep in mind that the only value we know with certainty belongs to the underlying cost distribution is S, the final total project cost.

It is therefore of interest to find out if the S/P50 ratios are consistent with the estimated distributions. We will investigate this by different routes, first by calculating at what probability the deviation of S from P50 occurred.

    What we need to find is, for each of the 85 projects, the probability of having had a final cost ratio (S/P50):

i.    less than or equal to the observed ratio for projects with S > P50, and
ii.   greater than or equal to the observed ratio for projects with S < P50 (a computational sketch of this follows below).
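
Under the assumption that the relative cost distribution is normal with median 1 and a standard deviation derived from the P85/P50 ratio, these probabilities are simple one-sided tail probabilities. A minimal sketch (our own illustration; the project figures in the example are hypothetical):

    from scipy.stats import norm

    def final_cost_probability(S, p50, p85):
        """Probability, under the estimated normal cost distribution, of a final cost
        ratio less than or equal to (S > P50) or greater than or equal to (S < P50)
        the observed S/P50."""
        sigma = (p85 / p50 - 1.0) / norm.ppf(0.85)   # sd of the relative cost distribution
        ratio = S / p50
        if ratio >= 1.0:
            return norm.cdf(ratio, loc=1.0, scale=sigma)        # P(X <= observed ratio)
        return 1.0 - norm.cdf(ratio, loc=1.0, scale=sigma)      # P(X >= observed ratio)

    # Hypothetical project: P50 = 500, P85 = 550 and final cost S = 600
    print(final_cost_probability(600, 500, 550))                # ~0.98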

    The graph below depicts this. The maroon circles give the final cost ratio (S/P50) and their probabilities:

[Figure: Relative-cost#1]

A frequency graph of these probabilities should give a graph with a right tail, with most of the projects close to the 0.5 fractile (the median or P50 value), tapering off to the right as we move to higher fractiles.

We would thus anticipate that most projects have been finished at or close to the quality assurance scheme's median value, i.e., having had a probability of 0.5 of having had this or a lower (higher) value as final cost ratio, and that only a few would have significant deviations from this.

We would certainly not expect many of the final cost ratio probabilities to lie above the 0.85 fractile (P85).

The final cost probability frequency graph will thus give us some of the remaining information needed to assess the soundness of using methods and simulation techniques that end up with symmetric project cost distributions.

    Final project cost ratio probability

The result is given in the graph below, where the red bars indicate projects that, with probabilities of 85% or more, should have had lower (or higher) final cost ratios:

[Figure: Final-cost-probabilities]

The result is far from what we expected: the projects' probabilities are not concentrated at or close to 0.5, and the frequency graph is not tapering off to the right. On the contrary, the frequency of projects increases as we move to higher probabilities for the S/P50 ratios, and the highest frequency is for projects that with high probability should have had a much lower or a much higher final cost:

    1. The final project cost ratio probabilities have a mean of 0.83, a median at 0.84 and a coefficient of variation of 21%.
    2. Of the 85 projects, 51 % have final cost ratios that had a probability of 84% or less of being lower (or higher) and 49% have final cost ratios that had a probability of 85% or more of being lower (higher).

Almost fifty percent of the projects have thus been seriously under- or over-budgeted, or have had large cost over- or underruns – according to the cost distributions established by the QA2 process.

    The cumulative frequency distribution below gives a more detailed description:

[Figure: Final-cost-probabilities#1]

It is difficult to say in what range the probability for the S/P50 ratio should have been for the estimated cost distributions to be considered "acceptable". If the answer is "inside the interquartile range", then only 30% of the projects' final cost forecasts can be regarded as acceptable.

    The assumption of normally distributed total project costs

Based on the close relation between the P50 and P85 percentiles it is tempting to conclude that most, if not all, projects have had the same cost estimation and validation process: using the same family of cost distributions, with the same shape parameter, and assuming independent cost elements – ending up with a near normal or normal distribution for the projects' total cost. That is, all the P85/P50 ratios belong to the same distribution.

If this is the case, then the projects' final cost ratios should also belong to the same distribution. In the q-q graph below, we have added the S/P50 ratios (red) to the P85/P50 ratios (green) from the first q-q graph. If both ratios are randomly drawn from the same distribution, they should all fall close to the blue identity line:

[Figure: Q-Q-plot#3]

The ratios are clearly not normally distributed; the S/P50 ratios mostly end up in both tails, and the shape of the plotted ratios now indicates a distribution with heavy tails, or maybe bimodality. The two sets of ratios are hence most likely not from the same distribution.

A q-q graph with only the S/P50 ratios shows, however, that they might be normally distributed, but taken from a different distribution than the P85/P50 ratios:

[Figure: Q-Q-plot#2]

The S/P50 ratios are clearly normally distributed, as they fall very close to the identity line. The plotted ratios also indicate slightly lighter tails than the corresponding theoretical distribution.

That the two sets of ratios are so clearly different is not surprising, since the S/P50 ratios have a coefficient of variation of 20% while the same metric is 4.6% for the P85/P50 ratios ((The S/P50 ratios have a mean of 1.0486 and a standard deviation of 0.2133. The same metrics for the P85/P50 ratios are 1.1069 and 0.0511.)).

Since we want the S/P50 ratio to be as close to one as possible, we can regard the distribution of the S/P50 ratios as the QA2 scheme's error distribution. This brings us to the question of the reliability and validity of the QA2 "certified" cost risk assessment model.

    Reliability and Validity

The first question that needs to be answered is whether the certified model is reliable in producing consistent results, and the second is whether the cost model really measures what we want it to measure.

We will try to answer this by using the S/P50 probabilities defined above to depict:

1. The precision ((ISO 5725-Accuracy of Measurement Methods and Results.)) of the forecasted cost distributions, by the variance of the S/P50 probabilities, and
2. The accuracy (or trueness) of the forecasts, by the closeness of the mean of the probabilities for the S/P50 ratio to the forecasts' median value, 0.5.

    The first will give us an answer about the model’s reliability and the second an answer about the model’s validity:
[Figure: Accuracy-and-Precision]

A visual inspection of the graph gives an impression of both low precision and low accuracy:

• the probabilities have a coefficient of variation of 21% and a very high density of final project costs ending up in the cost distributions' tail ends, and
    • the mean of the probabilities is 0.83 giving a very low accuracy of the forecasts.

The conclusion then must be that the cost model(s) are neither reliable nor valid:

[Figure: Unreliable_and_unvalid]

Summary

    We have in these two articles shown that the implementation of the QA2 scheme in Norway ends up with normally distributed project costs.

    i.    The final cost ratios (S/P50) and their probabilities indicate that the cost distributions do not adequately represent the actual distributions.
ii.    The model(s) are neither reliable nor valid.
iii.    We believe that this is due to the choice of risk models and techniques, and not to the actual risk assessment work.
iv.    The only way to resolve this is to use proper Monte Carlo simulation models and techniques.

    Final Words

Our work reported in these two posts has been done out of pure curiosity after watching the program "Brennpunkt". The data used have been taken from the program's documentation. Based on the results, we feel that our work should be replicated by the Ministry of Finance, with data from the original sources, to weed out possible errors.

    It should certainly be worth the effort:

i.    The 85 projects here amount to NOK 221.005 million, with
ii.    NOK 28.012 million in total deviation ((The sum of all deviations from the P50 values.)) from the P50 values,
iii.    NOK 19.495 million unnecessarily held in reserve ((The P85 amount less the final project cost > zero.)) and
iv.    overruns ((The final project cost less the P50 amount > zero.)) of NOK 20.539 million.
v.    That is, nearly every fifth krone of the projects' budgets has been misallocated.
vi.    And there are many more projects to come.

    References

    Cook, J.D., (2010). Determining distribution parameters from quantiles.
    http://biostats.bepress.com/mdandersonbiostat/paper55

    Whittaker, E. T. and Robinson, G. (1967), Normal Frequency Distribution. Ch. 8 in The Calculus of Observations: A Treatise on Numerical Mathematics, 4th ed. New York: Dover, pp. 164-208, 1967. p. 179.

  • The implementation of the Norwegian Governmental Project Risk Assessment scheme


    This entry is part 1 of 2 in the series The Norwegian Governmental Project Risk Assessment Scheme

    Introduction

In Norway all public investment projects with an expected budget exceeding NOK 750 million have to undergo quality assurance ((The hospital sector has its own QA scheme.)). The oil and gas sector, and state-owned companies with responsibility for their own investments, are exempt.

The quality assurance scheme ((See The Norwegian University of Science and Technology (NTNU): The Concept Research Programme.)) consists of two parts: quality assurance of the choice of concept – QA1 (Norwegian: KS1) ((The one-page description for QA1 (Norwegian: KS1) has been taken from NTNU's Concept Research Programme.)) – and quality assurance of the management base and cost estimates, including uncertainty analysis for the chosen project alternative – QA2 (Norwegian: KS2) ((The one-page description for QA2 (Norwegian: KS2) has been taken from NTNU's Concept Research Programme.)).

This scheme is similar to many other countries' efforts to create better cost estimates for public projects. One such example is the Washington State Department of Transportation's Cost Risk Assessment (CRA) and Cost Estimate Validation Process (CEVP®) (WSDOT, 2014).

One of the main purposes of QA2 is to set a cost frame for the project. This cost frame is to be approved by the government and is usually set to the 85th percentile (P85) of the estimated cost distribution. The cost frame for the responsible agency is usually set to the 50th percentile (P50). The difference between P50 and P85 is set aside as a contingency reserve for the project – reserves that ideally should remain unused.

The Norwegian TV program "Brennpunkt", an investigative program broadcast by the state television channel NRK, shed light on the effects of this scheme ((The article also contains the data used here.)):

    The investigation concluded that the Ministry of Finance quality assurance scheme had not resulted in reduced project cost overruns and that the process as such had been very costly.

    This conclusion has of course been challenged.

The total cost of doing the risk assessments of the 85 projects was estimated at approximately NOK 400 million, or more than $60 million. In addition, in many cases, comes the cost of the quality assurance of the choice of concept, a cost that probably is much higher.

    The Data

The data was assembled during the investigation and consists of six sets, of which five contain information giving the P50 and P85 percentiles. The last set gives data on 29 projects finished before the QA2 regime was implemented (the data used in this article can be found as an XLSX file here):

    The P85 and P50 percentiles

    The first striking feature of the data is the close relation between the P85 and P50 percentiles:

In the graph above we have only used 83 of the 85 projects with known P50 and P85. The two that are omitted are large military projects. If they had been included, all the details in the graph would have disappeared. We will treat these two projects separately later in the article.

    A regression gives the relationship between P85 and P50 as:

P85 = (1.1001 ± 0.0113) × P50, with R = 0.9970

The regression gives an exceptionally good fit. Even if the graph shows some projects deviating from the regression line, most fall on or close to the line.

    With 83 projects this can’t be coincidental, even if the data represents a wide variety of government projects spanning from railway and roads to military hardware like tanks and missiles.

    The Project Cost Distribution

There is not much else to be inferred about the type of cost distribution from the graph. We do not know whether those percentiles came from fitted distributions or from estimated pdfs. This close relationship, however, leads us to believe that the individual projects' cost distributions are taken from the same family of distributions.

    If this family of distributions is a two-parameter distribution, we can use the known P50 and P85 ((Most two-parameter families have sufficient flexibility to fit the P50 and P85 percentiles.)) percentiles to fit  a number of distributions to the data.

This use of quantiles to estimate the parameters of an a priori distribution has been described as "quantile maximum probability estimation" (Heathcote et al., 2004). This can be done by fitting a number of different a priori distributions and then comparing the summed log likelihoods of the resulting best fits for each distribution, to find the "best" family of distributions.

    Using this we anticipate finding cost distributions with the following properties:

1. Nonsymmetrical, with a short left and a long right tail, i.e. positively skewed, looking something like the distribution below (taken from a real-life project):

    2. The left tail we would expect to be short after the project has been run through the full QA1 and QA2 process. After two such encompassing processes we would believe that most, even if not all, possible avenues for cost reduction and grounds for miscalculations have been researched and exhausted – leaving little room for cost reduction by chance.

    3. The right tail we would expect to be long taking into account the possibility of adverse price movements, implementation problems, adverse events etc. and thus the possibility of higher costs. This is where the project risk lies and where budget overruns are born.

    4. The middle part should be quite steep indicating low volatility around “most probable cost”.

    Estimating the Projects Cost Distribution

To simplify, we will assume that the above relation between P50 and P85 holds, and that it can be used to describe the resulting cost distribution from the projects' QA2 risk assessment work. We will hence use the P85/P50 ratio ((If costs are normally distributed, C ∼ N(m, s2), then Z = C/m ∼ N(1, s2/m2). If costs are gamma distributed, C ∼ Γ(a, λ) with rate λ, then Z = C/m ∼ Γ(a, m·λ); the shape is unchanged.)) to study the cost distributions. This implies that we are looking for a family of distributions that has P(X < 1) = 0.5 and P(X < 1.1) = 0.85 and is positively skewed. This change of scale will not change the shape of the density function, but simply scale the graph horizontally.

    Fortunately the MD Anderson Cancer Centre has a program – Parameter Solver ((The software can be downloaded from: https://biostatistics.mdanderson.org/SoftwareDownload/SingleSoftware.aspx?Software_Id=6 )) – that can solve for the distribution parameters given the P50 and P85 percentiles (Cook, 2010). We can then use this to find the distributions that can replicate the P50 and P85 percentiles.
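
The same fitting exercise can be done with SciPy instead of the Parameter Solver. Below is a minimal sketch (our own illustration; the function name and the bracket used for the root search are our assumptions):

    from scipy.stats import gamma
    from scipy.optimize import brentq

    def gamma_from_p50_p85(p50=1.0, p85=1.1):
        """Find a gamma shape and scale whose 50% and 85% percentiles match the given values."""
        def p85_error(shape):
            scale = p50 / gamma.ppf(0.5, shape)        # choose the scale so the median equals p50
            return gamma.ppf(0.85, shape, scale=scale) - p85
        shape = brentq(p85_error, 1.0, 1000.0)         # search for the shape matching the P85
        return shape, p50 / gamma.ppf(0.5, shape)

    k, theta = gamma_from_p50_p85()
    print(k, theta, 2 / k**0.5)    # shape (of the order of 100), scale, and skewness = 2/sqrt(shape)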

We find that distributions from the Normal, Log Normal, Gamma, Inverse Gamma and Weibull families will fit the percentiles. All the distributions, however, are close to symmetric, with the exception of the Weibull distribution, which has a left tail. A left tail in a budgeted cost distribution usually indicates over-budgeting with the aim of looking good after the project has been finished. We do not think that this would have passed the QA2 process – so we do not think that it has been used.

We believe that it is most likely that the distributions used are of the Normal, Gamma or of the Gamma-derived Erlang ((The Erlang distribution is a Gamma distribution with an integer shape parameter.)) type, due to their convolution properties. That is, sums of independent variables having one of these particular distributions (with a common rate in the Gamma case) come from the same distribution family. This makes it possible to simplify risk models of the cost-only variety by just summing up the parameters ((For the Normal, Gamma and Erlang distributions this implies summing up the parameters of the individual cost elements' distributions: If X and Y are normally distributed, X ∼ N(a, b2) and Y ∼ N(d, e2), and X is independent of Y, then Z = X + Y is N(a + d, b2 + e2), and if k is a strictly positive constant then Z = k·X is N(k·a, k2·b2). If X and Y are gamma distributed, X ∼ Γ(a, λ) and Y ∼ Γ(b, λ), and X is independent of Y, then X + Y is Γ(a + b, λ), and if k is a strictly positive constant then k·X is Γ(a, λ/k) (with λ as the rate parameter).)) of the cost elements to calculate the parameters of the total cost distribution.
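
The convolution property can be illustrated with a small sketch (our own illustration; the cost elements and the common scale parameter are made up):

    import numpy as np
    from scipy.stats import gamma

    shapes = np.array([40.0, 25.0, 15.0, 30.0])    # hypothetical gamma cost elements, common scale
    scale = 2.0                                     # common scale parameter

    # Closed form: the total cost is gamma with the summed shape and the same scale
    total = gamma(a=shapes.sum(), scale=scale)
    print(total.ppf(0.5), total.ppf(0.85))          # P50 and P85 of the total cost

    # Monte Carlo check: summing simulated cost elements reproduces the same percentiles
    draws = gamma.rvs(a=shapes, scale=scale, size=(100_000, len(shapes))).sum(axis=1)
    print(np.percentile(draws, [50, 85]))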

This has the benefit of giving the closed form of the total cost distribution, in contrast to Monte Carlo simulation, where the closed form of the distribution, if it exists, can only be found through the kind of exercise we have done here.

This property can also be a trap, as the adding up of cost items quickly gives the distribution of the sum symmetrical properties, before it finally ends up as a Normal distribution ((The Central Limit Theorem gives the error in a normal approximation to the gamma distribution as n^(-1/2) as the shape parameter n grows large. For large k the gamma distribution X ∼ Γ(k, θ) converges to a normal distribution with mean µ = k·θ and variance s2 = k·θ2. In practice it will approach a normal distribution when the shape parameter is > 10.)).
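
How quickly this happens can be checked numerically; with a shape parameter of the order found above, the gamma and its normal approximation are almost indistinguishable. A minimal sketch (our own illustration, with purely illustrative parameter values):

    from scipy.stats import gamma, norm

    k, theta = 110.0, 1.0 / 109.7                   # illustrative shape and scale, mean close to 1
    g = gamma(a=k, scale=theta)
    n = norm(loc=k * theta, scale=theta * k**0.5)   # normal with the same mean and variance

    for q in (0.05, 0.50, 0.85, 0.95):
        print(q, round(g.ppf(q), 4), round(n.ppf(q), 4))   # the two sets of quantiles nearly coincide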

The figures in the graph below give the shapes of the Gamma and Normal distributions with the percentiles P50 = 1 and P85 = 1.1:

The Normal distribution is symmetric and the Gamma distribution is also, for all practical purposes, symmetric. We can therefore conclude that the distributions for total project cost used in the 83 projects have been symmetric or close to symmetric.
This result is quite baffling; it is difficult to understand why the project cost distributions should be symmetric.

The only economic explanation has to be that the expected costs of the projects are estimated with such precision that any positive or negative deviations are mere flukes, chance events outside foreseeability, and thus not included in the risk calculations.

    But is this possible?

    The two Large Military Projects

The two projects omitted from the regression above, new fighter planes and frigates, have P85/P50 ratios of 1.19522 and 1.04543, compared to the regression estimate of 1.1001 for the 83 other projects. They are however not atypical; others among the 83 projects have both smaller (1.0310) and larger (1.3328) values for the P85/P50 ratio. Their sheer size, however, with P85 values of 68 and 18 billion NOK respectively, would give them too high a weight in a joint regression compared to the other projects.

Nevertheless, the same comments made above for the other 83 projects apply to these two projects. A regression with these projects included would have given the relationship between P85 and P50 as:

P85 = (1.1751 ± 0.0106) × P50, with R = 0.9990.

    And as shown in the graph below:

This graph again depicts the surprisingly low variation in all the projects' P85/P50 ratios:

The ratios have, in point of fact, a coefficient of variation of only 4.7% and a standard deviation of 0.052 – for all the 85 projects.

    Conclusions

    The Norwegian quality assurance scheme is obviously a large step in the direction of reduced budget overruns in public projects. (See: Public Works Projects)

Even if the final risk calculation somewhat misses the probable project cost distribution, the exercises described in the quality assurance scheme will heighten both risk awareness and the understanding of uncertainty – all contributing to the common goal: reduced budget under- and overruns and reduced project cost.

It is nevertheless important that all elements in the quality assurance process catch the project uncertainties in a correct way, describing each project's specific uncertainty and its possible effects on project cost and implementation (See: Project Management under Uncertainty).

    From what we have found: widespread use of symmetric cost distributions and possibly the same type of distributions across the projects, we are a little doubtful about the methods used for the risk calculations. The grounds for this are shown in the next two tables:

The skewness ((The skewness is equal to two divided by the square root of the shape parameter.)) given in the table above depends only on the shape parameter. The Gamma distribution will approach a normal distribution when the shape parameter is larger than ten. In this case all the projects' cost distributions approach a normal distribution – that is, a symmetric distribution with zero skewness.

To us, this indicates that the projects' cost distributions reflect the engineers' normal calculation "errors" more than the real risk of budget deviations due to implementation risk.

The kurtosis (excess kurtosis) indicates the form of the peak of the distribution. Normal distributions have zero excess kurtosis (mesokurtic), while distributions with a high peak have positive kurtosis (leptokurtic).

It is stated in the QA2 that the uncertainty analysis shall have "special focus on … event uncertainties represented by a binary probability distribution". If this part had been implemented, we would have expected at least more flat-topped curves (platykurtic) with negative kurtosis or, better, not only unimodal distributions. It is hard to see traces of this in the material.

So, what can we so far deduce that the Norwegian government gets from the effort it spends on risk assessment of its projects?

    First, since the cost distributions most probably are symmetric or near symmetric, expected cost will probably not differ significantly from the initial project cost estimate (the engineering estimate) adjusted for reserves and risk margins. We however need more data to substantiate this further.

    Second, the P85 percentile could have been found by multiplying the P50 percentile by 1.1. Finding the probability distribution for the projects’ cost has for the purpose of establishing the P85 cost figures been unnecessary.

    Third, the effect of event uncertainties seems to be missing.

Fourth, with such a variety of projects, it seems strange that the distributions for total project cost end up being so similar. There have to be differences in project risk between building a road and building a new opera house.

    Based on these findings it is pertinent to ask what went wrong in the implementation of QA2. The idea is sound, but the result is somewhat disappointing.

The reason for this can be that the risk calculations are done just by assigning probability distributions to the "aggregated and adjusted" engineering cost estimates, and not by developing a proper simulation model for the project, taking into consideration uncertainties in all factors like quantities, prices, exchange rates, project implementation etc.

We will come back in a later post to the question of whether the risk assessment nevertheless reduces the budget under- and overruns.

    References

    Cook, John D. (2010), Determining distribution parameters from quantiles. http://www.johndcook.com/quantiles_parameters.pdf

Heathcote, A., Brown, S. & Cousineau, D. (2004). QMPE: estimating Lognormal, Wald, and Weibull RT distributions with a parameter-dependent lower bound. Journal of Behavior Research Methods, Instruments, and Computers (36), pp. 277-290.

    Washington State Department of Transportation (WSDOT), (2014), Project Risk Management Guide, Nov 2014. http://www.wsdot.wa.gov/projects/projectmgmt/riskassessment


  • Project Management under Uncertainty


    You can’t manage what you can’t measure
    You can’t measure what you can’t define
    How do you define something that isn’t known?

    DeMarco, 1982

    1.     Introduction

By the term project we usually understand a unique, one-time operation designed to accomplish a set of objectives in a limited time frame. This could be building a new production plant, designing a new product or developing new software for a specific purpose.

A project usually differs from normal operations by being a one-time operation, having a limited time horizon and budget, having unique specifications, and working across organizational boundaries. A project can be divided into four phases: project definition, planning, implementation and project phase-out.

    2.     Project Scheduling

The project planning phase, which we will touch upon in this paper, consists of breaking down the project into tasks that must be accomplished for the project to be finished.

    The objectives of the project scheduling are to determine the earliest start and finish of each task in the project. The aim is to be able to complete the project as early as possible and to calculate the likelihood that the project will be completed within a certain time frame.

    The dependencies[i] between the tasks determine their predecessor(s) and successor(s) and thus their sequence (order of execution) in the project[1]. The aim is to list all tasks (project activities), their sequence and duration[2] (estimated activity time length). The figure[ii] below shows a simple project network diagram, and we will in the following use this as an example[iii].

[Figure: Sample-project#2]

This project thus consists of a linear flow of coordinated tasks where in fact time, cost and performance can vary randomly.

    A convenient way of organizing this information is by using a Gantt[iv] chart. This gives a graphic representation of the project’s tasks, the expected time it takes to complete them, and the sequence in which they must be done.

There will usually be more than one path (sequence of tasks) from the first to the last task in a project. The path that takes the longest time to complete is the project's critical path. The objective of all this is to identify this path and the time it takes to complete it.

    3.     Critical Path Analysis

    The Critical Path (CP)[v] is defined as the sequence of tasks that, if delayed – regardless of whether the other project tasks are completed on or before time – would delay the entire project.

The critical path is hence based on the forecasted duration of each task in the project. These durations are given as single point estimates[3], implying that the durations of the project's tasks contain no uncertainty (they are deterministic). This is obviously wrong and will often lead to unrealistic project estimates due to the inherent uncertainty in all project work.

    Have in mind that: All plans are estimates and are only as good as the task estimates.

    As a matter of fact many different types of uncertainty can be expected in most projects:

1. Ordinary uncertainty, where time, cost and performance can vary randomly, but inside predictable ranges. Variations in task durations will cause the project's critical path to shift, but this can be predicted and the variation in total project time can be calculated.
2. Foreseen uncertainty, where a few known factors (events) can affect the project, but in an unpredictable way[4]. These are projects where tasks and events occur probabilistically and contain logical relationships of a more complicated nature, e.g. where, following a specific event, some tasks are undertaken with certainty while others are undertaken probabilistically (Elmaghraby, 1964) and (Pritsker, 1966). The distribution for total project time can still be calculated, but will include variation from the chance events.
3. Unforeseen uncertainty, where one or more factors (events) cannot be predicted. This implies that decision points about the project's implementation have to be included at one or more points in the project's execution.

As a remedy for critical path analysis' inability to handle ordinary uncertainty, the Program Evaluation and Review Technique (PERT[vi]) was developed. PERT is a variation on critical path analysis that takes a slightly more skeptical view of the duration estimates made for each of the project tasks.

PERT uses a three-point estimate,[vii] based on forecasts of the shortest possible task duration, the most likely task duration and the worst-case task duration. The task's expected duration is then calculated as a weighted average of these three duration estimates.

This is assumed to help bias time estimates away from the unrealistically short time-scales that are often the case.

    4.     CP, PERT and Monte Carlo Simulation

The two most important questions we want answered are:

• How long will it take to do the project?
• How likely is the project to succeed within the allotted time frame?

In this example the project's time frame is set to 67 weeks.

We will use the critical path method, PERT and Monte Carlo simulation to try to answer these questions, but first we need to make some assumptions about the variability of the estimated task durations. We will assume that the durations are triangularly distributed and that the actual durations can be both higher and lower than their most likely value.

The distributions will probably have a right tail, since underestimation is common when assessing time and cost (positively skewed), but sometimes people deliberately overestimate to avoid being responsible for later project delay (negatively skewed). The assumptions about the task durations are given in the table below:

[Table: Project-table#2 – task duration assumptions]

The corresponding paths, critical path and project durations are given in the table below. The critical path method finds path #1 (tasks: A, B, C, D, E) as the critical path and thus the expected project duration to be 65 weeks. The second question, however, cannot be answered by using this method. So, in regard to probable deviations from the expected project duration, the project manager is left without any information.

By using PERT, calculating expected durations and their standard deviation as described in endnote vii, we find the same critical path and roughly the same expected project duration (65.5 weeks), but since we can now calculate the estimate's standard deviation we can find the probability of the project being finished inside the project's time frame.

[Table: Project-table#1 – paths, critical path and project durations]

By assuming that the sum of task durations along the critical path is approximately normally distributed, we find the probability of having the project finished inside the time frame of 67 weeks to be 79%. Since this is a fairly high probability of project success, the manager can rest contentedly – or can she?
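
A sketch of the PERT calculation behind such a probability is given below. The three-point estimates are made up for illustration (the article's own task estimates are in the tables shown as figures above), so the numbers will not reproduce the 79% figure exactly:

    from math import sqrt
    from scipy.stats import norm

    # Hypothetical three-point estimates (weeks) for the critical-path tasks A..E
    tasks = {"A": (8, 10, 14), "B": (12, 15, 20), "C": (10, 12, 16),
             "D": (14, 18, 24), "E": (7, 9, 12)}

    # PERT L-estimators: E = (min + 4*ml + max) / 6, SD = (max - min) / 6
    expected = sum((lo + 4 * ml + hi) / 6 for lo, ml, hi in tasks.values())
    variance = sum(((hi - lo) / 6) ** 2 for lo, ml, hi in tasks.values())

    deadline = 67
    print(expected, sqrt(variance))                                 # expected duration and its SD
    print(norm.cdf(deadline, loc=expected, scale=sqrt(variance)))   # P(finished by the deadline)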

If we repeat the exercise, but now using Monte Carlo simulation, we find a different answer. We can no longer with certainty establish a critical path. The tasks' variability can in fact give three different critical paths. The most likely is path #1 as before, but there is a close to 30% probability that path #4 (tasks: A, B, C, G, E) will be the critical path. It is also possible, even if the probability is small (<5%), that path #3 (tasks: A, F, G, E) is the critical path (see figure below).

[Figure: Path-as-Critical-path]

So, in this case we cannot use the critical path method; it will give wrong answers and misleading information to the project manager. More important is the fact that the method cannot use all the information we have about the project's tasks, that is to say their variability.

A better approach is to simulate project time to find the distribution for total project duration. This distribution will then include the duration of all critical paths that may arise during the project simulation, given by the red curve in the figure below:

[Figure: Path-Durations-(CP)]

This figure gives the cumulative probability distribution for the possible critical paths' durations (paths #1, #3 and #4) as well as for the total project duration. Since path #1 consistently has long durations, it is only in 'extreme' cases that path #4 is the critical path. Most striking is the large variation in path #3's duration and the fact that it can, in some of the simulation's runs, end up as the critical path.

    The only way to find the distribution for total project duration is for every run in the simulation to find the critical path and calculate its duration.
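
A minimal sketch of this per-run critical path calculation is given below (our own illustration; the triangular parameters mirror the kind of network discussed above, but the numbers are made up):

    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000

    # Hypothetical triangular (min, most likely, max) durations in weeks for tasks A..G
    tri = {"A": (8, 10, 14), "B": (12, 15, 20), "C": (10, 12, 16), "D": (14, 18, 24),
           "E": (7, 9, 12), "F": (20, 28, 40), "G": (10, 14, 22)}
    samples = {t: rng.triangular(lo, ml, hi, n) for t, (lo, ml, hi) in tri.items()}

    # The start-to-finish paths of the example network
    paths = {"#1": "ABCDE", "#3": "AFGE", "#4": "ABCGE"}
    durations = {p: sum(samples[t] for t in s) for p, s in paths.items()}

    project = np.maximum.reduce(list(durations.values()))      # per run: the critical path's duration
    critical = np.argmax(np.vstack(list(durations.values())), axis=0)

    print("expected project duration:", project.mean())
    print("P(finish within 67 weeks):", (project <= 67).mean())
    for i, p in enumerate(paths):
        print(f"path {p} critical in {(critical == i).mean():.1%} of the runs")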

    We now find the expected total project duration to be 67 weeks, one week more than what the CPM and PERT gave, but more important, we find that the probability of finishing the project inside the time frame is only 50%.

    By neglecting the probability that the critical path might change due to task variability PERT is underestimating project variance and thus the probability that the project will not finish inside the expected time frame.

    Monte Carlo models like this can be extended to include many types of uncertainty belonging to the classes of foreseen and unforeseen uncertainty. However, it will only be complete when all types of project costs and their variability are included.

    5.     Summary

Key findings in comparative studies show that using Monte Carlo simulation along with project planning techniques allows a better understanding of project uncertainty and its risk level, and provides the project team with the ability to grasp various possible courses of the project within one simulation procedure.

    Notes

[1] This can be visualized in a Precedence Diagram, also known as a Project Network Diagram. In a Network Diagram, the start of an activity must be linked to the end of another activity.

    [2] An event or a milestone is a point in time having no duration. A Precedence Diagram will always have a Start and an End event.

    [3] As a “best guess” or “best estimate” of a fixed or random variable.

    [4] E.g. repetition of tasks.

    Endnotes

    [i] There are four types of dependencies in a Precedence Diagram:

    1. Finish-Start: A task cannot start before a previous task has ended.
    2. Start-Start: There is a defined relationship between the start of tasks.
    3. Finish-Finish: There is a defined relationship between the end dates of tasks.
    4. Start-Finish: There is a defined relationship between the start of one task and the end date of a successor task.

    [ii] Taken from the Wikipedia article: Critical path drag, http://en.wikipedia.org/wiki/Critical_path_drag

[iii] The diagram contains more information than we will use. The diagram is mostly self-explanatory; however, Float (or Slack) and Drag are defined, respectively, as the delay an activity can tolerate before the project comes in late, and how much a task on the critical path is delaying project completion (Devaux, 2012).

    [iv] The Gantt chart was developed by Henry Laurence Gantt in the 1910s.

    [v] The Critical Path Method (CPM) was developed in the late 1950s by Morgan R. Walker of DuPont and James E. Kelley, Jr. of Remington Rand.

[vi] The Program Evaluation and Review Technique (PERT) was developed by Booz Allen Hamilton and the U.S. Navy, at about the same time as the CPM. Key features of a PERT network are:

    1. Events must take place in a logical order.
    2. Activities represent the time and the work it takes to get from one event to another.
    3. No event can be considered reached until ALL activities leading to the event are completed.
    4. No activity may be begun until the event preceding it has been reached.

[vii] Assuming that a process with a double-triangular distribution underlies the actual task durations, the three estimated values (min, ml, max) can then be used to calculate the expected value (E) and standard deviation (SD) as L-estimators, with: E = (min + 4·ml + max)/6 and SD = (max − min)/6.

    E is thus a weighted average, taking into account both the most optimistic and most pessimistic estimates of the durations provided. SD measures the variability or uncertainty in the estimated durations.

    References

    Devaux, Stephen A.,(2012). “The Drag Efficient: The Missing Quantification of Time on the Critical Path” Defense AT&L magazine of the Defense Acquisition University. Retrieved from http://www.dau.mil/pubscats/ATL%20Docs/Jan_Feb_2012/Devaux.pdf

    DeMarco, T, (1982), Controlling Software Projects, Prentice-Hall, Englewood Cliffs, N.J., 1982

    Elmaghraby, S.E., (1964), An algebra for the Analyses of Generalized Activity Networks, Management Science, 10,3.

    Pritsker, A. A. B. (1966). GERT: Graphical Evaluation and Review Technique (PDF). The RAND Corporation, RM-4973-NASA.

  • The role of events in simulation modeling


    This entry is part 2 of 2 in the series Handling Events

    “With a sample size large enough, any outrageous thing is likely to happen”

    The law of truly large numbers (Diaconis & Mosteller, 1989)

    Introduction

    The need for assessing the impact of events with binary[i] outcomes, like loan defaults, occurrence of recessions, passage of a special legislation, etc., or events that can be treated like binary events like paradigm shifts in consumer habits, changes in competitor behavior or new innovations, arises often in economics and other areas of decision making.

To the last we can add political risks, both macro and micro: conflicts, economic crises, capital controls, exchange controls, repudiation of contracts, expropriation, quality of bureaucracy, government project decision-making, regulatory framework conditions, changes in laws and regulations, changes in tax laws and regimes etc.[ii] Political risk acts like a discontinuity and usually becomes more of a factor as the time horizon of a project gets longer.

    In some cases when looking at project feasibility, availability of resources, quality of work force and preparations can also be treated as binary variables.

Events with binary outcomes have only two states: either the event happens or it does not, the presence or absence of a given exposure. We may extend this to whether it may happen next year or not, or whether it can happen at some other point in the project's timeframe.

We have two types of events: external events, originating from outside with the potential to create effects inside the project, and events originating inside the project and having a direct impact on the project. By the term project we will in the following mean a company, plant, operation etc. The impact will eventually be of an economic nature, and it is this we want to put a value on.

    External events are normally grouped into natural events and man-made events. Examples of man-made external events are changes in laws and regulations, while extreme weather conditions etc. are natural external events.

    External events can occur as single events or as combinations of two or more external events. Potential combined events are two or more external events having a non-random probability of occurring simultaneously, e.g., quality of bureaucracy and government project decision-making.

    Identification of possible external events

    The identification of possible events should roughly follow the process sketched below[iii]:

1. Screening for Potential Single External Events – Identify all natural and man-made external events threatening the project implementation (independent events).
2. Screening for Potential Combined External Events – Combine single external events into various combinations that are both imaginable and may possibly threaten the project implementation (correlated events).
3. Relevance Screening – Screen out potential external events, either single or combined, that are not relevant to the project. By 'not relevant' we will understand that they cannot occur or that their probability of occurrence is evidently 'too low'.
4. Impact Screening – Screen out potential external events, either single or combined, for which no possible project impact can be identified.
5. Event Analysis – Acquire and assess information on the probability of occurrence, at each point in the future, for each relevant event.
6. Probabilistic Screening – Accept the risk contribution of an external event, or plan appropriate project modifications to reduce unacceptable contributions to project risk.

    Project Impact Analysis; modelling and quantification

    It is useful to distinguish between two types of forecasts for binary outcomes: probability[iv] forecasts and point forecasts.  We will in the following only use probability forecasts since we also want to quantify forecast uncertainty, which is often ignored in making point forecasts. After all, the primary purpose of forecasting is to reduce uncertainty.

    We assume that none of the possible events is in the form of a catastrophe.  A mathematical catastrophe is a point in a model of an input-output system, where a vanishingly small change in an exogenous variate can produce a large change in the output. (Thom, 1975)

    Current practice in public projects

The usual approach, at least for many public projects[v], is to first forecast the total cost distribution from the cost model and then add, as a second cost layer outside the model, the effects of possible events. These events will be discoveries about the quality of planning, availability of resources, the state of cooperation with other departments, difficulties in getting decisions, etc.

In addition, these costs are more often than not calculated as a probability distribution of lump sums and then added to the distribution of the estimated expected total costs. The consequences of this are that:

1. the 'second cost layer' introduces new lump sum cost variables,
2. the events are unrelated to the variates in the cost model,
3. the mechanism of cost transferal from the events is rarely clearly stated, and
4. for a project with a time frame of several years, where the net present value of project costs is the decisive variable, this amounts to adding a lump sum to the first year's cost.

Thus, using this procedure to identify project tolerability to external events can easily lead decision and policy makers astray.

We will therefore propose another approach, with analogies taken from time series analysis – intervention analysis. This approach to intervention analysis, based on mixed autoregressive moving average (ARMA[vi]) models, was introduced by Box & Tiao (1975). Intervention models link one or more input (or independent) variates to a response (or dependent) variate by a transfer function.

    Handling Project Interventions

In time series analysis we try to discern the effects of an intervention after the fact. In our context we are trying to establish what can happen if some event intervenes in our project. We will do this by using transfer functions. Transfer functions are models of how the effects from the event are translated into future values of y. This implies that we must:

1. Forecast the probability p_t that the event will happen at time t,
2. Select the variates (response variables) in the model that will be affected, and
3. Establish a transfer function for each response variable, giving the expected effect (response) on that variate.

The event can trigger a response at time T[vii] in the form of a step[viii] (S_t) (e.g. a change in tax laws) or a pulse (P_t) (e.g. a change of supplier). We will denote this as:

S_t = 0 when t < T, and S_t = 1 when t ≥ T

P_t = 0 when t ≠ T, and P_t = 1 when t = T

    For one exogenous variate x and one response variate y, the general form of an intervention model is:

y_t = [w(B) / d(B)] x_(t-s) + N(e_t)

where B is the backshift operator (B^s shifts the time series s steps backward) and N(e_t) is an appropriate noise model for y. The delay between a change in x and a response in y is s. The intervention model has both a numerator and a denominator polynomial.

    The numerator polynomial is the moving average polynomial (MA)[ix]. The numerator parameters are usually the most important, since they will determine the magnitude of the effect of x on y.

    The denominator polynomial is the autoregressive polynomial (AR)[x]. The denominator determines the shape of the response (growth or decay).

    Graphs of some common intervention models are shown in the panel (B) below taken from the original paper by Box & Tiao, p 72:

[Figure: Effect-response]

As the figures above show, a large number of different types of responses can be modelled using relatively simple models. In many cases a step will not give an immediate response but a more dynamic one, and a response to a pulse may or may not decay all the way back. Most response models have a steady state solution that will be reached after a number of periods; model c) in the panel above, however, will continue to grow to infinity. Model a) gives a permanent change, positive (a carbon tax) or negative (new cheaper technology). Model b) gives a more gradual change, positive (implementation of new technology) or negative (the effect of crime-reducing activities). The response to a pulse can be positive or negative (e.g. loss of a supplier), with a decay that can continue for a short or a long period, either all the way back or to a new permanent level.
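
A minimal sketch of how such responses can be generated inside a simulation model is given below (our own illustration; the parameter values are arbitrary and the noise term is omitted):

    import numpy as np

    def response(x, w0=1.0, d1=0.6, s=1):
        """First-order transfer function y_t = w0 * x_(t-s) + d1 * y_(t-1),
        a simple special case of y_t = [w(B)/d(B)] x_(t-s)."""
        y = np.zeros(len(x))
        for t in range(len(x)):
            lagged_x = x[t - s] if t >= s else 0.0
            y[t] = w0 * lagged_x + (d1 * y[t - 1] if t > 0 else 0.0)
        return y

    T, horizon = 5, 20
    step = np.array([1.0 if t >= T else 0.0 for t in range(horizon)])    # S_t, e.g. a change in tax law
    pulse = np.array([1.0 if t == T else 0.0 for t in range(horizon)])   # P_t, e.g. a one-off supplier change

    print(response(step))    # gradual rise towards the steady state w0 / (1 - d1)
    print(response(pulse))   # jump at T + s followed by a geometric decay back to zero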

    Summary

    By using analogies from intervention analysis a number of interesting and important issues can be analyzed:

• If two events affect one response variable, will the combined effect be less than or greater than the sum of both?
• Will one event affecting more than one response variable increase the effect dramatically?
• Is there a risk of calculating the same cost twice?
• If an event occurs at the end of a project, will the project be prolonged? And what will the costs be?
• Etc.

Questions like these can never be analyzed when using a 'second layer lump sum' approach. Even more important is the possibility of incorporating the responses to exogenous events inside the simulation model, thus having the responses at the correct point on the timeline and thereby a correct net present value for costs, revenues and company or project value.

Because net present values are what this is all about, aren't they? After all, the result will be used for decision making!

    REFERENCES

    Box, G.E.P.  and Tiao, G.C., 1975.  Intervention analysis with application to economic and environmental problems.  J. Amer. Stat. Assoc. 70, 349:  pp70-79.

    Diaconis, P. and Mosteller, F. , 1989. Methods of Studying Coincidences. J. Amer. Statist. Assoc. 84, 853-861.

    Knochenhauer, M & Louko, P., 2003. SKI Report 02:27 Guidance for External Events Analysis. Swedish Nuclear Inspectorate.

    Thom R., 1975. Structural stability and morphogenesis. Benjamin Addison Wesley, New York.

    ENDNOTES

[i] Events with binary outcomes have only two states: either the event happens or it does not, the presence or absence of a given exposure. The event can be described by a Bernoulli distribution. This is a discrete distribution having two possible outcomes, labelled n = 0 and n = 1, in which n = 1 ("the event occurs") has probability p and n = 0 ("it does not occur") has probability q = 1 − p, where 0 < p < 1. It therefore has probability density function P(n) = 1 − p for n = 0 and P(n) = p for n = 1, which can also be written P(n) = p^n (1 − p)^(1−n).

    [ii] ‘’Change point’’ (“break point” or “turning point”) usually denotes the point in time where the change takes place and “regime switching” the occurrence of a different regime after the change point.

    [iii] A good example of this is Probabilistic Safety Assessments (PSA). PSA is an established technique to numerically quantify risk measures in nuclear power plants. It sets out to determine what undesired scenarios can occur, with which likelihood, and what the consequences could be (Knochenhauer & Louko, 2003).

    [iv] A probability is a number between 0 and 1 (inclusive). A value of zero means the event in question never happens, a value of one means it always happens, and a value of 0.5 means it will happen half of the time.

    Another scale that is useful for measuring probabilities is the odds scale. If the probability of an event occurring is p, then the odds (W) of it occurring are p: 1- p, which is often written as  W = p/ (1-p). Hence if the probability of an event is 0.5, the odds are 1:1, whilst if the probability is 0.1, the odds are 1:9.

Since odds can take any value from zero to infinity, log(p/(1 − p)) ranges from minus infinity to infinity. Hence, we can model g(p) = log[p/(1 − p)] rather than p. As g(p) goes from minus infinity to infinity, p goes from 0 to 1.

    [v] https://www.strategy-at-risk.com/2013/10/07/distinguish-between-events-and-estimates/

    [vi] In the time series econometrics literature this is known as an autoregressive moving average (ARMA) process.

    [vii] Interventions extending over several time intervals can be represented by a series of pulses.

[viii] (1 − B)·step = pulse, i.e. a pulse is a first-differenced step; and step = pulse/(1 − B), i.e. a step is a cumulated pulse.

Therefore, a step input to a stationary series produces a response identical to that of a pulse input to an integrated I(1) series.

[ix] w(B) = w0 + w1·B + w2·B^2 + …

[x] d(B) = 1 + d1·B + d2·B^2 + …, where −1 < d < 1.

     

  • Risk Appetite and the Virtues of the Board


    This entry is part 1 of 1 in the series Risk Appetite and the Virtues of the Board

     

     

     

This article consists of two parts: Risk Appetite and The Virtues of the Board (upcoming). The first part can be read as a standalone article; the second will be based on concepts developed in this part.

    Risk Appetite

    Multiple sources of risk are a fact of life. Only rarely will decisions concerning various risks be neatly separable. Intuitively, even when risks are statistically independent, bearing one risk should make an agent less willing to bear another. (Kimball, 1993)

Risk appetite – the board's willingness to bear risk – will depend both on the degree to which it dislikes uncertainty and on the level of that uncertainty. It is also likely to shift as the board responds to emerging market and macroeconomic uncertainty and to events of financial distress.

    The following graph of the “price of risk[1]” index developed at the Bank of England shows this. (Gai & Vause, 2005)[2] The estimated series fluctuates close to the average “price of risk” most of the time, but has sharp downward spikes in times of financial crises. Risk appetite is apparently highly affected by exogenous shocks:

[Figure: Estimated_Risk_appetite_BE]

In adverse circumstances, it follows that the board and the investors will require a higher expected equity value of the firm to hold shares – an enhanced risk premium – and that their appetite for increased risk will be low.

    Risk Management and Risk Appetite

    Despite widespread use in risk management[3] and corporate governance literature, the term ‘risk appetite’[i] lacks clarity in how it is defined and understood:

    • The degree of uncertainty that an investor is willing to accept in respect of negative changes to its business or assets. (Generic)
    • Risk appetite is the degree of risk, on a broad-based level, that a company or other entity is willing to accept in the pursuit of its goals. (COSO)
• Risk appetite: the amount of risk that an organisation is prepared to accept, tolerate, or be exposed to at any point in time. (The Orange Book, October 2004)

    The same applies to a number of other terms describing risk and the board’s attitudes to risk, as for the term “risk tolerance”:

    • The degree of uncertainty that an investor can handle in regard to a negative change in the value of his or her portfolio.
    • An investor’s ability to handle declines in the value of his/her portfolio.
    • Capacity to accept or absorb risk.
    • The willingness of an investor to tolerate risk in making investments, etc.

    It thus comes as no surprise that risk appetite and other terms describing risk are not understood to a level of clarity that can provide a reference point for decision making[4]. Some take the position that risk appetite can never be reduced to a single figure or ratio, or to a single-sentence statement. However, to be able to move forward we have to try to operationalize the term in such a way that it can be:

    1. Used to weigh risk against reward – or to decide what level of risk is commensurate with a particular reward – and
    2. Measured and used to set risk level(s) that, in the board’s view, are appropriate for the firm.

    It thus communicates the boundaries of the activities the board intends for the firm, both to the management and to the rest of the organization, by setting limits on risk taking and defining what acceptable risk means. This can again be augmented by a formal ‘risk appetite statement’ defining the types and levels of risk the organization is prepared to accept in pursuit of increased value.

    However, in view of the “price of risk” series above, such formal statements cannot be carved in stone: they must either be subject to change as the business and macroeconomic climate changes, or contain rules for how they are to be applied in adverse circumstances.

    Deloitte’s Global Risk Management Survey, 6th ed. (Deloitte, 2009) found that sixty-three percent of the institutions had a formal, approved statement of their risk appetite (see Exhibit 4 below). Roughly one quarter of the institutions said they relied on quantitatively defined statements, while about one third used both quantitative and qualitative approaches:

    Risk-apptite_Deloitte_2009

    Using a formal ‘risk appetite statement’ is the best way for the board to communicate its visions, and the level and nature of the risks it will consider acceptable to the firm. The statement has to be quantitatively defined, be based on some representation of the board’s utility function, and use metrics that can capture all the risks facing the company.

    We will in the following use the firm’s equity value as the metric, as this will capture all risks – those impacting the balance sheet, income statement, required capital, WACC etc.

    We will assume that the board’s utility function[5] has diminishing marginal utility for an increase in the company’s equity value. From this it follows that the board’s utility will decrease more with a loss of $1 than it will increase with a gain of $1. Thus the board is risk averse[ii].
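    To make this concrete, here is a minimal numerical sketch using a purely illustrative logarithmic utility function (an assumption for illustration, not the board’s actual utility):

        import math

        def u(x):
            """Illustrative concave utility: diminishing marginal utility of equity value."""
            return math.log(x)

        equity = 100.0
        gain = u(equity + 1) - u(equity)    # utility gained from a $1 increase
        loss = u(equity) - u(equity - 1)    # utility lost from a $1 decrease
        print(gain, loss, loss > gain)      # the loss weighs more than the gain -> risk aversion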

    The upside-potential ratio

    To do this we will use the upside-potential ratio[6] (UPR), a measure of risk-adjusted returns (Sortino et al., 1999). The UPR measures the potential return on an asset above a preset benchmark return, per unit of downside risk. This ratio is a special case of the more general one-sided variability ratio Phi_b:

    \Phi_{b}^{p,q}(X) := \frac{E^{1/p}\left[\{(X-b)^{+}\}^{p}\right]}{E^{1/q}\left[\{(X-b)^{-}\}^{q}\right]},

    where X is the total return, (X - b) is the excess return over the benchmark b[7], and the plus and minus signs denote the right-sided moment (upper partial moment) of order p and the left-sided moment (lower partial moment) of order q, respectively.

    The lower partial moment[8] is a measure of the “distance[9]” between risky situations and the corresponding benchmark when only unfavorable differences contribute to the “risk”. The upper partial moment, on the other hand, measures the “distance” between favorable situations and the benchmark.

    The Phi ratio is thus the ratio of “distances” between favorable and unfavorable events – when properly weighted (Tibiletti & Farinelli, 2002).

    For a fixed benchmark b, the higher the Phi ratio, the more ‘profitable’ the risky asset. Phi can therefore be used to rank risky assets. For a given asset, Phi is a decreasing function of the benchmark b.

    The choice of values for p and q depends on the relevance given to the magnitude of the deviations from the benchmark b. The higher the value, the more emphasis is put on the corresponding tail. For p = q = 1 we have the Omega index (Shadwick & Keating, 2002).

    The choice of p = 1 and q = 2 is assumed to fit a conservative investor, while values of p >> 1 and q << 1 will be more in line with an aggressive investor (Caporin & Lisi, 2009).

    We will in the following use p = 1 and q = 2 for the calculation of the upside-potential ratio (UPR), thus assuming that the board consists of conservative investors. For very aggressive boards, other choices of p and q should be considered.
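    As a sketch of how the Phi ratio could be estimated from a simulated sample of equity values, assuming Python/NumPy and purely illustrative sample data (the lognormal parameters and the benchmark are assumptions, not figures from the analysis):

        import numpy as np

        def phi_ratio(x, b, p=1.0, q=2.0):
            """One-sided variability ratio Phi_b^{p,q} estimated from a sample x.

            Numerator: p-th root of the mean p-th power of gains above the benchmark b
            (upper partial moment of order p).
            Denominator: q-th root of the mean q-th power of shortfalls below b
            (lower partial moment of order q).
            p = 1, q = 2 gives the upside-potential ratio (UPR); p = q = 1 gives the Omega index.
            """
            x = np.asarray(x, dtype=float)
            upm = np.mean(np.maximum(x - b, 0.0) ** p) ** (1.0 / p)
            lpm = np.mean(np.maximum(b - x, 0.0) ** q) ** (1.0 / q)
            return upm / lpm

        # Illustrative use on a simulated equity-value sample
        rng = np.random.default_rng(42)
        equity = rng.lognormal(mean=5.0, sigma=0.3, size=100_000)
        print(phi_ratio(equity, b=np.median(equity)))   # UPR at the P50 benchmark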

    LM-vs-UM#0

    The UPR for the firm can thus be expressed as a ratio of partial moments: the ratio of the first-order upper partial moment (UPM1)[10] to the second-order lower partial moment (LPM2) (Nawrocki, 1999; Breitmeyer, Hakenes & Pfingsten, 2001), or the over-performance divided by the root-mean-square of the under-performance, both calculated at successive points on the probability distribution for the firm’s equity value.

    As we successively calculate the UPR, starting at the left tail, the lower partial moment (LPM2) will increase and the upper partial moment (UPM1) will decrease:

    UPM+LPM

    The upside-potential ratio will consequently decrease as we move from the lower left tail to the upper right tail – as shown in the figure below:

    Cum_distrib+UPR

    The upside-potential ratio has many interesting uses; one is shown in the table below. The table gives the upside-potential ratio at budgeted value, that is, the expected return above budget value per unit of downside risk – given the uncertainty expressed by the management of the individual subsidiaries. Most of the countries have budget values above expected value, exposing downside risk. Only Turkey and Denmark have a ratio larger than one – all others have larger downside risk than upside potential. The extremes are Poland and Bulgaria.

    Country/Subsidiary     Upside-Potential Ratio
    Turkey                 2.38
    Denmark                1.58
    Italy                  0.77
    Serbia                 0.58
    Switzerland            0.23
    Norway                 0.22
    UK                     0.17
    Bulgaria               0.08

    We will in the following use five different equity distributions, each representing a different strategy for the firm. The distributions (strategies) have approximately the same mean, but exhibit increasing variance as we move to successively darker curves. That is, an increase in the upside will also increase the possibility of a downside:

    Five-cuts

    By calculating the UPR for successive points (benchmarks) on the different probability distributions for the firm’s equity value (strategies), we can find the accompanying curves described by the UPRs in the UPR and LPM2/UPM1 space[11] (Cumova & Nawrocki, 2003):

    Upside_potential_ratio

    The colors of the curves correspond to the equity value distributions shown above. We can see that the equity distribution with the longest upper and lower tails corresponds to the right-most curve for the UPR, and that the equity distribution with the shortest tails corresponds to the left-most (lowest upside-potential) curve.

    In the graph below, in the LPM2/UPM1 space, the UPR curves are shown for each of the different equity value distributions (strategies). Each gives the rate at which the firm will have to exchange downside risk for upside potential as we move along the curve, given the selected strategy. The circles on the curves represent points with the same value of the UPR as we move from one distribution to another:

    LM-vs-UM#2

    By connecting the points with equal values of the UPR we find the iso-UPR curves: the curves that give the same value of the UPR across the strategies in the LPM2/UPM1 space:

    LM-vs-UM#3

    We have limited the number of UPR values to eight, but could of course have selected a larger number, both inside and outside the limits we have set.
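    The LPM2/UPM1 points traced above can be approximated numerically. A sketch, under the assumption that each strategy is represented by a simulated sample of equity values; the lognormal parameters below are illustrative only, chosen to give approximately equal means with increasing spread, as for the five distributions above:

        import numpy as np

        # Each strategy is represented here by an illustrative simulated sample of equity values.
        rng = np.random.default_rng(7)
        strategies = {s: rng.lognormal(mean=5.0 - s**2 / 2, sigma=s, size=100_000)
                      for s in (0.10, 0.15, 0.20, 0.25, 0.30)}   # equal means, increasing spread

        def upm1_lpm2(x, b):
            """First-order upper and (root-mean-square) second-order lower partial moments at benchmark b."""
            upm1 = np.mean(np.maximum(x - b, 0.0))
            lpm2 = np.sqrt(np.mean(np.maximum(b - x, 0.0) ** 2))
            return upm1, lpm2

        # Trace (LPM2, UPM1) and the UPR for successive benchmarks on each strategy.
        for sigma, sample in strategies.items():
            for b in np.quantile(sample, [0.1, 0.3, 0.5, 0.7, 0.9]):
                upm1, lpm2 = upm1_lpm2(sample, b)
                print(f"sigma={sigma:.2f}  b={b:8.1f}  LPM2={lpm2:8.2f}  "
                      f"UPM1={upm1:8.2f}  UPR={upm1 / lpm2:6.2f}")

    Connecting points with (approximately) equal UPR across the samples would then reproduce the iso-UPR curves discussed above.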

    The board now has the option of selecting the strategy it finds most opportune, or the one that best fits its “disposition” to risk, by deciding on the appropriate values of LPM2 and UPM1 or of the upside-potential ratio. This is what we will pursue further in the next part: “The Virtues of the Board”.

    References

    Breitmeyer, C., Hakenes, H. and Pfingsten, A., (2001). The Properties of Downside Risk Measures. Available at SSRN: http://ssrn.com/abstract=812850 or http://dx.doi.org/10.2139/ssrn.812850.

    Caporin, M. & Lisi,F. (2009). Comparing and Selecting Performance Measures for Ranking Assets. Available at SSRN: http://ssrn.com/abstract=1393163 or http://dx.doi.org/10.2139/ssrn.1393163

    CRMPG III. (2008). The Report of the CRMPG III – Containing Systemic Risk: The Road to Reform. Counterparty Risk Management Policy Group. Available at: http://www.crmpolicygroup.org/index.html

    Cumova, D. & Nawrocki, D. (2003). Portfolio Optimization in an Upside Potential and Downside Risk Framework. Available at: http://www90.homepage.villanova.edu/michael.pagano/DN%20upm%20lpm%20measures.pdf

    Deloitte. (2009). Global Risk Management Survey: Risk management in the spotlight. Deloitte, Item #9067. Available at: http://www.deloitte.com/assets/Dcom-UnitedStates/Local%20Assets/Documents/us_fsi_GlobalRskMgmtSrvy_June09.pdf

    Ekern, S. (1980). Increasing N-th degree risk. Economics Letters, 6: 329-333.

    Gai, P.  & Vause, N. (2004), Risk appetite: concept and measurement. Financial Stability Review, Bank of England. Available at: http://www.bankofengland.co.uk/publications/Documents/fsr/2004/fsr17art12.pdf

    Illing, M., & Aaron, M. (2005). A brief survey of risk-appetite indexes. Bank of Canada, Financial System Review, 37-43.

    Kimball, M.S. (1993). Standard risk aversion.  Econometrica 61, 589-611.

    Menezes, C., Geiss, C., & Tressler, J. (1980). Increasing downside risk. American Economic Review 70: 921-932.

    Nawrocki, D. N. (1999), A Brief History of Downside Risk Measures, The Journal of Investing, Vol. 8, No. 3: pp. 9-

    Sortino, F. A., van der Meer, R., & Plantinga, A. (1999). The upside potential ratio. The Journal of Performance Measurement, 4(1), 10-15.

    Shadwick, W. and Keating, C., (2002). A universal performance measure, J. Performance Measurement. pp. 59–84.

    Tibiletti, L. &  Farinelli, S.,(2002). Sharpe Thinking with Asymmetrical Preferences. Available at SSRN: http://ssrn.com/abstract=338380 or http://dx.doi.org/10.2139/ssrn.338380

    Unser, M., (2000), Lower partial moments as measures of perceived risk: An experimental study, Journal of Economic Psychology, Elsevier, vol. 21(3): 253-280.

    Viole, F. & Nawrocki, D. N. (2010). The Utility of Wealth in an Upper and Lower Partial Moment Fabric. Forthcoming, Journal of Investing 2011. Available at SSRN: http://ssrn.com/abstract=1543603

    Notes

    [1] In the graph risk appetite is found as the inverse of the markets price of risk, estimated by the two probability density functions over future returns – one risk-neutral distribution and one subjective distribution – on the S&P 500 index.

    [2] For a good overview of risk appetite indexes, see “A brief survey of risk-appetite indexes”. (Illing & Aaron, 2005)

    [3] Risk management: all the processes involved in identifying, assessing and judging risks, assigning ownership, taking actions to mitigate or anticipate them, and monitoring and reviewing progress.

    [4] The Policy Group recommends that each institution ensure that the risk tolerance of the firm is established or approved by the highest levels of management and shared with the board. The Policy Group further recommends that each institution ensure that periodic exercises aimed at estimation of risk tolerance should be shared with the highest levels of management, the board of directors and the institution’s primary supervisor in line with Core Precept III. Recommendation IV-2b (CRMPG III, 2008).

    For an extensive list of Risk Tolerance articles, see: http://www.planipedia.org/index.php/Risk_Tolerance_(Research_Category)

    [5] See: http://en.wikipedia.org/wiki/Utility, http://en.wikipedia.org/wiki/Ordinal_utility and http://en.wikipedia.org/wiki/Expected_utility_theory.

    [6] The ratio was created by Brian M. Rom in 1986 as an element of Investment Technologies’ Post-Modern Portfolio theory portfolio optimization software.

    [7] ‘b’ is usually the target or required rate of return for the strategy under consideration, (‘b’ was originally known as the minimum acceptable return, or MAR). We will in the following calculate the UPR for successive benchmarks (points) covering the complete probability distribution for the firm’s equity value.

    [8] The Lower partial moments will uniquely determine the probability distribution.

    [9] The use of the term distance is not unwarranted; the Phi ratio is very similar to the ratio of two Minkowski distances of order p and q.

    [10] The upper partial-moment is equivalent to the full moment minus the lower partial-moment.

    [11] Since we don’t know the closed form of the equity distributions (strategies), the figure above has been calculated from a limited, but large, number of partial moments.

    Endnotes

    [i] Even if they are not the same, the terms “risk appetite” and “risk aversion” are often used interchangeably. Note that the statement “increasing risk appetite means declining risk aversion; decreasing risk appetite indicates increasing risk aversion” is not necessarily true.

    [ii] In the following we assume that the board is non-satiated and risk-averse, and has a non-decreasing and concave utility function – U(C) – with derivatives at least up to degree five and of alternating signs, i.e. having all odd derivatives positive and all even derivatives negative. This is satisfied by most utility functions commonly used in mathematical economics, including all completely monotone utility functions, such as the logarithmic, exponential and power utility functions.

    More generally, a decision maker is said to be nth-degree risk averse if sign(u^(n)) = (-1)^(n+1) (Ekern, 1980).

     

  • Distinguish between events and estimates

    Distinguish between events and estimates

    This entry is part 1 of 2 in the series Handling Events

     

    Large public sector investment projects in Norway have to go through an established methodology for quality assurance. There must be an external quality assurance process both of the selected concept (KS1) and of the project’s profitability and cost (KS2).

    KS1 and KS2

    Concept quality control (KS1) shall ensure the realization of socioeconomic advantages (the revenue side of a public project) by ensuring that the most appropriate concept for the project is selected. Quality assurance of cost and management support (KS2) shall ensure that the project can be completed in a satisfactory manner and with predictable costs.

    KS1 and KS2

    I have worked with KS2 analyses, focusing on the uncertainty analysis. The analysis must be done in a quantitative manner and be probability based. There is special focus on two probability levels: P50, the project’s expected value and the grant given to the project, and P85, the amount granted by Parliament. The civil service entity executing the project is granted the expected value (P50) and must go to a superior level (usually the ministerial level) to use the uncertainty reserve (the difference between the cost levels P85 and P50).
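    As an illustration of how P50, P85 and the uncertainty reserve are read off a simulated total-cost distribution, here is a minimal sketch in Python; the lognormal cost model and its parameters are assumptions made purely for illustration:

        import numpy as np

        rng = np.random.default_rng(1)
        # Simulated total project cost (illustrative lognormal model, in MNOK)
        total_cost = rng.lognormal(mean=np.log(1_000), sigma=0.15, size=100_000)

        p50, p85 = np.percentile(total_cost, [50, 85])
        print(f"P50 (grant to the project):      {p50:,.0f} MNOK")
        print(f"P85 (Parliament grant):          {p85:,.0f} MNOK")
        print(f"Uncertainty reserve (P85 - P50): {p85 - p50:,.0f} MNOK")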

    Lessons learnt from risk management in large public projects

    Many lessons may be learned from this quality assurance methodology by private companies. Not least is the thorough and methodical way the analysis is done, the way uncertainty is analysed and how the uncertainty reserve is managed.

    The analogy to the decision-making levels in the private sector is that the CEO shall manage the project on P50, while he must go to the company’s board to use the uncertainty reserve (P85-P50).

    In the uncertainty analyses in KS2 a distinction is made between estimate uncertainty and event uncertainty. This is a useful distinction, as the two types of risks are by nature different.

    Estimate uncertainty

    Uncertainty in the assumptions behind the calculation of a project’s cost and revenue, such as

    • Prices and volumes of products and inputs
    • Market mix
    • Strategic positioning
    • Construction cost

    These uncertainties can be modelled in great detail ((But remember – you need to see the forest for the trees!)) and are direct estimates of the project’s or company’s costs and revenues.

    Event Uncertainties

    These events are not expected to occur and therefore should not be included in the calculation of direct cost or revenue. The variables will initially have an expected value of 0, but events may have serious consequences if they do occur. Events can be modeled by estimating the probability of the event occurring and the consequence if they do. Examples of event uncertainties are

    • Political risks in emerging markets
    • Paradigm Shifts in consumer habits
    • Innovations
    • Changes in competitor behavior
    • Changes in laws and regulations
    • Changes in tax regimes

    Why distinguish between estimates and events?

    The reason why there are advantages to separating estimates and events in risk modeling is that they are by nature different. An estimate of an expense or income is something we know will be part of a project’s results, with an expected value that is NOT equal to 0. It can be modeled as a probability curve with an expected outcome and a high and low value.

    An event, on the other hand, can occur or not, and has an expected value of 0. If the event is expected to occur, the impact of the event should be modeled as an expense or income. Whether the event occurs or not has a probability, and there will be an impact if the event occurs (0 if it doesn’t occur).

    Such an event can be modeled as a discrete distribution (0 if it does not occur, 1 if it occurs), and there is only an impact on the result of the project or business if it occurs. The consequence may be deterministic – we know what it means if it happens – or it may be a distribution with a low, expected and high value.
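    A minimal Monte Carlo sketch of this event modelling, assuming Python/NumPy; the base result, the event probability and the triangular consequence distribution below are hypothetical numbers chosen only for illustration:

        import numpy as np

        rng = np.random.default_rng(3)
        n = 100_000

        base_result = rng.normal(50.0, 50.0, n)              # simulated result without the event (arbitrary units)
        occurs = rng.random(n) < 0.25                        # discrete part: 1 with probability 25%, else 0
        consequence = rng.triangular(20.0, 40.0, 60.0, n)    # impact if the event occurs (low, most likely, high)

        result = base_result - occurs * consequence          # the impact only counts when the event occurs

        print(f"Expected result without the event: {base_result.mean():6.1f}")
        print(f"Expected result with the event:    {result.mean():6.1f}")
        print(f"P(result <= 0) without the event:  {np.mean(base_result <= 0):.1%}")
        print(f"P(result <= 0) with the event:     {np.mean(result <= 0):.1%}")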

    An example

    I have created an example using a private manufacturing company. They have an expected P&L which looks like this:

    Example data

    The company has a high export share to the EU and Norwegian costs (both variable and fixed). The net margin is expected to fall to a level of 17% in 2018. The situation looks somewhat better when the P&L is simulated – there is more upside than downside in the market.

    Initial simulation

    But potential events that may affect the result are not yet modeled, and what impact can they have? Let’s look at two examples of potential events:

    1. The introduction of a 25% duty on the company’s products in the EU. The company will not be able to pass the cost on to its customers, and therefore this will be a cost to the company.
    2. There are only two suppliers of the raw material the company uses to produce its products, and the demand for it is high. This means that the company runs a risk of not getting enough raw material (25% less than needed) to produce as much as the market demands.

    events

    As the table shows, the risk that the events occur increases with time. Looking at the consequences of the probability-weighted events in 2018, the impact on the expected result is:

    resultat 2018

    The consequence of these events is a larger downside risk (lower expected result) and higher variability (larger standard deviation). The probability of a result of 0 or lower is

    • 14% in the base scenario
    • 27% with the event “Duty in the EU”
    • 36% with the event “Raw material Shortage” in addition

    The events have no upside, so this is a pure increase in company risk. A 36% probability of a result of 0 or lower may be dramatic. Knowing what potential events may mean for the company’s profitability will contribute to the company’s ability to take appropriate measures in time, for instance

    • Be less dependent on EU customers
    • Securing a long-term raw materials contract

    and so on.

    Normally, this kind of analysis is done as a scenario analysis. But a scenario analysis will not provide the answer to how likely the event is, nor to what its likely consequence is. Neither will it be able to answer the question: how likely is it that the business will make a loss?

    One of the main reasons for risk analysis is that it increases the ability to take action in time. Good risk management is all about being one step ahead – all the time. As a rule, the consequences of events that no one has thought of (and for which no plan B is in place) are greater than those of events that have been thought through. It is far better to have calculated the consequences, reflected on the probabilities and, if possible, put risk mitigation in place.

    Knowing the likelihood that something can go horribly wrong is also an important tool in order to properly prioritize and put mitigation measures in at the right place.