
Category: Uncertainty

  • The implementation of the Norwegian Governmental Project Risk Assessment scheme

    The implementation of the Norwegian Governmental Project Risk Assessment scheme

    This entry is part 1 of 2 in the series The Norwegian Governmental Project Risk Assessment Scheme

    Introduction

In Norway, all public investment projects with an expected budget exceeding NOK 750 million have to undergo quality assurance ((The hospital sector has its own QA scheme.)). The oil and gas sector, and state-owned companies with responsibility for their own investments, are exempt.

The quality assurance scheme ((See The Norwegian University of Science and Technology (NTNU): The Concept Research Programme.)) consists of two parts: quality assurance of the choice of concept – QA1 (Norwegian: KS1) ((The one-page description of QA1 (Norwegian: KS1) has been taken from NTNU’s Concept Research Programme.)) – and quality assurance of the management base and cost estimates, including uncertainty analysis for the chosen project alternative – QA2 (Norwegian: KS2) ((The one-page description of QA2 (Norwegian: KS2) has been taken from NTNU’s Concept Research Programme.)).

This scheme is similar to many other countries’ efforts to create better cost estimates for public projects. One such example is the Washington State Department of Transportation’s Cost Risk Assessment (CRA) and Cost Estimate Validation Process (CEVP®) (WSDOT, 2014).

One of the main purposes of QA2 is to set a cost frame for the project. This cost frame is to be approved by the government and is usually set at the 85th percentile (P85) of the estimated cost distribution. The cost frame for the responsible agency is usually set at the 50th percentile (P50). The difference between P50 and P85 is set aside as a contingency reserve for the project – reserves that ideally should remain unused.

The Norwegian TV programme “Brennpunkt”, an investigative programme on the state television channel NRK, put the spotlight on the effects of this scheme ((The article also contains the data used here.)):

    The investigation concluded that the Ministry of Finance quality assurance scheme had not resulted in reduced project cost overruns and that the process as such had been very costly.

    This conclusion has of course been challenged.

The total cost of doing the risk assessments of the 85 projects was estimated at approximately NOK 400 million, or more than $60 million. In addition comes, in many cases, the cost of the quality assurance of the choice of concept, a cost that is probably much higher.

    The Data

The data was assembled during the investigation and consists of six sets, where five have information giving the P50 and P85 percentiles. The last set gives data on 29 projects finished before the QA2 regime was implemented (the data used in this article can be found as an XLSX file here):

    The P85 and P50 percentiles

    The first striking feature of the data is the close relation between the P85 and P50 percentiles:

In the graph above we have only used 83 of the 85 projects with known P50 and P85. The two that are omitted are large military projects. If they had been included, all the details in the graph would have disappeared. We will treat these two projects separately later in the article.

    A regression gives the relationship between P85 and P50 as:

P85 = (1.1001 ± 0.0113) · P50, with R = 0.9970

The regression gives an exceptionally good fit. Even if the graph shows some projects deviating from the regression line, most fall on or close to the line.

    With 83 projects this can’t be coincidental, even if the data represents a wide variety of government projects spanning from railway and roads to military hardware like tanks and missiles.
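Such a no-intercept regression is easy to reproduce. Below is a minimal sketch in Python; the arrays are placeholders, not the article's data set, which is only available in the linked XLSX file.

```python
import numpy as np

# p50 and p85 would hold the projects' P50 and P85 cost frames.
# The arrays below are placeholders, not the article's data set.
p50 = np.array([120.0, 450.0, 800.0, 1500.0, 2300.0])
p85 = np.array([133.0, 495.0, 885.0, 1640.0, 2550.0])

# Least-squares fit of P85 = b * P50 (a regression through the origin),
# which is the form of the relationship quoted above.
b, *_ = np.linalg.lstsq(p50.reshape(-1, 1), p85, rcond=None)
r = np.corrcoef(p50, p85)[0, 1]
print("slope:", b[0], "correlation:", r)
```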

    The Project Cost Distribution

There is not much else to be inferred about the type of cost distribution from the graph. We do not know whether these percentiles came from fitted distributions or from estimated pdfs. This close relationship, however, leads us to believe that the individual projects’ cost distributions are taken from the same family of distributions.

If this family of distributions is a two-parameter family, we can use the known P50 and P85 ((Most two-parameter families have sufficient flexibility to fit the P50 and P85 percentiles.)) percentiles to fit a number of distributions to the data.

This use of quantiles to estimate the parameters of an a priori distribution has been described as “quantile maximum probability estimation” (Heathcote et al., 2004). It is done by fitting a number of different a priori distributions and then comparing the summed log likelihoods of the resulting best fits for each distribution, to find the “best” family of distributions.

    Using this we anticipate finding cost distributions with the following properties:

1. Nonsymmetrical, with a short left and a long right tail, i.e. positively skewed, looking something like the distribution below (taken from a real-life project):

2. The left tail we would expect to be short after the project has been run through the full QA1 and QA2 process. After two such encompassing processes we would expect that most, even if not all, possible avenues for cost reduction and grounds for miscalculation have been researched and exhausted – leaving little room for cost reduction by chance.

3. The right tail we would expect to be long, taking into account the possibility of adverse price movements, implementation problems, adverse events etc., and thus the possibility of higher costs. This is where the project risk lies and where budget overruns are born.

4. The middle part should be quite steep, indicating low volatility around the “most probable cost”.

    Estimating the Projects Cost Distribution

To simplify, we will assume that the above relation between P50 and P85 holds and that it can be used to describe the cost distribution resulting from the projects’ QA2 risk assessment work. We will hence use the P85/P50 ratio ((If cost is normally distributed, C ∼ N(m, s²), then Z = C/m ∼ N(1, s²/m²). If cost is gamma distributed, C ∼ Γ(a, λ), then Z = C/m is gamma distributed with the same shape parameter a and mean 1.)) to study the cost distributions. This implies that we are looking for a family of distributions that has P(X < 1) = 0.5 and P(X < 1.1) = 0.85 and is positively skewed. This change of scale will not change the shape of the density function, but simply scales the graph horizontally.

Fortunately, the MD Anderson Cancer Center has a program – Parameter Solver ((The software can be downloaded from: https://biostatistics.mdanderson.org/SoftwareDownload/SingleSoftware.aspx?Software_Id=6 )) – that can solve for the distribution parameters given the P50 and P85 percentiles (Cook, 2010). We can then use this to find the distributions that replicate the P50 and P85 percentiles.
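The same kind of parameter solving can be sketched in a few lines of Python using scipy instead of the Parameter Solver program; the gamma family is assumed here purely for illustration.

```python
from scipy.optimize import brentq
from scipy.stats import gamma

def gamma_from_quantiles(p50=1.0, p85=1.1):
    """Find the gamma shape and scale whose 50th and 85th percentiles match p50 and p85."""
    def mismatch(shape):
        # Pick the scale so that the median equals p50 ...
        scale = p50 / gamma.ppf(0.5, shape)
        # ... and measure how far the 85th percentile then is from the target.
        return gamma.cdf(p85, shape, scale=scale) - 0.85
    shape = brentq(mismatch, 1.0, 5000.0)   # search over a wide range of shape parameters
    return shape, p50 / gamma.ppf(0.5, shape)

shape, scale = gamma_from_quantiles()
print(shape, scale, gamma.stats(shape, scale=scale, moments="mvs"))
```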

We find that distributions from the Normal, Log Normal, Gamma, Inverse Gamma and Weibull families will fit the percentiles. All the distributions, however, are close to symmetric, with the exception of the Weibull distribution, which has a left tail. A left tail in a budgeted cost distribution usually indicates over-budgeting with the aim of looking good after the project has been finished. We do not think that this would have passed the QA2 process – so we do not think it has been used.

We believe that it is most likely that the distributions used are of the Normal, Gamma or Erlang ((The Erlang distribution is a Gamma distribution with an integer shape parameter.)) type, the Erlang being a Gamma derivative, due to their convolution properties. That is, sums of independent variables with distributions from one of these families (with a common rate parameter in the Gamma case) again belong to the same family. This makes it possible to simplify cost-only risk models by simply summing up the parameters ((For the Normal distribution this means summing the means and variances, and for the Gamma and Erlang distributions summing the shape parameters of the individual cost elements’ distributions: If X and Y are normally distributed, X ∼ N(a, b²) and Y ∼ N(d, e²), and X is independent of Y, then Z = X + Y is N(a + d, b² + e²), and if k is a strictly positive constant then Z = k·X is N(k·a, k²·b²). If X and Y are gamma distributed, X ∼ Γ(a, λ) and Y ∼ Γ(b, λ), and X is independent of Y, then X + Y is Γ(a + b, λ), and if k is a strictly positive constant then k·X is Γ(a, λ/k).)) of the cost elements to calculate the parameters of the total cost distribution.
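A minimal sketch of this summation property, assuming three gamma-distributed cost elements with an arbitrary common rate parameter (the numbers are illustrative only), with a Monte Carlo check:

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(1)
rate = 0.01                  # common rate parameter, chosen arbitrarily for illustration
shapes = [4.0, 9.0, 25.0]    # shape parameters of three cost elements (assumed values)

# Closed form: the sum of independent Gamma(a_i, rate) variables is Gamma(sum(a_i), rate).
total_shape = sum(shapes)

# Monte Carlo check of the same total-cost distribution.
samples = sum(rng.gamma(a, 1.0 / rate, 100_000) for a in shapes)
print(total_shape / rate, samples.mean())                  # the two means should agree
print(gamma.ppf(0.85, total_shape, scale=1.0 / rate),      # P85, closed form
      np.quantile(samples, 0.85))                          # P85, simulated
```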

This has the benefit of giving a closed form for the total cost distribution, compared to Monte Carlo simulation, where the closed form of the distribution, if it exists, can only be found through the kind of exercise we have done here.

This property can also be a trap, as the adding up of cost items quickly gives the distribution of the sum symmetrical properties before it finally ends up as a Normal distribution ((The Central Limit Theorem gives the error in a normal approximation to the gamma distribution as of order n^(−1/2) as the shape parameter n grows large. For large k the gamma distribution X ∼ Γ(k, θ) (shape k, scale θ) converges to a normal distribution with mean µ = k·θ and variance σ² = k·θ². In practice it will be close to a normal distribution for shape parameters greater than 10.)).
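A quick illustration of how close the two families get for large shape parameters; the shape and scale below are of roughly the magnitude implied by a P85/P50 ratio of 1.1, but are not taken from any of the projects:

```python
from scipy.stats import gamma, norm

k, theta = 107.0, 1.0 / 107.0               # illustrative shape and scale (mean = 1)
g = gamma(k, scale=theta)
n = norm(loc=k * theta, scale=(k * theta**2) ** 0.5)

# For shape parameters this large the gamma and normal P85 values are nearly identical,
# and the gamma skewness 2/sqrt(k) is close to zero.
print(g.ppf(0.85), n.ppf(0.85))
print(float(g.stats(moments="s")))
```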

The figures in the graph below give the shapes of the Gamma and Normal distributions with the percentiles P50 = 1 and P85 = 1.1:

The Normal distribution is symmetric, and the Gamma distribution is for all practical purposes also symmetric. We can therefore conclude that the distributions for total project cost used in the 83 projects have been symmetric or close to symmetric.
This result is quite baffling; it is difficult to understand why the project cost distributions should be symmetric.

The only economic explanation has to be that the expected costs of the projects are estimated with such precision that any positive or negative deviations are mere flukes, outside foreseeability, and thus not included in the risk calculations.

    But is this possible?

    The two Large Military Projects

The two projects omitted from the regression above – new fighter planes and frigates – have P85/P50 ratios of 1.19522 and 1.04543, compared to the regression estimate of 1.1001 for the 83 other projects. They are, however, not atypical; others among the 83 projects have both smaller (1.0310) and larger (1.3328) values for the P85/P50 ratio. Their sheer size, however, with P85 values of respectively 68 and 18 billion NOK, gives them too high a weight in a joint regression compared to the other projects.

Nevertheless, the same comments made above for the other 83 projects apply to these two. A regression with the projects included would have given the relationship between P85 and P50 as:

P85 = (1.1751 ± 0.0106) · P50, with R = 0.9990.

    And as shown in the graph below:

This graph again depicts the surprisingly low variation in the projects’ P85/P50 ratios:

The ratios have in fact a coefficient of variation of only 4.7% and a standard deviation of 0.052 – for all the 85 projects.

    Conclusions

    The Norwegian quality assurance scheme is obviously a large step in the direction of reduced budget overruns in public projects. (See: Public Works Projects)

Even if the final risk calculation somewhat misses the probable project cost distribution, the exercises described in the quality assurance scheme will heighten both risk awareness and the understanding of uncertainty – all contributing to the common goal: reduced budget under- and overruns and reduced project cost.

It is nevertheless important that all elements in the quality assurance process catch the project uncertainties in a correct way, describing each project’s specific uncertainty and its possible effects on project cost and implementation (See: Project Management under Uncertainty).

    From what we have found: widespread use of symmetric cost distributions and possibly the same type of distributions across the projects, we are a little doubtful about the methods used for the risk calculations. The grounds for this are shown in the next two tables:

The skewness ((The skewness is equal to two divided by the square root of the shape parameter.)) given in the table above depends only on the shape parameter. The Gamma distribution will approach a normal distribution when the shape parameter is larger than ten. In this case all projects’ cost distributions approach a normal distribution – that is, a symmetric distribution with zero skewness.

    To us, this indicates that the projects’ cost distribution reflects more the engineer’s normal calculation “errors” than the real risk for budget deviations due to implementation risk.

The kurtosis (excess kurtosis) indicates the peakedness of the distribution. Normal distributions have zero excess kurtosis (mesokurtic), while distributions with a high peak have positive excess kurtosis (leptokurtic).
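These shape-dependent moments are easy to check; a short sketch for the gamma family, with shape parameters chosen only for illustration:

```python
from scipy.stats import gamma

# For the gamma family: skewness = 2/sqrt(shape), excess kurtosis = 6/shape.
for shape in (5, 10, 50, 100, 500):
    skew, excess_kurtosis = gamma.stats(shape, moments="sk")
    print(shape, float(skew), float(excess_kurtosis), 2 / shape**0.5, 6 / shape)
```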

It is stated in the QA2 that the uncertainty analysis shall have “special focus on … Event uncertainties represented by a binary probability distribution”. If this part had been implemented, we would have expected at least more flat-topped curves (platykurtic) with negative excess kurtosis or, better, not only unimodal distributions. It is hard to see traces of this in the material.

So, what can we so far deduce that the Norwegian government gets from the effort it spends on risk assessment of its projects?

    First, since the cost distributions most probably are symmetric or near symmetric, expected cost will probably not differ significantly from the initial project cost estimate (the engineering estimate) adjusted for reserves and risk margins. We however need more data to substantiate this further.

    Second, the P85 percentile could have been found by multiplying the P50 percentile by 1.1. Finding the probability distribution for the projects’ cost has for the purpose of establishing the P85 cost figures been unnecessary.

    Third, the effect of event uncertainties seems to be missing.

Fourth, with such a variety of projects, it seems strange that the distributions for total project cost end up being so similar. There must be differences in project risk between building a road and building a new opera house.

    Based on these findings it is pertinent to ask what went wrong in the implementation of QA2. The idea is sound, but the result is somewhat disappointing.

The reason for this may be that the risk calculations are done just by assigning probability distributions to the “aggregated and adjusted” engineering cost estimates, and not by developing a proper simulation model for the project, taking into consideration uncertainties in all factors like quantities, prices, exchange rates, project implementation etc.

We will come back in a later post to the question of whether the risk assessment nevertheless reduces budget under- and overruns.

    References

    Cook, John D. (2010), Determining distribution parameters from quantiles. http://www.johndcook.com/quantiles_parameters.pdf

Heathcote, A., Brown, S. & Cousineau, D. (2004). QMPE: Estimating Lognormal, Wald, and Weibull RT distributions with a parameter-dependent lower bound. Behavior Research Methods, Instruments, & Computers, 36, 277–290.

    Washington State Department of Transportation (WSDOT), (2014), Project Risk Management Guide, Nov 2014. http://www.wsdot.wa.gov/projects/projectmgmt/riskassessment


  • Project Management under Uncertainty

    Project Management under Uncertainty

    You can’t manage what you can’t measure
    You can’t measure what you can’t define
    How do you define something that isn’t known?

    DeMarco, 1982

    1.     Introduction

By the term Project we usually understand a unique, one-time operation designed to accomplish a set of objectives in a limited time frame. This could be building a new production plant, designing a new product or developing new software for a specific purpose.

A project usually differs from normal operations by being a one-time operation, having a limited time horizon and budget, having unique specifications and by working across organizational boundaries. A project can be divided into four phases: project definition, planning, implementation and project phase-out.

    2.     Project Scheduling

The project planning phase, which we will touch upon in this paper, consists of breaking down the project into the tasks that must be accomplished for the project to be finished.

    The objectives of the project scheduling are to determine the earliest start and finish of each task in the project. The aim is to be able to complete the project as early as possible and to calculate the likelihood that the project will be completed within a certain time frame.

    The dependencies[i] between the tasks determine their predecessor(s) and successor(s) and thus their sequence (order of execution) in the project[1]. The aim is to list all tasks (project activities), their sequence and duration[2] (estimated activity time length). The figure[ii] below shows a simple project network diagram, and we will in the following use this as an example[iii].

This project thus consists of a linear flow of coordinated tasks, where time, cost and performance can in fact vary randomly.

    A convenient way of organizing this information is by using a Gantt[iv] chart. This gives a graphic representation of the project’s tasks, the expected time it takes to complete them, and the sequence in which they must be done.

There will usually be more than one path (sequence of tasks) from the first to the last task in a project. The path that takes the longest time to complete is the project’s critical path. The objective of all this is to identify this path and the time it takes to complete it.

    3.     Critical Path Analysis

    The Critical Path (CP)[v] is defined as the sequence of tasks that, if delayed – regardless of whether the other project tasks are completed on or before time – would delay the entire project.

The critical path is hence based on the forecasted duration of each task in the project. These durations are given as single point estimates[3], implying that the durations of the project’s tasks contain no uncertainty (they are deterministic). This is obviously wrong and will often lead to unrealistic project estimates, due to the inherent uncertainty in all project work.

    Have in mind that: All plans are estimates and are only as good as the task estimates.

As a matter of fact, many different types of uncertainty can be expected in most projects:

1. Ordinary uncertainty, where time, cost and performance can vary randomly, but within predictable ranges. Variations in task durations will cause the project’s critical path to shift, but this can be predicted and the variation in total project time can be calculated.
2. Foreseen uncertainty, where a few known factors (events) can affect the project, but in an unpredictable way[4]. These are projects where tasks and events occur probabilistically and contain logical relationships of a more complicated nature, e.g. from a specific event some tasks are undertaken with certainty while others only probabilistically (Elmaghraby, 1964) and (Pritsker, 1966). The distribution for total project time can still be calculated, but will include variation from the chance events.
3. Unforeseen uncertainty, where one or more factors (events) cannot be predicted. This implies that decision points about the project’s implementation have to be included at one or more points in the project’s execution.

As a remedy for the critical path analysis’s inadequacy in the presence of ordinary uncertainty, the Program Evaluation and Review Technique (PERT[vi]) was developed. PERT is a variation on Critical Path Analysis that takes a slightly more skeptical view of the duration estimates made for each of the project tasks.

PERT uses a three-point estimate[vii], based on forecasts of the shortest possible task duration, the most likely task duration and the worst-case task duration. The task’s expected duration is then calculated as a weighted average of these three duration estimates.

This is assumed to help bias time estimates away from the unrealistically short time-scales that are often the case.

    4.     CP, PERT and Monte Carlo Simulation

    The two most important questions we want answered are:

• How long will it take to do the project?
• How likely is the project to succeed within the allotted time frame?

In this example the project’s time frame is set to 67 weeks.

We will use the Critical Path method, PERT and Monte Carlo simulation to try to answer these questions, but first we need to make some assumptions about the variability of the estimated task durations. We will assume that the durations are triangularly distributed and that the actual durations can be both higher and lower than their most likely value.

The distributions will probably have a right tail, since underestimation is common when assessing time and cost (positively skewed), but sometimes people deliberately overestimate to avoid being held responsible for later project delays (negatively skewed). The assumptions about the task durations are given in the table below:

The corresponding paths, the critical path and the project durations are given in the table below. The critical path method finds path #1 (tasks A, B, C, D, E) as the critical path and thus the expected project duration to be 65 weeks. The second question, however, cannot be answered by using this method. So, in regard to probable deviations from the expected project duration, the project manager is left without any information.

By using PERT, calculating expected durations and their standard deviations as described in endnote vii, we find the same critical path and roughly the same expected project duration (65.5 weeks), but since we can now calculate the estimate’s standard deviation, we can find the probability of the project being finished inside the project’s time frame.

By assuming that the sum of task durations along the critical path is approximately normally distributed, we find the probability of having the project finished inside the time frame of 67 weeks to be 79%. Since this is a fairly high probability of project success, the manager can rest contentedly – or can she?
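The PERT calculation itself is only a few lines. The sketch below uses placeholder three-point estimates for the critical-path tasks, not the article's table, so the numbers will not reproduce the 65.5 weeks or the 79% exactly:

```python
from math import sqrt
from scipy.stats import norm

# Placeholder (min, most likely, max) estimates for the critical-path tasks A..E.
tasks = {"A": (8, 10, 13), "B": (12, 14, 18), "C": (10, 12, 15),
         "D": (13, 16, 20), "E": (9, 11, 14)}

# PERT expected duration and variance of the critical path (see endnote vii).
E = sum((lo + 4 * ml + hi) / 6 for lo, ml, hi in tasks.values())
var = sum(((hi - lo) / 6) ** 2 for lo, ml, hi in tasks.values())

deadline = 67
print("Expected duration:", E, "SD:", sqrt(var))
print("P(finish within deadline):", norm.cdf(deadline, loc=E, scale=sqrt(var)))
```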

If we repeat the exercise, now using Monte Carlo simulation, we find a different answer. We can no longer with certainty establish a critical path. The task variability can in fact give three different critical paths. The most likely is path #1 as before, but there is a close to 30% probability that path #4 (tasks A, B, C, G, E) will be the critical path. It is also possible, even if the probability is small (<5%), that path #3 (tasks A, F, G, E) is the critical path (see figure below). So, in this case we cannot use the critical path method; it will give wrong answers and misleading information to the project manager. More important is the fact that the method cannot use all the information we have about the project’s tasks, that is to say their variability.

    A better approach is to simulate project time to find the distribution for total project duration. This distribution will then include the duration of all critical paths that may arise during the project simulation, given by the red curve in figure below:

This figure gives the cumulative probability distributions for the durations of the possible critical paths (paths 1, 3 and 4) as well as for total project duration. Since path #1 consistently has long duration times, it is only in ‘extreme’ cases that path #4 is the critical path. Most striking is the large variation in path #3’s duration and the fact that it can end up as the critical path in some of the simulation’s runs.

    The only way to find the distribution for total project duration is for every run in the simulation to find the critical path and calculate its duration.
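A sketch of that simulation loop is given below. The three-point estimates are the same placeholders as above, extended with tasks F and G, and only the three paths named in the text are included (path #2 is not spelled out in the article), so the percentages will differ from those quoted:

```python
import numpy as np

rng = np.random.default_rng(7)

# Placeholder triangular (min, most likely, max) duration assumptions per task.
tasks = {"A": (8, 10, 13), "B": (12, 14, 18), "C": (10, 12, 15), "D": (13, 16, 20),
         "E": (9, 11, 14), "F": (20, 24, 34), "G": (12, 15, 22)}

# Paths named in the text.
paths = {"path #1": "ABCDE", "path #3": "AFGE", "path #4": "ABCGE"}

n = 100_000
draws = {t: rng.triangular(lo, ml, hi, n) for t, (lo, ml, hi) in tasks.items()}
path_dur = {p: sum(draws[t] for t in seq) for p, seq in paths.items()}
project = np.maximum.reduce(list(path_dur.values()))   # critical-path duration per run

for p, d in path_dur.items():
    print(p, "is critical in", (d == project).mean(), "of the runs")
print("Expected project duration:", project.mean())
print("P(project <= 67 weeks):", (project <= 67).mean())
```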

We now find the expected total project duration to be 67 weeks, one to two weeks more than what CPM and PERT gave, but, more important, we find that the probability of finishing the project inside the time frame is only 50%.

By neglecting the probability that the critical path might change due to task variability, PERT underestimates the project variance and thus the probability that the project will not finish inside the expected time frame.

    Monte Carlo models like this can be extended to include many types of uncertainty belonging to the classes of foreseen and unforeseen uncertainty. However, it will only be complete when all types of project costs and their variability are included.

    5.     Summary

Key findings in comparative studies show that using Monte Carlo simulation along with project planning techniques allows a better understanding of project uncertainty and its risk level, and provides the project team with the ability to grasp the various possible courses of the project within one simulation procedure.

    Notes

[1] This can be visualized in a Precedence Diagram, also known as a Project Network Diagram. In a Network Diagram, the start of an activity must be linked to the end of another activity.

    [2] An event or a milestone is a point in time having no duration. A Precedence Diagram will always have a Start and an End event.

    [3] As a “best guess” or “best estimate” of a fixed or random variable.

    [4] E.g. repetition of tasks.

    Endnotes

    [i] There are four types of dependencies in a Precedence Diagram:

    1. Finish-Start: A task cannot start before a previous task has ended.
    2. Start-Start: There is a defined relationship between the start of tasks.
    3. Finish-Finish: There is a defined relationship between the end dates of tasks.
    4. Start-Finish: There is a defined relationship between the start of one task and the end date of a successor task.

    [ii] Taken from the Wikipedia article: Critical path drag, http://en.wikipedia.org/wiki/Critical_path_drag

[iii] The diagram contains more information than we will use. It is mostly self-explanatory; however, Float (or Slack) is defined as the activity delay that the project can tolerate before the project comes in late, and Drag as how much a task on the critical path is delaying project completion (Devaux, 2012).

    [iv] The Gantt chart was developed by Henry Laurence Gantt in the 1910s.

    [v] The Critical Path Method (CPM) was developed in the late 1950s by Morgan R. Walker of DuPont and James E. Kelley, Jr. of Remington Rand.

[vi] The Program Evaluation and Review Technique (PERT) was developed by Booz Allen Hamilton and the U.S. Navy, at about the same time as the CPM. Key features of a PERT network are:

    1. Events must take place in a logical order.
    2. Activities represent the time and the work it takes to get from one event to another.
    3. No event can be considered reached until ALL activities leading to the event are completed.
    4. No activity may be begun until the event preceding it has been reached.

[vii] Assuming that a process with a double-triangular distribution underlies the actual task durations, the three estimated values (min, ml, max) can then be used to calculate the expected value (E) and standard deviation (SD) as L-estimators, with: E = (min + 4·ml + max)/6 and SD = (max − min)/6.

    E is thus a weighted average, taking into account both the most optimistic and most pessimistic estimates of the durations provided. SD measures the variability or uncertainty in the estimated durations.

    References

    Devaux, Stephen A.,(2012). “The Drag Efficient: The Missing Quantification of Time on the Critical Path” Defense AT&L magazine of the Defense Acquisition University. Retrieved from http://www.dau.mil/pubscats/ATL%20Docs/Jan_Feb_2012/Devaux.pdf

    DeMarco, T, (1982), Controlling Software Projects, Prentice-Hall, Englewood Cliffs, N.J., 1982

    Elmaghraby, S.E., (1964), An algebra for the Analyses of Generalized Activity Networks, Management Science, 10,3.

    Pritsker, A. A. B. (1966). GERT: Graphical Evaluation and Review Technique (PDF). The RAND Corporation, RM-4973-NASA.

  • Distinguish between events and estimates

    Distinguish between events and estimates

    This entry is part 1 of 2 in the series Handling Events

     

Large public sector investment projects in Norway have to go through an established quality assurance methodology. There must be an external quality assurance process both of the selected concept (KS1) and of the project’s profitability and cost (KS2).

    KS1 and KS2

    Concept quality control (KS1) shall ensure the realization of socioeconomic advantages (the revenue side of a public project) by ensuring that the most appropriate concept for the project is selected. Quality assurance of cost and management support (KS2) shall ensure that the project can be completed in a satisfactory manner and with predictable costs.

I have worked with KS2 analyses, focusing on the uncertainty analysis. The analysis must be done in a quantitative manner and be probability based. There is special focus on the probability levels P50 – the project’s expected value, or the grant to the project – and P85, the Parliament’s grant. The civil service entity carrying out the project is granted the expected value (P50) and must go to a superior level (usually the ministerial level) to use the uncertainty reserve (the difference between the cost levels P85 and P50).

    Lessons learnt from risk management in large public projects

Private companies may learn many lessons from this quality assurance methodology – not least the thorough and methodical way the analysis is done, the way uncertainty is analysed and how the uncertainty reserve is managed.

    The analogy to the decision-making levels in the private sector is that the CEO shall manage the project on P50, while he must go to the company’s board to use the uncertainty reserve (P85-P50).

    In the uncertainty analyses in KS2 a distinction is made between estimate uncertainty and event uncertainty. This is a useful distinction, as the two types of risks are by nature different.

    Estimate uncertainty

    Uncertainty in the assumptions behind the calculation of a project’s cost and revenue, such as

    • Prices and volumes of products and inputs
    • Market mix
    • Strategic positioning
    • Construction cost

    These uncertainties can be modelled in great detail ((But remember – you need to see the forest for the trees!)) and are direct estimates of the project’s or company’s costs and revenues.

    Event Uncertainties

These are events that are not expected to occur and therefore should not be included in the calculation of direct cost or revenue. The variables will initially have an expected value of 0, but the events may have serious consequences if they do occur. Events can be modeled by estimating the probability of the event occurring and the consequence if it does. Examples of event uncertainties are:

    • Political risks in emerging markets
    • Paradigm Shifts in consumer habits
    • Innovations
    • Changes in competitor behavior
    • Changes in laws and regulations
    • Changes in tax regimes

    Why distinguish between estimates and events?

    The reason why there are advantages to separating estimates and events in risk modeling is that they are by nature different. An estimate of an expense or income is something we know will be part of a project’s results, with an expected value that is NOT equal to 0. It can be modeled as a probability curve with an expected outcome and a high and low value.

    An event, on the other hand, can occur or not, and has an expected value of 0. If the event is expected to occur, the impact of the event should be modeled as an expense or income. Whether the event occurs or not has a probability, and there will be an impact if the event occurs (0 if it doesn’t occur).

Such an event can be modeled as a discrete distribution (0 if it does not occur, 1 if it occurs), and there is only an impact on the result of the project or business IF it occurs. The consequence may be deterministic – we know what it means if it happens – or it could be a distribution with a high, low and expected value.
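A minimal sketch of how such a 0/1 event with an uncertain consequence can be added to a simulated result; all numbers below are illustrative assumptions, not the article's model:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

base_result = rng.normal(65.0, 60.0, n)            # assumed simulated base-case result
p_event = 0.30                                     # assumed probability that the event occurs
consequence = rng.triangular(40.0, 60.0, 90.0, n)  # assumed cost if the event occurs

occurs = rng.random(n) < p_event                   # discrete 0/1: does the event occur?
impact = np.where(occurs, consequence, 0.0)        # impact only in the runs where it occurs
result = base_result - impact

print("P(result <= 0) without the event:", (base_result <= 0).mean())
print("P(result <= 0) with the event:   ", (result <= 0).mean())
```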

    An example

    I have created an example using a private manufacturing company. They have an expected P&L which looks like this:

    Example data

The company has a high export share to the EU and Norwegian costs (both variable and fixed). The net margin is expected to fall to a level of 17% in 2018. The situation looks somewhat better when simulated – there is more upside than downside in the market.

    Initial simulation

    But potential events that may affect the result are not yet modeled, and what impact can they have? Let’s look at two examples of potential events:

1. The introduction of a duty of 25% on the company’s products in the EU. The company will not be able to pass the cost on to its customers, and this will therefore be a cost to the company.
2. There are only two suppliers of the raw material the company uses to produce its products, and demand for it is high. This means that the company runs the risk of not getting enough raw material (25% less) to produce as much as the market demands.

    events

As the table shows, the risk that the events occur increases with time. Looking at the consequences of the probability-weighted events in 2018, the impact on the expected result is:

Result 2018

The consequence of these events is a larger downside risk (lower expected result) and higher variability (larger standard deviation). The probability of a result of 0 or lower is:

    • 14% in the base scenario
    • 27% with the event “Duty in the EU”
    • 36% with the event “Raw material Shortage” in addition

The events have no upside, so this is a pure increase in company risk. A 36% probability of a result of 0 or lower may be dramatic. Knowing what potential events may mean for the company’s profitability will contribute to the company’s ability to take appropriate measures in time, for instance:

    • Be less dependent on EU customers
    • Securing a long-term raw materials contract

    and so on.

Normally, this kind of analysis is done as a scenario analysis. But a scenario analysis will not provide the answer to how likely the event is, nor to what the likely consequence is. Neither will it be able to answer the question: how likely is it that the business will make a loss?

One of the main reasons for doing risk analysis is that it increases the ability to take action in time. Good risk management is all about being one step ahead – all the time. As a rule, the consequences of events that no one has thought of (and for which no plan B is in place) are greater than those of events that have been thought through. It is far better to have calculated the consequences, reflected on the probabilities and, if possible, put risk mitigation in place.

    Knowing the likelihood that something can go horribly wrong is also an important tool in order to properly prioritize and put mitigation measures in at the right place.

  • Working Capital Strategy Revisited

    Working Capital Strategy Revisited

    This entry is part 3 of 3 in the series Working Capital

    Introduction

To link the posts on working capital and inventory management, we will look at a company with a complicated market structure, having sales and production in a large number of countries and with a wide variety of product lines. Added to this is a marked seasonality, with high sales in the year’s first two quarters and much lower sales in the last two quarters ((All data is from public records.)).

All this puts a strain on the organization’s production and distribution systems, and of course on working capital.

Looking at the development of net working capital ((Net working capital = Total current assets – Total current liabilities)) relative to net sales, it seems that the company in later years has curbed the initial net working capital growth:

Just by inspecting the graph, however, it is difficult to determine whether the company’s working capital management is good or lacking in performance. We therefore need to look in more detail at the working capital elements and compare them with industry ‘averages’ ((By their Standard Industrial Classification (SIC).)).

    The industry averages can be found from the annual “REL Consultancy /CFO Working Capital Survey” that made its debut in 1997 in the CFO Magazine. We can thus use the survey’s findings to assess the company’s working capital performance ((Katz, M.K. (2010). Working it out: The 2010 Working Capital Scorecard. CFO Magazine, June, Retrieved from http://www.cfo.com/article.cfm/14499542
    Also see: https://www.strategy-at-risk.com/2010/10/18/working-capital-strategy-2/)).

    The company’s working capital management

    Looking at the different elements of the company’s working capital, we find that:

I.    Days sales outstanding (DSO) is on average 70 days, compared with REL’s reported industry median of 56 days.

II.    For days payables outstanding (DPO) the difference is small and in the right direction: 25 days against the industry median of 23 days.

III.    Days inventory outstanding (DIO) is on average 138 days compared with the industry median of 39 days, and this is where the problem lies.

IV.    The company’s days of working capital (DWC = DSO + DIO − DPO) ((Days of working capital (DWC) is essentially the same as the Cash Conversion Cycle (CCC). See the endnote for more.)) have, according to the above, on average been 183 days over the last five years, compared to REL’s median DWC of 72 days for comparable companies.

    This company thus has more than 2.5 times ‘larger’ working capital than its industry average.

    As levers of financial performance, none is more important than working capital. The viability of every business activity rests on daily changes in receivables, inventory, and payables.

The goal of the company is to minimize its days of working capital (DWC) or, equivalently, its cash conversion cycle (CCC), and thereby reduce the amount of outstanding working capital. This requires examining each component of DWC discussed above and taking actions to improve each element. To the extent this can be achieved without increasing costs or depressing sales, such actions should be carried out:

1.    A decrease in days sales outstanding (DSO) or in days inventory outstanding (DIO) will represent an improvement, and an increase will indicate deterioration,

2.    An increase in days payables outstanding (DPO) will represent an improvement and a decrease will indicate deterioration,

3.    Reducing days of working capital (DWC or CCC) will represent an improvement, whereas an increasing DWC or CCC will represent deterioration.

    Day’s sales- and payables outstanding

    Many companies think in terms of “collecting as fast as possible, and paying as slowly as permissible.” This strategy, however, may not be the wisest.
At the same time as the company is attempting to integrate with its customers – and realize the related benefits – so are its suppliers. A “pay slow” approach may not optimize either the accounts or the inventory, and it is likely to interfere with good supplier relationships.

    Supply-chain finance

    One way around this might be ‘Supply Chain Finance ‘(SCF) or reverse factoring ((“The reverse factoring method, still rare, is similar to the factoring insofar as it involves three actors: the ordering party, the supplier and the factor. Just as basic factoring, the aim of the process is to finance the supplier’s receivables by a financier (the factor), so the supplier can cash in the money for what he sold immediately (minus an interest the factor deducts to finance the advance of money).” http://en.wikipedia.org/wiki/Reverse_factoring)). Properly done, it can enable a company to leverage credit to increase the efficiency of its working capital and at the same time enhance its relationships with suppliers. The company can extend payment terms and the supplier receives advance payments discounted at rates considerably lower than their normal funding margins. The lender (factor), in turn, gets the benefit of a margin higher than the risk profile commands.

    This is thus a form of receivables financing using solutions that provide working capital to suppliers and/or buyers within any part of a supply chain and that is typically arranged on the credit risk of a large corporate within that supply chain.

    Day’s inventory outstanding (DIO)

    DIO is a financial and operational measure, which expresses the value of inventory in days of cost of goods sold. It represents how much inventory an organization has tied up across its supply chain or more simply – how long it takes to convert inventory into sales. This measure can be aggregated for all inventories or broken down into days of raw material, work in progress and finished goods. This measure should normally be produced monthly.

    By using the industry typical ‘days inventory outstanding’ (DIO) we can calculate the potential reduction in the company’s inventory – if the company should succeed in being as good in inventory management as its peers.

    If the industry’s typical DIO value is applicable, then there should be a potential for a 60 % reduction in the company’s inventory.

    Even if this overstates the true potential it is obvious that a fairly large reduction is possible since 98% of the 1000 companies in the REL report have a value for DIO less than 138 days:

Adding to the company’s concern should also be the fact that the inventories seem to increase at a faster pace than net sales:

    Inventory Management

Successfully addressing the challenge of reducing inventory requires an understanding of why inventory is held and where it builds up in the system.
Achieving this goal requires focusing the inventory improvement efforts on four core areas:

    1. demand management – information integration with both suppliers and customers,
    2. inventory optimization – using statistical/finance tools to monitor and set inventory levels,
    3. transportation and logistics – lead time length and variability and
    4. supply chain planning and execution – coordinating planning throughout the chain from inbound to internal processing to outbound.

We believe that the best way of attacking these problems is to build a simulation model that can ‘mimic’ the sales – distribution – production chain in the detail necessary to study different strategies, the probabilities of stock-outs and possible stock-out costs, compared with the costs of stocking the different products (items).

The costs of never experiencing a stock-out can be excessively high – the global average of retail out-of-stocks is 8.3% ((Gruen, Thomas W. and Daniel Corsten (2008), A Comprehensive Guide to Retail Out-of-Stock Reduction in the Fast-Moving Consumer Goods Industry, Grocery Manufacturers of America, Washington, DC, ISBN: 978-3-905613-04-9)).

By basing the model on activity-based costing, it can estimate the cost and revenue elements of the product lines, thus identifying and/or eliminating those products and services that are unprofitable or ineffective. The scope is to release more working capital by lowering the value of inventories and streamlining the end-to-end value chain.

To do this we have to make improved forecasts of sales and a breakdown of risk and economic values, both geographically and for product groups, to find out where capital should be employed in the coming years (product – geography), both for M&A and organic growth investments.

A model like the one we propose needs detailed monthly data, usually found in the internal accounts. This data will be used to statistically determine the relationships between the cost variables describing the different value chains. In addition, overhead from different company levels (geographical) will have to be distributed both on products and on the distribution chains.

    Endnote

    Days Sales Outstanding (DSO) = AR/(total revenue/365)

    Year-end trade receivables net of allowance for doubtful accounts, plus financial receivables, divided by one day of average revenue.

    Days Inventory Outstanding (DIO) = Inventory/(total revenue/365)

    Year-end inventory plus LIFO reserve divided by one day of average revenue.

    Days Payables Outstanding (DPO) = AP/(total revenue/365)

    Year-end trade payables divided by one day of average revenue.

    Days Working Capital (DWC): (AR + inventory – AP)/(total revenue/365)

    Where:
    AR = Average accounts receivable
    AP = Average accounts payable
    Inventory = Average inventory + Work in progress

    Year-end net working capital (trade receivables plus inventory, minus AP) divided by one day of average revenue. (DWC = DSO+DIO-DPO).

    For the comparable industry we find an average of: DWC=56+39-23=72 days

    Days of working capital (DWC) is essentially the same as the Cash Conversion Cycle (CCC) except that the CCC uses the Cost of Goods Sold (COGS) when calculating both the Days Inventory Outstanding (DIO) and the Days Payables Outstanding (DPO) whereas DWC uses sales (Total Revenue) for all calculations:

CCC = Days in period × [(Average inventory / COGS) + (Average receivables / Revenue) − (Average payables / (COGS + Change in inventory))]

    Where:
    COGS= Production Cost – Change in Inventory
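To make the endnote formulas concrete, here is a minimal helper that computes the ratios from year-end figures; the numbers in the call are hypothetical:

```python
def working_capital_days(revenue, cogs, receivables, inventory, payables,
                         change_in_inventory=0.0, days=365):
    """Working-capital ratios as defined in the endnote above."""
    dso = receivables / (revenue / days)
    dio = inventory / (revenue / days)
    dpo = payables / (revenue / days)
    dwc = dso + dio - dpo
    ccc = days * (inventory / cogs
                  + receivables / revenue
                  - payables / (cogs + change_in_inventory))
    return dso, dio, dpo, dwc, ccc

# Hypothetical year-end figures, just to exercise the formulas.
print(working_capital_days(revenue=1000.0, cogs=600.0,
                           receivables=192.0, inventory=378.0, payables=68.0))
```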


  • Inventory management – Stochastic supply

    Inventory management – Stochastic supply

    This entry is part 4 of 4 in the series Predictive Analytics

     

    Introduction

We will now return to the newsvendor, who was facing a one-time purchasing decision: where to set the inventory level to maximize expected profit, given his knowledge of the demand distribution. It turned out that even if we did not know the closed form ((In mathematics, an expression is said to be a closed-form expression if it can be expressed analytically in terms of a finite number of certain “well-known” functions.)) of the demand distribution, we could find the inventory level that maximized profit and how this affected the vendor’s risk – assuming that his supply could with certainty be fixed at that level. But what if that is not the case? What if his supply is uncertain? Can we still optimize his inventory level?

We will look at two slightly different cases:

1. one where the supply is uniformly distributed, with actual delivery from 80% to 100% of the ordered volume, and
2. another where the supply has a triangular distribution, with actual delivery from 80% to 105% of the ordered volume, but with the most likely delivery at 100%.

    The demand distribution is as shown below (as before):

    Maximizing profit – uniformly distributed supply

    The figure below indicates what happens as we change the inventory level – given fixed supply (blue line). We can see as we successively move to higher inventory levels (from left to right on the x-axis) that expected profit will increase to a point of maximum.

If we let the actual delivery follow the uniform distribution described above, and successively change the order point, expected profit will follow the red line in the graph below. We can see that the new order point lies further out to the right on the inventory axis (order point). The vendor is forced to order more newspapers to ‘outweigh’ the supply uncertainty:

At the point of maximum profit the actual deliveries span from 2300 to 2900 units, with a mean close to the inventory level giving maximum profit in the fixed supply case:

    The realized profits are as shown in the frequency graph below:

Average profit has to some extent been reduced compared with the non-stochastic supply case, but more important is the increase in profit variability. Measured by the quartile variation, this variability has increased by almost 13%, mainly caused by an increased negative skewness – the downside has been raised.
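A sketch of the kind of simulation behind these figures is given below; the demand distribution, prices and costs are placeholders, since the article's empirical inputs are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000
price, cost = 10.0, 6.0                       # assumed unit selling price and unit cost
demand = rng.gamma(9.0, 300.0, n)             # placeholder, positively skewed demand

def expected_profit(order, supply="fixed"):
    if supply == "uniform":                   # actual delivery 80%-100% of the order
        delivered = order * rng.uniform(0.80, 1.00, n)
    elif supply == "triangular":              # 80%-105% of the order, most likely 100%
        delivered = order * rng.triangular(0.80, 1.00, 1.05, n)
    else:                                     # fixed supply: delivery equals the order
        delivered = np.full(n, float(order))
    sold = np.minimum(delivered, demand)
    return (price * sold - cost * delivered).mean()

orders = np.arange(2000, 4001, 50)
for mode in ("fixed", "uniform", "triangular"):
    best = max(orders, key=lambda q: expected_profit(q, mode))
    print(mode, "order point:", int(best))
```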

    Maximizing profit – triangular distributed supply

Again we compare the expected profit with delivery following the triangular distribution described above (red line) with the expected profit created by a known and fixed supply (blue line). We can see, as we successively move to higher inventory levels (from left to right on the x-axis), that expected profit will increase to a point of maximum. However, the order point for the stochastic supply lies further out to the right on the inventory axis than for the non-stochastic case:

    The uncertain supply again forces the vendor to order more newspapers to ‘outweigh’ the supply uncertainty:

At the point of maximum profit the actual deliveries span from 2250 to 2900 units, with a mean again close to the inventory level giving maximum profit in the fixed supply case ((This is not necessarily true for other combinations of demand and supply distributions.)).

    The realized profits are as shown in the frequency graph below:

Average profit has been somewhat reduced compared with the non-stochastic supply case, but more important is the increase in profit variability. Measured by the quartile variation, this variability has increased by 10%, again mainly caused by an increased negative skewness – again the downside has been raised.

The introduction of uncertain supply has shown that profit can still be maximized; however, the profit will be reduced by increased costs, both from lost sales and from excess inventory. Most important, profit variability will increase, raising the issue of possible alternative strategies.

    Summary

We have shown, through Monte Carlo simulations, that the ‘order point’ can be calculated when the actual delivered amount is uncertain, without knowing the closed form of the demand distribution. We actually do not need the closed form of the distribution describing delivery either, only historic data on the supplier’s performance (reliability).

Since we do not need the closed form of the demand or supply distributions, we are not limited to such distributions, but can use historic data to describe the uncertainty as frequency distributions. Expanding the scope of analysis to include supply disruptions, localization of inventory etc. is thus a natural extension of this method.

This opens for the use of robust and efficient methods and techniques for solving problems in inventory management, unrestricted by the form of the demand distribution, and, best of all, the results given as graphs will be more easily communicated to all parties than pure mathematical descriptions of the solutions.


  • Inventory management – Some effects of risk pooling

    Inventory management – Some effects of risk pooling

    This entry is part 3 of 4 in the series Predictive Analytics

    Introduction

The newsvendor described in the previous post has decided to branch out, having newsboys placed at strategic corners in the neighborhood. He will first consider three locations, but has six in his sights.

The question to be pondered is how many newspapers he should order for these three locations and the possible effects on profit and risk (Eppen, 1979) and (Chang & Lin, 1991).

He assumes that the demand distribution he experienced at the first location will also apply to the two others, and that all locations (points of sale) can be served from a centralized inventory. For the sake of simplicity he further assumes that all points of sale can be restocked instantly (i.e. with zero lead time) at zero cost, if necessary or advantageous, by shipment from one of the other locations, and that the demand at the different locations will be uncorrelated. The individual points of sale will initially have a working stock, but will have no need of safety stock.

In short, this is equivalent to having one inventory serve the newspaper sales generated by three (or six) copies of the original demand distribution:

The aggregated demand distribution for the three locations is still positively skewed (0.32), but much less so than the original (0.78), and has a lower coefficient of variation – 27% against 45% for the original ((The quartile variation has been reduced by 37%.)):

    The demand variability has thus been substantially reduced by this risk pooling ((We distinguish between ten main types of risk pooling that may reduce total demand and/or lead time variability (uncertainty): capacity pooling, central ordering, component commonality, inventory pooling, order splitting, postponement, product pooling, product substitution, transshipments, and virtual pooling. (Oeser, 2011)))  and the question now is how this will influence the vendor’s profit.
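The pooling effect on demand variability can be sketched directly; the single-location demand below is a placeholder gamma distribution with roughly the coefficient of variation quoted above:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 100_000

# Placeholder single-location demand: positively skewed, CV about 45%.
demand = rng.gamma(5.0, 500.0, (n, 6))        # six independent locations per run

for k in (1, 3, 6):
    pooled = demand[:, :k].sum(axis=1)
    cv = pooled.std() / pooled.mean()
    skew = ((pooled - pooled.mean()) ** 3).mean() / pooled.std() ** 3
    print(f"{k} location(s) pooled: CV = {cv:.1%}, skewness = {skew:.2f}")
```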

    Profit and Inventory level with Risk Pooling

    As in the previous post we have calculated profit and loss as:

    Profit = sales less production costs of both sold and unsold items
    Loss = value of lost sales (stock-out) and the cost of having produced and stocked more than can be expected to be sold

    The figure below indicates what will happen as we change the inventory level. We can see as we successively move to higher levels (from left to right on the x-axis) that expected profit (blue line) will increase to a point of maximum – ¤16541 at a level of 7149 units:

Compared to the point of maximum profit for a single warehouse (a profit of ¤4963 at a level of 2729 units, see the previous post), this risk pooling has increased the vendor’s profit by 11.1% while reducing his inventory by 12.7%. Centralization of the three inventories has thus been a successful operational hedge ((Risk pooling can be considered as a form of operational hedging. Operational hedging is risk mitigation using operational instruments.)) for our newsvendor, mitigating some, but not all, of the demand uncertainty.

    Since this risk mitigation was a success the newsvendor wants to calculate the possible benefits from serving six newsboys at different locations from the same inventory.

    Under the same assumptions, it turns out that this gives an even better result, with an increase in profit of almost 16% and at the same time reducing the inventory by 15%:

    The inventory ‘centralization’ has then both increased profit and reduced inventory level compared to a strategy with inventories held at each location.

    Centralizing inventory (inventory pooling) in a two-echelon supply chain may thus reduce costs and increase profits for the newsvendor carrying the inventory, but the individual newsboys may lose profits due to the pooling. On the other hand, the newsvendor will certainly lose profit if he allows the newsboys to decide the level of their own inventory and the centralized inventory.

One of the reasons behind this conflict of interests is that the newsvendor and the newsboys will each benefit one-sidedly from shifting the demand risk to the other party, even though overall performance may suffer as a result (Kemahlioğlu-Ziya, 2004) and (Anupindi and Bassok, 1999).

In real life, the actual risk pooling effects would depend on the correlations between the locations’ demands. A positive correlation would reduce the effect, while a negative correlation would increase it. If all locations were perfectly (positively) correlated the effect would be zero, and a correlation coefficient of minus one would maximize the effects.

    The third effect

The third direct effect of risk pooling is the reduced variability of expected profit. We plot the profit variability, measured by its coefficient of variation ((The coefficient of variation is defined as the ratio of the standard deviation to the mean – also known as unitized risk.)) (CV), for the three sets of strategies discussed above: one single inventory (warehouse), three single inventories versus all three centralized, and six single inventories versus all six centralized.

The graph below depicts the situation. The three curves show the CV of corporate profit for the three alternatives, and the vertical lines mark the point of maximum profit for each alternative.

The angle of inclination of each curve shows the profit’s sensitivity to changes in the inventory level, and the location of each curve shows that strategy’s impact on the predictability of realized profit.

A single warehouse strategy (blue) clearly gives much less ability to predict future profit than the ‘six centralized warehouses’ strategy (purple), while the ‘three centralized warehouses’ strategy (green) falls somewhere in between:

So, in addition to reduced costs and increased profits, centralization also gives a more predictable result and lower sensitivity to the inventory level, and hence greater leeway in the practical application of different policies for inventory planning.

    Summary

We have thus shown, through Monte Carlo simulations, that the benefits of pooling increase with the number of locations and that the benefits of risk pooling can be calculated without knowing the closed form ((In mathematics, an expression is said to be a closed-form expression if it can be expressed analytically in terms of a finite number of certain “well-known” functions.)) of the demand distribution.

Since we do not need the closed form of the demand distribution, we are not limited to low demand variability or restricted by the possibility of negative demand (Normal distributions etc.). Expanding the scope of analysis to include stochastic supply, supply disruptions, information sharing, localization of inventory etc. are natural extensions of this method ((We will return to some of these issues in later posts.)).

This opens for the use of robust and efficient methods and techniques for solving problems in inventory management, unrestricted by the form of the demand distribution, and, best of all, the results given as graphs will be more easily communicated to all parties than pure mathematical descriptions of the solutions.

    References

    Anupindi, R. & Bassok, Y. (1999). Centralization of stocks: Retailers vs. manufacturer.  Management Science 45(2), 178-191. doi: 10.1287/mnsc.45.2.178, accessed 09/12/2012.

    Chang, Pao-Long & Lin, C.-T. (1991). Centralized Effect on Expected Costs in a Multi-Location Newsboy Problem. Journal of the Operational Research Society of Japan, 34(1), 87–92.

    Eppen,G.D. (1979). Effects of centralization on expected costs in a multi-location newsboy problem. Management Science, 25(5), 498–501.

    Kemahlioğlu-Ziya, E. (2004). Formal methods of value sharing in supply chains. PhD thesis, School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA, July 2004. http://smartech.gatech.edu/bitstream/1853/4965/1/kemahlioglu ziya_eda_200407_phd.pdf, accessed 09/12/2012.

Oeser, G. (2011). Methods of Risk Pooling in Business Logistics and Their Application. Europa-Universität Viadrina Frankfurt (Oder). URL: http://opus.kobv.de/euv/volltexte/2011/45, accessed 09/12/2012.
