Monte Carlo simulation – Strategy @ Risk

Tag: Monte Carlo simulation

  • The Estimated Project Cost Distributions and the Final Project Cost

    The Estimated Project Cost Distributions and the Final Project Cost

    This entry is part 2 of 2 in the series The Norwegian Governmental Project Risk Assessment Scheme

Everybody believes in the exponential law of errors: the experimenters, because they think it can be proved by mathematics; and the mathematicians, because they believe it has been established by observation. (Whittaker & Robinson, 1967)

The growing use of cost risk assessment models in public projects has raised public concerns about their costs and their ability to reduce cost overruns and correctly predict the projects' final costs. In this article we show, by calculating the probabilities of the projects' final costs, that the models are neither reliable nor valid. The final costs and their probabilities indicate that the estimated cost distributions do not adequately represent the actual cost distributions.

    Introduction

    In the previous post we found that the project cost distributions applied in the uncertainty analysis for 85 Norwegian public works projects were symmetric – and that they could be represented by normal distributions. Their P85/P50 ratios also suggested that they might come from the same normal distribution, since a normal distribution seemed to fit all the observed ratios. The quantile-quantile graph (q-q graph) below depicts this:

[Figure: Q-Q plot #1]

As the normality test shows, it is not exactly normal ((As the graph shows, the distribution is slightly skewed to the right.)), but near enough normal for all practical purposes ((The corresponding linear regression gives a value of 0.9540 for the coefficient of determination.)). This was not what we would have expected to find.
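For readers who want to replicate this kind of check, a minimal sketch is given below. The array of ratios is only a stand-in generated from the mean (1.1069) and standard deviation (0.0511) reported further down; the real check would of course use the 85 observed P85/P50 ratios.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# stand-in for the 85 observed P85/P50 ratios
ratios = np.random.normal(loc=1.1069, scale=0.0511, size=85)

# q-q plot against a fitted normal distribution, plus a formal normality test
stats.probplot(ratios, dist="norm", plot=plt)
w, p_value = stats.shapiro(ratios)          # Shapiro-Wilk test for normality
print(f"Shapiro-Wilk: W={w:.4f}, p={p_value:.3f}")
plt.show()
```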

    The question now is if the use of normal distributions representing the total project cost is a fruitful approach or not.

We will study this by looking at the S/P50 ratio, that is, the ratio between the final (actual) total project cost – S – and the P50 cost estimate. But first we will take a look at the projects' individual cost distributions.

    The individual cost distributions

By using the fact that the individual projects' costs are normally distributed, and by using the P50 and P85 percentiles, we can estimate the mean and variance of all the projects' cost distributions (Cook, 2010).
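A minimal sketch of this parameter recovery under the normality assumption; the P50 and P85 figures in the example are hypothetical and not taken from any of the 85 projects:

```python
from scipy.stats import norm

def normal_from_quantiles(p50, p85):
    """Recover the mean and standard deviation of a normal distribution
    from its 50th and 85th percentiles (Cook, 2010)."""
    mean = p50                             # for a normal, the median equals the mean
    sigma = (p85 - p50) / norm.ppf(0.85)   # norm.ppf(0.85) is roughly 1.0364
    return mean, sigma

# hypothetical project: P50 = 500 and P85 = 555 (NOK million)
print(normal_from_quantiles(500.0, 555.0))   # -> (500.0, ~53.1)
```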

    In the graph below we have plotted the estimated relative cost distribution (cost/P50) for the projects with the smallest (light green) and the largest (dark green) variance. Between these curves lie the relative cost distributions for all the 85 projects.

    Between the light green and the blue curve we find 72 (85%) of the projects. The area between the blue and the dark green curve contains 13 of the projects – the projects with the highest variance:

[Figure: Relative cost distributions]

The differences between the individual relative cost distributions are therefore small. The average standard deviation for all 85 projects is 0.1035, with a coefficient of variation of 48%. For the 72 projects the average standard deviation is 0.0882, with a coefficient of variation of 36%. This is consistent with what we could see from the regression of P85 on P50.

It is bewildering that a portfolio of such diverse projects can end up with such a small range of normally distributed costs.

    The S/P50 ratio

    A frequency graph of the 85 observed ratios (S/P50) shows a pretty much symmetric distribution, with a pronounced peak. It is slightly positively skewed, with a mean of 1.05, a maximum value of 1.79, a minimum value of 0.41 and a coefficient of variation of 20.3%:

[Figure: Frequency graph of the S/P50 ratios]

At first glance this seems a reasonable result, even if the spread is large, given that the projects' total costs have normal distributions.

If the estimated cost distributions give a good representation of the underlying cost distributions, then S should also belong to that distribution. Keep in mind that the only value we know with certainty belongs to the underlying cost distribution is S, i.e. the final total project cost.

It is therefore of interest to find out whether the S/P50 ratios are consistent with the estimated distributions. We will investigate this by different routes, first by calculating at what probability the deviation of S from P50 occurred.

What we need to find, for each of the 85 projects, is the probability of having had a final cost ratio (S/P50) that is (see the sketch after the list):

i.    less than or equal to the observed ratio, for projects with S > P50, and
ii.   greater than or equal to the observed ratio, for projects with S < P50.
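Under the normality assumption, these probabilities can be computed directly from each project's P50 and P85 percentiles; a minimal sketch, with hypothetical figures:

```python
from scipy.stats import norm

def final_cost_probability(s, p50, p85):
    """P(cost <= S) when S > P50, and P(cost >= S) when S < P50,
    under the project's estimated normal cost distribution."""
    sigma = (p85 - p50) / norm.ppf(0.85)
    lower_tail = norm.cdf(s, loc=p50, scale=sigma)
    return lower_tail if s > p50 else 1.0 - lower_tail

# hypothetical project: P50 = 500, P85 = 555 and final cost S = 560 (NOK million)
print(final_cost_probability(560.0, 500.0, 555.0))   # ~0.87
```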

The graph below depicts this. The maroon circles give the final cost ratios (S/P50) and their probabilities:

[Figure: Relative cost distributions with final cost ratios (S/P50)]

A frequency graph of these probabilities should give a graph with a right tail, with most of the projects close to the 0.5 fractile (the median or P50 value), tapering off to the right as we move to higher fractiles.

We would thus anticipate that most projects have been finished at or close to the quality assurance scheme's median value, i.e. having had a probability of 0.5 of having had this or a lower (higher) value as the final cost ratio, and that only a few would show significant deviations from this.

We would certainly not expect many of the final cost ratio probabilities to lie above the 0.85 fractile (P85).

The final cost probability frequency graph will thus give us some of the complementary information needed to assess the soundness of using methods and simulation techniques that end up with symmetric project cost distributions.

    Final project cost ratio probability

The result is given in the graph below, where the red bars indicate projects that with probabilities of 85% or more should have had lower (or higher) final cost ratios:

[Figure: Final cost probabilities]

The result is far from what we expected: the projects' probabilities are not concentrated at or close to 0.5, and the frequency graph does not taper off to the right. On the contrary, the frequency of projects increases as we move to higher probabilities for the S/P50 ratios, and the highest frequency is for projects that with high probability should have had a much lower or a much higher final cost:

1. The final project cost ratio probabilities have a mean of 0.83, a median of 0.84 and a coefficient of variation of 21%.
2. Of the 85 projects, 51% have final cost ratios that had a probability of 84% or less of being lower (or higher), and 49% have final cost ratios that had a probability of 85% or more of being lower (higher).

Almost fifty percent of the projects have thus been seriously under- or over-budgeted, or have had large cost over- or underruns – according to the cost distributions established by the QA2 process.

    The cumulative frequency distribution below gives a more detailed description:

[Figure: Final cost probabilities – cumulative frequency]

It is difficult to say in what range the probability for the S/P50 ratio should have been for the estimated cost distributions to be considered "acceptable". If the answer is "inside the interquartile range", then only 30% of the projects' final cost forecasts can be regarded as acceptable.

    The assumption of normally distributed total project costs

Based on the close relation between the P50 and P85 percentiles, it is tempting to conclude that most if not all projects have been through the same cost estimation validation process: using the same family of cost distributions, with the same shape parameter, and assuming independent cost elements – ending up with a near normal or normal distribution for the projects' total costs. That is, all the P85/P50 ratios belong to the same distribution.

If this is the case, then the projects' final cost ratios should also belong to the same distribution. In the q-q graph below, we have added the S/P50 ratios (red) to the P85/P50 ratios (green) from the first q-q graph. If both sets of ratios are randomly drawn from the same distribution, they should all fall close to the blue identity line:

[Figure: Q-Q plot #3]

The combined ratios are clearly not normally distributed; the S/P50 ratios mostly end up in both tails, and the shape of the plotted ratios now indicates a distribution with heavy tails or maybe bimodality. The two sets of ratios are hence most likely not from the same distribution.

A q-q graph with only the S/P50 ratios shows, however, that they might be normally distributed, but drawn from a different distribution than the P85/P50 ratios:

[Figure: Q-Q plot #2]

The S/P50 ratios are clearly normally distributed, as they fall very close to the identity line. The plotted ratios also indicate slightly lighter tails than the corresponding theoretical distribution.

That the two sets of ratios are so clearly different is not surprising, since the S/P50 ratios have a coefficient of variation of 20% while the same metric is 4.6% for the P85/P50 ratios ((The S/P50 ratios have a mean of 1.0486 and a standard deviation of 0.2133. The same metrics for the P85/P50 ratios are 1.1069 and 0.0511.)).

Since we want the S/P50 ratio to be as close to one as possible, we can regard the distribution of the S/P50 ratios as the QA2's error distribution. This brings us to the question of the reliability and validity of the QA2 "certified" cost risk assessment model.

    Reliability and Validity

The first question that needs to be answered is whether the certified model is reliable in producing consistent results, and the second is whether the cost model really measures what we want it to measure.

We will try to answer this by using the S/P50 probabilities defined above to depict (see the sketch after this list):

1. The precision ((ISO 5725 – Accuracy of Measurement Methods and Results.)) of the forecasted cost distributions, by the variance of the S/P50 probabilities, and
2. The accuracy (or trueness) of the forecasts, by the closeness of the mean of the probabilities for the S/P50 ratio to the forecasts' median value – 0.5.
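A minimal sketch of these two measures, assuming the 85 final-cost-ratio probabilities are available as an array; using the coefficient of variation for precision and the distance of the mean from 0.5 for accuracy is our reading of the text, not a formula given in the post:

```python
import numpy as np

def precision_and_accuracy(probabilities):
    """Spread (precision) and closeness of the mean to 0.5 (accuracy)
    of the final-cost-ratio probabilities defined above."""
    p = np.asarray(probabilities, dtype=float)
    precision = p.std(ddof=1) / p.mean()   # coefficient of variation (~21% in the post)
    accuracy = abs(p.mean() - 0.5)         # distance from the ideal median value of 0.5
    return precision, accuracy
```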

The first will give us an answer about the model's reliability and the second an answer about the model's validity:

[Figure: Accuracy and precision]

A visual inspection of the graph gives an impression of both low precision and low accuracy:

• the probabilities have a coefficient of variation of 21% and a very high density of final project costs ending up in the cost distributions' tail ends, and
• the mean of the probabilities is 0.83, giving a very low accuracy for the forecasts.

The conclusion then must be that the cost model(s) are neither reliable nor valid:

[Figure: Unreliable and invalid]

Summary

    We have in these two articles shown that the implementation of the QA2 scheme in Norway ends up with normally distributed project costs.

    i.    The final cost ratios (S/P50) and their probabilities indicate that the cost distributions do not adequately represent the actual distributions.
ii.    The model(s) are neither reliable nor valid.
iii.    We believe that this is due to the choice of risk models and techniques, and not to the actual risk assessment work.
iv.    The only way to resolve this is to use proper Monte Carlo simulation models and techniques.

    Final Words

Our work reported in these two posts has been done out of pure curiosity, after watching the program "Brennpunkt". The data used has been taken from the program's documentation. Based on the results, we feel that our work should be replicated by the Department of Finance, with data from the original sources, to weed out possible errors.

    It should certainly be worth the effort:

i.    The 85 projects here amount to NOK 221,005 million, with
ii.    NOK 28,012 million in total deviation ((The sum of all deviations from the P50 values.)) from the P50 values,
iii.    NOK 19,495 million unnecessarily held in reserve ((The P85 amount less the final project cost > zero.)), and
iv.    overruns ((The final project cost less the P50 amount > zero.)) of NOK 20,539 million.
v.    That is, nearly every fifth krone of the projects' budgets has been misallocated.
vi.    And there are many more projects to come.

    References

    Cook, J.D., (2010). Determining distribution parameters from quantiles.
    http://biostats.bepress.com/mdandersonbiostat/paper55

Whittaker, E. T. and Robinson, G. (1967), Normal Frequency Distribution. Ch. 8 in The Calculus of Observations: A Treatise on Numerical Mathematics, 4th ed. New York: Dover, pp. 164-208, p. 179.

  • Project Management under Uncertainty

    Project Management under Uncertainty

    You can’t manage what you can’t measure
    You can’t measure what you can’t define
    How do you define something that isn’t known?

    DeMarco, 1982

    1.     Introduction

By the term Project we usually understand a unique, one-time operation designed to accomplish a set of objectives in a limited time frame. This could be building a new production plant, designing a new product or developing new software for a specific purpose.

A project usually differs from normal operations by being a one-time operation, having a limited time horizon and budget, having unique specifications, and working across organizational boundaries. A project can be divided into four phases: project definition, planning, implementation and project phase-out.

    2.     Project Scheduling

The project planning phase, which we will touch upon in this paper, consists of breaking down the project into tasks that must be accomplished for the project to be finished.

The objectives of project scheduling are to determine the earliest start and finish of each task in the project. The aim is to be able to complete the project as early as possible and to calculate the likelihood that the project will be completed within a certain time frame.

    The dependencies[i] between the tasks determine their predecessor(s) and successor(s) and thus their sequence (order of execution) in the project[1]. The aim is to list all tasks (project activities), their sequence and duration[2] (estimated activity time length). The figure[ii] below shows a simple project network diagram, and we will in the following use this as an example[iii].

[Figure: Sample project network diagram]

This project thus consists of a linear flow of coordinated tasks where, in fact, time, cost and performance can vary randomly.

    A convenient way of organizing this information is by using a Gantt[iv] chart. This gives a graphic representation of the project’s tasks, the expected time it takes to complete them, and the sequence in which they must be done.

There will usually be more than one path (sequence of tasks) from the first to the last task in a project. The path that takes the longest time to complete is the project's critical path. The objective of all this is to identify this path and the time it takes to complete it.

    3.     Critical Path Analysis

    The Critical Path (CP)[v] is defined as the sequence of tasks that, if delayed – regardless of whether the other project tasks are completed on or before time – would delay the entire project.

The critical path is hence based on the forecasted duration of each task in the project. These durations are given as single-point estimates[3], implying that the project tasks' durations contain no uncertainty (i.e. they are deterministic). This is obviously wrong and will often lead to unrealistic project estimates, due to the inherent uncertainty in all project work.

Keep in mind that all plans are estimates and are only as good as the task estimates.

    As a matter of fact many different types of uncertainty can be expected in most projects:

1. Ordinary uncertainty, where time, cost and performance can vary randomly, but inside predictable ranges. Variations in task durations will cause the project's critical path to shift, but this can be predicted and the variation in total project time can be calculated.
2. Foreseen uncertainty, where a few known factors (events) can affect the project, but in an unpredictable way[4]. These are projects where tasks and events occur probabilistically and contain logical relationships of a more complicated nature, e.g. from a specific event some tasks are undertaken with certainty while others only probabilistically (Elmaghraby, 1964; Pritsker, 1966). The distribution for total project time can still be calculated, but will include variation from the chance events.
3. Unforeseen uncertainty, where one or more factors (events) cannot be predicted. This implies that decision points about the project's implementation have to be included at one or more points in the project's execution.

As a remedy for the critical path analysis's inadequacy in the presence of ordinary uncertainty, the Program Evaluation and Review Technique (PERT[vi]) was developed. PERT is a variation on critical path analysis that takes a slightly more skeptical view of the duration estimates made for each of the project tasks.

PERT uses a three-point estimate[vii], based on forecasts of the shortest possible task duration, the most likely task duration and the worst-case task duration. The task's expected duration is then calculated as a weighted average of these three duration estimates.

This is assumed to help bias time estimates away from the unrealistically short time-scales that are often the case.

    4.     CP, PERT and Monte Carlo Simulation

    The two most important questions we want answered are:

• How long will it take to do the project?
• How likely is the project to succeed within the allotted time frame?

In this example the project's time frame is set to 67 weeks.

We will use the critical path method, PERT and Monte Carlo simulation to try to answer these questions, but first we need to make some assumptions about the variability of the estimated task durations. We will assume that the durations are triangularly distributed and that the actual durations can be both higher and lower than their most likely values.

The distributions will probably have a right tail, since underestimation is common when assessing time and cost (positively skewed), but sometimes people deliberately overestimate to avoid being held responsible for later project delays (negatively skewed). The assumptions about the tasks' durations are given in the table below:

[Table: Task duration assumptions]

The corresponding paths, critical path and project durations are given in the table below. The critical path method finds path #1 (tasks: A, B, C, D, E) to be the critical path and thus the expected project duration to be 65 weeks. The second question, however, cannot be answered by using this method. So, with regard to probable deviations from the expected project duration, the project manager is left without any information.

By using PERT, calculating expected durations and their standard deviations as described in endnote vii, we find the same critical path and roughly the same expected project duration (65.5 weeks), but since we can now calculate the estimate's standard deviation, we can find the probability of the project being finished inside the project's time frame.

[Table: Paths, critical path and project durations]

By assuming that the sum of task durations along the critical path is approximately normally distributed, we find the probability of having the project finished inside the time frame of 67 weeks to be 79%. Since this is a fairly high probability of project success, the manager can rest contentedly – or can she?
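The PERT calculation behind this figure can be sketched as follows. The task estimates are made-up stand-ins (the post's own table is an image and is not reproduced here), so the sketch will not return exactly 65.5 weeks and 79%:

```python
from scipy.stats import norm

def pert(min_d, ml_d, max_d):
    """PERT three-point estimate: E = (min + 4*ml + max)/6, SD = (max - min)/6."""
    return (min_d + 4 * ml_d + max_d) / 6.0, (max_d - min_d) / 6.0

# hypothetical (min, most likely, max) durations in weeks for the tasks on path #1
path1 = {"A": (3, 4, 6), "B": (10, 12, 16), "C": (14, 16, 21),
         "D": (18, 20, 27), "E": (8, 10, 14)}

means, sds = zip(*(pert(*d) for d in path1.values()))
path_mean = sum(means)
path_sd = sum(s ** 2 for s in sds) ** 0.5    # assumes independent task durations

# probability of finishing inside a 67-week time frame (normal approximation)
print(path_mean, path_sd, norm.cdf(67, loc=path_mean, scale=path_sd))
```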

If we repeat the exercise, now using Monte Carlo simulation, we find a different answer. We can no longer establish a critical path with certainty. The tasks' variability can in fact give three different critical paths. The most likely is path #1 as before, but there is a close to 30% probability that path #4 (tasks: A, B, C, G, E) will be the critical path. It is also possible, even if the probability is small (<5%), that path #3 (tasks: A, F, G, E) is the critical path (see the figure below).

[Figure: Paths as critical path]

So, in this case we cannot use the critical path method; it will give wrong answers and misleading information to the project manager. More important is the fact that the method cannot use all the information we have about the project's tasks, that is to say their variability.

A better approach is to simulate project time to find the distribution for the total project duration. This distribution will then include the durations of all critical paths that may arise during the project simulation, given by the red curve in the figure below:

[Figure: Path durations (CP)]

This figure gives the cumulative probability distributions for the possible critical paths' durations (paths #1, #3 and #4) as well as for the total project duration. Since path #1 consistently has long durations, it is only in 'extreme' cases that path #4 is the critical path. Most striking is the large variation in path #3's duration and the fact that it can end up as the critical path in some of the simulation's runs.

The only way to find the distribution for total project duration is, for every run in the simulation, to find the critical path and calculate its duration.
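A sketch of such a simulation, using the same hypothetical task estimates as in the PERT sketch above and only the three paths named in the text; its output will therefore only roughly mirror the post's results:

```python
import numpy as np

rng = np.random.default_rng(42)
n_runs = 10_000

# hypothetical (min, most likely, max) task durations in weeks
tasks = {"A": (3, 4, 6), "B": (10, 12, 16), "C": (14, 16, 21),
         "D": (18, 20, 27), "E": (8, 10, 14), "F": (24, 26, 30),
         "G": (17, 19, 25)}

# the three paths named in the text (the full network may contain more)
paths = {"#1": "ABCDE", "#3": "AFGE", "#4": "ABCGE"}

draws = {t: rng.triangular(*d, size=n_runs) for t, d in tasks.items()}
path_dur = {p: sum(draws[t] for t in seq) for p, seq in paths.items()}

stacked = np.vstack(list(path_dur.values()))
total = stacked.max(axis=0)                                   # project duration per run
critical = np.array(list(path_dur.keys()))[stacked.argmax(axis=0)]

print("expected project duration:", total.mean())
print("P(finish within 67 weeks):", (total <= 67).mean())
for p in paths:
    print(f"P(path {p} is critical): {(critical == p).mean():.1%}")
```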

We now find the expected total project duration to be 67 weeks, one to two weeks more than what the CPM and PERT gave, but more importantly, we find that the probability of finishing the project inside the time frame is only 50%.

By neglecting the probability that the critical path might change due to task variability, PERT underestimates the project variance and thus the probability that the project will not finish inside the expected time frame.

    Monte Carlo models like this can be extended to include many types of uncertainty belonging to the classes of foreseen and unforeseen uncertainty. However, it will only be complete when all types of project costs and their variability are included.

    5.     Summary

Key findings in comparative studies show that using Monte Carlo simulation along with project planning techniques allows a better understanding of project uncertainty and its risk level, and provides the project team with the ability to grasp the various possible courses of the project within one simulation procedure.

    Notes

[1] This can be visualized in a Precedence Diagram, also known as a Project Network Diagram. In a Network Diagram, the start of an activity must be linked to the end of another activity.

    [2] An event or a milestone is a point in time having no duration. A Precedence Diagram will always have a Start and an End event.

    [3] As a “best guess” or “best estimate” of a fixed or random variable.

    [4] E.g. repetition of tasks.

    Endnotes

    [i] There are four types of dependencies in a Precedence Diagram:

    1. Finish-Start: A task cannot start before a previous task has ended.
    2. Start-Start: There is a defined relationship between the start of tasks.
    3. Finish-Finish: There is a defined relationship between the end dates of tasks.
    4. Start-Finish: There is a defined relationship between the start of one task and the end date of a successor task.

    [ii] Taken from the Wikipedia article: Critical path drag, http://en.wikipedia.org/wiki/Critical_path_drag

[iii] The diagram contains more information than we will use. It is mostly self-explanatory; however, Float (or Slack) and Drag are defined as, respectively, the activity delay that the project can tolerate before the project comes in late, and how much a task on the critical path is delaying project completion (Devaux, 2012).

    [iv] The Gantt chart was developed by Henry Laurence Gantt in the 1910s.

    [v] The Critical Path Method (CPM) was developed in the late 1950s by Morgan R. Walker of DuPont and James E. Kelley, Jr. of Remington Rand.

[vi] The Program Evaluation and Review Technique (PERT) was developed by Booz Allen Hamilton and the U.S. Navy, at about the same time as the CPM. Key features of a PERT network are:

    1. Events must take place in a logical order.
    2. Activities represent the time and the work it takes to get from one event to another.
    3. No event can be considered reached until ALL activities leading to the event are completed.
    4. No activity may be begun until the event preceding it has been reached.

[vii] Assuming that a process with a double-triangular distribution underlies the actual task durations, the three estimated values (min, ml, max) can be used to calculate the expected value (E) and standard deviation (SD) as L-estimators, with: E = (min + 4·ml + max)/6 and SD = (max − min)/6.

    E is thus a weighted average, taking into account both the most optimistic and most pessimistic estimates of the durations provided. SD measures the variability or uncertainty in the estimated durations.

    References

Devaux, Stephen A. (2012), "The Drag Efficient: The Missing Quantification of Time on the Critical Path", Defense AT&L magazine of the Defense Acquisition University. Retrieved from http://www.dau.mil/pubscats/ATL%20Docs/Jan_Feb_2012/Devaux.pdf

DeMarco, T. (1982), Controlling Software Projects, Prentice-Hall, Englewood Cliffs, N.J.

Elmaghraby, S. E. (1964), An Algebra for the Analyses of Generalized Activity Networks, Management Science, 10(3).

Pritsker, A. A. B. (1966), GERT: Graphical Evaluation and Review Technique. The RAND Corporation, RM-4973-NASA.

  • The risk of planes crashing due to volcanic ash

    The risk of planes crashing due to volcanic ash

    This entry is part 4 of 4 in the series Airports

    Eyjafjallajokull volcano

When the Icelandic volcano Eyjafjallajökull had a large eruption in 2010, it led to closed airspace all over Europe, with correspondingly big losses for the airlines. In addition, it led to significant problems for passengers who were stuck at various airports without getting home. In Norway we got a new word: "ash stuck" ((Askefast)) became a part of the Norwegian vocabulary.

The reason the planes were grounded is that mineral particles in the volcanic ash may damage the planes' engines, which in turn may lead to them crashing. This happened in 1982, when a British Airways flight almost crashed due to volcanic particles in the engines. The risk of the same happening in 2010 was probably not large, but the consequences would have been great should a plane crash.

Using simulation software and a simple model, I will show how this risk can be calculated, and hence why the airspace was closed over Europe in 2010 even if the risk was not high. I have not calculated any effects following the closure, since this is neither a big model nor an in-depth analysis. It is merely meant as an example of how different issues can be modeled using Monte Carlo simulation. The variable values are not factual but my own simple estimates. The goal of this article is to show an example of modeling, not to get a precise estimate of the actual risk.

    To model the risk of dangerous ash in the air there are a few key questions that have to be asked and answered to describe the issue in a quantitative way.

[Figure: Is the ash dangerous?]

Variable 1. Is the ash dangerous?

We first have to model the risk of the ash being dangerous to plane engines. I do that by using a so-called discrete probability: it has a value of 0 if the ash is not dangerous and a value of 1 if it is. Then the probabilities for each of the alternatives are set. I set them to:

• 99% probability that the ash IS NOT dangerous
    • 1% probability that the ash IS dangerous

[Figure: Number of planes in the air during 2 hours]

Variable 2. How many planes are in the air?

    Secondly we have to estimate how many planes are in the air when the ash becomes a problem.  Daily around 30 000 planes are in the air over Europe.  We can assume that if planes start crashing or get in big trouble the rest will immediately be grounded.  Therefore I only use 2/24 of these planes in the calculation.

    • 2 500 planes are in the air when the problem occurs

    I use a normal distribution and set the standard deviation for planes in the air in a 2 hour period to 250 planes.  I have no views on whether the curve is skewed one way or the other.  I assume it may well be, since there probably are different numbers of planes in the air depending on weekday, whether it’s a holiday season and so on, but I’ll leave that estimate to the air authority staff.

[Figure: Number of passengers and crew]

Variable 3. How many people are there in each plane?

Thirdly, I need an estimate of how many passengers and crew there are in each plane. I assume the following, and disregard the fact that there are a lot of intercontinental flights over the Eyjafjallajökull volcano, likely with more passengers than the average plane over Europe. The curve might be more skewed than what I assume:

    • Average number of passengers/crew: 70
    • Lowest number of passengers/crew: 60
    • Highest number of passengers/crew: 95

    The reason I’m using a skewed curve here is that the airline business is constantly under pressure to fill up their planes.  In addition the number of passengers will vary by weekday and so on.  I think it is reasonable to assume that there are likely more passengers per plane rather than fewer.

[Figure: Number of planes crashing]

Variable 4. How many of the planes which are in the air will crash?

    The last variable that needs to be modeled is how many planes will crash should the ash be dangerous.  I assume that maybe no planes actually crash, even though the ash gets into their engines.  This is the low end of the curve.  I have in addition assumed the following:

• Expected number of planes that crash: 0.01%
• Maximum number of planes that crash: 1.0%

    Now we have what we need to start calculating!

    The formula I use to calculate is as follows:

    If(“Dangerous ash”=0;0)

    If(“Dangerous ash”=1;”Number of planes in the air”x”Number of planes crashing”x”Number of passengers/crew per plane”)

    If the ash is not dangerous, variable 1 is equal to 0, no planes crash and nobody dies.  If the ash is dangerous the number of dead is a product of the number of planes, number of passengers/crew and the number of planes crashing.
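As a sketch, the whole model can be run directly in code with the variable values above. The post does not state which distribution was used for the share of planes crashing beyond its expected value and maximum, so a right-skewed triangular is used here as a stand-in, which is why the sketch's expected value will not match the post's figure of 3 exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Variable 1: is the ash dangerous? (1% probability, 0/1 discrete variable)
dangerous = rng.random(n) < 0.01

# Variable 2: planes in the air in the 2-hour window, normal(2500, 250)
planes = rng.normal(2500, 250, n)

# Variable 3: passengers and crew per plane, skewed triangular(60, 70, 95)
people = rng.triangular(60, 70, 95, n)

# Variable 4: share of the airborne planes that crash if the ash is dangerous
# (stated expected value 0.01%, maximum 1.0%; distribution family assumed here)
crash_share = rng.triangular(0.0, 0.0001, 0.01, n)

# the formula above: zero deaths if the ash is harmless, otherwise
# planes in the air x share of planes crashing x people per plane
deaths = np.where(dangerous, planes * crash_share * people, 0.0)

print("expected number of dead:", deaths.mean())
print("99.9th percentile:", np.percentile(deaths, 99.9))
```

Rerunning the sketch with the higher probabilities discussed further down only requires changing the two corresponding parameters.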

    Running this model with a simulation tool gives the following result:

[Figure: Expected value – number of dead]

As the graph shows, the expected value is low, 3 people, meaning that the probability of a major loss of planes is very low. But the consequences may be devastatingly high. In this model run there is a 1% probability that the ash is dangerous, and a 0.01% probability that planes actually crash. However, the distribution has a long tail, and a bit out in the tail there is a probability that 1,000 people crash to their death. This is a so-called shortfall risk, or the risk of a black swan if you wish. The probability is low, but the consequences are very big.

This is the reason for the cautionary steps taken by the air authorities. Another reason is that the probabilities both of the ash being dangerous and of planes crashing because of it are unknown. Thirdly, changes in variable values will have a big impact.

If the probability of the ash being dangerous is 10% rather than 1%, and the probability of planes crashing is 1% rather than 0.01%, as many as 200 dead (about 3 planes) are expected, while the extreme outcome is close to 6,400 dead.

[Figure: Expected value – number of dead, higher probability of crash]

    This is a simplified example of the modeling that is likely to be behind the airspace being closed.  I don’t know what probabilities are used, but I’m sure this is how they think.

    How we assess risk depends on who we are.  Some of us have a high risk appetite, some have low.  I’m glad I’m not the one to make the decision on whether to close the airspace or not.  It is not an easy decision.

My model is of course very simple. There are many factors to take into account, like wind direction and strength, the intensity of the eruption and a number of other factors I don't know about. But as an illustration, both of the factors that need to be estimated in this case and as a generic modeling case, this is a good example.

    Originally published in Norwegian.

  • We’ve Got Mail !

    We’ve Got Mail !

    This entry is part 1 of 2 in the series Self-applause

[Figure: SlideShare]

Thanks
S@R

     

  • Working Capital Strategy Revisited

    Working Capital Strategy Revisited

    This entry is part 3 of 3 in the series Working Capital

    Introduction

To link the posts on working capital and inventory management, we will look at a company with a complicated market structure, having sales and production in a large number of countries and with a wide variety of product lines. Added to this is a marked seasonality, with high sales in the year's first two quarters and much lower sales in the year's last two quarters ((All data is from public records.)).

All this puts a strain on the organization's production and distribution systems and of course on working capital.

Looking at the development of net working capital ((Net working capital = Total current assets – Total current liabilities.)) relative to net sales, it seems that the company in later years has curbed the initial net working capital growth:

Just by inspecting the graph, however, it is difficult to determine whether the company's working capital management is good or lacking in performance. We therefore need to look in more detail at the working capital elements and compare them with industry 'averages' ((By their Standard Industrial Classification (SIC).)).

The industry averages can be found in the annual "REL Consultancy/CFO Working Capital Survey", which made its debut in 1997 in CFO Magazine. We can thus use the survey's findings to assess the company's working capital performance ((Katz, M.K. (2010). Working it out: The 2010 Working Capital Scorecard. CFO Magazine, June. Retrieved from http://www.cfo.com/article.cfm/14499542
Also see: https://www.strategy-at-risk.com/2010/10/18/working-capital-strategy-2/)).

    The company’s working capital management

    Looking at the different elements of the company’s working capital, we find that:

I.    Days sales outstanding (DSO) is on average 70 days, compared with REL's reported industry median of 56 days.

II.    For days payables outstanding (DPO) the difference is small and in the right direction: 25 days against the industry median of 23 days.

III.    Days inventory outstanding (DIO) is on average 138 days compared with the industry median of 39 days, and this is where the problem lies.

IV.    The company's days of working capital (DWC = DSO + DIO – DPO) ((Days of working capital (DWC) is essentially the same as the Cash Conversion Cycle (CCC). See the endnote for more.)) have, according to the above, averaged 183 days over the last five years, compared to REL's median DWC of 72 days for comparable companies.

This company thus has more than 2.5 times the working capital of its industry average.
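The arithmetic behind this comparison, using the figures above:

```python
def days_working_capital(dso, dio, dpo):
    """Days of working capital: DWC = DSO + DIO - DPO."""
    return dso + dio - dpo

company = days_working_capital(dso=70, dio=138, dpo=25)   # the company's averages
industry = days_working_capital(dso=56, dio=39, dpo=23)   # REL industry medians
print(company, industry, round(company / industry, 1))    # 183, 72, ~2.5
```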

    As levers of financial performance, none is more important than working capital. The viability of every business activity rests on daily changes in receivables, inventory, and payables.

The goal of the company is to minimize its 'Days of Working Capital' (DWC) or, equivalently, its 'Cash Conversion Cycle' (CCC), and thereby reduce the amount of outstanding working capital. This requires examining each component of DWC discussed above and taking actions to improve each element. To the extent these actions can be carried out without increasing costs or depressing sales, they should be:

1.    A decrease in 'days sales outstanding' (DSO) or in 'days inventory outstanding' (DIO) will represent an improvement, and an increase will indicate deterioration,

2.    An increase in 'days payables outstanding' (DPO) will represent an improvement and a decrease will indicate deterioration,

3.    A reduction in 'days of working capital' (DWC or CCC) will represent an improvement, whereas an increase will represent deterioration.

    Day’s sales- and payables outstanding

    Many companies think in terms of “collecting as fast as possible, and paying as slowly as permissible.” This strategy, however, may not be the wisest.
At the same time as the company is attempting to integrate with its customers – and realize the related benefits – so are its suppliers. A "pay slow" approach may not optimize either the accounts or the inventory, and it is likely to interfere with good supplier relationships.

    Supply-chain finance

One way around this might be 'Supply Chain Finance' (SCF) or reverse factoring (("The reverse factoring method, still rare, is similar to the factoring insofar as it involves three actors: the ordering party, the supplier and the factor. Just as basic factoring, the aim of the process is to finance the supplier's receivables by a financier (the factor), so the supplier can cash in the money for what he sold immediately (minus an interest the factor deducts to finance the advance of money)." http://en.wikipedia.org/wiki/Reverse_factoring)). Properly done, it can enable a company to leverage credit to increase the efficiency of its working capital and at the same time enhance its relationships with suppliers. The company can extend payment terms and the supplier receives advance payments discounted at rates considerably lower than their normal funding margins. The lender (factor), in turn, gets the benefit of a margin higher than the risk profile commands.

    This is thus a form of receivables financing using solutions that provide working capital to suppliers and/or buyers within any part of a supply chain and that is typically arranged on the credit risk of a large corporate within that supply chain.

    Day’s inventory outstanding (DIO)

    DIO is a financial and operational measure, which expresses the value of inventory in days of cost of goods sold. It represents how much inventory an organization has tied up across its supply chain or more simply – how long it takes to convert inventory into sales. This measure can be aggregated for all inventories or broken down into days of raw material, work in progress and finished goods. This measure should normally be produced monthly.

    By using the industry typical ‘days inventory outstanding’ (DIO) we can calculate the potential reduction in the company’s inventory – if the company should succeed in being as good in inventory management as its peers.

    If the industry’s typical DIO value is applicable, then there should be a potential for a 60 % reduction in the company’s inventory.

Even if this overstates the true potential, it is obvious that a fairly large reduction is possible, since 98% of the 1,000 companies in the REL report have a DIO value of less than 138 days:

    Adding to the company’s concern should also be the fact that the inventories seems to increase at a faster pace than net sales:

    Inventory Management

Successfully addressing the challenge of reducing inventory requires an understanding of why inventory is held and where it builds up in the system.
Achieving this goal requires focusing inventory improvement efforts on four core areas:

    1. demand management – information integration with both suppliers and customers,
    2. inventory optimization – using statistical/finance tools to monitor and set inventory levels,
    3. transportation and logistics – lead time length and variability and
    4. supply chain planning and execution – coordinating planning throughout the chain from inbound to internal processing to outbound.

We believe that the best way of attacking this problem is to produce a simulation model that can 'mimic' the sales – distribution – production chain in the necessary detail to study different strategies, the probabilities of stock-outs, and the possible stock-out costs compared with the costs of carrying the different products (items).

The costs of never experiencing a stock-out can be excessively high – the global average of retail out-of-stocks is 8.3% ((Gruen, Thomas W. and Daniel Corsten (2008), A Comprehensive Guide to Retail Out-of-Stock Reduction in the Fast-Moving Consumer Goods Industry, Grocery Manufacturers of America, Washington, DC, ISBN: 978-3-905613-04-9)).

By basing the model on activity-based costing, it can estimate the cost and revenue elements of the product lines, and thus identify and/or eliminate those products and services that are unprofitable or ineffective. The aim is to release more working capital by lowering the value of inventories and streamlining the end-to-end value chain.

To do this we have to make improved sales forecasts and a breakdown of risk and economic values both geographically and by product group, to find out where capital should be employed in the coming years (product – geography), both for M&A and for organic growth investments.

A model like the one we propose needs detailed monthly data, usually found in the internal accounts. This data will be used to statistically determine the relationships between the cost variables describing the different value chains. In addition, overhead from the different company levels (geographical) will have to be distributed both on products and on the distribution chains.

    Endnote

    Days Sales Outstanding (DSO) = AR/(total revenue/365)

    Year-end trade receivables net of allowance for doubtful accounts, plus financial receivables, divided by one day of average revenue.

    Days Inventory Outstanding (DIO) = Inventory/(total revenue/365)

    Year-end inventory plus LIFO reserve divided by one day of average revenue.

    Days Payables Outstanding (DPO) = AP/(total revenue/365)

    Year-end trade payables divided by one day of average revenue.

    Days Working Capital (DWC): (AR + inventory – AP)/(total revenue/365)

    Where:
    AR = Average accounts receivable
    AP = Average accounts payable
    Inventory = Average inventory + Work in progress

    Year-end net working capital (trade receivables plus inventory, minus AP) divided by one day of average revenue. (DWC = DSO+DIO-DPO).

    For the comparable industry we find an average of: DWC=56+39-23=72 days

    Days of working capital (DWC) is essentially the same as the Cash Conversion Cycle (CCC) except that the CCC uses the Cost of Goods Sold (COGS) when calculating both the Days Inventory Outstanding (DIO) and the Days Payables Outstanding (DPO) whereas DWC uses sales (Total Revenue) for all calculations:

CCC = Days in period x {(Average inventory/COGS) + (Average receivables/Revenue) – (Average payables/[COGS + Change in inventory])}

    Where:
COGS = Production Cost – Change in Inventory
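A small sketch of the CCC calculation as defined above (the function and argument names are our own):

```python
def cash_conversion_cycle(avg_inventory, avg_receivables, avg_payables,
                          cogs, revenue, change_in_inventory, days=365):
    """CCC as defined above: COGS-based inventory and payables terms,
    revenue-based receivables term."""
    dio = avg_inventory / cogs
    dso = avg_receivables / revenue
    dpo = avg_payables / (cogs + change_in_inventory)
    return days * (dio + dso - dpo)
```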


  • Inventory management – Stochastic supply

    Inventory management – Stochastic supply

    This entry is part 4 of 4 in the series Predictive Analytics

     

    Introduction

We will now return to the newsvendor who was facing a one-time purchasing decision: where to set the inventory level to maximize expected profit, given his knowledge of the demand distribution. It turned out that even if we did not know the closed form ((In mathematics, an expression is said to be a closed-form expression if it can be expressed analytically in terms of a finite number of certain "well-known" functions.)) of the demand distribution, we could find the inventory level that maximized profit and how this affected the vendor's risk – assuming that his supply could with certainty be fixed to that level. But what if that is not the case? What if his supply is uncertain? Can we still optimize his inventory level?

We will look at two slightly different cases:

1.  one where supply is uniformly distributed, with actual delivery from 80% to 100% of the ordered volume, and
2.  the other where supply has a triangular distribution, with actual delivery from 80% to 105% of the ordered volume, but with most likely delivery at 100%.

    The demand distribution is as shown below (as before):

    Maximizing profit – uniformly distributed supply

    The figure below indicates what happens as we change the inventory level – given fixed supply (blue line). We can see as we successively move to higher inventory levels (from left to right on the x-axis) that expected profit will increase to a point of maximum.

If we let the actual delivery follow the uniform distribution described above, and successively change the order point, expected profit will follow the red line in the graph below. We can see that the new order point lies further to the right on the inventory axis (order point). The vendor is forced to order more newspapers to 'outweigh' the supply uncertainty:

At the point of maximum profit the actual deliveries span from 2300 to 2900 units, with a mean close to the inventory level giving maximum profit for the fixed supply case:

    The realized profits are as shown in the frequency graph below:

Average profit has to some extent been reduced compared with the non-stochastic supply case, but more important is the increase in profit variability. Measured by the quartile variation, this variability has increased by almost 13%, and this is mainly caused by an increased negative skewness – the downside has increased.
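A sketch of how such an order-point sweep can be simulated. The price, cost, salvage value and demand distribution below are made-up stand-ins (the series does not restate them), and the vendor is assumed to pay only for what is actually delivered:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20_000

price, cost, salvage = 10.0, 6.0, 1.0                  # hypothetical economics
demand = np.clip(rng.normal(2500, 400, n), 0, None)    # stand-in demand sample

def expected_profit(order_point, supply="fixed"):
    """Mean simulated profit for a given order point.
    'uniform' supply delivers a uniform 80-100% of the ordered volume."""
    if supply == "uniform":
        delivered = rng.uniform(0.8, 1.0, n) * order_point
    else:
        delivered = np.full(n, float(order_point))
    sold = np.minimum(demand, delivered)
    profit = price * sold + salvage * (delivered - sold) - cost * delivered
    return profit.mean()

# sweep order points to locate the profit-maximizing level under both assumptions
grid = range(2000, 3201, 50)
best_fixed = max(grid, key=expected_profit)
best_uniform = max(grid, key=lambda q: expected_profit(q, "uniform"))
print(best_fixed, best_uniform)
```

Under these assumptions the maximizing order point with uniform supply lands noticeably further out than with fixed supply, mirroring the shift described above.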

Maximizing profit – triangularly distributed supply

Again we compare the expected profit with delivery following the triangular distribution described above (red line) with the expected profit created by known and fixed supply (blue line). We can see, as we successively move to higher inventory levels (from left to right on the x-axis), that expected profit will increase to a point of maximum. However, the order point for the stochastic supply lies further out on the inventory axis than for the non-stochastic case:

    The uncertain supply again forces the vendor to order more newspapers to ‘outweigh’ the supply uncertainty:

At the point of maximum profit the actual deliveries span from 2250 to 2900 units, with a mean again close to the inventory level giving maximum profit for the fixed supply case ((This is not necessarily true for other combinations of demand and supply distributions.)).

    The realized profits are as shown in the frequency graph below:

Average profit has been somewhat reduced compared with the non-stochastic supply case, but more important is the increase in profit variability. Measured by the quartile variation, this variability has increased by 10%, and this is again mainly caused by an increased negative skewness – again, the downside has increased.
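The posts do not spell out which quartile-based measure is meant; the coefficient of quartile variation is one common choice and can be computed from the simulated profits like this:

```python
import numpy as np

def quartile_variation(profits):
    """Coefficient of quartile variation: (Q3 - Q1) / (Q3 + Q1)."""
    q1, q3 = np.percentile(profits, [25, 75])
    return (q3 - q1) / (q3 + q1)
```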

The introduction of uncertain supply has shown that profit can still be maximized; however, the profit will be reduced by increased costs from both lost sales and excess inventory. Most importantly, profit variability will increase, raising the question of possible alternative strategies.

    Summary

We have shown, through Monte Carlo simulation, that the 'order point' when the actual delivered amount is uncertain can be calculated without knowing the closed form of the demand distribution. We actually do not need the closed form of the distribution describing delivery either, only historic data on the supplier's performance (reliability).

Since we do not need the closed form of either the demand or the supply distribution, we are not limited to such distributions, but can use historic data to describe the uncertainty as frequency distributions. Expanding the scope of the analysis to include supply disruptions, localization of inventory etc. is thus a natural extension of this method.

This opens for the use of robust and efficient methods and techniques for solving problems in inventory management, unrestricted by the form of the demand distribution, and, best of all, results given as graphs will be more easily communicated to all parties than pure mathematical descriptions of the solutions.
