Probability – Strategy @ Risk

Tag: Probability

  • Project Management under Uncertainty

    Project Management under Uncertainty

    You can’t manage what you can’t measure
    You can’t measure what you can’t define
    How do you define something that isn’t known?

    DeMarco, 1982

    1.     Introduction

By the term project we usually understand a unique, one-time operation designed to accomplish a set of objectives within a limited time frame. This could be building a new production plant, designing a new product or developing new software for a specific purpose.

A project usually differs from normal operations by being a one-time operation, having a limited time horizon and budget, having unique specifications and working across organizational boundaries. A project can be divided into four phases: project definition, planning, implementation and project phase-out.

    2.     Project Scheduling

The project planning phase, which we will touch upon in this paper, consists of breaking down the project into the tasks that must be accomplished for the project to be finished.

The objectives of project scheduling are to determine the earliest start and finish of each task in the project. The aim is to be able to complete the project as early as possible and to calculate the likelihood that the project will be completed within a certain time frame.

The dependencies[i] between the tasks determine their predecessor(s) and successor(s) and thus their sequence (order of execution) in the project[1]. The aim is to list all tasks (project activities), their sequence and duration[2] (estimated activity time length). The figure[ii] below shows a simple project network diagram, which we will use as an example in the following[iii].

[Figure: Sample project network diagram]

This project thus consists of a linear flow of coordinated tasks where time, cost and performance can in fact vary randomly.

    A convenient way of organizing this information is by using a Gantt[iv] chart. This gives a graphic representation of the project’s tasks, the expected time it takes to complete them, and the sequence in which they must be done.

There will usually be more than one path (sequence of tasks) from the first to the last task in a project. The path that takes the longest time to complete is the project's critical path. The objective of all this is to identify this path and the time it takes to complete it.

    3.     Critical Path Analysis

    The Critical Path (CP)[v] is defined as the sequence of tasks that, if delayed – regardless of whether the other project tasks are completed on or before time – would delay the entire project.

The critical path is hence based on the forecasted duration of each task in the project. These durations are given as single-point estimates[3], implying that the project tasks' durations contain no uncertainty (they are deterministic). This is obviously wrong and will often lead to unrealistic project estimates due to the inherent uncertainty in all project work.

Keep in mind that all plans are estimates, and are only as good as the task estimates.

As a matter of fact, many different types of uncertainty can be expected in most projects:

1. Ordinary uncertainty, where time, cost and performance can vary randomly, but inside predictable ranges. Variations in task durations will cause the project's critical path to shift, but this can be predicted and the variation in total project time can be calculated.
2. Foreseen uncertainty, where a few known factors (events) can affect the project, but in an unpredictable way[4]. These are projects where tasks and events occur probabilistically and contain logical relationships of a more complicated nature, e.g. from a specific event some tasks are undertaken with certainty while others only probabilistically (Elmaghraby, 1964; Pritsker, 1966). The distribution for total project time can still be calculated, but will include variation from the chance events.
3. Unforeseen uncertainty, where one or more factors (events) cannot be predicted. This implies that decision points about the project's implementation have to be included at one or more points in the project's execution.

As a remedy for critical path analysis's inability to handle even ordinary uncertainty, the Program Evaluation and Review Technique (PERT[vi]) was developed. PERT is a variation on critical path analysis that takes a slightly more skeptical view of the duration estimates made for each of the project tasks.

PERT uses a three-point estimate,[vii] based on forecasts of the shortest possible task duration, the most likely task duration and the worst-case task duration. The task's expected duration is then calculated as a weighted average of these three duration estimates.

This is assumed to help bias time estimates away from the unrealistically short time-scales that are often the case.

    4.     CP, PERT and Monte Carlo Simulation

    The two most important questions we want answered are:

• How long will it take to do the project?
• How likely is the project to succeed within the allotted time frame?

In this example the project's time frame is set to 67 weeks.

We will use the critical path method, PERT and Monte Carlo simulation to try to answer these questions, but first we need to make some assumptions about the variability of the estimated task durations. We will assume that the durations are triangularly distributed and that the actual durations can be both higher and lower than their most likely values.

The distributions will probably have a right tail, since underestimation is common when assessing time and cost (positively skewed), but sometimes people deliberately overestimate to avoid being held responsible for later project delays (negatively skewed). The assumptions about the tasks' durations are given in the table below:

[Table: Task duration assumptions]

The corresponding paths, critical path and project durations are given in the table below. The critical path method finds path #1 (tasks: A, B, C, D, E) to be the critical path and thus puts the expected project duration at 65 weeks. The second question, however, cannot be answered by using this method. So, with regard to probable deviations from the expected project duration, the project manager is left without any information.

By using PERT, calculating expected durations and their standard deviations as described in endnote vii, we find the same critical path and roughly the same expected project duration (65.5 weeks), but since we can now calculate the estimate's standard deviation, we can find the probability of the project being finished inside the project's time frame.

[Table: Paths, critical path and project durations]

By assuming that the sum of task durations along the critical path is approximately normally distributed, we find the probability of having the project finished inside the time frame of 67 weeks to be 79%. Since this is a fairly high probability of project success, the manager can rest contentedly – or can she?
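To make the PERT arithmetic concrete, here is a minimal sketch in Python of the calculation just described. The three-point estimates below are illustrative placeholders only (the article's task table is given as an image); what follows the text is the structure of the calculation: E = (min + 4·ml + max)/6 and SD = (max − min)/6 per task (see endnote vii), summed along the critical path and compared with the 67-week time frame using a normal approximation.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical three-point estimates (min, most likely, max) in weeks for the
# tasks on critical path #1 (A, B, C, D, E). Placeholders, not the article's values.
critical_path = {
    "A": (8, 10, 13),
    "B": (10, 12, 15),
    "C": (13, 15, 18),
    "D": (14, 16, 19),
    "E": (10, 11, 15),
}

time_frame = 67  # weeks

# PERT L-estimators per task: E = (min + 4*ml + max)/6, SD = (max - min)/6
expected = sum((lo + 4 * ml + hi) / 6 for lo, ml, hi in critical_path.values())
variance = sum(((hi - lo) / 6) ** 2 for lo, ml, hi in critical_path.values())

# Assume the sum of task durations along the path is approximately normally distributed
p_on_time = NormalDist(mu=expected, sigma=sqrt(variance)).cdf(time_frame)

print(f"Expected project duration: {expected:.1f} weeks")
print(f"P(duration <= {time_frame} weeks) = {p_on_time:.0%}")
```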

If we repeat the exercise, but now using Monte Carlo simulation, we find a different answer. We can no longer establish a critical path with certainty. The tasks' variability can in fact give three different critical paths. The most likely is path #1 as before, but there is close to a 30% probability that path #4 (tasks: A, B, C, G, E) will be the critical path. It is also possible, even if the probability is small (<5%), that path #3 (tasks: A, F, G, E) is the critical path (see figure below).

[Figure: Probability of each path being the critical path]

So, in this case we cannot use the critical path method; it will give wrong answers and misleading information to the project manager. More important is the fact that the method cannot use all the information we have about the project's tasks, that is to say their variability.

A better approach is to simulate project time to find the distribution for total project duration. This distribution will then include the durations of all critical paths that may arise during the project simulation, given by the red curve in the figure below:

[Figure: Cumulative distributions of path durations and total project duration]

This figure gives the cumulative probability distributions for the possible critical paths' durations (paths #1, #3 and #4) as well as for total project duration. Since path #1 consistently has long durations, it is only in 'extreme' cases that path #4 becomes the critical path. Most striking is the large variation in path #3's duration and the fact that it can end up as the critical path in some of the simulation's runs.

The only way to find the distribution for total project duration is to identify the critical path and calculate its duration in every run of the simulation.
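A minimal sketch of this simulation approach in Python (using NumPy) is shown below. The triangular duration parameters for tasks A–G are hypothetical placeholders, since the article's task table is only available as an image; the point is the mechanics: in each run every task duration is drawn once, each of the three paths discussed above is summed from those same draws, and the longest path in that run is recorded as the critical path.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_runs = 10_000
time_frame = 67  # weeks

# Hypothetical triangular (min, most likely, max) durations for tasks A-G.
tasks = {
    "A": (8, 10, 13), "B": (10, 12, 15), "C": (13, 15, 18),
    "D": (14, 16, 19), "E": (10, 11, 15), "F": (20, 24, 34),
    "G": (12, 15, 22),
}

# The three paths discussed in the article.
paths = {
    "#1": ["A", "B", "C", "D", "E"],
    "#3": ["A", "F", "G", "E"],
    "#4": ["A", "B", "C", "G", "E"],
}

# One draw per task per run; the same draw is reused in every path containing the task.
draws = {t: rng.triangular(lo, ml, hi, n_runs) for t, (lo, ml, hi) in tasks.items()}

path_durations = np.vstack([sum(draws[t] for t in p) for p in paths.values()])
project_duration = path_durations.max(axis=0)   # duration of the critical path in each run
critical_idx = path_durations.argmax(axis=0)    # which path was critical in each run

print(f"Expected project duration: {project_duration.mean():.1f} weeks")
print(f"P(duration <= {time_frame} weeks) = {(project_duration <= time_frame).mean():.0%}")
for i, name in enumerate(paths):
    print(f"Path {name} critical in {(critical_idx == i).mean():.1%} of runs")
```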

We now find the expected total project duration to be 67 weeks, one week more than what CPM and PERT gave, but more importantly, we find that the probability of finishing the project inside the time frame is only 50%.

By neglecting the probability that the critical path might change due to task variability, PERT underestimates project variance and thus the probability that the project will not finish inside the expected time frame.

Monte Carlo models like this can be extended to include many types of uncertainty belonging to the classes of foreseen and unforeseen uncertainty. However, the model will only be complete when all types of project costs and their variability are included.

    5.     Summary

Key findings in comparative studies show that using Monte Carlo simulation along with project planning techniques allows a better understanding of project uncertainty and its risk level, and provides the project team with the ability to grasp the various possible courses of the project within one simulation procedure.

    Notes

[1] This can be visualized in a Precedence Diagram, also known as a Project Network Diagram. In a Network Diagram, the start of an activity must be linked to the end of another activity.

    [2] An event or a milestone is a point in time having no duration. A Precedence Diagram will always have a Start and an End event.

    [3] As a “best guess” or “best estimate” of a fixed or random variable.

    [4] E.g. repetition of tasks.

    Endnotes

    [i] There are four types of dependencies in a Precedence Diagram:

    1. Finish-Start: A task cannot start before a previous task has ended.
    2. Start-Start: There is a defined relationship between the start of tasks.
    3. Finish-Finish: There is a defined relationship between the end dates of tasks.
    4. Start-Finish: There is a defined relationship between the start of one task and the end date of a successor task.

    [ii] Taken from the Wikipedia article: Critical path drag, http://en.wikipedia.org/wiki/Critical_path_drag

[iii] The diagram contains more information than we will use. It is mostly self-explanatory; however, Float (or Slack) is the delay an activity can tolerate before the project comes in late, and Drag is how much a task on the critical path is delaying project completion (Devaux, 2012).

    [iv] The Gantt chart was developed by Henry Laurence Gantt in the 1910s.

    [v] The Critical Path Method (CPM) was developed in the late 1950s by Morgan R. Walker of DuPont and James E. Kelley, Jr. of Remington Rand.

[vi] The Program Evaluation and Review Technique (PERT) was developed by Booz Allen Hamilton and the U.S. Navy, at about the same time as CPM. Key features of a PERT network are:

    1. Events must take place in a logical order.
    2. Activities represent the time and the work it takes to get from one event to another.
    3. No event can be considered reached until ALL activities leading to the event are completed.
    4. No activity may be begun until the event preceding it has been reached.

[vii] Assuming that a process with a double-triangular distribution underlies the actual task durations, the three estimated values (min, ml, max) can be used to calculate the expected value (E) and standard deviation (SD) as L-estimators, with: E = (min + 4·ml + max)/6 and SD = (max − min)/6.

    E is thus a weighted average, taking into account both the most optimistic and most pessimistic estimates of the durations provided. SD measures the variability or uncertainty in the estimated durations.

    References

Devaux, Stephen A. (2012). "The Drag Efficient: The Missing Quantification of Time on the Critical Path". Defense AT&L Magazine, Defense Acquisition University. Retrieved from http://www.dau.mil/pubscats/ATL%20Docs/Jan_Feb_2012/Devaux.pdf

DeMarco, T. (1982). Controlling Software Projects. Prentice-Hall, Englewood Cliffs, N.J.

Elmaghraby, S. E. (1964). An Algebra for the Analyses of Generalized Activity Networks. Management Science, 10(3).

Pritsker, A. A. B. (1966). GERT: Graphical Evaluation and Review Technique. The RAND Corporation, RM-4973-NASA.

  • The risk of planes crashing due to volcanic ash

    The risk of planes crashing due to volcanic ash

    This entry is part 4 of 4 in the series Airports

    Eyjafjallajokull volcano

When the Icelandic volcano Eyjafjallajökull had a large eruption in 2010, it led to closed airspace all over Europe, with correspondingly big losses for airlines. In addition it led to significant problems for passengers who were stuck at various airports without getting home. In Norway we got a new word: "askefast" ("ash-stuck") became part of the Norwegian vocabulary.

The reason the planes were grounded is that mineral particles in the volcanic ash may damage the planes' engines, which in turn may lead to them crashing. This happened in 1982, when a British Airways flight almost crashed due to volcanic particles in the engines. The risk of the same happening in 2010 was probably not large, but the consequences would have been great should a plane crash.

Using simulation software and a simple model I will show how this risk can be calculated, and hence why the airspace was closed over Europe in 2010 even if the risk was not high. I have not calculated any effects following the closure, since this is neither a big model nor an in-depth analysis. It is merely meant as an example of how different issues can be modeled using Monte Carlo simulation. The variable values are not factual but my own simple estimates. The goal in this article is to show an example of modeling, not to get a precise estimate of actual risk.

    To model the risk of dangerous ash in the air there are a few key questions that have to be asked and answered to describe the issue in a quantitative way.

[Figure: Is the ash dangerous?]

Variable 1. Is the ash dangerous?

We first have to model the risk of the ash being dangerous to plane engines. I do that by using a so-called discrete probability. It has the value 0 if the ash is not dangerous and the value 1 if it is. Then the probabilities for each of the alternatives are set. I set them to:

• 99% probability that the ash IS NOT dangerous
    • 1% probability that the ash IS dangerous

[Figure: Number of planes in the air during 2 hours]

Variable 2. How many planes are in the air?

Secondly, we have to estimate how many planes are in the air when the ash becomes a problem. Daily, around 30 000 planes are in the air over Europe. We can assume that if planes start crashing or getting into big trouble, the rest will immediately be grounded. Therefore I only use 2/24 of these planes in the calculation.

    • 2 500 planes are in the air when the problem occurs

I use a normal distribution and set the standard deviation for planes in the air in a 2-hour period to 250 planes. I have no views on whether the curve is skewed one way or the other. I assume it may well be, since there are probably different numbers of planes in the air depending on the weekday, whether it's a holiday season and so on, but I'll leave that estimate to the air authority staff.

[Figure: Number of passengers and crew]

Variable 3. How many people are there in each plane?

Thirdly, I need an estimate of how many passengers and crew there are in each plane. I disregard the fact that there are a lot of intercontinental flights over the Eyjafjallajökull volcano, likely with more passengers than the average plane over Europe. The curve might be more skewed than what I assume:

    • Average number of passengers/crew: 70
    • Lowest number of passengers/crew: 60
    • Highest number of passengers/crew: 95

    The reason I’m using a skewed curve here is that the airline business is constantly under pressure to fill up their planes.  In addition the number of passengers will vary by weekday and so on.  I think it is reasonable to assume that there are likely more passengers per plane rather than fewer.

[Figure: Number of planes crashing]

Variable 4. How many of the planes which are in the air will crash?

The last variable that needs to be modeled is how many planes will crash should the ash be dangerous. I assume that maybe no planes actually crash, even though the ash gets into their engines. This is the low end of the curve. In addition I have assumed the following:

• Expected number of planes that crash: 0.01%
• Maximum number of planes that crash: 1.0%

    Now we have what we need to start calculating!

The formula I use to calculate the number of fatalities is as follows:

If("Dangerous ash" = 0; 0)

If("Dangerous ash" = 1; "Number of planes in the air" × "Number of planes crashing" × "Number of passengers/crew per plane")

If the ash is not dangerous, variable 1 is equal to 0, no planes crash and nobody dies. If the ash is dangerous, the number of dead is the product of the number of planes in the air, the share of planes crashing and the number of passengers/crew per plane.
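For readers without a commercial simulation tool, here is a minimal Monte Carlo sketch in Python of the model described above. The probabilities and ranges are those listed for the four variables; the triangular shapes used for variables 3 and 4 are my own assumptions standing in for the skewed curves described in the text, not the author's exact distributions.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n_runs = 100_000

# Variable 1: is the ash dangerous? (1% probability that it is)
dangerous = rng.random(n_runs) < 0.01

# Variable 2: planes in the air during the 2-hour window ~ Normal(2500, 250)
planes = rng.normal(2500, 250, n_runs)

# Variable 3: passengers/crew per plane. The article gives 60 / 70 / 95 as
# lowest / average / highest; a triangular distribution is used here as a
# stand-in for the skewed curve described in the text.
people_per_plane = rng.triangular(60, 70, 95, n_runs)

# Variable 4: share of airborne planes that crash if the ash is dangerous.
# The article gives an expected value of 0.01% and a maximum of 1.0%; using a
# triangular distribution with these as mode and maximum is an assumption.
crash_share = rng.triangular(0.0, 0.0001, 0.01, n_runs)

# If the ash is not dangerous nobody dies; otherwise deaths are the product of
# planes in the air, the share crashing and the people per plane.
deaths = np.where(dangerous, planes * crash_share * people_per_plane, 0.0)

print(f"Expected number of dead: {deaths.mean():.1f}")
print(f"P(any deaths) = {(deaths > 0).mean():.2%}")
print(f"99.9th percentile: {np.percentile(deaths, 99.9):.0f} dead")
```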

    Running this model with a simulation tool gives the following result:

    Expected value - number of dead

As the graph shows, the expected value is low: 3 people, meaning that the probability of a major loss of planes is very low. But the consequences may be devastatingly high. In this model run there is a 1% probability that the ash is dangerous, and a 0.01% probability that planes actually crash. However, the distribution has a long tail, and a bit out in the tail there is a probability that 1 000 people crash to their deaths. This is a so-called shortfall risk, or the risk of a black swan if you wish. The probability is low, but the consequences are very big.

This is the reason for the cautionary steps taken by the air authorities. Another reason is that the probabilities, both of the ash being dangerous and of planes crashing because of it, are unknown. Thirdly, changes in the variable values will have a big impact.

If the probability of the ash being dangerous is 10% rather than 1%, and the probability of planes crashing is 1% rather than 0.01%, as many as 200 dead (or 3 planes) are expected, while the extreme outcome is close to 6 400 dead.

    Expected value - number of dead higher probability of crash

This is a simplified example of the modeling that is likely to be behind the decision to close the airspace. I don't know what probabilities were used, but I'm sure this is how they think.

    How we assess risk depends on who we are.  Some of us have a high risk appetite, some have low.  I’m glad I’m not the one to make the decision on whether to close the airspace or not.  It is not an easy decision.

My model is of course very simple. There are many factors to take into account, like wind direction and strength, intensity of the eruption and a number of other factors I don't know about. But as an illustration, both of the factors that need to be estimated in this case and of generic modeling, this is a good example.

    Originally published in Norwegian.

  • Simulation of balance sheet risk

    Simulation of balance sheet risk

    This entry is part 6 of 6 in the series Balance simulation


    As I wrote in the article about balance sheet risk, a company with covenants in its loan agreements may have to hedge balance sheet risk even though it is not optimal from a market risk perspective.

But how can the company know which covenant to hedge? Often a company will have more than one covenant, and hedging one of them may adversely impact the other. To answer the question it is necessary to calculate the effect of a hedging strategy, and the best way to do that is by using a simulation model. Such a model can give the answer by estimating the probability of breach of a covenant.

Which hedging strategy the company should choose depends on which covenant is most at risk. How likely is it that the company will face a breach? As I described in the previous article:

    Which hedging strategy the company chooses depends on which covenant is most at risk.  There are inherent conflicts between the different hedging strategies, and therefore it is necessary to make a thorough assessment before implementing any such hedging strategy.

    In addition:

    If the company hedges gearing, the size of the equity will be more at risk [..], And in addition, drawing a larger proportion of debt in the home (or functional) currency may imply an increase in economic risk.  [..] Hence, if the company does not have to hedge gearing it should hedge its equity.

    To analyse the impact of different strategies and to answer the questions above I have included simulation of currency rates in the example from the previous article:

    simulation model balance sheet risk

The result of the strategy choice given a +/- 10% change in currency rates was shown in the previous article. But that model cannot give the answer to how likely it is that the company will face a breach situation. How large changes in currency rates can the company withstand?

    To look at this issue I have used the following modeling of currency rates:

• Rates at the last day of every quarter from 31/12/2002 to 30/06/2013. The reason for choosing these dates is of course that they are the dates when the balance sheet is measured. It doesn't matter if the currency rates are unproblematic on March 1st if they are problematic on March 31st, because that is the date when the books are closed for Q1 and the date when the balance sheet is measured.
• I have analysed the rates using @RISK in Excel, which can fit a probability curve to historical rates. There are, of course, many methods for estimating currency rates and I will get back to that later. But this method has its advantages: the basis is rates which have actually occurred.

The closest fit to the data was a Laplace curve (RiskLaplace(μ,σ) specifies a Laplace distribution with the entered μ location and σ scale parameters. The Laplace distribution is sometimes called a "double exponential distribution" because it resembles two exponential distributions placed back to back, positioned at the entered location parameter.) for EUR and a uniform curve (RiskUniform(minimum, maximum) specifies a uniform probability distribution with the entered minimum and maximum values. Every value across the range of the uniform distribution has an equal likelihood of occurrence.) for USD against NOK.
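The fitting itself does not require @RISK; as a rough analogue, here is a sketch of how the same idea could be done in Python with SciPy. The rate series below are hypothetical placeholders for the end-of-quarter observations; in practice the actual historical data would be loaded from a file.

```python
from scipy import stats

# Hypothetical end-of-quarter EUR/NOK and USD/NOK rates (placeholders for the
# 2002-2013 series used in the article).
eurnok = [8.29, 7.95, 8.10, 8.42, 7.88, 8.05, 8.35, 7.76, 8.01, 8.21]
usdnok = [6.97, 5.82, 6.45, 5.55, 6.10, 5.90, 6.60, 5.70, 6.25, 5.98]

# Maximum-likelihood fit of a Laplace curve to EUR/NOK (location, scale)...
eur_loc, eur_scale = stats.laplace.fit(eurnok)
# ...and of a uniform curve to USD/NOK (loc = minimum, loc + scale = maximum).
usd_loc, usd_scale = stats.uniform.fit(usdnok)

print(f"EUR/NOK ~ Laplace(mu={eur_loc:.2f}, b={eur_scale:.2f})")
print(f"USD/NOK ~ Uniform({usd_loc:.2f}, {usd_loc + usd_scale:.2f})")
```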

[Figure: Fitted probability curves for EUR/NOK and USD/NOK]

It is always a good idea to ask yourself if the fitted result has a good story behind it. Is it logical? What we want is a good estimate of future currency rates. If the logic is hard to see, we should go back and analyze more. But there seems to be a good logic/story behind these estimates, in my opinion:

• EUR against NOK is so-called mean reverting, meaning that it will normally revert back to a level of around 8 NOK +/- for 1 EUR. Hence, the curve is pointed and has long tails. We will most likely have to pay 8 NOK for 1 EUR, but the rate can move quite a bit away from the expected mean, both up and down.
• USD is more unpredictable against NOK, and a uniform curve, with any level of USD/NOK being as likely, sounds like a good estimate.

In addition to the probability curves for USD and EUR, an estimate of the correlation between them is needed. I used the same historical data to calculate the historical correlation. On the end-of-quarter rates it has been 0.39. A positive correlation means that the rates move the same way – if one goes up, so does the other. The reason is that it was the NOK that moved against both currencies. That is also a good assessment, I believe; history has shown it to be the case.
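Below is a minimal sketch, again in Python, of how correlated draws from the two fitted marginals could be generated with a Gaussian copula. The marginal parameters are illustrative assumptions, not the article's fitted values; @RISK handles the correlation internally via its correlation matrix, so this is only one way to reproduce the idea.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=7)
n_runs = 10_000
corr = 0.39  # historical correlation between end-of-quarter EUR/NOK and USD/NOK

# Step 1: draw correlated standard normals (Gaussian copula).
cov = [[1.0, corr], [corr, 1.0]]
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n_runs)
u = stats.norm.cdf(z)  # transform to correlated uniforms on (0, 1)

# Step 2: map the uniforms through the fitted marginals (parameters below are
# illustrative placeholders, not the article's fitted values).
eurnok = stats.laplace.ppf(u[:, 0], loc=8.0, scale=0.3)   # Laplace for EUR/NOK
usdnok = stats.uniform.ppf(u[:, 1], loc=5.5, scale=1.5)   # Uniform(5.5, 7.0) for USD/NOK

print(f"Simulated correlation: {np.corrcoef(eurnok, usdnok)[0, 1]:.2f}")
print(f"EUR/NOK mean {eurnok.mean():.2f}, USD/NOK mean {usdnok.mean():.2f}")
```

In a full model, each simulated pair of rates would then be used to revalue the balance sheet and check the covenants in every run.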

    Now we have all the information needed to simulate how much at risk our (simple) balance sheet is to adverse currency movements.  And based on the simulation, the answer is: Quite a bit.

    I have modeled the following covenants:

• Gearing < 1.5
• Equity > 3 000

    This is the result of the simulation (click on the image to zoom):

    Simulation results

    Gearing is the covenant most at risk, as the tables/graphs show.  Both in the original mix (all debt in NOK) and if the company is hedging equity there is a high likelihood of breaching the gearing covenant.

There is a probability of breach of 22% in the first case (all debt in NOK) and of 23% in the second (equity hedge). This is a rather high probability, considering that the NOK may move quite a bit, quite quickly.

The equity is less at risk and the covenant has more headroom. There is a 13% probability of breach with all debt in NOK, but 0% should the company choose either of the two hedging strategies. This is due to the fact that currency loans will reduce risk, regardless of whether the debt fully hedges the assets or only partially.

Hence, based on this example it is easy to give advice to the company. The company should hedge gearing by drawing debt in a mix of currencies reflecting its assets. Reality is of course more complex than this example, but the mechanism will be the same. And the need for accurate decision criteria – the likelihood of breach – becomes more important the more complex the business is.

One thing that complicates the picture is the impact different strategies have on the company's debt. Debt levels may vary substantially, depending on the choice of strategy.

If the company has to refinance some of its debt, and at the same time there is a negative impact on the value of the debt (a weaker home currency), the refinancing need will be substantially higher than what would have been the case with local debt. These are also answers you can get from the simulation model.

The answer to the questions "How likely is it that the company will breach its covenants, and what are the consequences of the strategic choices on key figures, debt and equity?" is something only a good simulation model can really give.

    Originally published in Norwegian.

  • Uncertainty – lack of information

    Uncertainty – lack of information

    This entry is part 3 of 6 in the series Monte Carlo Simulation

     

Every item in a budget or a profit and loss account represents in reality a probability distribution. In this framework, all items, whether from the profit and loss account or from the balance sheet, will have individual probability distributions. These distributions are generated by the combination of the distributions of the factors of production that define the item.

Variance will increase as we move down the items in the profit and loss account. The message is that even if there is low variance in the input variables (sales, prices, costs etc.), metrics like NOPLAT, Free Cash Flow and Economic Profit will have a much higher variance.
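This compounding of variance down the profit and loss account can be illustrated with a tiny simulation; the figures below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=3)
n = 100_000

# Hypothetical P&L items: both inputs have a modest 5% relative standard deviation.
sales = rng.normal(1_000, 50, n)   # mean 1 000, sd 50 (5%)
costs = rng.normal(900, 45, n)     # mean   900, sd 45 (5%)

profit = sales - costs             # mean 100, but much larger relative variation

for name, x in [("Sales", sales), ("Costs", costs), ("Profit", profit)]:
    print(f"{name:>6}: mean {x.mean():8.1f}, sd {x.std():6.1f}, "
          f"relative sd {x.std() / x.mean():6.1%}")
```

Even though each input varies by only about 5% of its expected value, the bottom-line profit varies by well over 50% of its expected value.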

The key issue is to identify the various items and establish their individual probability distributions. This can be done by using historical data, interviewing experts or comparing data from other relevant sources. There are three questions we need to answer to define the proportions of the uncertainty:

    • What is the expected value?
    • What is the lowest likely value?
    • What is the highest likely value?

When we have decided the limits within which we estimate, with 95% probability, that the result will fall, we then decide what kind of probability distribution is relevant for the item. There are several to choose among, but we will emphasize three types here:

    1. The Normal Distribution
    2. The Skewed Distribution
    3. The Triangular Distribution

The Normal Distribution is used in situations where a symmetric outcome is likely: the result can be good, but it has the same probability of being equally bad.

The Skewed Distribution is used in situations where we may be lucky and experience more sales than we expected, or, vice versa, where expenditure turns out to be less than expected.

The Triangular Distribution is used when we are planning investments. This is because we tend to know fairly well what we expect to pay, we know we will not get the merchandise for free, and there is a limit to how much we are willing to pay.

When we have defined the limits of the uncertainty within which we estimate, with 95% probability, that the result will fall, we can start to calculate the risk and prioritize the items that matter in terms of creating value or loss.
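A minimal sketch of how such 95% limits can be turned into the three distribution types in Python is shown below. The mapping from the low/high limits to distribution parameters (treating them as the 2.5% and 97.5% percentiles for the normal and skewed cases, and simply as the end points for the triangular case) is a simplifying assumption, and the numbers are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(seed=11)
n = 100_000

# Illustrative item: expected value, lowest likely and highest likely value.
expected, low, high = 100.0, 80.0, 130.0

# Normal: treat low/high as the 2.5% / 97.5% percentiles, i.e. roughly +/- 1.96 sd.
normal = rng.normal(loc=(low + high) / 2, scale=(high - low) / 3.92, size=n)

# Skewed: a lognormal whose 2.5% / 97.5% percentiles match low and high
# (one simple way to get a right-skewed curve from the same limits).
mu = (np.log(low) + np.log(high)) / 2
sigma = (np.log(high) - np.log(low)) / 3.92
skewed = rng.lognormal(mean=mu, sigma=sigma, size=n)

# Triangular: low / expected / high used directly as min / most likely / max.
triangular = rng.triangular(low, expected, high, n)

for name, x in [("Normal", normal), ("Skewed", skewed), ("Triangular", triangular)]:
    print(f"{name:>10}: mean {x.mean():6.1f}, sd {x.std():5.1f}")
```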

  • Risk – Exposure to Gain and Loss

    Risk – Exposure to Gain and Loss

    This entry is part 4 of 6 in the series Monte Carlo Simulation

     

It is only when a decision involves consequences for the decision maker that he faces a situation of risk. A traditional way of understanding risk is to calculate how much a certain event varies over time. The less it varies, the smaller the risk. In every decision where historical data exists we can identify historical patterns, study them and calculate how much they vary. Such a study gives us a good impression of the kind of risk profile we face.

• Risk – randomness with knowable probabilities.
    • Uncertainty – randomness with unknowable probabilities.

Another situation occurs when little or no historical data is available but we know all the options fairly well (e.g. tossing a die). We have a given resource, certain alternatives and a limited number of trials. This was the situation in the Manhattan Project.

In both cases we are interested in the probability of success. We would like to get a figure, a percentage, for the probability of gain or loss. When we know that number we can decide whether or not to accept the risk.

Just to illustrate risk, budgeting makes a good example. If we have five items in our budget for which we have estimated the expected values (each with a 50% probability of being met), there is only about a three percent probability that all five will hit their targets at the same time:

0.5^5 ≈ 3.12%

A common mistake is to add up the individual probabilities rather than multiplying them. The risk is expressed by the product of the individual probabilities.
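A quick brute-force check of this arithmetic, under the stated assumption that each item independently has a 50% chance of hitting its expectation:

```python
import numpy as np

rng = np.random.default_rng(seed=5)
n_runs = 1_000_000

# Five independent budget items, each with a 50% chance of meeting its expected value.
hits = rng.random((n_runs, 5)) < 0.5

print(f"Analytical: 0.5**5 = {0.5**5:.2%}")
print(f"Simulated : all five on target in {hits.all(axis=1).mean():.2%} of runs")
```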