
Tag: Simulation modeling

  • Budgeting


    This entry is part 1 of 2 in the series Budgeting

     

    Budgeting is one area that is well suited for Monte Carlo simulation. Budgeting involves personal judgments about the future values of a large number of variables – sales, prices, wages, downtime, error rates, exchange rates etc. – variables that describe the nature of the business.

    Everyone who has been involved in a budgeting process knows that it is an exercise in uncertainty; however, it is seldom described this way, and even more seldom is uncertainty actually calculated as an integrated part of the budget.

    Admittedly, a number of large public building projects are calculated this way, but more often than not the aim is only to calculate some percentile (usually the 85th) as the expected budget cost.

    Most managers and their staff have, based on experience, a good grasp of the range in which the values of their variables will fall. A manager’s subjective probability describes his personal judgment about how likely a particular event is to occur. It is not based on any precise computation but is a reasonable assessment by a knowledgeable person. Selecting the budget value, however, is more difficult. Should it be the “mean” or the “most likely value”, or should the manager just delegate the fixing of the values to the responsible departments?

    Now we know that the budget values might be biased for a number of reasons – most simply by bonus schemes and the like – and that budgets based on average assumptions are wrong on average ((Savage, Sam L. “The Flaw of Averages”, Harvard Business Review, November (2002): 20-21.)).

    When judging probability, people can locate the source of the uncertainty either in their environment or in their own imperfect knowledge ((Kahneman, D., Tversky, A. “On the psychology of prediction.” Psychological Review 80 (1973): 237-251.)). When assessing uncertainty, people tend to underestimate it – tendencies often called overconfidence and hindsight bias.

    Overconfidence bias concerns the fact that people overestimate how much they actually know: when they are p percent sure that they have predicted correctly, they are in fact right on average less than p percent of the time ((Keren, G. “Calibration and probability judgments: Conceptual and methodological issues”. Acta Psychologica 77 (1991): 217-273.)).

    Hindsight bias concerns the fact that people overestimate how much they would have known had they not possessed the correct answer: events which are given an average probability of p percent before they have occurred are given, in hindsight, probabilities higher than p percent ((Fischhoff, B. “Hindsight ≠ foresight: The effect of outcome knowledge on judgment under uncertainty”. Journal of Experimental Psychology: Human Perception and Performance 1 (1975): 288-299.)).

    We will, however, not ask for the managers’ full subjective probability distributions; we only ask for the range of possible values (the 5% to 95% interval) and their best guess of the most likely value. We will then use this to generate an appropriate log-normal distribution for sales, prices etc. For investments we will use triangular distributions to avoid long tails. Where most likely values are hard to guesstimate, we will use rectangular (uniform) distributions.
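    As an illustration, here is a minimal Python sketch (not the S@R implementation) of fitting a log-normal distribution to a manager’s 5%/95% range; the stated most likely value can then serve as a consistency check on the fit. The helper name and the example figures are ours, not from the original budget material.

        # Fit a log-normal to the manager's 5th and 95th percentiles
        import numpy as np
        from scipy import stats

        def lognormal_from_range(p5, p95):
            """Frozen log-normal matching the given 5th and 95th percentiles."""
            z = stats.norm.ppf(0.95)                       # ~1.645
            mu = (np.log(p5) + np.log(p95)) / 2.0
            sigma = (np.log(p95) - np.log(p5)) / (2.0 * z)
            return stats.lognorm(s=sigma, scale=np.exp(mu))

        # Example: sales judged to lie between 80 and 140 (illustrative units)
        sales = lognormal_from_range(80, 140)
        print(sales.mean(), sales.ppf([0.05, 0.50, 0.95]))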

    We will then proceed as if the distributions were known (Keynes):

    [Under uncertainty] there is no scientific basis on which to form any calculable probability whatever. We simply do not know. Nevertheless, the necessity for action and for decision compels us as practical men to do our best to overlook this awkward fact and to behave exactly as we should if we had behind us a good Benthamite calculation of a series of prospective advantages and disadvantages, each multiplied by its appropriate probability waiting to be summed. ((Keynes, John Maynard. “The General Theory of Employment”. Quarterly Journal of Economics (1937).))

    [Figure: Budget, actual and expected EBIT]

    The data collection can easily be embedded in the ordinary budget process by asking the managers to set the lower and upper 5% values for all variables determining the budget, and assuming that the budget figures are the most likely values.

    This gives us the opportunity to simulate (Monte Carlo) a number of possible outcomes – usually 1000 – of net revenue, operating expenses and finally EBIT(DA).
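    A hedged sketch of this simulation step, reusing the lognormal_from_range helper above; the ranges and the budget figure are invented for illustration.

        import numpy as np

        n = 1000                                          # Monte Carlo runs
        revenue = lognormal_from_range(80, 140).rvs(size=n, random_state=42)
        opex    = lognormal_from_range(55, 75).rvs(size=n, random_state=43)
        ebitda  = revenue - opex

        budget_ebitda = 40.0                              # hypothetical budget figure
        print("P(EBITDA below budget):", np.mean(ebitda < budget_ebitda))
        print("Expected EBITDA:", ebitda.mean())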

    In this case the budget was optimistic, with ca. 84% probability of an outcome below it and only 16% probability of an outcome above. The accounts later proved it to be high, with final (actual) EBIT falling closer to the expected value. In our experience the expected value is a better estimator of the final result than the budget EBIT.

    However, the most important part of this exercise is the shape of the cumulative distribution curve for EBIT. The shape gives a good picture of the uncertainty the company faces in the year to come: a flat curve indicates more uncertainty, both in the budget forecast and in the final result, than a steeper curve.

    Wisely used, the curve (distribution) can serve both to inform stakeholders about the risk being faced and to make contingency plans foreseeing adverse events.

    [Figure: Perceived uncertainty in net revenue and operating expenses]

    Having the probability distributions for net revenue and operating expenses, we can calculate and plot the managers’ perceived uncertainty by using coefficients of variation.
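    The coefficient of variation (standard deviation divided by mean) is a unit-free measure, so forecasts for variables of different magnitude can be compared directly. Computed from the simulated draws in the sketch above:

        cv_revenue = revenue.std(ddof=1) / revenue.mean()
        cv_opex    = opex.std(ddof=1) / opex.mean()
        print(f"CV net revenue: {cv_revenue:.2f}  CV operating expenses: {cv_opex:.2f}")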

    In our material we find, on average, twice as much uncertainty in the forecasts for net revenue as in those for operating expenses.

    As many budget values lie above the expected value, they expose a downside risk. We can measure this risk by the Upside Potential Ratio, which is the expected return above the budget value per unit of downside risk. It can be found using the upper and lower partial moments calculated at the budget value.
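    A minimal sketch of one common definition of the Upside Potential Ratio: the first-order upper partial moment divided by the square root of the second-order lower partial moment (downside deviation), both taken at the budget value. It is applied here to the illustrative EBITDA draws from the sketch above.

        import numpy as np

        def upside_potential_ratio(outcomes, target):
            upside   = np.maximum(outcomes - target, 0).mean()              # E[(X - t)+]
            downside = np.sqrt((np.maximum(target - outcomes, 0) ** 2).mean())
            return upside / downside

        print(upside_potential_ratio(ebitda, budget_ebitda))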


  • Fish farming


    When we were asked in 2002 to look into the risk of cod fish farming, we had to start with the basics: how do cod feed and grow at different locations, and what is the mortality at those locations?

    The first building block was Björn Björnsson’s paper: Björnsson, B., Steinarsson, A., Oddgeirsson, M. (2001). Optimal temperature for growth and feed conversion of immature cod. ICES Journal of Marine Science, 58: 29-38.

    Together with Björn Björnsson (Marine Research Institute, Iceland) and Nils Henrik Risebro (University of Oslo, Norway) we did the study presented in the attached paper: Growth, mortality, feed conversion and optimal temperature for maximum rate of increase in biomass and earnings in cod fish farming.

    This formed the basis for a stochastic simulation model used to calculate the risk in investing in cod fish farming at different locations in Norway.

    [Figure: Simulation model for fish farming]

    The stochastic part was taken from the estimation errors for the relations between growth, feed conversion, mortality etc. as functions of the deviation from the optimal temperature.

    As the optimal temperature varies with cod size, the temperature at a fixed location will, during the year and over the production cycle, deviate from the optimal temperature. Locations with temperature profiles close to the optimal temperature profile for growth in biomass will, other parameters held constant, be more favorable.
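    To make the mechanism concrete, here is an illustrative Python sketch, not the published model: daily growth is penalized by the squared deviation from a size-dependent optimal temperature. The functional forms and all coefficients are assumptions made for this example.

        import numpy as np

        def optimal_temperature(weight_g):
            # Hypothetical size dependence: the optimum falls as the fish grows
            return 15.0 - 2.0 * np.log10(weight_g)

        def daily_growth_rate(weight_g, temp_c, g_max=0.012, penalty=0.002):
            dev = temp_c - optimal_temperature(weight_g)
            return max(g_max - penalty * dev ** 2, 0.0)   # growth fraction per day

        w = 50.0                                          # start weight in grams
        for day in range(365):
            temp = 8.0 + 4.0 * np.sin(2 * np.pi * day / 365)  # seasonal profile
            w *= 1.0 + daily_growth_rate(w, temp)
        print(f"Weight after one year: {w:.0f} g")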

    The results, which came out favorably for certain locations, were subsequently used as the basis for an IPO to finance the investment.

    The use of the model was presented as an article in Norsk Fiskeoppdrett 2002, #4 and 5. It can be downloaded here (see: Cod fish farming); even though it is in Norwegian, some of the graphs might be of interest.

    The following graph sums up the project. It is based on local yield in biomass relative to the yield at the optimal temperature profile for growth in biomass. The farming operation is simulated at different locations along the coast of Norway, and the local yield and its coefficient of variation (standard deviation divided by mean) are plotted against the location’s position north. As we can see, not only does the yield increase as the location moves north, but the coefficient of variation decreases, indicating less risk in an investment.

    [Figure: Yield as a function of position north]

    The temperature profiles for the locations were taken from the Institute of Marine Research publication: Hydrographic normals and long-term variations at fixed surface layer stations along the Norwegian coast from 1936 to 2000, Jan Aure and Øyvin Strand, Fisken og Havet, #13, 2001.

    Locations of fixed thermographic stations along the coast of Norway.

    The study gives the monthly mean and standard deviation of temperature (and salinity) in the surface layer at the coastal stations between Sognesjøen and Vardø, for the period 1936–1989.

    Monthly mean of temperature in the surface layer at all stations.

    By employing a specific temperature profile in the simulation model, we were able to estimate the probability distribution for one-cycle biomass at that location, as given in the figure below.
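    A sketch of this estimation step, building on the illustrative growth model above: monthly temperatures are drawn from their normals (mean, standard deviation) and the growth model is run through one cycle. The monthly statistics below are invented placeholders, not the published normals.

        import numpy as np

        rng = np.random.default_rng(11)
        monthly_mean = np.array([4, 3, 3, 5, 7, 10, 12, 13, 11, 9, 7, 5], float)
        monthly_sd   = np.full(12, 1.0)

        def one_cycle_weight():
            w = 50.0
            for day in range(365):
                month = (day // 30) % 12
                temp = rng.normal(monthly_mean[month], monthly_sd[month])
                w *= 1.0 + daily_growth_rate(w, temp)
            return w

        weights = np.array([one_cycle_weight() for _ in range(1000)])
        print(f"Mean: {weights.mean():.0f} g  CV: {weights.std() / weights.mean():.2f}")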

    [Figure: Probability distribution of one-cycle biomass at position N70°24′]

    Having the probability distribution for production, we added forecasts for costs and prices as well as for their variance. The probability distribution for production also gives the probability distribution for the necessary investment, so that in the end we were able to calculate the probability distribution for the value of the entity (equity).
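    A final, heavily simplified sketch of this valuation step, continuing from the simulated weights above. Prices, costs, stocking number, the investment relation and the discount rate are all invented for illustration.

        price = rng.normal(30.0, 3.0, size=weights.size)   # NOK per kg, assumed
        cost  = rng.normal(18.0, 1.5, size=weights.size)   # NOK per kg, assumed
        biomass_kg = weights / 1000.0 * 500_000            # assuming 500 000 fish stocked

        investment = 5e6 + 5.0 * biomass_kg                # capacity scaled with production
        equity = (price - cost) * biomass_kg / 1.08 - investment
        print(f"Expected equity: {equity.mean() / 1e6:.1f} MNOK  "
              f"P(loss): {np.mean(equity < 0):.2f}")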

    [Figure: Value of fish farm operation]

  • Corporate Risk Analysis


    This entry is part 2 of 6 in the series Balance simulation

     

    Strategy @ Risk has developed a radically new approach to the way risk is assessed and measured when considering current and future investments. A key part of our activity in this sensitive arena has been the development of a series of financial models that facilitate the understanding and measurement of risk set against a variety of operating scenarios.

    We have written a paper which outlines our approach to Corporate Risk Analysis. Read it here.

    Risk

    Our purpose in this paper is to show that every item written into a firm’s profit and loss account and its balance sheet is a stochastic variable with a probability distribution derived from the probability distributions of each factor of production. Using this approach we are able to derive a probability distribution for any measure used in valuing companies and in evaluating strategic investment decisions. Indeed, using this approach we are able to calculate the expected gain or loss, and its probability, when investing in a company whose capitalized value (price) is known.
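    A minimal sketch of the last point, with invented figures: given a simulated distribution for the value of a company and a known price, the expected gain and the probability of loss follow directly.

        import numpy as np

        rng = np.random.default_rng(3)
        value = rng.lognormal(mean=np.log(100), sigma=0.25, size=10_000)  # simulated value
        price = 110.0                                                     # known capitalized value

        gain = value - price
        print(f"Expected gain: {gain.mean():.1f}")
        print(f"P(loss): {np.mean(gain < 0):.2f}")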

  • The advantages of simulation modelling


    This entry is part 6 of 6 in the series Monte Carlo Simulation

     

    All businesses need the ability, if not to predict the future, then at least to assess what their future economic performance can be. In most organizations this is done using a deterministic model – a model which does not consider the uncertainty inherent in its inputs. The exercise can best be described as pinning jelly to a wall: that is how easy it is to find the one number which correctly describes the future.

    The apparent weakness of the one number that is to describe the future is usually paired with so-called sensitivity analysis. Such analysis usually means changing the value of one variable and observing the result; then another variable is changed, and again the result is observed. Usually it is the very extreme cases that are analyzed, and sometimes these sensitivities are even summed up to show extreme values and improbable downsides.

    Such a sensitivity analysis is as much pinning jelly to the wall as the deterministic model itself. The relationships between variables are not considered, and rarely is the probability of each scenario stated.

    What the simulation model does is to model the relationships between variables and the probabilities of different scenarios, and to analyze the business as a complex whole. Each uncertain variable is assessed by key decision makers giving their estimates of:

    • The expected value of the variable
    • The low value at a given probability
    • The high value at a corresponding probability level
    • The shape of the probability curve

    The relationship between variables is modeled either by a correlation coefficient or by a regression.

    Then a simulation tool is needed to perform the simulation itself. The tool draws values from each of the assigned probability curves. After a sufficient number of simulations it gives a probability curve for the desired goal function(s) of the model, in addition to curves for the variables themselves.
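    A minimal Python sketch of this mechanism, not the S@R tool itself: triangular marginals built from low / most likely / high estimates, a Gaussian copula imposing the stated correlation, and a goal function (here EBIT = revenue - costs) evaluated for each draw. All numbers are invented.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        n = 1000

        # Correlated standard normals via Cholesky factorization
        corr = np.array([[1.0, 0.6],
                         [0.6, 1.0]])
        z = rng.standard_normal((n, 2)) @ np.linalg.cholesky(corr).T
        u = stats.norm.cdf(z)                     # correlated uniforms

        def triangular(q, low, mode, high):
            # Inverse-CDF sampling of a triangular distribution
            c = (mode - low) / (high - low)
            return np.where(q < c,
                            low + np.sqrt(q * (high - low) * (mode - low)),
                            high - np.sqrt((1 - q) * (high - low) * (high - mode)))

        revenue = triangular(u[:, 0], 90, 110, 150)
        costs   = triangular(u[:, 1], 70, 80, 100)
        ebit = revenue - costs
        print(f"P(EBIT > 20): {np.mean(ebit > 20):.2f}")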

    As decision support, this is an approach which will give you answers to questions like the following (a sketch of the first follows the list):

    • The probability of a project NPV being at a given, required level
    • The probability of a project or a business generating enough cash to run a successful business
    • The probability of default
    • What the risk inherent in the business is, in monetary terms
    • And a large number of other very useful questions
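    As a sketch of the first question, reusing the triangular sampler and generator defined above; the cash flows, discount rate and outlay are invented figures.

        cash_flows = np.column_stack([triangular(rng.random(n), 10, 15, 25)
                                      for _ in range(5)])   # five years of cash flow
        discount = 1.10 ** np.arange(1, 6)                  # 10% discount rate
        npv = (cash_flows / discount).sum(axis=1) - 50      # initial outlay of 50
        print(f"P(NPV > 0): {np.mean(npv > 0):.2f}")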

    The simulation model gives you a total view of risk, where the sensitivity analysis or the deterministic analysis gives you only one number, without any known probability. It also reveals the potential upside inherent in every project. It is this upside which must be weighed against the potential downside and the risk level appropriate for each entity.

    The S@R-model

    The S@R-model is a simulation tool built on proven financial and statistical technology. It is written in Pilot Lightship, a language especially made for modelling financial decision problems. The model output takes the form of both probabilities for different aspects of a financial or business decision and a fully fledged balance sheet and P&L. Hence, it gives you what you normally expect as output from a deterministic model, and in addition it gives you simulated results given the defined probability curves and the relationships between variables.

    The operational part of the business can be modeled either in a separate subroutine or directly in the main part of the simulation tool, called the main model. For complex goal functions with numerous variables and relationships, it is recommended to use the subroutine, as it gives greater insight into the complexity of the business. Data from the operational subroutine is later transferred to the main model as a compiled file.