Uncertainty – Page 5 – Strategy @ Risk

Category: Uncertainty

  • Perception of Risk

    Perception of Risk

Google Trends and Google Insights for Search give us the opportunity to gain information on a subject’s popularity. A paper by Google Inc. and the Centers for Disease Control and Prevention (USA) has shown how search queries can be used to estimate the current level of influenza activity in the United States (Ginsberg, Mohebbi, Patel, Brammer, Smolinski, & Brilliant, 2009).

It is tempting to use these Google tools to see how searches for terms connected to risk and strategy have developed over the last few years. Using Google Trends to search for the terms economic risk and financial strategy, we find the relative and normalized search frequencies shown in the graphs below:

    Search-volume-index_1

The weekly observations start in January 2004, but due to missing data (?) we have started the economic risk search series in September 2004. As is evident from the time series, the search terms are highly correlated (approx. 0.80) and there is a consistent seasonal variation, with heightened activity in spring and fall. The average value of the normalized search volume index is 1.0 for the term economic risk and 1.58 for financial strategy; the term financial strategy has thus on average been used 58% more than economic risk.

The numbers … on the y-axis of the Search Volume Index aren’t absolute search traffic numbers. Instead, Trends scales the first term you’ve entered so that its average search traffic in the chosen time period is 1.0; subsequent terms are then scaled relative to the first term. Note that all numbers are relative to total traffic. (About Google Trends, 2009)

Both series show a falling trend from early 2004 to mid 2006, indicating that the terms’ shares of all Google searches declined. From then on, however, the relative shares have been maintained, indicating increased interest in the terms against increased overall Internet search activity.

It is also possible to rank the different regions’ interest in the subject (the table can be sorted by pressing the column label):

    Region Ranking

Region         Risk   Strategy
Singapore      1.00   0.80
South Africa   0.86   1.43
Hong Kong      0.74   0.83
Malaysia       0.70   1.06
India          0.50   1.10
South Korea    0.44   0.46
Philippines    0.41   0.58
Australia      0.36   0.50
Indonesia      0.35   0.35
New Zealand    0.26   0.38

Singapore is the region with the highest share of searches including the term ‘risk’, and South Africa the region with the highest share of searches including ‘strategy’. In India the term ‘financial strategy’ is important, while ‘risk’ is less so.

The most striking feature of the table, however, is the absence of American and European regions. Is there less interest in these subjects in the West than in the East?

    References

Ginsberg, J., Mohebbi, M., Patel, R., Brammer, L., Smolinski, M., & Brilliant, L. (2009). Detecting influenza epidemics using search engine query data. Nature, 457, 1012-1014.

About Google Trends. (n.d.). Retrieved from http://www.google.com/intl/en/trends/about.html#7

  • Where do you go from risk mapping?

    Where do you go from risk mapping?

You can’t control what you can’t measure. (DeMarco, 1982)

Risk mapping is a much advocated and often used tool. Numerous articles, books, guidelines and standards have been written on the subject, and software has been developed to facilitate the process (e.g., AS/NZS 4360, 2004). It is the first stepping stone in risk management: the logical and systematic method of identifying, analyzing, treating and monitoring the risks and opportunities involved in any activity or process. Risk management is now becoming an integral part of any organization’s planning, regardless of the type of business, activity or function.

    Risk Mapping

The risk mapping process is usually divided into seven ordered activities. The sequence can be as shown below, but later appraisals of risk events may require earlier activities in the process to be repeated:

    Risk-mapping-process

    The objective is to separate the acceptable risks from the unacceptable risks, and to provide data to assist in the evaluation and control of risks and opportunities.

    The Risk Events List

The risk list is the result of the risk identification activities. It consists of a list of all risks and opportunities grouped by an agreed-upon classification. It is put together by the risk identification group, led by the risk officer, the key person responsible for risk management. The risk list is the basis for the risk database containing information about each project, risk and person involved in risk management. The main output table is the risk register.

    Risk Register

The Risk Register is a form containing a large set of fields for each risk event being analyzed and controlled. The form contains data about the event, its computational aspects and all risk response information. This register is the basis for a number of cross tables visualizing type of risk, likelihood, impact, response, responsibility etc. Of these, one is of special interest to us: the risk probability and impact matrix.

    The Risk Level Matrix

The risk level matrix is based on two tables established during the third activity in the risk mapping process: the likelihood table and the impact table.

    The Likelihood table

    During the risk analysis the potential likelihood that a given risk will occur is assessed, and an appropriate risk probability is selected from the table below:

    Probability-table_risk-mapp

    The Impact Table

    At the same time the potential impact of each risk is analyzed, and an appropriate impact level is selected from the table below:

    Impact-table_risk-mapping

    The Risk Matrix

    The risk level matrix shows the combination (product) of risk impact and probability, and is utilized to decide the relative priority of risks.  Risks that fall into the upper right triangle of the matrix are the highest priority, and should receive the majority of risk management resources during response planning and risk monitoring/control.  Risks that fall on the diagonal of the matrix are the next highest priority, followed by risks that fall into the lower left triangle of the matrix:

Risk-matrix_risk-mapping

In practice it can look like this, with impact in four groups (the numbers refer to the risk descriptions in the risk register):

Impact-vs-likelihood

From the graph we can see that there are no risks with both high probability and high impact, and that we have at least four clusters of risks (centroid method). The individual risk’s location determines the actions needed:

risk_map2

We can multiply impact by likelihood to calculate something like an expected effect and use this to rank order the risks, but this is as far as we can get with this method.
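To make that ranking step concrete, here is a minimal sketch in Python; the risks, likelihood scores and impact scores are made-up illustrations, not entries from any real risk register.

```python
# Rank risks by "expected effect" = likelihood score x impact score.
# All names and scores below are illustrative assumptions.
risks = [
    {"id": 1, "name": "Key supplier failure", "likelihood": 0.3, "impact": 4},
    {"id": 2, "name": "FX loss on exports",   "likelihood": 0.7, "impact": 2},
    {"id": 3, "name": "Machine breakdown",    "likelihood": 0.5, "impact": 3},
]

for r in risks:
    r["expected_effect"] = r["likelihood"] * r["impact"]

# Highest expected effect first - as noted above, this is as far as the method takes us.
for r in sorted(risks, key=lambda r: r["expected_effect"], reverse=True):
    print(f'{r["id"]:>2}  {r["name"]:<22}  {r["expected_effect"]:.2f}')
```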

However, risk mapping is a great tool for introducing risk management in any organization: it is easy to communicate, places responsibilities, creates awareness and, most of all, lists all known hazards and risks that face the organization.

But it has all the limitations of qualitative analysis. Word-form or descriptive scales are used to describe the magnitude of potential consequences and their likelihood. No relations between the risks are captured, and their individual or combined effect on the P&L and balance sheet is at best difficult to understand.

    Most risks are attributable to one or more observable variables. They can be continuous or have discrete values, but they are all stochastic variables.

Now, even a “qualitative“ variable like political risk is measurable. Political risk usually manifests itself as uncertainty about taxes, repatriation of funds, nationalization etc. Such risks can mostly be modeled and analyzed with decision-tree techniques, giving project value distributions for the different scenarios. Approaches like that give better control than just applying some general qualitative country risk measure.
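As a sketch of the decision-tree idea, the scenario probabilities and project values below are pure assumptions; the point is only that each political scenario gets an explicit probability and value, so the result is a distribution rather than a single qualitative score.

```python
# Toy scenario/decision-tree analysis of political risk.
# Probabilities and project values (in MNOK) are illustrative assumptions only.
scenarios = [
    ("No change",            0.60, 100.0),
    ("Corporate tax +10%",   0.25,  80.0),
    ("Repatriation blocked", 0.15,  40.0),
]

expected_value = sum(p * v for _, p, v in scenarios)
worst_case = min(v for _, _, v in scenarios)

for name, p, v in scenarios:
    print(f"  {name:<22} p = {p:.2f}  value = {v:.1f}")
print(f"Expected project value: {expected_value:.1f}")
print(f"Worst scenario value:   {worst_case:.1f}")
```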

    Risk Breakdown Structure (RBS)

A first step in the direction of quantitative risk analysis can be to perform a risk breakdown analysis to source-orient the individual risks. This is usually done in descending levels, increasing the detail in the definition of the sources of risk. This gives a better, and often new, understanding of the types of risk, their dependencies, root causes and possible covariation (Zacharias, Panopoulos, & Askounis, 2008).

RBS can be further developed using Bayesian network techniques to describe and simulate discrete types of risk, usually hazards, failures or fault prediction in operations (Fenton & Neil, 2007).

But have we measured the risks, and what is the organization’s total risk? Is it the sum of all risks, or some average?

    You can’t measure what you can’t define. (Kagan, 1993)

    Can we really manage the risks and exploit the opportunities with the tool (risk model) we now have? A model is a way of representing some feature of reality. Models are not true or false. They are simply useful or not useful for some purpose.

Risk mapping is, apart from its introductory qualities, not useful for serious corporate risk analysis. It neither defines total corporate risk nor measures it. Its focus on risk (hazard) also makes one forget about the opportunities, which then have to be treated separately and not as what they really are: the other side of the probability distribution.

    The road ahead

We need to move to quantitative analysis with variables that describe the operations, and where numerical values are calculated for both consequences and likelihood – combining risk and opportunity.

This implies modeling the operations in sufficient detail to describe numerically what’s going on. In paper production this means modeling the market (demand and prices), competitor behavior (market shares and sales), fx-rates for input materials and possible exports, production (wood, chemicals, recycled paper, filler, pulp, water etc., cost, machine speeds, trim width, basis weight, total efficiency, max days of production, electricity consumption, heat cost and recovery, packaging, manning level, hazards etc.), labor cost, distribution cost, rebates, commissions, fixed costs, maintenance and reinvestment, interest rates, taxes etc. All of these are stochastic variables.

    These variables, their shape and location are the basis for all uncertainty the firm faces whether it be risk or opportunities. The act of measuring their behavior and interrelationship helps improve precision and reduce uncertainty about the firm’s operations. (Hubbard, 2007)

To us, short term risk is about the location and shape of the EBITDA distribution for the next one to three years, and long term risk about the location and shape of the company’s equity value distribution today, calculated by estimating the company’s operations over a ten to fifteen year horizon. Risk is then the location and left tail of the distribution, while the possible opportunities (upside) are in the right tail of the same distribution. And now all kinds of tools can be used to measure risk and opportunities.
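The sketch below shows one way such tail measures could be read off simulated output; the lognormal stand-in for the EBITDA simulation and the 5%/95% cut-offs are assumptions made for the illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
# Stand-in for Monte Carlo output: 10,000 simulated EBITDA values (illustrative).
ebitda = rng.lognormal(mean=np.log(500), sigma=0.35, size=10_000)

p05, p50, p95 = np.percentile(ebitda, [5, 50, 95])
left_tail_mean = ebitda[ebitda <= p05].mean()    # downside: "risk"
right_tail_mean = ebitda[ebitda >= p95].mean()   # upside: "opportunity"

print(f"Median EBITDA: {p50:.0f}")
print(f"5% percentile: {p05:.0f}, left-tail mean: {left_tail_mean:.0f}")
print(f"95% percentile: {p95:.0f}, right-tail mean: {right_tail_mean:.0f}")
```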

    Risk mapping is in this context a little like treating a disease’s symptoms rather than the disease itself.

    References

    AS/NZS 4360:2004 http://www.saiglobal.com/shop/script/Details.asp?DocN=AS0733759041AT

DeMarco, T. (1982). Controlling Software Projects. Englewood Cliffs: Yourdon Press.

Fenton, N., & Neil, M. (2007, November). Managing Risk in the Modern World. Retrieved from http://www.lms.ac.uk/activities/comp_sci_com/KTR/apps_bayesian_networks.pdf

    Hubbard, D., (2007). How to Measure Anything. Chichester: John Wiley & Sons.

    Kagan, S. L. (1993). Defining, assessing and implementing readiness: Challenges and opportunities.

    Zacharias O., Panopoulos D., Askounis D.  (2008). Large Scale Program Risk Analysis Using a Risk Breakdown Structure. European Journal of Economics, Finance and Administrative Sciences, (12), 170-181.

  • Public Works Projects

    Public Works Projects

    This entry is part 2 of 4 in the series The fallacies of scenario analysis

     

    It always takes longer than you expect, even when you take into account Hofstadter’s Law. (Hofstadter,1999)

In public works and large-scale construction or engineering projects, where uncertainty mostly (or only) concerns cost, a simplified scenario analysis is often used.

    Costing Errors

An excellent study carried out by Flyvbjerg, Holm and Buhl (Flyvbjerg, Holm, & Buhl, 2002) addresses the serious questions surrounding the chronic costing errors in public works projects. The purpose was to identify the typical deviations from budget and the major causes of these deviations:

The main findings from the study reported in their article, all highly significant and most likely conservative, are as follows:

    In 9 out of 10 transportation infrastructure projects, costs are underestimated. For a randomly selected project, the probability of actual costs being larger than estimated costs is  0.86. The probability of actual costs being lower than or equal to estimated costs is only 0.14. For all project types, actual costs are on average 28% higher than estimated costs.

    Cost underestimation:

– exists across 20 nations and 5 continents: it appears to be a global phenomenon.
– has not decreased over the past 70 years: no improvement in cost estimate accuracy.
– cannot be excused by error: it seems best explained by strategic misrepresentation, i.e. the planned, systematic distortion or misstatement of facts in the budget process (Jones & Euske, 1991).

    Demand Forecast Errors

The demand forecasts only add more errors to the final equation (Flyvbjerg, Holm, & Buhl, 2005):

    • 84 percent of rail passenger forecasts are wrong by more than ±20 percent.
    • 50 percent of road traffic forecasts are wrong by more than ±20 percent.
    • Errors in traffic forecasts are found in the 14 nations and 5 continents covered by the study.
    • Inaccuracy is constant for the 30-year period covered: no improvement over time.

    The Machiavellian Formulae

    Adding the cost and demand errors to other uncertain effects, we get :

    Machiavelli’s Formulae:
    Overestimated revenues + Overvalued development effects – Underestimated cost – Undervalued environmental impact = Project Approval (Flyvbjerg, 2007)

    Cost Projections

Transportation infrastructure projects do not appear to be more prone to cost underestimation than other types of large projects, such as power plants, dams, water distribution, oil and gas extraction, information technology systems, aerospace systems, and weapons systems.

All of the findings above should be considered forms of risk. As has been shown in cost engineering research, poor risk analysis accounts for many project cost overruns.
Two components of error in the cost estimate can easily be identified (Bertisen & Davis, 2008):

• Economic components: these errors are the result of incorrectly forecasted exchange rates, inflation rates of unit prices, fuel prices, or other economic variables affecting the realized nominal cost. Many of these variables have positively skewed distributions. This then feeds through to positive skewness in the total cost distribution.
• Engineering components: these relate to errors both in estimating unit prices and in the required quantities. There may also be an over- or underestimation of the contingency needed to capture excluded items. Cost and quantity errors are limited on the downside, but there is no limit to costs and quantities on the upside. For many cost and quantity items there is also a small probability of a “catastrophic event”, which would dramatically increase costs or quantities.

When combining these factors, the result is likely to be a positively skewed cost distribution, with many small and large underrun and overrun deviations (from the most likely value) joined by a few very large or catastrophic overrun deviations.

    Since the total cost (distribution) is positively skewed, expected cost can be considerably higher than the calculated most likely cost.

We will have these findings as a backcloth when we examine the Norwegian Ministry of Finance’s guidelines for assessing risk in public works (Ministry of Finance, 2008, p. 3), where total uncertainty is taken to be the sum of systematic and unsystematic uncertainty:

Interpreting the guidelines, we find the following assumptions and advice:

1. Unsystematic risk cancels out when looking at large portfolios of projects.
2. All systematic risk is perfectly correlated with the business cycle.
3. Total cost is approximately normally distributed.

Since total risk is equal to the sum of systematic and unsystematic risk, the 2nd assumption implies that unsystematic risk comprises all uncertainty not explained by the business cycle: that is, all uncertainty in the planning, mass calculations etc. and production of the project.

It is usually in these tasks that the project’s inherent risks are later revealed. Based on the studies above it is reasonable to believe that the unsystematic risk has a skewed distribution located in its entirety on the positive part of the cost axis, i.e. it will not cancel out even in a portfolio of projects.

The 2nd assumption, that all systematic risk is perfectly correlated with the business cycle, is a convenient one. It opens for a simple summation of percentiles (10%/90%) across all cost variables to arrive at the total cost percentiles (see the previous post in this series).

    The effect of this assumption is that the risk model becomes a perverted one, with only one stochastic variable. All the rest can be calculated from the outcomes of the “business cycle” distribution.

Now we know that delivery times, quality and prices for all equipment, machinery and raw materials depend on the activity level in all countries demanding or producing the same items. So even if there existed a “business cycle” for every item (and a measure for it), these cycles would not necessarily be perfectly synchronised, and the assumption would thus be proved false.
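The percentile summation mentioned above can be illustrated with a small simulation; the two lognormal cost items below are made up for the purpose. Only when the items are perfectly correlated does the sum of the item P90s equal the P90 of the total; with independent items it overstates it.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
z1, z2 = rng.standard_normal(n), rng.standard_normal(n)

# Two illustrative lognormal cost items (assumed parameters).
a_indep, b_indep = np.exp(4 + 0.3 * z1), np.exp(4 + 0.3 * z2)   # independent
a_corr,  b_corr  = np.exp(4 + 0.3 * z1), np.exp(4 + 0.3 * z1)   # perfectly correlated

sum_of_p90 = np.percentile(a_indep, 90) + np.percentile(b_indep, 90)
p90_indep = np.percentile(a_indep + b_indep, 90)
p90_corr = np.percentile(a_corr + b_corr, 90)

print(f"P90(A) + P90(B):              {sum_of_p90:.1f}")
print(f"P90(A + B), independent:      {p90_indep:.1f}")   # below the sum of P90s
print(f"P90(A + B), perfectly corr.:  {p90_corr:.1f}")    # equals the sum of P90s
```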

    The 3rd assumption implies either that all individual cost distributions are “near normal” or that they are independent and identically-distributed with finite variance, so that the central limit theorem can be applied.

However, the individual cost distributions will be the product of unit price, exchange rate and quantity, so even if the elements of the multiplication have normal distributions, the product will not have a normal distribution.
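A quick simulation makes the point; the means and standard deviations below are assumed for the illustration. Even with normally distributed unit price, exchange rate and quantity, the product comes out positively skewed, with the mean above the median.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Three normally distributed cost factors with assumed, illustrative parameters.
unit_price = rng.normal(100, 15, n)     # local currency per unit
fx_rate    = rng.normal(1.2, 0.15, n)
quantity   = rng.normal(5_000, 750, n)

cost = unit_price * fx_rate * quantity

mean, median = cost.mean(), np.median(cost)
skewness = ((cost - mean) ** 3).mean() / cost.std() ** 3

print(f"mean {mean:,.0f}   median {median:,.0f}   skewness {skewness:.2f}")
```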

Invoking the central limit theorem is also a no-go: since the cost elements are, by the 2nd assumption, perfectly correlated, they cannot be independent.

All experience and every study conclude that the total cost distribution is not normal: it is evidently positively skewed with fat tails, whereas the normal distribution is symmetric with thin tails.

Our concerns about the wisdom of the 3rd assumption were confirmed in 2014; see The implementation of the Norwegian Governmental Project Risk Assessment Scheme and the following articles.

The solution to all this is to establish a proper simulation model for every large project, run the Monte Carlo simulation necessary to establish the total cost distribution, and then calculate the risks involved.

“If we arrive, as our forefathers did, at the scene of battle inadequately equipped, incorrectly trained and mentally unprepared, then this failure will be a criminal one because there has been ample warning” — (Elliott-Bateman, 1967)

    References

Bertisen, J., & Davis, G. A. (2008). Bias and error in mine project capital cost estimation. The Engineering Economist, April 2008.

    Elliott-Bateman, M. (1967). Defeat in the East: the mark of Mao Tse-tung on war. London: Oxford University Press.

    Flyvbjerg Bent (2007), Truth and Lies about Megaprojects, Inaugural speech, Delft University of Technology, September 26.

    Flyvbjerg, Bent, Mette K. Skamris Holm, and Søren L. Buhl (2002), “Underestimating Costs in Public Works Projects: Error or Lie?” Journal of the American Planning Association, vol. 68, no. 3, 279-295.

    Flyvbjerg, Bent, Mette K. Skamris Holm, and Søren L. Buhl (2005), “How (In)accurate Are Demand Forecasts in Public Works Projects?” Journal of the American Planning Association, vol. 71, no. 2, 131-146.

    Hofstadter, D., (1999). Gödel, Escher, Bach. New York: Basic Books

Jones, L. R., & Euske, K. J. (1991). Strategic Misrepresentation in Budgeting. Journal of Public Administration Research and Theory, 1(4), 437-460.

Ministry of Finance (Norway). (2008). Systematisk usikkerhet. Retrieved July 3, 2009, from the Concept research programme Web site: http://www.ivt.ntnu.no/bat/pa/forskning/Concept/KS-ordningen/Dokumenter/Veileder%20nr%204%20Systematisk%20usikkerhet%2011_3_2008.pdf

  • Selecting Strategy

    Selecting Strategy

    This entry is part 2 of 2 in the series Valuation

     

This is an example of how S&R can define, analyze, visualize and help in selecting strategies for a broad range of issues: financial, operational and strategic.

Assume that we have performed a simulation of corporate equity value (see: Corporate-risk-analysis) for two different strategies (A and B). The cumulative distributions are given in the figure below.

Since the calculation is based on a full simulation of both the P&L and the balance sheet, the cost of implementing the different strategies is included in the calculation; hence we can use the distributions directly as a basis for selecting the best strategy.

    cum-distr-a-and-b_strategy

In this rather simple case we intuitively find strategy B to be the best, lying to the right of strategy A for all probable values of equity. However, to be able to select the best strategy from more complicated and larger sets of feasible strategies, we need a more well-grounded method than mere intuition.

The stochastic dominance approach, developed on the foundation of von Neumann and Morgenstern’s expected utility paradigm (von Neumann & Morgenstern, 1953), is such a method.

When there is no uncertainty, the maximum return criterion can be used both to rank and to select strategies. With uncertainty, however, we have to look for the strategy that maximizes the firm’s expected utility.

    To specify a utility function (U) we must have a measure that uniquely identifies each strategy (business) outcome and a function that maps each outcome to its corresponding utility. However utility is purely an ordinal measure. In other words, utility can be used to establish the rank ordering of strategies, but cannot be used to determine the degree to which one is preferred over the other.

    A utility function thus measures the relative value that a firm places on a strategy outcome. Here lies a significant limitation of utility theory: we can compare competing strategies, but we cannot assess the absolute value of any of those strategies. In other words, there is no objective, absolute scale for the firm’s utility of a strategy outcome.

    Classical utility theory assumes that rational firms seek to maximize their expected utility and to choose among their strategic alternatives accordingly. Mathematically, this is expressed as:

    Strategy A is preferred to strategy B if and only if:
E_A U(X) ≥ E_B U(X), with at least one strict inequality.

    The features of the utility function reflect the risk/reward attitudes of the firm. These same features also determine what stochastic characteristics the strategy distributions must possess if one alternative is to be preferred over another. Evaluation of these characteristics is the basis of stochastic dominance analysis (Levy, 2006).

    Stochastic dominance as a generalization of utility theory eliminates the need to explicitly specify a firm’s utility function. Rather, general mathematical statements about wealth preference, risk aversion, etc. are used to develop decision rules for selecting between strategic alternatives.

    First order stochastic dominance.

Assuming that U’ ≥ 0, i.e. the firm has increasing wealth preference, strategy A is preferred to strategy B (denoted A D1 B, i.e. A dominates B by 1st order stochastic dominance) if:

E_A U(X) ≥ E_B U(X)  ↔  S_A(x) ≤ S_B(x)

where S(x) is the strategy’s distribution function and there is at least one strict inequality.

If A D1 B then, for all values x, the probability of obtaining x or a value higher than x is at least as large under A as under B.

Sufficient rule 1:  A dominates B if Min S_A(x) ≥ Max S_B(x)   (non-overlapping)

Sufficient rule 2:  A dominates B if S_A(x) ≤ S_B(x) for all x   (S_A ‘below’ S_B)

Most important necessary rules:

Necessary rule 1:  A D1 B → Mean S_A > Mean S_B

Necessary rule 2:  A D1 B → Geometric mean S_A > Geometric mean S_B

Necessary rule 3:  A D1 B → Min S_A(x) ≥ Min S_B(x)
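Sufficient rule 2 is easy to check directly on Monte Carlo output. The sketch below uses two made-up samples of simulated equity values (strategy B is simply strategy A shifted upwards, so dominance holds by construction) and compares the empirical distribution functions on a common grid.

```python
import numpy as np

rng = np.random.default_rng(3)
# Illustrative stand-ins for simulated equity values under two strategies.
equity_a = rng.normal(500, 120, 10_000)
equity_b = equity_a + 80          # same risk profile shifted up (assumed)

grid = np.linspace(equity_a.min(), equity_b.max(), 400)
S_a = np.searchsorted(np.sort(equity_a), grid, side="right") / equity_a.size
S_b = np.searchsorted(np.sort(equity_b), grid, side="right") / equity_b.size

# Sufficient rule 2: B D1 A if S_B(x) <= S_A(x) for all x, with strict inequality somewhere.
b_dominates_a = bool(np.all(S_b <= S_a) and np.any(S_b < S_a))
print("B D1 A:", b_dominates_a)
```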

For the case above we find that strategy B dominates strategy A (B D1 A), since sufficient rule 2 for first order dominance is satisfied:

    strategy-a-and-b_strategy1

    And of course since one of the sufficient conditions is satisfied all of the necessary conditions are satisfied. So our intuition about B being the best strategy is confirmed. However there are cases where intuition will not work:

    cum-distr_strategy

    In this case the distributions cross and there is no first order stochastic dominance:

    strategy-1-and-2_strategy

To be able to determine the dominant strategy we have to make further assumptions about the utility function – U’’ ≤ 0 (risk aversion) etc.

    N-th Order Stochastic Dominance.

    With n-th order stochastic dominance we are able to rank a large class of strategies. N-th order dominance is defined by the n-th order distribution function:

S^1(x) = S(x),   S^n(x) = ∫_{-∞}^{x} S^{n-1}(u) du

    where S(x) is the strategy’s distribution function.

Then strategy A dominates strategy B in the sense of n-th order stochastic dominance (A Dn B) if:

S^n_A(x) ≤ S^n_B(x), with at least one strict inequality, and

E_A U(X) ≥ E_B U(X), with at least one strict inequality,

for all U satisfying (-1)^k U^(k)(x) ≤ 0 for k = 1, 2, …, n, with at least one strict inequality.

    The last assumption implies that U has positive odd derivatives and negative even derivatives:

    U’  ≥0 → increasing wealth preference

    U”  ≤0 → risk aversion

    U’’’ ≥0 → ruin aversion (skewness preference)

    For higher derivatives the economic interpretation is more difficult.

    Calculating the n-th order distribution function when you only have observations of the first order distribution from Monte Carlo simulation can be difficult. We will instead use the lower partial moments (LPM) since (Ingersoll, 1987):

S^n_A(x) ≡ LPM^A_{n-1}(x) / (n-1)!

Thus strategy A dominates strategy B in the sense of n-th order stochastic dominance (A Dn B) if:

LPM^A_{n-1}(x) ≤ LPM^B_{n-1}(x)
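A sketch of the test on simulated outcomes follows (the two samples are assumptions chosen so that dominance appears already at the 2nd order; the strategies discussed below needed the 5th). The LPM of order n-1 at target x is the sample mean of max(x - outcome, 0)^(n-1), and since the factor 1/(n-1)! is common to both strategies it can be dropped from the comparison.

```python
import numpy as np

def lpm(samples, targets, order):
    """Lower partial moments of the given order at each target value."""
    shortfall = np.maximum(targets[:, None] - samples[None, :], 0.0)
    return (shortfall ** order).mean(axis=1)

rng = np.random.default_rng(11)
# Illustrative stand-ins for simulated outcomes of two strategies.
s1 = rng.normal(600, 80, 10_000)    # tighter distribution
s2 = rng.normal(580, 180, 10_000)   # wider distribution, slightly lower mean

# The target grid should span the support of both samples.
targets = np.linspace(min(s1.min(), s2.min()), max(s1.max(), s2.max()), 200)

for n in range(2, 6):               # check 2nd to 5th order dominance
    l1, l2 = lpm(s1, targets, n - 1), lpm(s2, targets, n - 1)
    if np.all(l1 <= l2) and np.any(l1 < l2):
        print(f"Strategy #1 dominates strategy #2 at order {n}")
        break
else:
    print("No dominance found up to 5th order")
```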

Now we have the necessary tools for selecting the dominant strategy of strategies #1 and #2. To see if we have 2nd order dominance, we calculate the first order lower partial moments, as shown in the graph below.

    2nd-order_strategy

Since the curves of the lower partial moments still cross, both strategies are efficient, i.e. neither dominates the other. We therefore have to look further, using the 2nd order LPMs to investigate the possibility of 3rd order dominance:

    3rd-order_strategy

However, it is only when we calculate the 4th order LPMs that we can conclude with 5th order stochastic dominance of strategy #1 over strategy #2:

    5th-order_strategy

We then have S1 D5 S2 and we need not look further, since Yamai and Yoshiba (2002) have shown that:

If S1 Dn S2, then S1 Dn+1 S2.

So we end up with strategy #1 as the preferred strategy for a risk averse firm. It is characterized by a lower coefficient of variation (0.19) than strategy #2 (0.45), a higher minimum value (160) than strategy #2 (25), and a higher median value (600) than strategy #2 (561). But it was not any single one of these facts that made strategy #1 stochastically dominant; after all, it has negative skewness (-0.73) against positive skewness (0.80) for strategy #2 and a lower expected value (571) than strategy #2 (648). It was the ‘sum’ of all these characteristics.

    A digression

It is tempting to assume that since strategy #1 stochastically dominates strategy #2 for risk averse firms (with U’’ < 0), strategy #2 must be stochastically dominant for risk seeking firms (with U’’ > 0), but this is not necessarily the case.

However, even if strategy #2 has a larger upside than strategy #1, it can be seen from the graphs of the two strategies’ upside potential ratios (Sortino, 1999):

upside-ratio_strategy

that if we believe that the outcome will be below a minimal acceptable return (MAR) of 400, then strategy #1 has a higher minimum value and upside potential than #2, and vice versa above 400.

Rational firms should be risk averse below the benchmark MAR and risk neutral above the MAR, i.e. they should have an aversion to outcomes that fall below the MAR. On the other hand, the higher the outcomes are above the MAR, the more they should like them (Fishburn, 1977). That is, firms seek upside potential with downside protection.

We will return later in this series to how the firm’s risk and opportunities can be calculated given the selected strategy.

    References

    Fishburn, P.C. (1977). Mean-Risk analysis with Risk Associated with Below Target Returns. American Economic Review, 67(2), 121-126.

    Ingersoll, J. E., Jr. (1987). Theory of Financial Decision Making. Rowman & Littlefield Publishers.

    Levy, H., (2006). Stochastic Dominance. Berlin: Springer.

von Neumann, J., & Morgenstern, O. (1953). Theory of Games and Economic Behavior. Princeton: Princeton University Press.

Sortino, F., van der Meer, R., & Plantinga, A. (1999). The Dutch Triangle. The Journal of Portfolio Management, 26(1).

Yamai, Y., & Yoshiba, T. (2002). Comparative Analysis of Expected Shortfall and Value-at-Risk (2): Expected Utility Maximization and Tail Risk. Monetary and Economic Studies, April, 95-115.

  • Budgeting

    Budgeting

    This entry is part 1 of 2 in the series Budgeting

     

Budgeting is one area that is well suited for Monte Carlo simulation. Budgeting involves personal judgments about the future values of a large number of variables such as sales, prices, wages, downtime, error rates and exchange rates: variables that describe the nature of the business.

Everyone who has been involved in a budgeting process knows that it is an exercise in uncertainty; however, it is seldom described in this way, and even more seldom is uncertainty actually calculated as an integrated part of the budget.

Admittedly, a number of large public building projects are calculated this way, but more often than not the aim is only to calculate some percentile (usually 85%) as the expected budget cost.

Most managers and their staff have, based on experience, a good grasp of the range in which the values of their variables will fall. A manager’s subjective probability describes his personal judgement about how likely a particular event is to occur. It is not based on any precise computation but is a reasonable assessment by a knowledgeable person. Selecting the budget value, however, is more difficult. Should it be the “mean” or the “most likely value”, or should the manager just delegate fixing of the values to the responsible departments?

Now we know that budget values might be biased for a number of reasons, bonus schemes being the simplest example, and that budgets based on average assumptions are wrong on average (Savage, Sam L. “The Flaw of Averages”, Harvard Business Review, November 2002: 20-21).

When judging probability, people can locate the source of the uncertainty either in their environment or in their own imperfect knowledge (Kahneman, D., & Tversky, A. “On the psychology of prediction.” Psychological Review 80 (1973): 237-251). When assessing uncertainty, people tend to underestimate it, which is often called overconfidence and hindsight bias.

Overconfidence bias concerns the fact that people overestimate how much they actually know: when they are p percent sure that they have predicted correctly, they are in fact right on average less than p percent of the time (Keren, G. “Calibration and probability judgments: Conceptual and methodological issues.” Acta Psychologica 77 (1991): 217-273).

Hindsight bias concerns the fact that people overestimate how much they would have known had they not possessed the correct answer: events which are given an average probability of p percent before they have occurred are, in hindsight, given probabilities higher than p percent (Fischhoff, B. “Hindsight ≠ foresight: The effect of outcome knowledge on judgment under uncertainty.” Journal of Experimental Psychology: Human Perception and Performance 1 (1975): 288-299).

We will, however, not ask for the managers’ full subjective probability distributions, only for the range of possible values (5%-95%) and their best guess of the most likely value. We will then use this to generate an appropriate lognormal distribution for sales, prices etc. For investments we will use triangular distributions to avoid long tails. Where most likely values are hard to guesstimate, we will use rectangular distributions.
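One simple way to turn an elicited range into a lognormal distribution is to match its 5% and 95% percentiles; the sketch below does only that (the budget figure is then used directly as the most likely value, and the numbers are assumed for the illustration).

```python
import numpy as np

def lognormal_from_range(p05, p95, size, rng):
    """Lognormal sample whose 5% and 95% percentiles match the elicited range."""
    z95 = 1.6449                                   # 95% quantile of the standard normal
    mu = (np.log(p95) + np.log(p05)) / 2
    sigma = (np.log(p95) - np.log(p05)) / (2 * z95)
    return rng.lognormal(mu, sigma, size)

rng = np.random.default_rng(0)
sales = lognormal_from_range(p05=900, p95=1_400, size=10_000, rng=rng)  # assumed range
print(f"5%: {np.percentile(sales, 5):.0f}   95%: {np.percentile(sales, 95):.0f}")
```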

We will then proceed as if the distributions were known (Keynes):

[Under uncertainty] there is no scientific basis on which to form any calculable probability whatever. We simply do not know. Nevertheless, the necessity for action and for decision compels us as practical men to do our best to overlook this awkward fact and to behave exactly as we should if we had behind us a good Benthamite calculation of a series of prospective advantages and disadvantages, each multiplied by its appropriate probability waiting to be summed. (Keynes, J. M. “The General Theory of Employment.” Quarterly Journal of Economics, 1937)

    budget_actual_expected

The data collection can easily be embedded in the ordinary budget process by asking the managers to set the lower and upper 5% values for all variables determining the budget, and assuming that the budget figures are the most likely values.

This gives us the opportunity to simulate (Monte Carlo) a number of possible outcomes, usually 1,000, of net revenue, operating expenses and finally EBIT(DA).

In this case the budget was optimistic, with approximately 84% probability of the outcome falling below it and only about 16% probability of it falling above. The accounts also proved the budget to be high, with the actual EBIT falling closer to the expected value. In our experience the expected value is a better estimator of the final result than the budget EBIT.
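A sketch of how such a probability can be read off the simulation; the elicited ranges and the budget figure are illustrative assumptions and do not reproduce the case in the figure.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10_000

def lognormal(p05, p95, size):
    """Same percentile-matching idea as in the sketch above."""
    mu = (np.log(p95) + np.log(p05)) / 2
    sigma = (np.log(p95) - np.log(p05)) / (2 * 1.6449)
    return rng.lognormal(mu, sigma, size)

# Revenue and expenses treated as independent here for simplicity.
net_revenue = lognormal(900, 1_400, n)   # assumed 5%-95% range
opex        = lognormal(700, 1_000, n)   # assumed 5%-95% range
ebit        = net_revenue - opex

budget_ebit = 300                        # assumed budget figure
print(f"Expected EBIT: {ebit.mean():.0f}")
print(f"P(EBIT below budget): {(ebit < budget_ebit).mean():.0%}")
```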

However, the most important part of this exercise is the shape of the cumulative distribution curve for EBIT. The shape gives a good picture of the uncertainty the company faces in the year to come; a flat curve indicates more uncertainty, both in the budget forecast and in the final result, than a steeper curve.

Wisely used, the curve (distribution) can serve both to inform stakeholders about the risk being faced and to make contingency plans foreseeing adverse events.

percieved-uncertainty-in-ne

Having the probability distributions for net revenue and operating expenses, we can calculate and plot the managers’ perceived uncertainty using coefficients of variation.

In our material we find, on average, twice as much uncertainty in the forecasts for net revenue as for operating expenses.

As many managers set budget values above the expected value, they are exposed to downside risk. We can measure this risk by the upside potential ratio, which is the expected return above the budget value per unit of downside risk. It can be found using the upper and lower partial moments calculated at the budget value.
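Computed from the simulated outcomes, the ratio is the first upper partial moment divided by the square root of the second lower partial moment, both taken at the budget value; the sample and budget figure in the sketch below are assumptions.

```python
import numpy as np

def upside_potential_ratio(outcomes, mar):
    """Expected gain above MAR per unit of downside deviation below MAR."""
    upside = np.maximum(outcomes - mar, 0.0).mean()                     # 1st upper partial moment
    downside = np.sqrt((np.maximum(mar - outcomes, 0.0) ** 2).mean())   # sqrt of 2nd lower partial moment
    return upside / downside

rng = np.random.default_rng(9)
ebit = rng.normal(320, 90, 10_000)   # illustrative simulated EBIT outcomes
budget = 350                         # assumed budget value used as MAR

print(f"Upside potential ratio at the budget value: {upside_potential_ratio(ebit, budget):.2f}")
```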

    References