
Category: Decision making

  • The Uncertainty in Forecasting Airport Pax

    The Uncertainty in Forecasting Airport Pax

    This entry is part 3 of 4 in the series Airports

     

    When planning airport operations, investments both air- and landside, or simply making next year's budget, you need to make some forecast of the traffic you can expect. There are many ways of doing that, most of them ending up with a single figure for the monthly or weekly traffic. However, we do know that the probability of that figure being correct is near zero, so we end up with plans based on assumptions that most likely never will happen.

    This is why we use Monte Carlo simulation to get a grasp of the uncertainty in our forecast and of how this uncertainty develops as we go into the future. The following graph (from real life) shows how the passenger distribution changes as we go from year 2010 (blue) to 2017 (red). The distribution moves outwards, showing an expected increase in Pax, at the same time as it spreads out along the x-axis (Pax) – giving a good picture of the increased uncertainty we face.

    Pax-2010_2017

    This can also be seen from the yearly cumulative probability distributions given below. As we move out into the future the distributions lean more and more to the right while still being “anchored” on the left to approximately the same place – showing increased uncertainty in the future Pax forecasts. However, our confidence that the airport will reach at least 40M Pax during the next 5 years is bolstered.

    Pax_Distributions

    If we look at the fan chart for the Pax forecasts below, the limits of the dark blue region give the lower (25%) and upper (75%) quartiles of the yearly Pax distributions, i.e. the region where we expect the actual Pax figures to fall with 50% probability.

    Pax_Uncertainty

    The lower and upper limits give the 5% and 95% percentiles of the yearly Pax distributions, i.e. we can expect with 90% probability that the actual Pax figures will fall somewhere inside these three regions.

    As shown, the uncertainty about the future yearly Pax figures is quite high. With this as the backcloth for airport planning it is evident that the stochastic nature of the Pax forecasts has to be taken into account when investment decisions (under uncertainty) are to be made. (ref) Since the airport value will rely heavily on these forecasts it is also evident that this value will be stochastic, and that methods from decision making under uncertainty have to be used for possible M&R.
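    A minimal sketch of the kind of Monte Carlo simulation behind such widening Pax distributions is given below; the starting traffic, mean growth and volatility are hypothetical figures chosen for illustration, not the ones behind the graphs above:

```python
import numpy as np

rng = np.random.default_rng(7)
n_sims, n_years = 10_000, 7        # simulate 7 years ahead (e.g. 2010 -> 2017)
pax0 = 30.0                        # assumed starting traffic, million Pax
mu, sigma = 0.04, 0.03             # assumed mean yearly growth and its volatility

# Yearly growth paths; the uncertainty compounds year by year
growth = rng.normal(mu, sigma, size=(n_sims, n_years))
paths = pax0 * np.cumprod(1.0 + growth, axis=1)

for year in range(n_years):
    p5, p25, p75, p95 = np.percentile(paths[:, year], [5, 25, 75, 95])
    print(f"year +{year + 1}: 5%={p5:.1f}  25%={p25:.1f}  75%={p75:.1f}  95%={p95:.1f}")
```

    The widening band between the 5% and 95% percentiles from year to year is exactly what the fan chart above visualizes.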

    Major Airport Operation Disruptions

    Delays – the time lapse that occurs when a planned event does not happen at the planned time – are pretty common at most airports. Eurocontrol estimates them at approximately 13 minutes on departure for 45% of flights and approximately 12 minutes on arrival for 42% of flights (Guest, 2007). Nevertheless, the airport costs of such delays are small; they can even give an increase in revenue (Cook, Tanner, & Anderson, 2004).

    We have lately in Europe experienced major disruptions of airport operations through the closing of airspace due to volcanic ash. Closed airspace has a direct effect on airport revenue, and the effect is larger the closer the closure is to the airport. Volcanic eruptions in some regions might be considered Black Swan events for an airport, but there are a large number of volcanoes that might cause closing of airspace for shorter or longer periods. The Smithsonian Global Volcanism Program lists more than 540 volcanoes with previously documented eruptions.

    As there is little data for events like this, it is difficult to include the probable effects of closed airspace due to volcanic eruptions in the simulation. However, the data includes the effects of the 9/11 terrorist attack, and the left tails of the yearly Pax distributions will be influenced by this.

    References

    Guest, T. (2007, September). A matter of time: air traffic delay in Europe. EUROCONTROL Trends in Air Traffic, 2.

    Cook, A., Tanner, G., & Anderson, S. (2004). Evaluating the true cost to airlines of one minute of airborne or ground delay: final report. University of Westminster. Retrieved from www.eurocontrol.int/prc/gallery/content/public/Docs/cost_of_delay.pdf

  • The Case of Enterprise Risk Management

    The Case of Enterprise Risk Management

    This entry is part 2 of 4 in the series A short presentation of S@R

     

    The underlying premise of enterprise risk management is that every entity exists to provide value for its stakeholders. All entities face uncertainty and the challenge for management is to determine how much uncertainty to accept as it strives to grow stakeholder value. Uncertainty presents both risk and opportunity, with the potential to erode or enhance value. Enterprise risk management enables management to effectively deal with uncertainty and associated risk and opportunity, enhancing the capacity to build value. (COSO, 2004)

    The evils of a single point estimate

    Enterprise risk management is a process, effected by an entity’s board of directors, management and other personnel, applied in strategy setting and across the enterprise, designed to identify potential events that may affect the entity, and manage risk to be within its risk appetite, to provide reasonable assurance regarding the achievement of entity objectives. (COSO, 2004)

    Traditionally, when estimating costs, project value, equity value or budgets, one number is generated – a single point estimate. There are many problems with this approach. In budget work this point is too often given as the best the management can expect, but in some cases budgets are set artificially low, generating bonuses for later performance beyond budget. The following graph depicts the first case.

    Budget_actual_expected

    Here we have, based on the production and market structure and on management's assumptions about the variability of all relevant input and output variables, simulated the probability distribution for next year's EBITDA. The graph gives the budgeted value, the actual result and the expected value. Both budget and actual value are above the expected value, but the budgeted value was far too high, giving with more than 80% probability a realized EBITDA lower than budget. In this case the board will be misled with regard to the company's ability to earn money, and all subsequent decisions based on the budget EBITDA can endanger the company.
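    A minimal sketch of the idea behind such a simulation is given below; the price, volume and cost assumptions are hypothetical, and the budget is here deliberately set at the distribution's 80th percentile to mirror the situation described above:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical operational assumptions (not the company's actual figures)
price  = rng.normal(100, 8, n)        # sales price per unit
volume = rng.normal(1_000, 120, n)    # units sold
varc   = rng.normal(60, 6, n)         # variable cost per unit
fixed  = 25_000                       # fixed costs

ebitda = (price - varc) * volume - fixed

# A budget set at the 80th percentile of the simulated distribution
budget = np.percentile(ebitda, 80)
print(f"expected EBITDA:    {ebitda.mean():,.0f}")
print(f"budgeted EBITDA:    {budget:,.0f}")
print(f"P(actual < budget): {(ebitda < budget).mean():.0%}")
```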

    The organization’s ERM system should function to bring to the board’s attention the most significant risks affecting entity objectives and allow the board to understand and evaluate how these risks may be correlated, the manner in which they may affect the enterprise, and management’s mitigation or response strategies. (COSO, 2009)

    It would have been much preferable for the board to be given both the budget value and the accompanying probability distribution, allowing it to make an independent judgment about the possible size of next year's EBITDA. Only then will the board – from the shape of the distribution, its location and the point estimate of budget EBITDA – be able to assess the risk and opportunity facing the company.

    Will point estimates cancel out errors?

    In the following we measure the deviation of the actual result from both the budget value and the expected value. The blue dots represent daughter companies located in different countries. For each company we have the deviation (in percent) of the budgeted EBITDA (bottom axis) and of the expected value (left axis) from the actual EBITDA observed 1½ years later.

    If the deviation for a company falls in the upper right quadrant, the deviations are positive for both budget and expected value – and the company is overachieving.

    If the deviation falls in the lower left quadrant, the deviations are negative for both budget and expected value – and the company is underachieving.

    If the deviation falls in the upper left quadrant, the deviation is negative for the budget and positive for the expected value – the company is overachieving but has had too high a budget.

    With left-skewed EBITDA distributions there should not be any observations in the lower right quadrant; that will only happen when the distributions are skewed to the right – and then there will not be any observations in the upper left quadrant.

    The graph below shows that two companies have seriously underperformed and that the budget process did not catch the risk they were facing. The rest of the companies have done very well; some, however, have seriously underestimated the opportunities manifested by the actual result. From an economic point of view, the mother company would of course have preferred all companies (blue dots) to lie above the x-axis, but due to the stochastic nature of the EBITDA it has to accept that some will always fall below. Risk-wise, it would have preferred the companies to fall to the right of the y-axis, but due to budget uncertainties it has to accept that some will always fall to the left. However, large deviations both below the x-axis and to the left of the y-axis add to the company risk.

    Budget_actual_expected#1

    A situation like the one given in the graph below is much to be preferred from the board’s point of view.

    Budget_actual_expected#2

    The graphs above – taken from real life – show that budgeting errors will not cancel out, even across similar daughter companies. Consolidating the companies will give the mother company a left-skewed EBITDA distribution. They also show that you need to be prepared for deviations, both positive and negative – you need a plan. So how do you get a plan? You make a simulation model! (See PDF: Short-presentation-of-S@R#2)

    Simulation

    The Latin verb simulare means “to make like”, “to create an exact representation” or to imitate. The purpose of a simulation model is to imitate the company and its environment, so that its functioning can be studied. The model can be a test bed for assumptions and decisions about the company. By creating a representation of the company, a modeler can perform experiments that are impossible or prohibitively expensive in the real world. (Sterman, 1991)

    There are many different simulation techniques, including stochastic modeling, system dynamics, discrete simulation, etc. Despite the differences among them, all simulation techniques share a common approach to modeling.

    Key issues in simulation include acquisition of valid source information about the company, selection of key characteristics and behaviors, the use of simplifying approximations and assumptions within the simulation, and fidelity and validity of the simulation outcomes.

    Optimization models are prescriptive, but simulation models are descriptive. A simulation model does not calculate what should be done to reach a particular goal, but clarifies what could happen in a given situation. The purpose of simulations may be foresight (predicting how systems might behave in the future under assumed conditions) or policy design (designing new decision-making strategies or organizational structures and evaluating their effects on the behavior of the system). In other words, simulation models are “what if” tools. Often such “what if” information is more important than knowledge of the optimal decision.

    However, even with simulation models it is possible to mismanage risk by (Stulz, 2009):

    • Over-reliance on historical data
    • Using too narrow risk metrics, such as value at risk – probably the single most important measure in financial services – which have underestimated risks
    • Overlooking knowable risks
    • Overlooking concealed risks
    • Failure to communicate effectively – failing to appreciate the complexity of the risks being managed
    • Not managing risks in real time – you have to be able to monitor changing markets and respond appropriately – you need a plan

    Being fully aware of these possible pitfalls, we have methods and techniques that can overcome them, and since we estimate the full probability distributions we can deploy a number of risk metrics without having to rely on simple measures like value at risk – which we actually never use.
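    To illustrate what deploying “a number of risk metrics” from a full simulated distribution can look like in practice, here is a minimal sketch; the simulated EBITDA figures and the budget level are hypothetical:

```python
import numpy as np

def risk_metrics(ebitda: np.ndarray, budget: float) -> dict:
    """A few of the many metrics available once the full distribution is simulated."""
    p5 = np.percentile(ebitda, 5)                # 5th percentile of EBITDA
    es5 = ebitda[ebitda <= p5].mean()            # expected shortfall below it
    return {
        "expected":                ebitda.mean(),
        "std. deviation":          ebitda.std(ddof=1),
        "P(below budget)":         (ebitda < budget).mean(),
        "5% percentile":           p5,
        "expected shortfall (5%)": es5,
    }

# Hypothetical simulated EBITDA outcomes
rng = np.random.default_rng(1)
sims = rng.lognormal(mean=10.0, sigma=0.3, size=50_000)
for name, value in risk_metrics(sims, budget=np.percentile(sims, 70)).items():
    print(f"{name:>24}: {value:,.2f}")
```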

    References

    COSO, (2004, September). Enterprise risk management — integrated framework. Retrieved from http://www.coso.org/documents/COSO_ERM_ExecutiveSummary.pdf

    COSO, (2009, October). Strengthening enterprise risk management for strategic advantage. Retrieved from http://www.coso.org/documents/COSO_09_board_position_final102309PRINTandWEBFINAL_000.pdf

    Sterman, J. D. (1991). A Skeptic’s Guide to Computer Models. In Barney, G. O. et al. (eds.),
    Managing a Nation: The Microcomputer Software Catalog. Boulder, CO: Westview Press, 209-229.

    Stulz, R.M. (2009, March). Six ways companies mismanage risk. Harvard Business Review (The Magazine), Retrieved from http://hbr.org/2009/03/six-ways-companies-mismanage-risk/ar/1


  • Public Works Projects

    Public Works Projects

    This entry is part 2 of 4 in the series The fallacies of scenario analysis

     

    It always takes longer than you expect, even when you take into account Hofstadter’s Law. (Hofstadter,1999)

    In public works and large-scale construction or engineering projects – where uncertainty mostly (or only) concerns cost – a simplified scenario analysis is often used.

    Costing Errors

    An excellent study carried out by Flyvbjerg, Holm and Buhl (Flyvbjerg, Holm & Buhl, 2002) addresses the serious questions surrounding the chronic costing errors in public works projects. The purpose was to identify the typical deviations from budget and the major causes of these deviations.

    The main findings from the study reported in their article – all highly significant and most likely conservative – are as follows:

    In 9 out of 10 transportation infrastructure projects, costs are underestimated. For a randomly selected project, the probability of actual costs being larger than estimated costs is 0.86. The probability of actual costs being lower than or equal to estimated costs is only 0.14. For all project types, actual costs are on average 28% higher than estimated costs.

    Cost underestimation:

    – exists across 20 nations and 5 continents: it appears to be a global phenomenon.
    – has not decreased over the past 70 years: no improvement in cost estimate accuracy.
    – cannot be excused by error: it seems best explained by strategic misrepresentation, i.e. the planned, systematic distortion or misstatement of facts in the budget process. (Jones, Euske, 1991)

    Demand Forecast Errors

    The demand forecasts only add more errors to the final equation (Flyvbjerg, Holm, Buhl, 2005):

    • 84 percent of rail passenger forecasts are wrong by more than ±20 percent.
    • 50 percent of road traffic forecasts are wrong by more than ±20 percent.
    • Errors in traffic forecasts are found in the 14 nations and 5 continents covered by the study.
    • Inaccuracy is constant for the 30-year period covered: no improvement over time.

    The Machiavellian Formula

    Adding the cost and demand errors to other uncertain effects, we get:

    Machiavelli's Formula:
    Overestimated revenues + Overvalued development effects – Underestimated cost – Undervalued environmental impact = Project Approval (Flyvbjerg, 2007)

    Cost Projections

    Transportation infrastructure projects do not appear to be more prone to cost underestimation than are other types of large projects like: power plants, dams, water distribution, oil and gas extraction, information technology systems, aerospace systems, and weapons systems.

    All of the findings above should be considered forms of risk. As has been shown in cost engineering research, poor risk analysis accounts for many project cost overruns.
    Two components of error in the cost estimate can easily be identified (Bertisen, 2008):

    • Economic components: these errors are the result of incorrectly forecasted exchange rates, inflation rates of unit prices, fuel prices, or other economic variables affecting the realized nominal cost. Many of these variables have positively skewed distributions. This will then feed through to positive skewness in the total cost distribution.
    • Engineering components: these relate to errors both in estimating unit prices and in the required quantities. There may also be an over- or underestimation of the contingency needed to capture excluded items. Cost and quantity errors are limited on the downside, but there is no limit to costs and quantities on the upside. For many cost and quantity items there is also a small probability of a “catastrophic event”, which would dramatically increase costs or quantities.

    When combining these factors the result is likely to be a positively skewed cost distribution, with many small and large underrun and overrun deviations (from the most likely value) joined by a few very large or catastrophic overrun deviations.

    Since the total cost (distribution) is positively skewed, expected cost can be considerably higher than the calculated most likely cost.
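    A minimal sketch of how such a positively skewed total cost distribution arises, and of the gap between the most likely and the expected cost, is given below; the cost item, its distributions and the “catastrophic event” probability are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Hypothetical cost item: nominal cost = unit price * quantity * exchange rate,
# each right-skewed, plus a small probability of a "catastrophic" add-on
price    = rng.lognormal(np.log(100), 0.15, n)
quantity = rng.lognormal(np.log(1_000), 0.20, n)
fx       = rng.lognormal(0.0, 0.10, n)
shock    = rng.binomial(1, 0.02, n) * rng.lognormal(np.log(20_000), 0.5, n)

total = price * quantity * fx + shock

# Estimate the most likely (modal) cost from a histogram
hist, edges = np.histogram(total, bins=200)
most_likely = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])

skew = ((total - total.mean()) ** 3).mean() / total.std() ** 3
print(f"most likely cost: {most_likely:,.0f}")
print(f"expected cost:    {total.mean():,.0f}")
print(f"P(cost > most likely) = {(total > most_likely).mean():.0%}")
print(f"skewness = {skew:.2f}")
```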

    We will have these findings as a backcloth when we examine the Norwegian Ministry of Finance's guidelines for assessing risk in public works (Ministry of Finance, 2008, p. 3), which take total uncertainty to be equal to the sum of systematic and unsystematic uncertainty:

    Interpreting the guidelines, we find the following assumptions and advice:

    1. Unsystematic risk cancels out when looking at large portfolios of projects.
    2. All systematic risk is perfectly correlated to the business cycle.
    3. Total cost is approximately normally distributed.

    Since total risk is equal to the sum of systematic and unsystematic risk, unsystematic risk will, by the 2nd assumption, comprise all uncertainty not explained by the business cycle. That is, it will comprise all uncertainty in the planning, mass calculations etc. and production of the project.

    It is usually in these tasks that the project's inherent risks are later revealed. Based on the studies above it is reasonable to believe that the unsystematic risk has a skewed distribution located in its entirety on the positive part of the cost axis, i.e. it will not cancel out even in a portfolio of projects.

    The 2nd assumption that all systematic risk is perfectly correlated to the business cycle is a convenient one. It opens for a simple summation of percentiles (10%/90%) for all cost variables to arrive at total cost percentiles. (see previous post in this series)

    The effect of this assumption is that the risk model becomes a perverted one, with only one stochastic variable. All the rest can be calculated from the outcomes of the “business cycle” distribution.

    Now, we know that delivery times, quality and prices for all equipment, machinery and raw materials depend on the activity level in all countries demanding or producing the same items. So even if there existed a “business cycle” for every item (and a measure for it), these cycles would not necessarily be perfectly synchronised – thus proving the assumption false.

    The 3rd assumption implies either that all individual cost distributions are “near normal” or that they are independent and identically-distributed with finite variance, so that the central limit theorem can be applied.

    However, the individual cost distributions will be the product of unit price, exchange rate and quantity, so even if the elements in the multiplication have normal distributions, the product will not have a normal distribution.

    Invoking the central limit theorem is also a no-go: since the cost elements are, by the 2nd assumption, perfectly correlated, they cannot be independent.

    All experience and every study concludes that the total cost distribution does not have a normal distribution. The cost distribution evidently is positively skewed with fat tails whereas the normal distribution is symmetric with thin tails.

    Our concerns about the wisdom of the 3rd assumption were confirmed in 2014; see: The implementation of the Norwegian Governmental Project Risk Assessment Scheme and the following articles.

    The solution to all this is to establish a proper simulation model for every large project and do the Monte Carlo simulation necessary to establish the total cost distribution, and then calculate the risks involved.

    “If we arrive, as our forefathers did, at the scene of battle inadequately equipped, incorrectly trained and mentally unprepared, then this failure will be a criminal one because there has been ample warning” — (Elliot-Bateman, 1967)

    References

    Bertisen, J., & Davis, G. A. (2008). Bias and error in mine project capital cost estimation. The Engineering Economist, 01-APR-08.

    Elliott-Bateman, M. (1967). Defeat in the East: the mark of Mao Tse-tung on war. London: Oxford University Press.

    Flyvbjerg, Bent (2007). Truth and Lies about Megaprojects. Inaugural speech, Delft University of Technology, September 26.

    Flyvbjerg, Bent, Mette K. Skamris Holm, and Søren L. Buhl (2002), “Underestimating Costs in Public Works Projects: Error or Lie?” Journal of the American Planning Association, vol. 68, no. 3, 279-295.

    Flyvbjerg, Bent, Mette K. Skamris Holm, and Søren L. Buhl (2005), “How (In)accurate Are Demand Forecasts in Public Works Projects?” Journal of the American Planning Association, vol. 71, no. 2, 131-146.

    Hofstadter, D., (1999). Gödel, Escher, Bach. New York: Basic Books

    Jones, L.R., K.J. Euske (1991).Strategic Misrepresentation in Budgeting. Journal of Public Administration Research and Theory, 1(4), 437-460.

    Ministry of Finance (Norway) (2008). Systematisk usikkerhet. Retrieved July 3, 2009, from the Concept research programme Web site: http://www.ivt.ntnu.no/bat/pa/forskning/Concept/KS-ordningen/Dokumenter/Veileder%20nr%204%20Systematisk%20usikkerhet%2011_3_2008.pdf

  • Selecting Strategy

    Selecting Strategy

    This entry is part 2 of 2 in the series Valuation

     

    This is an example of how S@R can define, analyze, visualize and help in selecting strategies for a broad range of issues: financial, operational and strategic.

    Assume that we have performed (see: Corporate-risk-analysis) simulation of corporate equity value for two different strategies (A and B). The cumulative distributions are given in the figure below.

    Since the calculation is based on a full simulation of both P&L and balance sheet, the cost of implementing the different strategies is included in the calculation; hence we can directly use the distributions as a basis for selecting the best strategy.

    cum-distr-a-and-b_strategy

    In this rather simple case we intuitively find strategy B to be the best, lying further out to the right of strategy A for all probable values of equity. However, to be able to select the best strategy from more complicated and larger sets of feasible strategies, we need a better-grounded method than mere intuition.

    The stochastic dominance approach, developed on the foundation of von Neumann and Morgenstern's expected utility paradigm (von Neumann & Morgenstern, 1953), is such a method.

    When there is no uncertainty, the maximum return criterion can be used both to rank and to select strategies. With uncertainty, however, we have to look for the strategy that maximizes the firm's expected utility.

    To specify a utility function (U) we must have a measure that uniquely identifies each strategy (business) outcome and a function that maps each outcome to its corresponding utility. However utility is purely an ordinal measure. In other words, utility can be used to establish the rank ordering of strategies, but cannot be used to determine the degree to which one is preferred over the other.

    A utility function thus measures the relative value that a firm places on a strategy outcome. Here lies a significant limitation of utility theory: we can compare competing strategies, but we cannot assess the absolute value of any of those strategies. In other words, there is no objective, absolute scale for the firm’s utility of a strategy outcome.

    Classical utility theory assumes that rational firms seek to maximize their expected utility and to choose among their strategic alternatives accordingly. Mathematically, this is expressed as:

    Strategy A is preferred to strategy B if and only if:
    E_A[U(X)] ≥ E_B[U(X)], with at least one strict inequality.

    The features of the utility function reflect the risk/reward attitudes of the firm. These same features also determine what stochastic characteristics the strategy distributions must possess if one alternative is to be preferred over another. Evaluation of these characteristics is the basis of stochastic dominance analysis (Levy, 2006).

    Stochastic dominance as a generalization of utility theory eliminates the need to explicitly specify a firm’s utility function. Rather, general mathematical statements about wealth preference, risk aversion, etc. are used to develop decision rules for selecting between strategic alternatives.

    First order stochastic dominance.

    Assuming that U’≥ 0 i.e. the firm has increasing wealth preference, strategy A is preferred to strategy B (denoted as AD1B i.e. A dominates B by 1st order stochastic dominance) if:

    E_A[U(X)] ≥ E_B[U(X)]  ↔  S_A(x) ≤ S_B(x)

    where S(x) is the strategy's distribution function and there is at least one strict inequality.

    If AD1B, then for all values x, the probability of obtaining x or a value higher than x is at least as large under A as under B.

    Sufficient rule 1:  A dominates B if Min S_A(x) ≥ Max S_B(x)   (non-overlapping)

    Sufficient rule 2:  A dominates B if S_A(x) ≤ S_B(x) for all x   (S_A ‘below’ S_B)

    Most important necessary rules:

    Necessary rule 1:  AD1B → Mean S_A > Mean S_B

    Necessary rule 2:  AD1B → Geometric mean S_A > Geometric mean S_B

    Necessary rule 3:  AD1B → Min S_A(x) ≥ Min S_B(x)

    For the case above we find that strategy B dominates strategy A – BD1A – since sufficient rule 2 for first order dominance is satisfied:

    strategy-a-and-b_strategy1
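    A minimal sketch of how sufficient rule 2 can be checked on simulated outcomes is given below; the two sample sets are hypothetical stand-ins for the strategies' simulated equity values, not the data behind the figure:

```python
import numpy as np

def first_order_dominates(a: np.ndarray, b: np.ndarray) -> bool:
    """Sufficient rule 2: A dominates B (AD1B) if S_A(x) <= S_B(x) for all x,
    with strict inequality somewhere."""
    grid = np.union1d(a, b)
    s_a = np.searchsorted(np.sort(a), grid, side="right") / a.size  # empirical CDF of A
    s_b = np.searchsorted(np.sort(b), grid, side="right") / b.size  # empirical CDF of B
    return bool(np.all(s_a <= s_b) and np.any(s_a < s_b))

# Hypothetical stand-ins for the simulated equity values of the two strategies
rng = np.random.default_rng(11)
strategy_a = rng.normal(500, 80, 20_000)
strategy_b = rng.normal(650, 80, 20_000)

print("B D1 A:", first_order_dominates(strategy_b, strategy_a))
print("A D1 B:", first_order_dominates(strategy_a, strategy_b))
```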

    And of course, since one of the sufficient conditions is satisfied, all of the necessary conditions are satisfied. So our intuition about B being the best strategy is confirmed. However, there are cases where intuition will not work:

    cum-distr_strategy

    In this case the distributions cross and there is no first order stochastic dominance:

    strategy-1-and-2_strategy

    To be able to determine the dominant strategy we have to make further assumptions about the utility function – U'' ≤ 0 (risk aversion) etc.

    N-th Order Stochastic Dominance.

    With n-th order stochastic dominance we are able to rank a large class of strategies. N-th order dominance is defined by the n-th order distribution function:

    S^1(x) = S(x),   S^n(x) = ∫_{-∞}^x S^(n-1)(u) du

    where S(x) is the strategy’s distribution function.

    Then strategy A dominates strategy B in the sense of n-order stochastic dominance – ADnB  – if:

    S_A^n(x) ≤ S_B^n(x), with at least one strict inequality, and

    E_A[U(X)] ≥ E_B[U(X)], with at least one strict inequality,

    for all U satisfying (-1)^k U^(k) ≤ 0 for k = 1, 2, …, n.

    The last assumption implies that U has positive odd derivatives and negative even derivatives:

    U’  ≥0 → increasing wealth preference

    U”  ≤0 → risk aversion

    U’’’ ≥0 → ruin aversion (skewness preference)

    For higher derivatives the economic interpretation is more difficult.

    Calculating the n-th order distribution function when you only have observations of the first order distribution from Monte Carlo simulation can be difficult. We will instead use the lower partial moments (LPM) since (Ingersoll, 1987):

    S_A^n(x) ≡ LPM_A^(n-1)(x) / (n-1)!

    Thus strategy A dominates strategy B in the sense of n-th order stochastic dominance – ADnB – if:

    LPM_A^(n-1)(x) ≤ LPM_B^(n-1)(x) for all x, with at least one strict inequality.
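    A minimal sketch of how these lower partial moments can be computed from Monte Carlo output, and the dominance check applied at successive orders, is given below; the two sample sets and the target grid are hypothetical stand-ins for the strategies' simulation results:

```python
import numpy as np

def lpm(samples: np.ndarray, targets: np.ndarray, order: int) -> np.ndarray:
    """Lower partial moment: E[ max(t - X, 0)^order ] evaluated at each target t."""
    shortfall = np.maximum(targets[:, None] - samples[None, :], 0.0)
    return (shortfall ** order).mean(axis=1)

def nth_order_dominates(a: np.ndarray, b: np.ndarray, n: int, targets: np.ndarray) -> bool:
    """A dominates B by n-th order SD if LPM_A^(n-1)(x) <= LPM_B^(n-1)(x) at every target x."""
    lpm_a, lpm_b = lpm(a, targets, n - 1), lpm(b, targets, n - 1)
    return bool(np.all(lpm_a <= lpm_b) and np.any(lpm_a < lpm_b))

# Hypothetical stand-ins: #1 has the higher mean and the lower spread, so the CDFs cross
rng = np.random.default_rng(5)
s1 = rng.normal(600, 80, 10_000)
s2 = rng.normal(560, 200, 10_000)

grid = np.linspace(min(s1.min(), s2.min()), max(s1.max(), s2.max()), 201)
for n in range(2, 6):
    print(f"order {n}: #1 dominates #2 -> {nth_order_dominates(s1, s2, n, grid)}")
```

    In this hypothetical example dominance already appears at 2nd order and, in line with the nesting result cited below, carries over to every higher order; in the article's case it took the 4th order LPMs (5th order dominance), as shown below.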

    Now we have the necessary tools for selecting the dominant strategy of strategy #1 and strategy #2. To see if we have 2nd order dominance, we calculate the first order lower partial moments – as shown in the graph below.

    2nd-order_strategy

    Since the curves of the lower partial moments still cross, both strategies are efficient, i.e. neither dominates the other. We therefore have to look further, using the 2nd order LPMs to investigate the possibility of 3rd order dominance:

    3rd-order_strategy

    However, it is only when we calculate the 4th order LPMs that we can conclude that strategy #1 dominates strategy #2 by 5th order stochastic dominance:

    5th-order_strategy

    We then have S1D5S2, and we need not look further since Yamai and Yoshiba (2002) have shown that:

    if S1DnS2, then S1Dn+1S2.

    So we end up with strategy #1 as the preferred strategy for a risk-averse firm. It is characterized by a lower coefficient of variation (0.19) than strategy #2 (0.45), a higher minimum value (160 versus 25) and a higher median value (600 versus 561). But it was not these facts alone that made strategy #1 stochastically dominant – it also has negative skewness (-0.73) against positive skewness (0.80) for strategy #2, and a lower expected value (571) than strategy #2 (648) – it was the ‘sum’ of all these characteristics.

    A digression

    It is tempting to assume that since strategy #1 stochastically dominates strategy #2 for risk-averse firms (with U'' < 0), strategy #2 must be stochastically dominant for risk-seeking firms (with U'' > 0), but this is not necessarily the case.

    However, even if strategy #2 has a larger upside than strategy #1, it can be seen from the graphs of the two strategies' upside potential ratio (Sortino, 1999):

    upside-ratio_strategy

    that if we believe the outcome will be below a minimal acceptable return (MAR) of 400, then strategy #1 has a higher minimum value and upside potential than #2 – and vice versa above 400.
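    A minimal sketch of how the upside potential ratio can be computed from simulated outcomes for a given MAR is shown below; the sample data is hypothetical:

```python
import numpy as np

def upside_potential_ratio(x: np.ndarray, mar: float) -> float:
    """Sortino's upside potential ratio: expected gain above the MAR
    divided by the downside deviation below the MAR."""
    upside = np.maximum(x - mar, 0.0).mean()
    downside_dev = np.sqrt((np.minimum(x - mar, 0.0) ** 2).mean())
    return upside / downside_dev

# Hypothetical stand-ins for the two strategies' simulated equity values
rng = np.random.default_rng(9)
s1 = rng.normal(571, 110, 50_000)
s2 = rng.lognormal(np.log(561), 0.45, 50_000)

for mar in (300, 400, 500, 600):
    print(f"MAR={mar}:  UPR #1 = {upside_potential_ratio(s1, mar):.2f}"
          f"   UPR #2 = {upside_potential_ratio(s2, mar):.2f}")
```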

    Rational firms should be risk averse below the benchmark MAR and risk neutral above it, i.e. they should have an aversion to outcomes that fall below the MAR. On the other hand, the higher the outcomes are above the MAR, the more they should like them (Fishburn, 1977). That is, firms seek upside potential with downside protection.

    We will return later in this series to how the firm's risk and opportunities can be calculated given the selected strategy.

    References

    Fishburn, P.C. (1977). Mean-Risk analysis with Risk Associated with Below Target Returns. American Economic Review, 67(2), 121-126.

    Ingersoll, J. E., Jr. (1987). Theory of Financial Decision Making. Rowman & Littlefield Publishers.

    Levy, H., (2006). Stochastic Dominance. Berlin: Springer.

    Neumann, J., & Morgenstern, O. (1953). Theory of Games and Economic Behavior. Princeton: Princeton University Press.

    Sortino, F., van der Meer, R., & Plantinga, A. (1999). The Dutch Triangle. The Journal of Portfolio Management, 26(1).

    Yamai, Y., & Yoshiba, T. (2002). Comparative Analysis of Expected Shortfall and Value-at-Risk (2): Expected Utility Maximization and Tail Risk. Monetary and Economic Studies, April, 95-115.

  • The Risk of Spreadsheet Errors

    The Risk of Spreadsheet Errors

    This entry is part 1 of 2 in the series Spreadsheet Errors

     

    Spreadsheets create an illusion of orderliness, accuracy, and integrity. The tidy rows and columns of data, instant calculations, eerily invisible updating, and other features of these ubiquitous instruments contribute to this soothing impression. The quote is taken from Ivars Peterson's MathTrek column, written back in 2005, but it still applies today. ((Peterson, Ivars. “The Risky Business of Spreadsheet Errors.” MAA Online December 19, 2005 26 Feb 2009 .))

    Over the years we have learned a good deal about spreadsheet errors; we even have a spreadsheet risk interest group (EuSpRIG) ((EuSpRIG: http://www.eusprig.org/index.htm)).

    Audits show that nearly 90% of spreadsheets contain serious errors. Code inspection experiments also show that even experienced users have a hard time finding errors, succeeding in finding only 54% of them on average.

    Panko (2009) summarized the results of seven field audits in which operational spreadsheets were examined, typically by an outsider to the organization. His results show that 94% of spreadsheets have errors and that the average cell error rate (the ratio of cells with errors to all cells with formulas) is 5.2%. ((Panko, Raymond R.. “What We Know About Spreadsheet Errors.” Spreadsheet Research (SSR. 2 16 2009. University of Hawai’i. 27 Feb 2009 . ))

    Some of the problems stem from the fact that a cell can contain any of the following: operational values, document properties, file names, sheet names, file paths, external links, formulas, hidden cells, nested IFs, macros etc., and that the workbook can contain hidden sheets and very hidden sheets.

    Add to this the reuse and recirculation of workbooks and code; after cutting and pasting information, the spreadsheet might not work the way it did before – formulas can be damaged, links can be broken, or cells can be overwritten. How many use version control and change logs? In addition, the spreadsheet is a perfect environment for perpetrating fraud due to the mixture of formulas and data.

    End-users and organizations that rely on spreadsheets generally do not fully recognize the risks of spreadsheet errors:  It is completely within the realms of possibility that a single, large, complex but erroneous spreadsheet could directly cause the accidental loss of a corporation or institution (Croll 2005)  ((Croll, Grenville J.. “The Importance and Criticality of Spreadsheets in the City of London.” Notes from Eusprig 2005 Conference . 2005. EuSpRIG. 2 Mar 2009 .))

    A very comprehensive literature review on empirical evidence of spreadsheet errors is given in the article Spreadsheet Accuracy Theory.  ((Kruck, S. E., Steven D. Sheetz. “Spreadsheet Accuracy Theory.” Journal of Information Systems Education 12(2007): 93-106.))

    EuSpRIG also publicises verified public reports with a quantified error or documented impact of spreadsheet errors. ((” Spreadsheet mistakes – news stories.” EuSpRIG. 2 Mar 2009 .))

    We will in the following use publicised data from a well documented study on spreadsheet errors. The data is the result of an audit of 50 completed and operational spreadsheets from a wide variety of sources. ((Powell, Stephen G., Kenneth R. Baker, Barry Lawson. “Errors in Operational Spreadsheets.” Tuck School of Business. November 15, 2007. Dartmouth College. 2 Mar 2009))

    Powell et al. settled on six error types:

    1. Hard-coding in a formula – one or more numbers appear in formulas
    2. Reference error – a formula contains one or more incorrect references to other cells
    3. Logic error – a formula is used incorrectly, leading to an incorrect result
    4. Copy/Paste error – a formula is wrong due to inaccurate use of copy/paste
    5. Omission error – a formula is wrong because one or more of its input cells is blank
    6. Data input error – an incorrect data input is used

    These were again grouped as Wrong Result or Poor Practice, depending on the error's effect on the calculation.

    Only three workbooks were without errors, giving a spreadsheet error rate of 94%. In the remaining 47 workbooks they found 483 instances ((An error instance is a single occurrence of one of the six errors in their taxonomy)) of errors; 281 giving a wrong result and 202 involving poor practice.

    cell_errors_instances

    The distribution over the different types of error is given in the instances table. It is worth noting that for poor practice, hard-coding errors were the most common, while incorrect references and incorrectly used formulas were the most numerous errors giving a wrong result.

    cell_errors_cells

    The 483 instances involved 4,855 error cells, which with 270,722 cells audited gives a cell error rate of 1.79%. The corresponding distribution of errors is given in the cells table. The cell error rate (CER) for wrong result is 0.87%, while the CER for poor practice is 1.79%.

    In the following graph we have plotted the cell error rates against the proportion of spreadsheets having that error rate (zero CER is excluded). We can see that most spreadsheets have a low CER and only a few have a high CER. This is more evident for wrong result than for poor practice.

    cell_error_rates_frequencie

    If we accumulate the above frequencies and include the spreadsheets with zero errors, we get the “probability distributions” below. We find that 60% of the spreadsheets have a wrong-result CER of 1% or more and that only 10% have a CER of 5% or more.

    cell_error_rates_accumulate

    The high percentage of spreadsheets having errors is due to the fact that bottom-line values are computed through long cascades of formula cells. Because error rates multiply along cascades of subtasks in tasks that contain many sequential operations, the fundamental equation for the bottom-line error rate is based on a memoryless geometric distribution over cell errors ((Lorge, Irving, Herbert Solomon. “Two Models of Group Behavior in the Solution of Eureka-Type Problems.” Psykometrika 20(1955): 139-148. )):

    E=1-(1-e)^n

    Here, E is the bottom-line error rate, e is the cell error rate and n is the number of cells in the cascade. E indicates the probability of an incorrect result in the last cascade cell, given the probability of an error in each cascade cell is equal to the cell error rate. ((Bregar, Andrej. “Complexity Metrics for Spreadsheet Models.” Proceedings ofEuSpRIG 2004. http://www.eusprig.org/. 1 Mar 2009 .))

    In the figure below we have used the CER for wrong result (0.87%) and for poor practice (1.79%) to calculate the probability of a corresponding worksheet error, given the cascade length. For poor practice, a calculation cascade of 100 cells gives an error probability of 84%, and at about 165 cells it is 95%. For wrong result, 100 cells give an error probability of 58%, and at 343 cells it is 95%.

    cascading-probability
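    A minimal sketch of the calculation behind the figure, using the formula above with the two cell error rates:

```python
import math

def bottom_line_error(e: float, n: int) -> float:
    """E = 1 - (1 - e)^n: probability of at least one error in a cascade of n formula cells."""
    return 1.0 - (1.0 - e) ** n

def cells_needed(e: float, target: float) -> int:
    """Smallest cascade length whose bottom-line error probability reaches the target."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - e))

for label, e in (("wrong result ", 0.0087), ("poor practice", 0.0179)):
    print(f"{label}: E(100 cells) = {bottom_line_error(e, 100):.0%}, "
          f"95% reached by {cells_needed(e, 0.95)} cells")
```

    With the strict 95% threshold the poor-practice cascade length comes out at 166 cells, in line with the roughly 165 cells read off the figure.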

    Now, if we consider a net present value calculation over a 10-year forecast period in a valuation problem, it will easily have a cascade of more than 343 cells and thus, with high probability, contain errors.

    This is why S@R uses programming languages for simulation models. Of course models like that will also have errors, but they will not mix data and code, quality control is easier, and they will have columnar consistency, be protected by being compiled, and have numerous intrinsic error checks, data entry controls and validation checks (see: Who we are).

    Efficient computing tools are essential for statistical research, consulting, and teaching. Generic packages such as Excel are not sufficient even for the teaching of statistics, let alone for research and consulting. (American Statistical Association)


  • What we do; Predictive and Prescriptive Analytics

    What we do; Predictive and Prescriptive Analytics

    This entry is part 1 of 3 in the series What We Do

     

    Analytics is the discovery and communication of meaningful patterns in data. It is especially valuable in areas rich with recorded information – as in all economic activities. Analytics relies on the simultaneous application of statistical methods, simulation modeling and operations research to quantify performance.

    Prescriptive analytics goes beyond descriptive, diagnostic and predictive analytics by being able to recommend specific courses of action and show the likely outcome of each decision.

    Predictive analytics will tell what probably will happen, but will leave it up to the client to figure out what to do with it.

    Prescriptive analytics will also tell what probably will happen, but in addition when it probably will happen and why it likely will happen – and thus how to take advantage of this predicted future. Since there is always more than one course of action, prescriptive analytics has to include: the predicted consequences of actions, an assessment of the value of those consequences, and suggestions of the actions giving the highest equity value for the company.
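    As a minimal sketch of that last step – predicting consequences, valuing them and suggesting an action – the example below ranks two hypothetical candidate actions by simulated expected equity value and downside risk; all names and figures are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

def equity_value(margin_factor: float, volatility: float) -> np.ndarray:
    """Hypothetical link from a course of action to simulated equity value."""
    return rng.normal(1_000 * margin_factor, 1_000 * volatility, n)

# Two made-up candidate actions with different risk/return profiles
candidates = {
    "expand capacity": equity_value(1.10, 0.35),
    "improve margins": equity_value(1.05, 0.15),
}

floor = 800  # hypothetical minimum acceptable equity value
for name, values in candidates.items():
    print(f"{name}: expected = {values.mean():,.0f}, "
          f"P(value < {floor}) = {(values < floor).mean():.0%}")

best = max(candidates, key=lambda k: candidates[k].mean())
print("highest expected equity value:", best)
```

    Ranking by expected value alone favours the riskier action here; reporting the downside probability alongside it is what lets the decision maker trade the two off.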

    By employing simulation modeling (Monte Carlo methods) we can give answers – by probability statements – to the critical question at the top of the value staircase.

     

    Prescriptive-analytics

     

    This feature is a basic element of the S@R balance simulation model, where the Monte Carlo simulation can be stopped at any point on the probability distribution for company value (i.e. at a very high or very low company value), giving a full set of reports – P&L, balance sheet etc. – enabling a full post-mortem analysis: what it was that happened and why it happened.

    Different courses of action to repeat or avoid that result with high probability can then be researched and assessed. The client-specific EBITDA model will capture relationships among many factors, allowing simultaneous assessment of the risk or potential associated with a particular set of conditions and guiding decision making for candidate transactions. Even the language we use to write the models is specially developed for making decision support systems.

    Our methods also include data and information visualization, to clearly and effectively communicate both information and acquired knowledge – reinforcing comprehension and cognition.

    Firms may thus fruitfully apply analytics to their business data to describe, predict, and improve their business performance.