Valuation – Strategy @ Risk

Series: Valuation

  • Valuation as a strategic tool

    Valuation as a strategic tool

    This entry is part 1 of 2 in the series Valuation

     

    Valuation is usually done only when buying or selling a company (see: probability of gain and loss). However, it is a versatile tool for assessing issues such as risk and strategy, in both operations and finance.

    The risk and strategy element is often not evident unless the valuation is executed as a Monte Carlo simulation, giving the probability distribution for equity value (or the value of the entity). In this new series of posts we will take a look at how this distribution can be used.

    By strategy we will in the following mean a plan of action designed to achieve a particular goal. The plan may involve issues across the finance and operations of the company: debt, equity, taxes, currency, markets, sales, production etc. The goal is usually to move the value distribution to the right (increasing value), but it may well be to shorten the left tail (reducing risk) or to increase the upside by lengthening the right tail.

    There are a variety of definitions of risk. In general, risk can be described as "uncertainty of loss" (Denenberg, 1964), "uncertainty about loss" (Mehr & Cammack, 1961) or "uncertainty concerning loss" (Rabel, 1968). Greene defines financial risk as the "uncertainty as to the occurrence of an economic loss" (Greene, 1962).

    Risk can also be described as “measurable uncertainty” when the probability of an outcome is possible to calculate (is knowable), and uncertainty, when the probability of an outcome is not possible to determine (is unknowable) (Knight, 1921). Thus risk can be calculated, but uncertainty only reduced.

    In our context some uncertainty is objectively measurable, like down time, error rates, operating rates, production time, seat factor, turnaround time etc. For others, like sales, interest rates, inflation rates etc., the uncertainty can only be measured subjectively.

    “[Under uncertainty] there is no scientific basis on which to form any calculable probability whatever. We simply do not know. Nevertheless, the necessity for action and for decision compels us as practical men to do our best to overlook this awkward fact and to behave exactly as we should if we had behind us a good Benthamite calculation of a series of prospective advantages and disadvantages, each multiplied by its appropriate probability waiting to be summed.” (John Maynard Keynes, 1937)

    On this basis we will proceed, using managers' best guesses about the range of possible values and the most likely value for production-related variables, and market consensus etc. for possible outcomes of variables like inflation and interest rates. From these we will generate appropriate distributions (log-normal) for sales, prices etc. For investments we will use triangular distributions to avoid long tails. Where most likely values are hard to guesstimate or do not exist, we will use rectangular distributions.
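    As a minimal sketch (assuming Python with numpy), such input distributions could be drawn like this; all parameter values below are illustrative guesses, not figures from the analysis:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000  # number of Monte Carlo trials

# Log-normal for sales: mean/sigma on the log scale derived from a
# best-guess level of 100; all numbers here are illustrative only.
sales = rng.lognormal(mean=np.log(100), sigma=0.20, size=n)

# Triangular for investments: min / most likely / max keeps tails short.
investment = rng.triangular(left=80, mode=100, right=130, size=n)

# Rectangular (uniform) where no most likely value can be guesstimated.
fx_rate = rng.uniform(low=0.9, high=1.1, size=n)
```

    Each array then feeds one trial of the P&L and balance simulation per element.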

    Benoit Mandelbrot (Mandelbrot & Hudson, 2006) and Nassim Taleb (Taleb, 2007) have rightly criticized the economics profession for overuse of the normal distribution – the bell curve. The argument is that it has too thin and short tails. It will thus underestimate the possibility of far-out extremes – that is, low probability events with high impact (Black Swans).

    Since we use Monte Carlo simulation we can use any distribution to represent the possible outcomes of a variable, so using the normal distribution for its statistical nicety is not necessary. We can even construct distributions that have the features we look for, without having to describe them mathematically.
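    One way such a hand-shaped distribution could be sampled is inverse-transform sampling from a piecewise-linear cumulative curve; a sketch, assuming numpy and hypothetical expert-assessed points:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expert-assessed points on a cumulative curve:
# outcome values and the probability of being at or below each value.
values = np.array([-30.0, 0.0, 50.0, 90.0, 150.0])
cum_prob = np.array([0.0, 0.10, 0.50, 0.85, 1.0])

# Inverse-transform sampling: draw u ~ U(0, 1) and read the value off
# the (linearly interpolated) inverse of the cumulative curve.
u = rng.uniform(size=10_000)
samples = np.interp(u, cum_prob, values)
```

    Any curve the experts can draw can be encoded as such value/probability pairs.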

    However, using normal distributions for some variables and log-normal for others etc. in a value simulation will not give you a normally or log-normally distributed equity value. A number of things can happen in the forecast period: adverse sales, interest or currency rates, incurred losses, new equity called etc. Together with tax, legal and IFRS rules etc., the system will not be linear and is much more complex to calculate than mere addition, subtraction or multiplication of probability distributions.

    We will in the following adhere to uncertainty and loss, where a loss is an event where the calculated equity value is less than the book value of equity or, in the case of M&A, less than the price paid.

    Assume that we have calculated the value distribution (cumulative) for two different strategies. The distribution for current operations (blue curve) has a shape showing considerable downside risk (left tail) and limited upside potential, giving a mean equity value of $92M with a minimum of $-28M and a maximum of $150M. The span of possible outcomes, and the fact that the value can be negative, compelled the board to look for new strategies reducing downside risk.

    strategy1

    They come up with strategy #1 (green curve), which to a risk-averse board is a good proposition: it reduces downside risk by substantially shortening the left tail, increases the expected value of equity by moving the distribution to the right, and reduces the overall uncertainty by producing a steeper curve. In numbers: the minimum value was raised to $68M, the mean value of equity was increased to $112M and the coefficient of variation was reduced from 30% to 14%. The upside potential increased somewhat, but not much.

    To a risk-seeking board, strategy #2 (red curve) would be a better proposition: the right tail has been stretched out, giving a maximum value of $241M, but so has the left tail, giving a minimum value of $-163M, increasing the event space and the coefficient of variation to 57%. The mean value of equity is slightly reduced, to $106M.
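    The summary numbers quoted above (mean, minimum, maximum, coefficient of variation, probability of loss) are straightforward to compute from the simulated equity values; a sketch, assuming numpy (the function name is ours):

```python
import numpy as np

def summarize(equity, book_value=0.0):
    """Summary statistics for a simulated equity-value distribution."""
    equity = np.asarray(equity, dtype=float)
    mean = equity.mean()
    return {
        "mean": mean,
        "min": equity.min(),
        "max": equity.max(),
        # Coefficient of variation: standard deviation relative to mean.
        "cv": equity.std(ddof=1) / mean,
        # Probability of loss: equity value ending below book value.
        "p_loss": (equity < book_value).mean(),
    }
```

    Applied to each strategy's sample, this yields the figures used to compare the curves.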

    So how could the strategies have been brought about? Strategy #1 could involve the introduction of long-term energy contracts taking advantage of today's low energy cost. Strategy #2 could introduce a new product with high initial investments and considerable uncertainty about market acceptance.

    As we can now see, the shape of the value distribution gives a lot of information about the company's risks and opportunities. Given the board's risk appetite, it should be fairly simple to select between strategies just by looking at the curves. But what if it is not obvious which is best? We will return later in this series to answer that question, and to how the company's risks and opportunities can be calculated.

    References

    Denenberg, H., et al. (1964). Risk and Insurance. Englewood Cliffs, NJ: Prentice-Hall, Inc.
    Greene, M. R. (1962). Risk and Insurance. Cincinnati, OH: South-Western Publishing Co.
    Keynes, J. M. (1937). The General Theory of Employment. Quarterly Journal of Economics, 51(2).
    Knight, F. H. (1921). Risk, Uncertainty and Profit. Boston, MA: Houghton Mifflin Co.
    Mandelbrot, B., & Hudson, R. (2006). The (Mis)Behavior of Markets. Cambridge: Perseus Books Group.
    Mehr, R. I., & Cammack, E. (1961). Principles of Insurance (3rd ed.). Richard D. Irwin, Inc.
    Rabel, W. H. (1968). Further comment. Journal of Risk and Insurance, 35(4), 611-612.
    Taleb, N. (2007). The Black Swan. New York: Random House.

  • Selecting Strategy

    Selecting Strategy

    This entry is part 2 of 2 in the series Valuation

     

    This is an example of how S&R can define, analyze, visualize and help in selecting strategies for a broad range of issues: financial, operational and strategic.

    Assume that we have performed a simulation (see: Corporate-risk-analysis) of corporate equity value for two different strategies (A and B). The cumulative distributions are given in the figure below.

    Since the calculation is based on a full simulation of both the P&L and the balance sheet, the cost of implementing the different strategies is included in the calculation; hence we can use the distributions directly as a basis for selecting the best strategy.

    cum-distr-a-and-b_strategy

    In this rather simple case, we intuitively find strategy B to be the best, being further to the right of strategy A for all probable values of equity. However, to be able to select the best strategy from larger and more complicated sets of feasible strategies, we need a more well-grounded method than mere intuition.

    The stochastic dominance approach, developed on the foundation of von Neumann and Morgenstern's expected utility paradigm (von Neumann & Morgenstern, 1953), is such a method.

    When there is no uncertainty, the maximum return criterion can be used both to rank and to select strategies. With uncertainty, however, we have to look for the strategy that maximizes the firm's expected utility.

    To specify a utility function (U) we must have a measure that uniquely identifies each strategy (business) outcome and a function that maps each outcome to its corresponding utility. However utility is purely an ordinal measure. In other words, utility can be used to establish the rank ordering of strategies, but cannot be used to determine the degree to which one is preferred over the other.

    A utility function thus measures the relative value that a firm places on a strategy outcome. Here lies a significant limitation of utility theory: we can compare competing strategies, but we cannot assess the absolute value of any of those strategies. In other words, there is no objective, absolute scale for the firm’s utility of a strategy outcome.

    Classical utility theory assumes that rational firms seek to maximize their expected utility and to choose among their strategic alternatives accordingly. Mathematically, this is expressed as:

    Strategy A is preferred to strategy B if and only if:
    EAU(X) ≥ EBU(X) , with at least one strict inequality.

    The features of the utility function reflect the risk/reward attitudes of the firm. These same features also determine what stochastic characteristics the strategy distributions must possess if one alternative is to be preferred over another. Evaluation of these characteristics is the basis of stochastic dominance analysis (Levy, 2006).

    Stochastic dominance as a generalization of utility theory eliminates the need to explicitly specify a firm’s utility function. Rather, general mathematical statements about wealth preference, risk aversion, etc. are used to develop decision rules for selecting between strategic alternatives.

    First order stochastic dominance.

    Assuming that U′ ≥ 0, i.e. the firm has increasing wealth preference, strategy A is preferred to strategy B (denoted as AD1B, i.e. A dominates B by 1st order stochastic dominance) if:

    EAU(X) ≥ EBU(X)  ↔  SA(x) ≤ SB(x)

    where S(x) is the strategy's distribution function and there is at least one strict inequality.

    If AD1B, then for all values x, the probability of obtaining x or a value higher than x is at least as large under A as under B.

    Sufficient rule 1:   A dominates B if Min SA(x) ≥ Max SB(x)   (non-overlapping distributions)

    Sufficient rule 2:   A dominates B if SA(x) ≤ SB(x)  for all x   (SA ‘below’ SB)

    Most important Necessary rules:

    Necessary rule 1:  AD1B → Mean SA > Mean SB

    Necessary rule 2:  AD1B → Geometric Mean SA > Geometric Mean SB

    Necessary rule 3:  AD1B → Min SA(x) ≥  Min SB(x)
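    Given two samples of simulated equity values, sufficient rule 2 can be checked directly on the empirical distribution functions; a sketch, assuming numpy (the function name is ours):

```python
import numpy as np

def first_order_dominates(a, b):
    """Check sufficient rule 2: A dominates B by 1st order stochastic
    dominance if S_A(x) <= S_B(x) for all x, with at least one strict
    inequality, using empirical CDFs of two simulated samples."""
    a, b = np.sort(a), np.sort(b)
    grid = np.union1d(a, b)
    # Empirical CDF: share of simulated outcomes at or below each x.
    s_a = np.searchsorted(a, grid, side="right") / len(a)
    s_b = np.searchsorted(b, grid, side="right") / len(b)
    return bool(np.all(s_a <= s_b) and np.any(s_a < s_b))
```

    Evaluating both CDFs on the union of the two samples ensures no crossing point between observations is missed.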

    For the case above we find that strategy B dominates strategy A – BD1A  – since the sufficient rule 2 for first order dominance is satisfied:

    strategy-a-and-b_strategy1

    And of course, since one of the sufficient conditions is satisfied, all of the necessary conditions are satisfied. So our intuition about B being the best strategy is confirmed. However, there are cases where intuition will not work:

    cum-distr_strategy

    In this case the distributions cross and there is no first order stochastic dominance:

    strategy-1-and-2_strategy

    To be able to determine the dominant strategy we have to make further assumptions about the utility function: U″ ≤ 0 (risk aversion) etc.

    N-th Order Stochastic Dominance.

    With n-th order stochastic dominance we are able to rank a large class of strategies. N-th order dominance is defined by the n-th order distribution function:

    S^1(x) = S(x),   S^n(x) = ∫_{-∞}^{x} S^{n-1}(u) du

    where S(x) is the strategy’s distribution function.

    Then strategy A dominates strategy B in the sense of n-th order stochastic dominance – ADnB – if:

    S^n_A(x) ≤ S^n_B(x) for all x, with at least one strict inequality, and

    EAU(X) ≥ EBU(X), with at least one strict inequality,

    for all U satisfying (-1)^k U^(k) ≤ 0 for k = 1, 2, …, n.

    The last assumption implies that U has positive odd derivatives and negative even derivatives:

    U’  ≥0 → increasing wealth preference

    U”  ≤0 → risk aversion

    U’’’ ≥0 → ruin aversion (skewness preference)

    For higher derivatives the economic interpretation is more difficult.

    Calculating the n-th order distribution function when you only have observations of the first order distribution from Monte Carlo simulation can be difficult. We will instead use the lower partial moments (LPM) since (Ingersoll, 1987):

    S^n_A(x) ≡ LPM^{n-1}_A(x) / (n-1)!

    Thus strategy A dominates strategy B in the sense of n-th order stochastic dominance – ADnB – if:

    LPM^{n-1}_A(x) ≤ LPM^{n-1}_B(x) for all x, with at least one strict inequality.
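    A sketch of how this LPM test could be run on two Monte Carlo samples, assuming numpy (function names are ours; the 1/(n-1)! scaling is omitted since it cancels in the comparison):

```python
import numpy as np

def lpm(sample, targets, order):
    """Lower partial moments E[max(x - X, 0)^order] at each target x,
    estimated from a Monte Carlo sample."""
    sample = np.asarray(sample, dtype=float)
    targets = np.asarray(targets, dtype=float)
    shortfall = np.maximum(targets[:, None] - sample[None, :], 0.0)
    return (shortfall ** order).mean(axis=1)

def nth_order_dominates(a, b, n):
    """A dominates B by n-th order stochastic dominance (n >= 2) if
    LPM^{n-1}_A(x) <= LPM^{n-1}_B(x) for all x, one strictly."""
    grid = np.union1d(a, b)
    lpm_a, lpm_b = lpm(a, grid, n - 1), lpm(b, grid, n - 1)
    return bool(np.all(lpm_a <= lpm_b) and np.any(lpm_a < lpm_b))
```

    Raising n until one strategy dominates reproduces the stepwise search carried out below.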

    Now we have the necessary tools for selecting the dominant strategy of strategies #1 and #2. To see if we have 2nd order dominance, we calculate the first order lower partial moments, as shown in the graph below.

    2nd-order_strategy

    Since the curves of the lower partial moments still cross, both strategies are efficient, i.e. neither dominates the other. We therefore have to look further, using the 2nd order LPMs to investigate the possibility of 3rd order dominance:

    3rd-order_strategy

    However, it is only when we calculate the 4th order LPMs that we can conclude that strategy #1 dominates strategy #2 by 5th order stochastic dominance:

    5th-order_strategy

    We then have S1D5S2, and we need not look further, since Yamai and Yoshiba (2002) have shown that:

    If S1DnS2, then S1Dn+1S2.

    So we end up with strategy #1 as the preferred strategy for a risk-averse firm. It is characterized by a lower coefficient of variation (0.19) than strategy #2 (0.45), a higher minimum value (160 versus 25) and a higher median value (600 versus 561). But it was not these facts alone that made strategy #1 stochastically dominant, since it also has negative skewness (-0.73) against positive skewness (0.80) for strategy #2 and a lower expected value (571) than strategy #2 (648); it is the 'sum' of all these characteristics.

    A digression

    It is tempting to assume that since strategy #1 stochastically dominates strategy #2 for risk-averse firms (with U″ < 0), strategy #2 must be stochastically dominant for risk-seeking firms (with U″ > 0), but this is not necessarily the case.

    However, even if strategy #2 has a larger upside than strategy #1, it can be seen from the graphs of the two strategies' upside potential ratio (Sortino, 1999) that if we believe the outcome will fall below a minimal acceptable return (MAR) of 400, strategy #1 has a higher minimum value and upside potential than #2, and vice versa above 400.

    upside-ratio_strategy

    Rational firms should be risk averse below the benchmark MAR and risk neutral above it, i.e. they should have an aversion to outcomes that fall below the MAR, while liking outcomes above the MAR more the higher they are (Fishburn, 1977). In other words, firms seek upside potential with downside protection.
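    The upside potential ratio used here can be estimated from a simulated sample as the expected gain above the MAR divided by the downside deviation below it; a sketch, assuming numpy (the function name is ours):

```python
import numpy as np

def upside_potential_ratio(sample, mar):
    """Sortino's upside potential ratio: expected outcome above the
    minimal acceptable return (MAR) over downside deviation below it."""
    x = np.asarray(sample, dtype=float)
    upside = np.maximum(x - mar, 0.0).mean()
    downside = np.sqrt((np.maximum(mar - x, 0.0) ** 2).mean())
    return upside / downside
```

    Evaluating this for each strategy over a range of MAR values gives curves like those in the figure.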

    We will return later in this series to how the firm's risk and opportunities can be calculated given the selected strategy.

    References

    Fishburn, P. C. (1977). Mean-Risk Analysis with Risk Associated with Below-Target Returns. American Economic Review, 67(2), 121-126.

    Ingersoll, J. E., Jr. (1987). Theory of Financial Decision Making. Rowman & Littlefield Publishers.

    Levy, H. (2006). Stochastic Dominance. Berlin: Springer.

    von Neumann, J., & Morgenstern, O. (1953). Theory of Games and Economic Behavior. Princeton: Princeton University Press.

    Sortino, F., van der Meer, R., & Plantinga, A. (1999). The Dutch Triangle. The Journal of Portfolio Management, 26(1).

    Yamai, Y., & Yoshiba, T. (2002). Comparative Analysis of Expected Shortfall and Value-at-Risk (2): Expected Utility Maximization and Tail Risk. Monetary and Economic Studies, April, 95-115.