Balance sheet and P&L simulation – Page 4 – Strategy @ Risk

Category: Balance sheet and P&L simulation

  • Top Ten Concerns of CFO’s – May 2009

    Top Ten Concerns of CFO’s – May 2009

A poll of more than 1200 senior finance executives by CFO Europe together with Tilburg and Duke University ranks the top ten external and internal concerns in Europe, Asia and America (Karaian, 2009).

    cfo_europe_top_ten1

High on the list in all regions we find the external concerns: consumer demand, interest rates, currency volatility and competition.

Among the internal concerns, the ability to forecast results ranked highest, together with working capital management and balance sheet weakness. These are concerns that balance sheet simulation addresses, with the purpose of calculating the effects of different strategies. Add the uncertainty of future currency and interest rates, demand and competition, and you have all the ingredients calling for a stochastic simulation model.

The risk that “now” has surfaced should compel more managers to look into the risk inherent in their operations. Even if you can’t plan for an uncertain future, you can prepare for what it might bring.

    References

    Karaian, Jason (2009, May). Top Ten Concerns of CFO’s. CFO Europe, 12(1), 10-11.

  • The Probability of Bankruptcy

    The Probability of Bankruptcy

    This entry is part 3 of 4 in the series Risk of Bankruptcy

     

In the simulation we have, for every year, calculated all four metrics and, over the 250 runs, their mean and standard deviation. All metrics are thus based on the same data set. During the forecast period the company invested heavily, financed partly by equity and partly by loans. The operations admittedly give a low but fairly stable return on assets. The company was, however, never at any time in need of a capital infusion to avoid insolvency. Since we now “know” the future, we can judge the metrics’ ability to predict bankruptcy.

A good metric should have a low probability of rejecting a true hypothesis of bankruptcy (a low false positive rate) and a high probability of rejecting a false hypothesis of bankruptcy (that is, a low false negative rate).

    In the figures below the more or less horizontal curve gives the most likely value of the metric, while the vertical red lines indicate the 90% event space. By visual inspection of the area covered by the red lines we can get an indication of the false negative and false positive rate.
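Since each forecast year produces a full set of simulated metric values, the visual indication can be made precise by counting the outcomes that fall on the wrong side of the distress cut-off. A minimal sketch in Python – the function and the illustrative numbers are ours, not taken from the model:

```python
def misclassification_rate(outcomes, distress_cutoff, firm_is_healthy=True):
    """Fraction of simulated metric outcomes landing on the wrong side
    of the distress cut-off, given what we 'know' about the firm.
    For a healthy firm, every outcome below the cut-off is a wrong
    bankruptcy signal (a 'false negative' in the terms used here)."""
    if firm_is_healthy:
        wrong = [z for z in outcomes if z < distress_cutoff]
    else:
        wrong = [z for z in outcomes if z >= distress_cutoff]
    return len(wrong) / len(outcomes)

# Illustrative draws standing in for one year's 250 simulated Z-scores
z_draws = [2.5, 1.2, 3.1, 0.9, 2.8, 1.7, 2.2, 3.4]
rate = misclassification_rate(z_draws, distress_cutoff=1.8)
```

Applied per forecast year, this gives the false negative rate curve that the figures only hint at.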

    The Z-Index shows an increase over time in the probability of insolvency, but the probability is very low for all years in the forecast period. The most striking effect is the increase in variance as we move towards the end of the simulated period. This is caused by the fact that uncertainty is “accumulated” over the forecast period. However, according to the Z-index, this company will not be endangered inside the 15 year horizon.

    z-index_time_serie

    In our case the Z-Index correctly identifies the probability of insolvency as small. By inspecting the yearly outcomes represented by the vertical lines we also find an almost zero false negative rate.

The Z-score metrics tell a different story. The Z’’-score starts in the grey area and eventually ends up in the distress zone. The other two put the company in the distress zone for the whole forecast period.

    z-scores_time_series

Since the distress zone for the Z-score is below 1.8, a visual inspection of the area covered by the red lines indicates that most of the outcomes fall in the distress zone. The Z-score metric in this case commits type II errors by giving false negative judgements. However, it is not clear what this means – only that the company in some respects resembles companies that have gone bankrupt.

    z-score_time_serie

If we look at the Z metrics for the individual years we find that the Z-score has values from minus two to plus three; in fact it has a coefficient of variation ranging from 300% to 500%. In addition there is very little evidence of the expected cumulative effect.

    z-coeff-of-var

The other two metrics (Z’ and Z’’) show much less variation and the expected cumulative effect. The Z’-score outcomes fall entirely in the distress zone, giving a 100% false negative rate.

    z-score_time_serie1

    The Z’’-score outcome falls mostly in the distress zone below 1.1, but more and more falls in the grey area as we move forward in time. If we combine the safe zone with the grey we get a much lower false negative rate than for both the Z and the Z’ score.

    z-score_time_serie2

    It is difficult to draw conclusions from this exercise, but it points to the possibility of high false negative rates for the Z metrics. Use of ratios in assessing a company’s performance is often questionable and a linear metric based on a few such ratios will obviously have limitations. The fact that the original sample consisted of the same number of healthy and bankrupt companies might also have contributed to a bias in the discriminant coefficients. In real life the failure rate is much lower than 50%!

  • Predicting Bankruptcy

    Predicting Bankruptcy

    This entry is part 2 of 4 in the series Risk of Bankruptcy

     

    The Z-score formula for predicting bankruptcy was developed in 1968 by Edward I. Altman. The Z-score is not intended to predict when a firm will file a formal declaration of bankruptcy in a district court. It is instead a measure of how closely a firm resembles other firms that have filed for bankruptcy.

The Z-score is a classification method using a multivariate discriminant function that measures corporate financial distress and predicts the likelihood of bankruptcy within two years. ((Altman, Edward I., “Financial Ratios, Discriminant Analysis and the Prediction of Corporate Bankruptcy”. Journal of Finance, (September 1968): pp. 589-609.))

Others, like Springate ((Springate, Gordon L.V., “Predicting the Possibility of Failure in a Canadian Firm”. Unpublished M.B.A. Research Project, Simon Fraser University, January 1978.)), Fulmer ((Fulmer, John G. Jr., Moon, James E., Gavin, Thomas A., Erwin, Michael J., “A Bankruptcy Classification Model For Small Firms”. Journal of Commercial Bank Lending (July 1984): pp. 25-37.)) and the CA-SCORE model ((“C.A. – Score, A Warning System for Small Business Failures”, Bilanas (June 1987): pp. 29-31.)) have later followed in Altman’s track, using step-wise multiple discriminant analysis to evaluate the ability of a large number of financial ratios to discriminate between corporate future failures and successes.

Since Altman’s discriminant function is only linear in the explanatory variables, there have been a number of attempts to capture non-linear relations through other types of models ((Berg, Daniel. “Bankruptcy Prediction by Generalized Additive Models.” Statistical Research Report. January 2005. Dept. of Math. University of Oslo. 20 Mar 2009 <http://www.math.uio.no/eprint/stat_report/2005/01-05.pdf>.)) ((Dakovic, Rada, Claudia Czado, Daniel Berg. Bankruptcy prediction in Norway: a comparison study. June 2007. Dept. of Math. University of Oslo. 20 Mar 2009 <http://www.math.uio.no/eprint/stat_report/2007/04-07.pdf>.)). Even if some of these models show somewhat better predictive ability, we will use the better known Z-score model in the following.

Studies measuring the effectiveness of the Z-score claim the model to be accurate with >70% reliability. Altman found that about 95% of the bankrupt firms were correctly classified as bankrupt, and roughly 80% of the sick, non-bankrupt firms were correctly classified as non-bankrupt ((Altman, Edward I.. “Revisiting Credit Scoring Models in a Basel 2 Environment.” Finance Working Paper Series. May 2002. Stern School of Business. 20 Mar 2009 <http://w4.stern.nyu.edu/finance/docs/WP/2002/html/wpa02041.html>.)). However, others find that the Z-score tends to misclassify the non-bankrupt firms ((Ricci, Cecilia Wagner. “Bankruptcy Prediction: The Case of the CLECS.” Mid-American Journal of Business 18(2003): 71-81.)).

    The Z-score combines four or five common business ratios using a linear discriminant function to determine the regions with high likelihood of bankruptcy. The discriminant coefficients (ratio value weights) were originally based on data from publicly held manufacturers, but have since been modified for private manufacturing, non-manufacturing and service companies.

The original data sample consisted of 66 firms, half of which had filed for bankruptcy under Chapter 7. All businesses in the database were manufacturers, and small firms with assets of less than $1 million were eliminated.

    The advantage of discriminant analysis is that many characteristics can be combined into a single score. A low score implies membership in one group, a high score implies membership in the other group, and a middling score causes uncertainty as to which group the subject belongs.

    The original score was as follows:

Z = 1.2*WC/TA + 1.4*RE/TA + 3.3*EBIT/TA + 0.6*ME/BL + 0.999*S/TA
    where:

WC/TA = Working Capital / Total Assets
RE/TA = Retained Earnings / Total Assets
EBIT/TA = EBIT / Total Assets
S/TA = Sales / Total Assets
ME/BL = Market Value of Equity / Book Value of Total Liabilities
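As a sketch, the original discriminant function and the commonly cited zone cut-offs for it (distress below 1.81, safe above 2.99) can be written in Python; the balance sheet numbers below are purely illustrative:

```python
def altman_z(wc, re, ebit, sales, mve, ta, tl):
    """Original (1968) Z-score for publicly held manufacturers.
    wc: working capital, re: retained earnings, mve: market value
    of equity, ta: total assets, tl: book value of total liabilities."""
    return (1.2 * wc / ta + 1.4 * re / ta + 3.3 * ebit / ta
            + 0.6 * mve / tl + 0.999 * sales / ta)

def zone(z, distress=1.81, safe=2.99):
    """Risk zone for the original Z-score."""
    if z < distress:
        return "distress"
    return "safe" if z > safe else "grey"

z = altman_z(wc=50, re=100, ebit=30, sales=400, mve=300, ta=500, tl=200)
# z is about 2.30, placing this illustrative firm in the grey zone
```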

    From about 1985 onwards, the Z-scores have gained acceptance by auditors, management accountants, courts, and database systems used for loan evaluation. It has been used in a variety of contexts and countries, but was designed originally for publicly held manufacturing companies with assets of more than $1 million. Later revisions take into account the book value of privately held shares, and the fact that turnover ratios vary widely in non-manufacturing industries:

    1. Z-score for publicly held Manufacturers
    2. Z’-score for private Firms
    3. Z’’-score for Manufacturers, Non-Manufacturer Industrials & Emerging Market Credits

The estimated discriminant coefficients for the different models are given in the following table: [Table=3] and the borders of the different regions – risk zones – are given in the table below. [Table=4] In the following calculations we will use the estimated value of equity as a proxy for market capitalization. Actually it is the other way around, since market capitalization is a guesstimate of the intrinsic equity value.

In our calculations the Z-score metrics will become stochastic variables, with distributions derived both from the operational input distributions for sales, prices, costs etc. and from the distributions for financial variables like the risk-free interest rate, inflation etc. The figures below are taken from the fifth year in the simulation, to be comparable with the previous Z-index calculation that gave a very low probability of insolvency.

We have in the following calculated all three Z metrics, even though only the Z-score fits the company description.

    z-score

Using the Z-score metric we find that the company with high probability will be found in the distress zone – it can even have a negative Z-score. The latter is due partly to the company having negative working capital – being partly financed by its suppliers – and partly to the use of the calculated value of equity, which can be negative.

The Z’-score is even more somber, giving no possibility of values outside the distress zone:

    z-score1

The Z’’-score, however, puts most of the observations in the grey area:

    z-score2

Before drawing any conclusions we will in the next post look at the time series for both the Z-index and the Z-scores. Nevertheless one observation can already be made – the Z metric is a stochastic variable with an event space that can easily encompass all three risk zones – we therefore need the probability distribution over the zones to forecast the risk of bankruptcy.


  • The Risk of Bankruptcy

    The Risk of Bankruptcy

    This entry is part 1 of 4 in the series Risk of Bankruptcy

     

    Investors should be skeptical of history-based models. Constructed by a nerdy-sounding priesthood using esoteric terms such as beta, gamma, sigma and the like, these models tend to look impressive. Too often, though, investors forget to examine the assumptions behind the symbols. Our advice: Beware of geeks bearing formulas.  – Warren E. Buffett. ((Buffett, Warren E., “Shareholder Letters.” Berkshire Hathaway Inc. 27 February 2009,. Berkshire Hathaway Inc. 13 Mar 2009 <http://www.berkshirehathaway.com/letters/letters.html>.))

Historic growth is usually a risky estimate of future growth. To be able to forecast a company’s future performance you have to make assumptions about the most likely future values and event spaces of a large number of variables, and then calculate both the probability of future necessary cash infusions and – if they do not materialize – the risk of bankruptcy.

The following calculations are carried out using the Strategy @ Risk simulation model. Such simulations can be carried out on all types of enterprises, including those in the financial sector. There are several models in use for predicting bankruptcy; we have implemented two in our balance simulation model: Altman’s Z-score model and the risk index Z developed by Hannan and Hanweck.

Altman’s Z-score model is based on financial ratios and their relation to bankruptcy, found by discriminant analysis. ((Altman, E. I.. “Financial Ratios, Discriminant Analysis and the Prediction of Corporate Bankruptcy.” The Journal of Finance 23(1968): 589-609.)) The coefficients in the discriminant function have in later studies been revised – giving the Z’-score and Z’’-score models.

Hannan and Hanweck’s probability of insolvency is based on the likelihood of the return on assets being negative and larger in absolute value than the capital-asset ratio. ((Timothy H., Hannan, Gerald A. Hanweck. “Bank Insolvency Risk and the Market for Large Certificates of Deposit.” Journal of Money, Credit and Banking 20(1988): 203-211.)) The Z index has been used to forecast bank insolvency ((Kimball, Ralph C.. “Economic Profit and Performance Measurement in Banking.” New England Economic Review July/August(1998): 35-53.)) ((Jordan, John S.. “Problem Loans at New England Banks, 1989 to 1992: Evidence of Aggressive Loan Policies.” New England Economic Review January/February(1998): 23-38.)), but can profitably be used to study large private companies with low return on assets.

    We will here take a look at the Z-index and in a later post use the same data to calculate the Z-scores.

    The following calculations are based on forecasts, EBITDA and balance simulations – not on historic balance sheet data. The Z-index is defined as:

Z = (ROA + K)/sigma

where ROA is the pre-tax return on assets, K the ratio of equity to assets, and sigma the standard deviation of pre-tax ROA. The Z-index gives, per unit of standard deviation of ROA, the decline in ROA the company can absorb before equity is exhausted and it becomes insolvent.

In the simulation (250 runs) we will, for every year in the 15-year forecast period, forecast the yearly ROA and K, and use the variance in ROA to estimate sigma. For every value of Z – assuming a symmetric distribution – we can calculate the perceived probability (upper bound) of insolvency (p) from:

p = (1/2)*sigma^2/(E(ROA)+K)^2

where the multiplication by (1/2) reflects the fact that insolvency occurs only in the left tail of the distribution. The relation of p to Z is an inverse one, with higher Z-ratios indicating a lower probability of insolvency.
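Both quantities are simple to compute once the simulation has produced E(ROA), K and sigma for a year. A sketch (the sample inputs are of roughly the magnitude reported below, but are our own illustrative numbers):

```python
def z_index(roa_mean, k, roa_sigma):
    """Hannan-Hanweck Z-index: the number of standard deviations of
    ROA the firm can lose before equity is exhausted."""
    return (roa_mean + k) / roa_sigma

def p_insolvency(z):
    """Upper bound on the probability of insolvency implied by Z,
    assuming a symmetric ROA distribution: p = 1/(2*Z^2)."""
    return 0.5 / z ** 2

z = z_index(roa_mean=0.013, k=0.37, roa_sigma=0.012)  # about 31.9
p = p_insolvency(z)                                   # about 0.0005
```

Note that p here is just the formula above rewritten in terms of Z, since Z = (E(ROA)+K)/sigma.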

    z-indexs-probability

Since our simulation covers a 15-year period it is fully possible that multi-period losses, through a decline in K, can wipe out the equity and cause a failure of the company.

In year five of the simulation the situation is as follows: the pre-tax return on assets is low – on average only 1.3% – and in 20% of the cases it is zero or negative.

    pre-tax-roa

However, the ratio of equity to assets is high – on average 37%, with a standard deviation of only 1.2.

    ratio-of-equity-to-assets

    The distribution of the corresponding Z-Index values is given in the chart below. It is skewed with a long right tail; the mean is 32 with a minimum value of 16.

    z-index

From the graph showing the relation between the Z-index and the probability of insolvency, it is clear that the company’s economic situation is far from threatened. This is confirmed by the distribution of the probability of insolvency calculated from the estimated Z-index values, with values in the range from 0.1 to 0.3.

    probability-of-insolvency

Having the probability of insolvency per year gives us the opportunity to calculate the probability of failure over the forecast period for any chosen strategy.
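If the yearly probabilities are treated as independent – a simplification, since losses actually carry over from year to year through K – the probability of failure at some point in the period can be sketched as:

```python
def failure_probability(yearly_p):
    """Probability of at least one insolvency event over the period,
    assuming independence between years (a simplifying assumption)."""
    survival = 1.0
    for p in yearly_p:
        survival *= 1.0 - p
    return 1.0 - survival

# 15 years at a 0.2% yearly insolvency probability (illustrative)
p_fail = failure_probability([0.002] * 15)  # just under 3%
```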

    If it can’t be expressed in figures, it is not science; it is opinion. It has long been known that one horse can run faster than another — but which one? Differences are crucial. ((Heinlein, Robert. Time Enough for Love. New York: Putnam, 1973))


  • The weighted average cost of capital

    The weighted average cost of capital

    This entry is part 1 of 2 in the series The Weighted Average Cost of Capital

     

    A more extensive version of this article can be read here in .pdf format.

The weighted average cost of capital (WACC) and the return on invested capital (ROIC) are the most important elements in company valuation, and the basis for most strategy and performance evaluation methods.

WACC is the discount rate (time value of money) used to convert expected future cash flow into present value for all investors. Usually it is calculated assuming both a constant cost of capital and a fixed set of target market value weights ((Valuation, Measuring and Managing the Value of Companies. Tom Copeland et al.)) throughout the time frame of the analysis. While this simplifies the calculations, it also imposes severe restrictions on how a company’s financial strategy can be simulated.

    Now, to be able to calculate WACC we need to know the value of the company, but to calculate that value we need to know WACC. So we have a circularity problem involving the simultaneous solution of WACC and company value.

In addition, all the variables and parameters determining the company value will be stochastic, either in themselves or as functions of other stochastic variables. As such, WACC is a stochastic variable – determined by the probability distributions for yield curves, exchange rates, sales, prices, costs and investments. But this also enables us – by Monte Carlo simulation – to estimate a confidence interval for WACC.

Some researchers have claimed that the free cash flow value only in special cases will be equal to the economic profit value. By solving the simultaneous equations, giving a different WACC for every period, we will always satisfy the identity between the free cash flow and economic profit values. In fact we will use this to check that the calculations are consistent.

We will use the most probable value of the variables/parameters in the calculations. Since most of the probability distributions involved are non-symmetric (sales, prices etc.), the expected values will in general not be equal to the most probable values. And as we shall see, this is also the case for the individual values of WACC.

    WACC

To be consistent with the free cash flow or economic profit approach, the estimated cost of capital must comprise a weighted average of the marginal cost of all sources of capital that involve cash payment – excluding non-interest-bearing liabilities. In its simple form:

WACC = C_d*(1-t)*(D/V) + C_e*(E/V)

C_d = pre-tax nominal interest rate on debt,
C_e = opportunity cost of equity,
t = corporate marginal tax rate,
D = market value of debt,
E = market value of equity,
V = market value of the entity (V = D + E).

    The weights used in the calculation are the ratio between the market value of each type of debt and equity in the capital structure, and the market value of the company. To estimate WACC we then first need to establish the opportunity cost of equity and non-equity financing and then the market value weights for the capital structure.
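In its simple form the computation is a one-liner; a sketch with hypothetical inputs:

```python
def wacc(cost_debt, cost_equity, debt, equity, tax_rate):
    """After-tax WACC from market values of debt and equity.
    cost_debt is the pre-tax nominal interest rate on debt."""
    value = debt + equity
    return (cost_debt * (1 - tax_rate) * debt / value
            + cost_equity * equity / value)

# Debt-equity ratio of one, 28% tax (all rates are illustrative)
w = wacc(cost_debt=0.06, cost_equity=0.12, debt=100, equity=100,
         tax_rate=0.28)  # 0.0816, i.e. 8.16%
```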

    THE OPPORTUNITY COST OF EQUITY AND NON-EQUITY FINANCING

    To have a consistent WACC, the estimated cost of capital must:

    1. Use interest rates and cost of equity of new financing at current market rates,
    2. Be computed after corporate taxes,
    3. Be adjusted for systematic risk born by each provider of capital,
    4. Use nominal rates built from real rates and expected inflation.

However, we need to forecast the future risk-free rates. They can usually be found from the yield curve for treasury notes, by calculating the implicit forward rates.
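With annually compounded zero-coupon (spot) rates, the implicit forward rate between year t and year t+1 follows from the no-arbitrage condition (1+s_{t+1})^(t+1) = (1+s_t)^t * (1+f). A sketch:

```python
def implied_forward(spot_t, spot_t1, t):
    """One-year forward rate between t and t+1 implied by annually
    compounded spot rates for maturities t and t+1."""
    return (1 + spot_t1) ** (t + 1) / (1 + spot_t) ** t - 1

# 1-year spot 4%, 2-year spot 5% -> forward rate of roughly 6.01%
f = implied_forward(spot_t=0.04, spot_t1=0.05, t=1)
```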

    THE OPPORTUNITY COST OF EQUITY

    The equation for the cost of equity (pre investor tax), using the capital asset pricing model (CAPM) is:

C_e = R + beta*M + L

R = risk-free rate,
beta = the levered systematic risk of equity,
M = market risk premium,
L = liquidity premium.

If tax on dividend and interest income differs, the risk-free rate and the market premium have to be adjusted. Assuming a tax rate t_i on interest income:

R' = (1-t_i)*R  and  M' = M + t_i*R

t_i = investor tax rate,
R' = tax adjusted risk-free rate,
M' = tax adjusted market premium.

The pre-tax cost of equity can then be computed as:

C_e(pre-tax) = C_e/(1-t_d) = R/(1-t_d) + beta*M/(1-t_d) + L/(1-t_d)

C_e(pre-tax) = R'/(1-t_d) + beta*M'/(1-t_d) + L/(1-t_d)

where the first line applies for an investor with a tax rate t_d on all capital income, and the second line for an investor whose tax on dividend and interest income differs ((See also: Wacc and a Generalized Tax Code, Sven Husmann et al.,  Diskussionspapier 243 (2001), Universität Hannover)).

The long-term strategy is a debt-equity ratio of one, the un-levered beta is assumed to be 1.1 and the market risk premium 5.5%. The corporate tax rate is 28%, and the company pays all taxes on dividends. The company’s stock has low liquidity, and a liquidity premium of 2% has been added.

    cost-of-equity_corrected
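The cost of equity above follows from levering the beta and applying the CAPM equation. A sketch using the stated parameters; note that the levering convention (Hamada, with a tax adjustment) and the 5% risk-free rate are our assumptions, not figures from the model:

```python
def levered_beta(beta_unlevered, debt_equity, tax_rate):
    """Hamada relevering - one common convention; the article does
    not state which formula the model uses."""
    return beta_unlevered * (1 + (1 - tax_rate) * debt_equity)

def cost_of_equity(risk_free, beta, market_premium, liquidity_premium=0.0):
    """CAPM plus a liquidity premium: C_e = R + beta*M + L."""
    return risk_free + beta * market_premium + liquidity_premium

beta = levered_beta(1.1, debt_equity=1.0, tax_rate=0.28)  # 1.892
ce = cost_of_equity(0.05, beta, 0.055, 0.02)              # about 17.4%
```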

    In the Monte Carlo simulation all data in the tables will be recalculated for every trial (simulation), and in the end produce the basis for estimating the probability distributions for the variables. This approach will in fact create a probability distribution for every variable in the profit and loss account as well as in the balance sheet.

    THE OPPORTUNITY COST OF DEBT

    It is assumed that the pre-tax debt interest rate can be calculated using risk adjusted return on capital (RAROC) as follows:

    Lenders Cost = L_C+L_L+L_A+L_RP

    L_C = Lenders Funding Cost (0.5%),
    L_L = Lenders Average Expected Loss (1.5%),
    L_A = Lenders Administration Cost (0.8%),
    L_RP= Lenders Risk Premium (0.5%).

The parameters (and their volatility) have to be estimated for the different types of debt involved. In this case there are two types: short-term with a maturity of four years and long-term with a maturity of 10 years. The risk-free rates are taken from the implicit forward rates in the yield curve, and lenders’ costs are set to 3.3%.

    In every period the cost and value of debt are recalculated using the current rates for that maturity, ensuring use of the current (future) opportunity cost of debt.

    THE MARKET VALUE WEIGHTS

    By solving the simultaneous equations, we find the market value for each type of debt and equity:

    And the value weights:

    Multiplying the value weights by the respective rate and adding, give us the periodic most probable WACC rate:

As can be seen from the table above, the rate varies slightly from year to year. The relatively small differences are mainly due to the low gearing in the forecast period.

    MONTE CARLO SIMULATION

In the figure below we have shown the result from the simulation of the company’s operations, and the resulting WACC for year 2002. This shows that the expected value of WACC is 17.4%, compared with the most probable value of 18.9%. This indicates that the company will need more capital in the future, and that an increasing part will be financed by debt. A graph of the probability distributions for the yearly capital transactions (debt and equity) in the forecast period would have confirmed this.

In the figure the red curve indicates the cumulative probability distribution for the value of WACC in this period, and the blue columns the frequencies. By drawing horizontal lines on the probability axis (left), we can find confidence intervals for WACC. In this case there is only a 5% probability that WACC will be less than 15%, and a 95% probability that it will be less than 20%. So we can, with 90% probability, expect WACC for 2002 to fall between 15% and 20%. The variation is quite high, with a coefficient of variation of 6.8 ((Coefficient of variation = 100*st.dev/mean)).
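The confidence interval read off the chart corresponds to empirical quantiles of the simulated WACC values. A minimal nearest-rank sketch (the draws below are placeholders, not output from the model):

```python
def percentile(draws, q):
    """Nearest-rank empirical quantile of a list of simulated values."""
    ordered = sorted(draws)
    i = round(q * (len(ordered) - 1))
    return ordered[min(len(ordered) - 1, max(0, i))]

# 250 placeholder draws spread evenly between 15% and 20%
wacc_draws = [0.15 + 0.05 * i / 249 for i in range(250)]
low, high = percentile(wacc_draws, 0.05), percentile(wacc_draws, 0.95)
# 90% of the simulated values fall between low and high
```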

    VALUATION

    The value of the company and the resulting value of equity can be calculated using either the free cash flow or the economic profit approach. Correctly done, both give the same value. This is the final test for consistency in the business model. The calculations are given in the tables below, and calculated as the value at end of every year in the forecast period.

As usual, the market value of free cash flow is the discounted value of the yearly free cash flow in the forecast period, while the continuing value is the value of continued operation after the forecast period. All surplus cash is paid out as dividend, so there are no excess marketable securities.

The company started operations in 2002, after having made the initial investments. The charge on capital is the WACC rate multiplied by the value of invested capital. In this case capital at the beginning of each period is used, but average capital or capital at the end could have been used with a suitable definition of the capital charge.
Economic profit has been calculated by multiplying (ROIC – WACC) by invested capital, and the market value at any period is the net present value of future economic profit. The value of debt – the net present value of future debt payments – is equal for both methods.

Using the same series of WACC when discounting the cash flows, both methods give the same value for both the company and the equity. This ensures that the calculations are both correct and consistent.
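The identity can be verified on a toy example: invested capital of 100, ROIC of 12%, WACC of 10%, two periods, all profit paid out and the capital returned at the end (the numbers are ours, for illustration only):

```python
def present_value(cash_flows, rate):
    """Discounted value of a list of end-of-period cash flows."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, 1))

capital, roic, wacc_rate = 100.0, 0.12, 0.10
nopat = capital * roic                      # 12 per period

# Free cash flow approach: profits, plus capital returned at the end
fcf_value = present_value([nopat, nopat + capital], wacc_rate)

# Economic profit approach: capital plus discounted (ROIC - WACC)*capital
ep = (roic - wacc_rate) * capital           # 2 per period
ep_value = capital + present_value([ep, ep], wacc_rate)
# Both approaches give the same value, about 103.47
```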

    Tore Olafsen and John Martin Dervå

    reprint_fen

  • The advantages of simulation modelling

    The advantages of simulation modelling

    This entry is part 6 of 6 in the series Monte Carlo Simulation

     

All businesses need the ability, if not to predict the future, then at least to assess what their future economic performance can be. In most organizations this is done using a deterministic model – a model that does not consider the uncertainty inherent in all the inputs to the model. The exercise can best be described as pinning jelly to a wall: that is how easy it is to find the one number which correctly describes the future.

The apparent weakness of the one number which is to describe the future is usually paired with so-called sensitivity analysis. Such analysis usually means changing the value of one variable and observing the result; then another variable is changed, and again the result is observed. Usually it is the very extreme cases that are analyzed, and sometimes these sensitivities are even summed up to show extreme values and improbable downsides.

Such a sensitivity analysis is as much pinning jelly to the wall as the deterministic model itself. The relationships between variables are not considered, and rarely is the probability of each scenario stated.

What the simulation model does is to model the relationships between variables and the probability of different scenarios, and to analyze the business as a complex whole. Each uncertain variable is assessed by key decision makers, who give their estimates for:

    • The expected value of the variable
    • The low value at a given probability
    • The high value at a corresponding probability level
    • The shape of the probability curve

The relationship between variables is modeled either by its correlation coefficient or by a regression.

Then a simulation tool is needed to carry out the simulation itself. The tool uses the assigned probability curves and draws values from each of them. After a sufficient number of simulations, it gives a probability curve for the desired goal function(s) of the model, in addition to the variables themselves.
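The mechanics can be illustrated with a toy Monte Carlo loop in Python's standard library; the distributions, the 40% margin and the 10% discount rate are invented for the example, not taken from any S@R model:

```python
import random

random.seed(42)  # reproducible illustration

def npv(cash_flows, rate=0.10):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, 1))

runs = []
for _ in range(250):                      # 250 trials, as in the posts
    price = random.triangular(8, 12, 10)  # low, high, most likely
    margin = 0.4                          # contribution margin
    # Skewed demand with a long right tail, most likely around 100 units
    demand = [random.lognormvariate(4.6, 0.2) for _ in range(5)]
    runs.append(npv([d * price * margin for d in demand]))

# The result is a distribution of NPVs, not a single number:
share_above = sum(v > 1500 for v in runs) / len(runs)
```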

    As decision support this is an approach which will give you answers to questions like:

    • The probability of a project NPV being at a given, required level
    • The probability of a project or a business generating enough cash to run a successful business
    • The probability of default
    • What the risk inherent in the business is, in monetary terms
    • And a large number of other very useful questions

The simulation model gives you a total view of risk, where the sensitivity analysis or the deterministic analysis gives you only one number, without any known probability. It also reveals the potential upside present in every project. It is this upside which must be weighed against the potential downside and the risk level appropriate for each entity.

    The S@R-model

The S@R-model is a simulation tool built on proven financial and statistical technology. It is written in Pilot Lightship, a language especially made for modelling financial decision problems. The model output takes the form both of probabilities for different aspects of a financial or business decision, and of a fully fledged balance sheet and P&L. Hence it gives you what you normally expect as output from a deterministic model and, in addition, simulated results given the defined probability curves and relationships between variables.

The operational part of the business can be modeled either in a separate subroutine or directly in the main part of the simulation tool, called the main model. For complex goal functions with numerous variables and relationships, the subroutine is recommended, as it gives greater insight into the complexity of the business. Data from the operational subroutine is later transferred to the main model as a compiled file.