Stochastic simulation – Strategy @ Risk

Tag: Stochastic simulation

  • The risk of planes crashing due to volcanic ash

    The risk of planes crashing due to volcanic ash

    This entry is part 4 of 4 in the series Airports

    Eyjafjallajokull volcano

    When the Icelandic volcano Eyjafjallajökull had a large eruption in 2010 it led to closed airspace all over Europe, with correspondingly big losses for the airlines.  In addition it led to significant problems for passengers who were stuck at various airports without getting home.  In Norway we got a new word: “ash stuck” (askefast) became a part of the Norwegian vocabulary.

    The reason the planes were grounded is that mineral particles in the volcanic ash may damage the planes’ engines, which in turn may cause them to crash.  This happened in 1982, when a British Airways flight almost crashed due to volcanic particles in the engines.  The risk of the same happening in 2010 was probably not large, but the consequences would have been great should a plane crash.

    Using simulation software and a simple model I will show how this risk can be calculated, and hence why the airspace was closed over Europe in 2010 even if the risk was not high.  I have not calculated any effects following the closure, since this isn’t a big model nor an in depth analysis.  It is merely meant as an example of how different issues can be modeled using Monte Carlo simulation.  The variable values are not factual but my own simple estimates.  The goal in this article is to show an example of modeling, not to get a precise estimate of actual risk.

    To model the risk of dangerous ash in the air there are a few key questions that have to be asked and answered to describe the issue in a quantitative way.

    Variable 1. Is the ash dangerous?

    We first have to model the risk of the ash being dangerous to plane engines.  I do that by using a so-called discrete probability distribution.  It has the value 0 if the ash is not dangerous and the value 1 if it is.  Then the probabilities for each of the alternatives are set.  I set them to:

    • 99% probability that the ash IS NOT dangerous
    • 1% probability that the ash IS dangerous
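
As a sketch of how such a discrete variable can be drawn in practice (a minimal Python/NumPy example, not the simulation tool used in the article; the 99%/1% probabilities are the estimates above):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Variable 1 as a discrete (Bernoulli) variable:
# 0 = the ash is not dangerous, 1 = the ash is dangerous.
dangerous = rng.choice([0, 1], size=100_000, p=[0.99, 0.01])

print(dangerous.mean())  # the share of 1s is close to 0.01
```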

    Variable 2. How many planes are in the air?

    Secondly we have to estimate how many planes are in the air when the ash becomes a problem.  Daily, around 30 000 planes are in the air over Europe.  We can assume that if planes start crashing or getting into big trouble, the rest will immediately be grounded.  Therefore I only use 2/24 of these planes in the calculation.

    • 2 500 planes are in the air when the problem occurs

    I use a normal distribution and set the standard deviation for planes in the air in a 2 hour period to 250 planes.  I have no views on whether the curve is skewed one way or the other.  I assume it may well be, since there probably are different numbers of planes in the air depending on weekday, whether it’s a holiday season and so on, but I’ll leave that estimate to the air authority staff.

    Variable 3. How many people are there in each plane?

    Thirdly I need an estimate of how many passengers and crew there are in each plane.  I disregard the fact that there are a lot of intercontinental flights over the Eyjafjallajökull volcano, likely with more passengers than the average plane over Europe, so the curve might be more skewed than what I assume:

    • Average number of passengers/crew: 70
    • Lowest number of passengers/crew: 60
    • Highest number of passengers/crew: 95

    The reason I’m using a skewed curve here is that the airline business is constantly under pressure to fill up their planes.  In addition the number of passengers will vary by weekday and so on.  I think it is reasonable to assume that there are likely more passengers per plane rather than fewer.
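
A skewed curve like this can for instance be represented by a triangular distribution (my choice for illustration; the article does not name the distribution used). Note that a triangular distribution with minimum 60, most likely value 70 and maximum 95 has mean (60 + 70 + 95)/3 = 75, slightly above the stated average:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Variable 3: passengers/crew per plane as a right-skewed triangular
# distribution: min 60, most likely 70, max 95 (distribution family assumed).
passengers = rng.triangular(left=60, mode=70, right=95, size=100_000)

print(passengers.mean())  # close to (60 + 70 + 95) / 3 = 75
```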

    Variable 4. How many of the planes in the air will crash?

    The last variable that needs to be modeled is how many planes will crash should the ash be dangerous.  I assume that maybe no planes actually crash, even though the ash gets into their engines.  This is the low end of the curve.  In addition I have assumed the following:

    • Expected share of planes that crash: 0.01%
    • Maximum share of planes that crash: 1.0%

    Now we have what we need to start calculating!

    The formula I use to calculate is as follows:

    If(“Dangerous ash” = 0; 0)

    If(“Dangerous ash” = 1; “Number of planes in the air” × “Share of planes crashing” × “Number of passengers/crew per plane”)

    If the ash is not dangerous, variable 1 equals 0, no planes crash and nobody dies.  If the ash is dangerous, the number of dead is the product of the number of planes in the air, the share of planes crashing and the number of passengers/crew per plane.
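
Putting the four variables and the formula together, the whole model can be sketched in a few lines of Python with NumPy. The distribution families for variables 3 and 4 are my assumptions; the article only gives the point estimates above:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n = 100_000  # number of simulation runs

# Variable 1: is the ash dangerous? (1% probability)
dangerous = rng.random(n) < 0.01

# Variable 2: planes in the air, Normal(2500, 250)
planes = rng.normal(2500, 250, n)

# Variable 3: passengers/crew per plane, skewed Triangular(60, 70, 95)
pax = rng.triangular(60, 70, 95, n)

# Variable 4: share of planes crashing IF the ash is dangerous,
# skewed towards zero: min 0%, most likely 0.01%, max 1% (family assumed)
crash_share = rng.triangular(0.0, 0.0001, 0.01, n)

# The formula: zero deaths when the ash is harmless, otherwise
# planes x share of planes crashing x passengers/crew per plane.
deaths = np.where(dangerous, planes * crash_share * pax, 0.0)

print(f"expected value (deaths): {deaths.mean():.1f}")
print(f"99.9th percentile (tail): {np.percentile(deaths, 99.9):.0f}")
```

Because the crash-share distribution is assumed here, the expected value will not exactly reproduce the article’s figure of 3 dead, but the shape of the result is the same: a large mass point at zero and a long, thin tail of catastrophic outcomes.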

    Running this model with a simulation tool gives the following result:

    Expected value - number of dead

    As the graph shows, the expected value is low: 3 people.  This means that the probability of a major loss of planes is very low, but the consequences may be devastatingly high.  In this model run there is a 1% probability that the ash is dangerous, and a 0.01% expected share of planes actually crashing.  However, the distribution has a long tail, and a bit out in the tail there is a probability that 1 000 people crash to their deaths.  This is a so-called shortfall risk, or the risk of a black swan if you wish.  The probability is low, but the consequences are very big.

    This is the reason for the cautionary steps taken by the air authorities.  Another reason is that the probabilities both of the ash being dangerous and of planes crashing because of it are unknown.  Thirdly, changes in variable values will have a big impact.

    If the probability of the ash being dangerous is 10% rather than 1%, and the probability of planes crashing is 1% rather than 0.01%, as many as 200 dead (or about 3 planes) are expected, while the extreme outcome is close to 6 400 dead.
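
The expected value in this second scenario can be sanity-checked with a back-of-envelope product, assuming the variables are independent and using roughly 77 passengers/crew per plane (my stand-in for the skewed average):

```python
# 10% dangerous x 2500 planes x 1% of planes crashing x ~77 passengers/crew
expected_deaths = 0.10 * 2500 * 0.01 * 77
print(expected_deaths)  # 192.5 -- in line with the ~200 dead quoted above
```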

    Expected value - number of dead higher probability of crash

    This is a simplified example of the modeling that is likely to be behind the airspace being closed.  I don’t know what probabilities are used, but I’m sure this is how they think.

    How we assess risk depends on who we are.  Some of us have a high risk appetite, some have low.  I’m glad I’m not the one to make the decision on whether to close the airspace or not.  It is not an easy decision.

    My model is of course very simple.  There are many factors to take into account, like wind direction and strength, intensity of the eruption and a number of other factors I don’t know about.  But as an illustration, both of the factors that need to be estimated in this case and of generic modeling, this is a good example.

    Originally published in Norwegian.

  • Introduction to Simulation Models

    Introduction to Simulation Models

    This entry is part 4 of 6 in the series Balance simulation

     

    Simulation models set out to mimic real-life company operations, describing the transformation of raw materials and labor into finished products in such a way that the model can be used as support for strategic decision making.

    A full simulation model will usually consist of two separate models:

    1. an EBITDA model that describes the particular firm’s operations and
    2. a generic P&L and Balance simulation model (PL&B).

     

     

    The EBITDA model ladder

    Both the deterministic and stochastic balance simulation can be approached as a ladder with two steps, where the first is especially well suited as an introduction to risk simulation and the second gives a full blown risk analysis. In these successive steps the EBITDA calculations will be based on:

    1. financial information only, by using coefficients of fabrication and unit prices (e.g. kg flour per 1000 bread and cost of flour per kg, etc.) as direct input to the balance model – the direct method and
    2. EBITDA models to give a detailed technical description of the company’s operations.

    The first step, using coefficients of fabrication and their variations, gives a low-effort (low-cost) alternative, usually using the internal accounting as basis. In many cases this will give a ‘good enough’ description of the company – its risks and opportunities. It can be based on existing investment and market plans. The data needed for the company’s economic environment (taxes, interest rates etc.) will be the same in both alternatives.

    This step is especially well suited as an introduction to risk simulation and the art of communicating risk and uncertainty throughout the firm. It can also profitably be used in cases where time and data are limited and where one wishes to limit efforts in an initial stage. Data and assumptions can later be augmented for much more sophisticated analyses within the same framework. This way the analysis can successively be built in the direction the previous studies suggested.

    The second step implies setting up a dedicated EBITDA subroutine to the balance model. This can then give detailed answers to a broad range of questions about markets, capacity driven investments, operational performance and uncertainty, but entails a higher degree of effort from both the company and S@R. This is a tool for long-term planning and strategy development.

    The EBITDA model can be both a stand-alone model and a subroutine to the PL&B model. The stand-alone EBITDA model can be used to study in detail the firm’s operations and how different operational strategies will or can affect EBITDA outcomes and their distribution.

    When connected to the PL&B model it will act as a subroutine, supplying the information necessary to produce the P&L and ultimately the Balance and their outcome distributions.

    This gives great flexibility in model formulations and the opportunity to fit models to different industries and accommodate for the data available.

    P&L and Balance simulation

    The generic PL&B model – based on the IFRS standard – can be used for a wide range of business activities. It both:

    1. describes the firm’s financial environment (taxes, interest rates, currency etc.) and
    2. acts as a testing bed for financial strategies (hedging, translation risk, etc.).

    Since S@R has set out to create models that can give answers to both deterministic and stochastic questions, the PL&B model is a real balance simulation model – not a simple cash flow forecast model.

    Since every run in the simulation produces a complete P&L and Balance, uncertainty curves (distributions) can be produced for any financial metric, like ‘yearly result’, ‘free cash flow’, ‘economic profit’, ‘equity value’, ‘IRR’ or ‘translation gain/loss’.
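
As an illustration of the principle (with a made-up stand-in for one metric; in the real model every figure comes from a full simulated P&L and Balance), collecting the per-run values of a metric directly yields its uncertainty curve:

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Stand-in: one simulated 'yearly result' per run (hypothetical numbers).
n_runs = 1000
yearly_result = rng.normal(loc=100.0, scale=30.0, size=n_runs)

# The collected runs ARE the distribution: percentiles and tail
# probabilities can be read straight off them.
p5, p50, p95 = np.percentile(yearly_result, [5, 50, 95])
prob_negative = (yearly_result < 0).mean()

print(f"5%: {p5:.1f}   median: {p50:.1f}   95%: {p95:.1f}")
print(f"probability of a negative yearly result: {prob_negative:.3f}")
```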

    People say they want models that are simple, but what they really want is models with the necessary features – that are easy to use. If something is complex but well designed, it will be easy to use – and this holds for our models.

    The results from these analyses can be presented in different forms, from detailed traditional financial reports to graphs describing the range of possible outcomes for all items in the P&L and Balance (+ much more), looking at the coming one to five years (short term) or five to fifteen years (long term) and showing the impacts on e.g. equity value, company value, operating income etc.

    The goal is to find the distribution for the firm’s equity value which will incorporate all uncertainty facing the firm.

    This uncertainty gives both shape and location of the equity value distribution, and this is what we – if possible – are aiming to change:

    1. reducing downside risk by reducing the left tail (blue curve)
    2. increasing expected company value by moving the curve to the right (green curve)
    3. increasing the upside potential by  increasing the right tail (red curve) etc.

     

    The Data

    To be able to simulate the operations we need to put into the model all variables that will affect the firm’s net yearly result. Most of these will be collected by S@R from outside sources like central banks, local authorities and others, but some will have to be collected from the firm.

    The production and firm-specific variables are related to the firm’s everyday activities. Their historic values can be collected from internal accounts or from production reports.  Someone in the procurement, production or sales department will have these records, and almost always the controllers.  The rest will be variables inside the domain of the CEO and the company treasurer.

    The variables fall in five groups:

    i. general variables describing the firm’s financial environment,
    ii. variables describing the firm’s strategy,
    iii. general variables used for forecasting purposes,
    iv. direct problem-related variables and
    v. the firm-specific:
    a. production coefficients and
    b. cost of raw materials and labor-related variables.

    The first group will contain – for all countries either delivering raw materials or buying the finished product(s) – variables like taxes, spot exchange rates etc.  For the firm’s domestic country it will in addition contain variables like VAT rates, taxes on investments and dividend income, depreciation rates and methods, initial tax allowances, overdraft interest rates etc.

    The second group will contain variables like: minimum cash levels, debt distribution on short and long term loans and currencies, hedge ratios, targeted leverage, economic depreciation etc.

    The third group will contain variables needed for forecasting purposes: yield curves, inflation forecasts, GDP forecasts etc. The expected values and their 5 % and 95 % probability limits will be used to forecast exchange rates, interest rates, demand etc. They will be collected by S@R.

    The fourth group will contain variables related to sales forecasts: yearly air temperature profiles (and variation) for forecasting beer sales and yearly water temperature profiles (and variation) for forecasting increase in biomass in fish farming.

    The fifth group will contain variables that specify the production and its costs. They will vary according to the type of operations, e.g.: operating rate (%), max days of production, tools maintenance (h per 10 000 units), error rate (errors per 1 000 units), waste (% of weight of produced unit), cycle time (units per min), number of machines per shift (#), concession density (kg per m3), feed rates (%), mortality rates (%) etc.  These variables specify the production, and they will be stochastic in the sense that they are not constant but vary inside a given – theoretical or historic – range.

    To simulate costs of production we use the coefficients of fabrication and their unit costs. Both the coefficients and their unit costs will always be of a stochastic nature, and they can vary with capacity utilization: energy per unit produced (kWh/unit) and energy price (cost per kWh), malt use (kg per hectoliter) and malt price (per kg), maximum takeoff weight (ton) and takeoff charge (per ton), specific consumption of wood (m3/Adt) and cost of round wood (per m3), etc.
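
A minimal sketch of this mechanism (all numbers hypothetical, chosen only to show the structure): the cost per unit is the product of a stochastic coefficient of fabrication and a stochastic unit price, and is therefore itself a distribution rather than a single number:

```python
import numpy as np

rng = np.random.default_rng(seed=3)
n = 10_000

# Hypothetical: energy use per unit produced (kWh/unit) and energy price
# (cost per kWh), both stochastic; distribution families and ranges assumed.
kwh_per_unit = rng.triangular(1.8, 2.0, 2.5, n)  # coefficient of fabrication
price_per_kwh = rng.normal(0.12, 0.02, n)        # unit cost

# Cost per unit produced, as a distribution.
energy_cost_per_unit = kwh_per_unit * price_per_kwh
print(f"mean: {energy_cost_per_unit.mean():.3f}")
print(f"95th percentile: {np.percentile(energy_cost_per_unit, 95):.3f}")
```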

    The uncertainty (and risk) stemming from all groups of variables will be propagated through the P&L and down to the Balance, ending up as volatility in the equity distribution.

    The aim is to estimate the economic impact that such uncertainty may have on corporate earnings at risk. This will add a third dimension – probability – to all forecasts, give new insight, and the ability to deal with uncertainties in an informed way – and thus benefits above ordinary spread-sheet exercises.

    Methods

    To be able to add uncertainty to financial models, we also have to add more complexity. This complexity is inevitable, but in our case, it is desirable and it will be well managed inside our models.

    Most companies have some sort of model describing the company’s operations. They are used mostly for budgeting, but in some cases also for forecasting cash flow and other important performance measures.

    If the client already has spread sheet models describing the operations, we can build on this. There is no reason to reinvent what has already been done – thus saving time and resources that can be better utilized in other phases of the project.

    We know, however, that forecasts based on average values are on average wrong. In addition, deterministic models miss the important uncertainty dimension that reveals both the different risks facing the company and the opportunities they bring forth.

    An interesting feature is the model’s ability to start simulations with an empty opening balance. This can be used to assess divisions that do not have an independent balance, since the model will call for equity/debt etc. based on a target ratio, according to the simulated production and sales and the necessary investments. Questions about further investment in divisions or product lines can be studied this way.

    In some cases, we have used both approaches for the same client, using the last approach for smaller daughter companies with production structures differing from the main companies.

    The first approach can also be considered as an introduction and stepping-stone to a more complete EBITDA model and detailed simulations.

    Time and effort

    The workload for the client is usually limited to a small team of people (1 to 3 persons) acting as project leaders and principal contacts, assuring that all necessary information describing value and risks of the client’s operations can be collected as a basis for modeling and calculations. However, the type of data will have to be agreed upon depending on the scope of the analysis.

    Very often, key people from the controller group will be adequate for this work, and if they do not have the direct knowledge, they usually know whom to ask. The work for this team, depending on the scope and choice of method (see above), can vary in effective time from a few days to a couple of weeks, but the elapsed time can stretch from three to four weeks to the same number of months – depending on the scope of the project.

    For S@R, the period will depend on the availability of key personnel from the client and the availability of data. It can take from one to three weeks of normal work for the first alternative, to three to six months for the second alternative with more complex models. The total time will also depend on the number of analyses that need to be run and the type of reports that have to be delivered.

    The team’s participation in the project also makes communication of the results up or down in the organization simpler. Since the input data is collected by templates, the responsible departments and persons get ownership of assumptions, data and results. These templates thus visualize the flow of data through the organization and the interdependence between the departments – facilitating the communication of risk and of the different strategies both reviewed and selected.

    No knowledge or expertise in uncertainty calculations or statistical methods is required on the client’s side. The team will, through ‘osmosis’, acquire the necessary knowledge. Usually the team finds this an exciting experience.

  • Uncertainty modeling

    Uncertainty modeling

    This entry is part 2 of 3 in the series What We Do

    Prediction is very difficult, especially about the future.
    Niels Bohr. Danish physicist (1885 – 1962)

    Strategy @ Risk’s models provide the possibility to study risk and uncertainties related to operational activities: cost, prices, suppliers, markets, sales channels etc.; financial issues like interest rate risk, exchange rate risk, translation risk, taxes etc.; strategic issues like investments in new or existing activities, valuation and M&As etc.; and a wide range of budgeting purposes.

    All economic activities have an inherent volatility that is an integrated part of its operations. This means that whatever you do some uncertainty will always remain.

    The aim is to estimate the economic impact that such critical uncertainty may have on corporate earnings at risk. This will add a third dimension – probability – to all forecasts, give new insight, and the ability to deal with uncertainties in an informed way – and thus benefits above ordinary spreadsheet exercises.

    The results from these analyses can be presented in the form of B/S and P&L looking at the coming one to five years (short term) or five to fifteen years (long term), showing the impacts on e.g. equity value, company value, operating income etc. The purpose is to:

    • Improve predictability in operating earnings and their expected volatility
    • Improve budgeting processes, predicting budget deviations and their probabilities
    • Evaluate alternative strategic investment options at risk
    • Identify and benchmark investment portfolios and their uncertainty
    • Identify and benchmark individual business units’ risk profiles
    • Evaluate equity values and enterprise values and their uncertainty in M&A processes, etc.

    Methods

    To be able to add uncertainty to financial models, we also have to add more complexity. This complexity is inevitable, but in our case, it is desirable and it will be well managed inside our models.

    People say they want models that are simple, but what they really want is models with the necessary features – that are easy to use. If something is complex but well designed, it will be easy to use – and this holds for our models.

    Most companies have some sort of model describing the company’s operations. They are mostly used for budgeting, but in some cases also for forecasting cash flow and other important performance measures. Almost all are deterministic models based on expected or average values of input data; sales, cost, interest and currency rates etc.

    We know, however, that forecasts based on average values are on average wrong. In addition, deterministic models miss the important uncertainty dimension that reveals both the different risks facing the company and the opportunities they bring forth.

    S@R has set out to create models that can give answers to both deterministic and stochastic questions, by linking dedicated EBITDA models to a holistic balance simulation taking into account all important factors describing the company. The basis is a real balance simulation model – not a simple cash flow forecast model.

    Both the deterministic and stochastic balance simulation can be set about in two different alternatives:

    1. by using an EBITDA model to describe the company’s operations or
    2. by using coefficients of fabrication (e.g. kg flour per 1000 bread etc.) as direct input to the balance model – the ‘short cut’ method.

    The first approach implies setting up a dedicated EBITDA subroutine to the balance model. This will give detailed answers to a broad range of questions about markets, capacity-driven investments, operational performance and uncertainty, but entails a higher degree of effort from both the company and S@R. This is a tool for long-term planning and strategy development.

    The second (the ‘short cut’) uses coefficients of fabrication and their variations, and is a low-effort (low-cost) alternative, usually using the internal accounting as basis. This will in many cases give a ‘good enough’ description of the company – its risks and opportunities. It can be based on existing investment and market plans.  The data needed for the company’s economic environment (taxes, interest rates etc.) will be the same in both alternatives.

    The ‘short cut’ approach is especially suited for quick appraisals of M&A cases where time and data are limited and where one wishes to limit efforts in an initial stage. Later, the data and assumptions can be augmented for much more sophisticated analyses within the same ‘short cut’ framework. In this way analyses can successively be built in the direction the previous studies suggested.

    This also makes it a good tool for short-term (3-5 years) analysis and even for budget assessment. Since it will use a limited number of variables – usually less than twenty – describing the operations, it is easy to maintain and operate. The variables describing financial strategy and the economic environment come in addition, but will be easy to obtain.

    Used in budgeting it will give the opportunity to evaluate budget targets, their probable deviation from expected result and the probable upside or down side given the budget target (Upside/downside ratio).
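
The article does not spell out how the upside/downside ratio is computed; one plausible reading, sketched here with hypothetical simulated results, compares the expected gain above the budget target with the expected shortfall below it:

```python
import numpy as np

rng = np.random.default_rng(seed=11)

# Hypothetical simulated operating results; in practice these come
# from the balance simulation runs.
results = rng.normal(loc=105.0, scale=20.0, size=1000)
budget_target = 100.0

prob_meeting_budget = (results >= budget_target).mean()

# Expected gain above target vs expected shortfall below it
# (one possible definition of an upside/downside ratio).
upside = np.maximum(results - budget_target, 0).mean()
downside = np.maximum(budget_target - results, 0).mean()

print(f"P(result >= budget): {prob_meeting_budget:.2f}")
print(f"upside/downside ratio: {upside / downside:.2f}")
```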

    Done this way, analyses can be run for subsidiaries across countries, translating the P&L and Balance to any currency for benchmarking, investment appraisals, risk and opportunity assessments etc. The final uncertainty distributions can then be ‘aggregated’ to show the global risk for the mother company.

    An interesting feature is the model’s ability to start simulations with an empty opening balance. This can be used to assess divisions that do not have an independent balance, since the model will call for equity/debt etc. based on a target ratio, according to the simulated production and sales and the necessary investments. Questions about further investment in divisions or product lines can be studied this way.

    Since all runs (500 to 1000) in the simulation produce a complete P&L and Balance, the uncertainty curve (distribution) for any financial metric, like ‘yearly result’, ‘free cash flow’, ‘economic profit’, ‘equity value’, ‘IRR’ or ‘translation gain/loss’, can be produced.

    In some cases we have used both approaches for the same client, using the last approach for smaller daughter companies with production structures differing from the main companies.
    The second approach can also be considered as an introduction and stepping stone to a more holistic EBITDA model.

    Time and effort

    The workload for the client is usually limited to a small team of people (1 to 3 persons) acting as project leaders and principal contacts, assuring that all necessary information describing value and risks of the client’s operations can be collected as a basis for modeling and calculations. However, the type of data will have to be agreed upon depending on the scope of the analysis.

    Very often, key people from the controller group will be adequate for this work, and if they don’t have the direct knowledge, they usually know whom to ask. The work for this team, depending on the scope and choice of method (see above), can vary in effective time from a few days to a couple of weeks, but the elapsed time can stretch from three to four weeks to the same number of months.

    For S@R, the time frame will depend on the availability of key personnel from the client and the availability of data. It can take from one to three weeks of normal work for the second alternative, to three to six months for the first alternative with more complex models. The total time will also depend on the number of analyses that need to be run and the type of reports that have to be delivered.

    S@R_ValueSim

    Selecting strategy

    Models like this are excellent for the selection and assessment of strategies. Since we can find the probability distribution for equity value, the changes in this distribution brought about by different strategies form a basis for selection or adjustment of the current strategy. Models including real option strategies are a natural extension of these simulation models.

    If there is a strategy whose cumulative distribution curve lies to the right of and below those of all other feasible strategies, it will be the stochastically dominant one. If the curves cross, further calculations need to be done before a stochastically dominant or preferable strategy can be found.
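
A dominance check of this kind can be sketched by comparing the empirical cumulative curves of two simulated equity-value distributions (the strategies and numbers here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(seed=5)

# Hypothetical simulated equity values under two strategies.
strategy_a = rng.normal(140.0, 20.0, 5000)  # curve shifted to the right
strategy_b = rng.normal(100.0, 20.0, 5000)

# First-order stochastic dominance: A dominates B when A's cumulative
# curve lies at or below B's everywhere (i.e. A's curve is to the right).
grid = np.linspace(0.0, 250.0, 501)
cdf_a = np.searchsorted(np.sort(strategy_a), grid, side="right") / strategy_a.size
cdf_b = np.searchsorted(np.sort(strategy_b), grid, side="right") / strategy_b.size

a_dominates_b = bool(np.all(cdf_a <= cdf_b))
print(f"strategy A stochastically dominates B: {a_dominates_b}")
```

With sampled curves the comparison is noisy in the far tails, so in practice the check is often done on smoothed or gridded distributions, just as when the curves are inspected visually.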

    Types of problems we aim to address:

    The effects of uncertainties on the P&L and Balance, and the effects of the Board’s strategies (market, hedging etc.) on future P&L and Balance sheets, evaluating:

    • Market position and potential for growth
    • Effects of tax and capital cost
    • Strategies
    • Business units, country units or product lines –  capital allocation – compare risk, opportunity and expected profitability
    • Valuations, capital cost and debt requirements, individually and effect on company
    • The future cash-flow volatility of company and the individual BU’s
    • Investments, M&A actions, their individual value, necessary commitments and impact on company
    • Etc.

    The aim, regardless of approach, is to quantify not only the company’s single and aggregated risks, but also its potential, thus making the company capable of performing detailed planning and of executing earlier and more apt actions against uncertain factors.

    Used in budgeting, this will improve budget stability through higher insight into cost-side risks and income-side potentials. This is achieved by an active budget-forecast process; the control-adjustment cycle will teach the company to better target realistic budgets – with better stability and increased company value as a result.

    This is most clearly seen when effort is put into correctly evaluating the effects of strategies, projects and investments on the enterprise. The best way to do this is by comparing and choosing strategies through analysis of the individual strategies’ risks and potential – and selecting the alternative that is (stochastically) dominant given the company’s chosen risk profile.

    A severe depression like that of 1920-1921 is outside the range of probability. –The Harvard Economic Society, 16 November 1929

  • Two letters

    Two letters

    Dear S@R,

    I am not interested in the use of stochastic models, and particularly Monte Carlo simulations.  I believe that these approaches too often lead to underestimating risks of extreme events, by failing to identify correlated variables, first-order or second-order variables, and correlations in sample populations. I believe that the use of these models carries an important responsibility for the way banks failed to address risks correctly.
    Best regards,
    NN

    Dear NN,

    We wholeheartedly agree on the errors you point out, especially for the banking sector. However, this is not per se the fault of Monte Carlo simulation as a technique, but of the way some models have been implemented and later misused.

    We also have read the stories about bank risk managers (and modellers) forced by higher management to change important risk parameters to make further loans possible.

    We do not rely only on normal variables with short, slim tails and simple VaR calculations. For risk calculations we alternatively use shortfall and spectral risk measures, the latter giving progressively larger weights to losses that can be disastrous. This will be a topic in a future post on our Web site.
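
To make the distinction concrete, here is a small, self-contained sketch (hypothetical loss data; the exponential weighting function is a modelling choice for illustration, not a description of our production models) of expected shortfall versus a spectral risk measure:

```python
import numpy as np

rng = np.random.default_rng(seed=9)

# Hypothetical fat-tailed losses (positive numbers = losses).
losses = rng.standard_t(df=4, size=100_000) * 10.0

# Expected shortfall at 99%: the average loss in the worst 1% of outcomes.
var_99 = np.percentile(losses, 99)
es_99 = losses[losses >= var_99].mean()

# Spectral risk measure: a weighted average of the sorted losses, with
# weights that increase towards the worst outcomes (exponential here).
sorted_losses = np.sort(losses)
p = (np.arange(losses.size) + 0.5) / losses.size
weights = np.exp(20.0 * p)        # 20.0 = illustrative risk-aversion parameter
weights /= weights.sum()
spectral_risk = float(np.sum(weights * sorted_losses))

print(f"VaR 99%: {var_99:.1f}  ES 99%: {es_99:.1f}  spectral: {spectral_risk:.1f}")
```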

    However, I beg to differ with you on the question of correlations. In my experience, large correlation matrices are a part of the problem you describe. Such correlation matrices will undoubtedly contain spurious correlations, giving false estimates of important relations. This is why we model all important relations explicitly, using the unexplained variance as a part of the uncertainty describing the problem under study – the company’s operations.

    Many claim that what killed Wall Street was uncritical use of David X. Li’s copula formula, where errors massively increase the risk of the whole equation blowing up (Salmon, 2009). We have therefore never used his work, relying more on the views of both B. Mandelbrot and Nassim Taleb.

    As we see it, the copula formula was used to avoid serious statistical analysis and simulation work – which is what we do.

    If you should reconsider, we will be happy to meet with you to explain the nature of our work. To us nothing is better than a demanding customer.

    Best regards

    S@R

    References

    Salmon, Felix (2009, February 23). Recipe for Disaster: The Formula That Killed Wall Street. Wired Magazine. Retrieved July 2, 2009, from http://www.wired.com/techbiz/it/magazine/17-03/wp_quant?currentPage=all

  • Corporate Risk Analysis

    Corporate Risk Analysis

    This entry is part 2 of 6 in the series Balance simulation

     

    Strategy @Risk has developed a radical new approach to the way risk is assessed and measured when considering current and future investments. A key part of our activity in this sensitive arena has been the development of a series of financial models that facilitate the understanding and measurement of risk set against a variety of operating scenarios.

    We have written a paper which outlines our approach to Corporate Risk Analysis. Read it here.

    Risk

    Our purpose in this paper is to show that every item written into a firm’s profit and loss account and its balance sheet is a stochastic variable with a probability distribution derived from probability distributions for each factor of production. Using this approach we are able to derive a probability distribution for any measure used in valuing companies and in evaluating strategic investment decisions. Indeed, using this evaluation approach we are able to calculate expected gain, loss and probability when investing in a company where the capitalized value (price) is known.

  • The weighted average cost of capital

    The weighted average cost of capital

    This entry is part 1 of 2 in the series The Weighted Average Cost of Capital

     

    A more extensive version of this article can be read here in .pdf format.

    The weighted average cost of capital (WACC) and the return on invested capital (ROIC) are the most important elements in company valuation, and the basis for most strategy and performance evaluation methods.

    WACC is the discount rate (time value of money) used to convert expected future cash flow into present value for all investors. Usually it is calculated assuming both a constant cost of capital and a fixed set of target market value weights ((Valuation, Measuring and Managing the Value of Companies. Tom Copeland et al.)) , throughout the time frame of the analysis. While this simplifies the calculations, it also imposes severe restrictions on how a company’s financial strategy can be simulated.

    Now, to be able to calculate WACC we need to know the value of the company, but to calculate that value we need to know WACC. So we have a circularity problem involving the simultaneous solution of WACC and company value.
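    One common way out of this circularity – sketched here with assumed figures, not the article’s actual procedure – is fixed-point iteration: guess the entity value, derive WACC from the implied weights, revalue the cash flow, and repeat until the two are consistent.

    ```python
    # Hypothetical illustration of the WACC/value circularity: the weights
    # D/V and E/V depend on the entity value V, which is itself the cash
    # flow discounted at WACC. Fixed-point iteration resolves this.
    fcf = 100.0      # perpetual free cash flow (assumed)
    debt = 400.0     # market value of debt (assumed)
    cost_debt_after_tax = 0.05 * (1 - 0.28)
    cost_equity = 0.12

    value = 1000.0   # initial guess for entity value V
    for _ in range(100):
        equity = value - debt
        wacc = cost_debt_after_tax * debt / value + cost_equity * equity / value
        new_value = fcf / wacc      # perpetuity value, zero growth
        if abs(new_value - value) < 1e-9:
            break
        value = new_value

    print(f"WACC: {wacc:.4f}, entity value: {value:.1f}")
    ```

    At the fixed point the value implied by WACC and the WACC implied by the value weights agree, which is exactly the simultaneity the article refers to.
    
    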

    In addition, all the variables and parameters determining the company value will be stochastic, either in themselves or as functions of other stochastic variables. As such, WACC is a stochastic variable – determined by the probability distributions for yield curves, exchange rates, sales, prices, costs and investments. But this also enables us – by Monte Carlo simulation – to estimate a confidence interval for WACC.

    Some researchers have claimed that the free cash flow value only in special cases will be equal to the economic profit value. By solving the simultaneous equations, giving a different WACC for every period, we will always satisfy the identity between free cash flow and economic profit value. In fact we will use this to check that the calculations are consistent.

    We will use the most probable value for variables/parameters in the calculations. Since most of the probability distributions involved are non-symmetric (sales, prices etc.), the expected values will in general not be equal to the most probable values. And as we shall see, this is also the case for the individual values of WACC.

    WACC

    To be consistent with the free cash flow or economic profit approach, the estimated cost of capital must comprise a weighted average of the marginal cost of all sources of capital that involves cash payment – excluding non-interest bearing liabilities (in simple form):

    WACC = {C_d}(1-t)*{D/V} + {C_e}*{E/V}

    {C_d} = Pre-tax debt nominal interest rate
    {C_e} = Opportunity cost of equity,
    t = Corporate marginal tax rate
    D = Market value debt
    E = Market value of equity
    V = Market value of entity (V=D+E).

    The weights used in the calculation are the ratio between the market value of each type of debt and equity in the capital structure, and the market value of the company. To estimate WACC we then first need to establish the opportunity cost of equity and non-equity financing and then the market value weights for the capital structure.
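    In code, the WACC definition above amounts to the following; the input figures are illustrative only, not taken from the article’s tables.

    ```python
    def wacc(cost_debt, cost_equity, debt, equity, tax_rate):
        """Weighted average cost of capital: after-tax debt cost and
        equity cost weighted by their shares of entity value V = D + E."""
        value = debt + equity
        return (cost_debt * (1 - tax_rate) * debt / value
                + cost_equity * equity / value)

    # Illustrative inputs (assumed): 6% debt rate, 15% cost of equity,
    # equal market values of debt and equity, 28% corporate tax rate.
    print(f"{wacc(0.06, 0.15, 500.0, 500.0, 0.28):.4f}")  # → 0.0966
    ```
    
    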

    THE OPPORTUNITY COST OF EQUITY AND NON-EQUITY FINANCING

    To have a consistent WACC, the estimated cost of capital must:

    1. Use interest rates and cost of equity of new financing at current market rates,
    2. Be computed after corporate taxes,
    3. Be adjusted for systematic risk born by each provider of capital,
    4. Use nominal rates built from real rates and expected inflation.

    However, we need to forecast the future risk-free rates. These can usually be found from the yield curve for treasury notes, by calculating the implicit forward rates.
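    As a sketch of that extraction – with assumed spot rates, not actual treasury data – the implicit forward rate for year n follows from the no-arbitrage condition (1+s_n)^n = (1+s_{n-1})^{n-1}(1+f_n):

    ```python
    # Implicit forward rates from an assumed spot (zero-coupon) yield curve.
    # Spot rates for maturities of 1..4 years; figures are illustrative only.
    spot = [0.040, 0.045, 0.048, 0.050]

    forwards = []
    for year in range(1, len(spot)):
        # (1 + s_n)^n = (1 + s_{n-1})^(n-1) * (1 + f_n)
        f = (1 + spot[year]) ** (year + 1) / (1 + spot[year - 1]) ** year - 1
        forwards.append(f)

    print([f"{f:.3%}" for f in forwards])
    ```

    With an upward-sloping curve like this one, each forward rate lies above the corresponding spot rate.
    
    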

    THE OPPORTUNITY COST OF EQUITY

    The equation for the cost of equity (pre-investor tax), using the capital asset pricing model (CAPM), is:

    C_e = R+{beta}*M+L

    R  = risk-free rate,
    beta  = the levered systematic risk of equity,
    M  = market risk premium,
    L  = liquidity premium.

    If tax on dividend and interest income differs, the risk-free rate and the market premium have to be adjusted. Assuming a tax rate t_i on interest income:

    R’ = (1-t_i)*R  and  M’ = M+{t_i}*R.

    t_i = investor tax rate,
    R’  = tax-adjusted risk-free rate,
    M’ = tax-adjusted market premium

    The pre-tax cost of equity can then be computed as:

    C_e(pre-tax) = C_e/(1-t_d) = R/(1-t_d)+{beta}*{M/(1-t_d)}+{L/(1-t_d)}

    C_e(pre-tax) = R’/(1-t_d)+{beta}*{M’/(1-t_d)}+{L/(1-t_d)}

    where the first line applies for an investor with a tax rate t_d on all capital income, and the second line for an investor when tax on dividend and interest income differs  ((See also: Wacc and a Generalized Tax Code, Sven Husmann et al.,  Diskussionspapier 243 (2001), Universität Hannover)) .

    The long-term strategy is a debt-equity ratio of one, the un-levered beta is assumed to be 1.1 and the market risk premium 5.5%. The corporate tax rate is 28%, and the company pays all taxes on dividend. The company’s stock has low liquidity, and a liquidity premium of 2% has been added.
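    Using these parameters, the cost of equity can be sketched as below. The relevering formula (Hamada) and the 5% risk-free rate are assumptions of this sketch; the article’s own tables may use different values.

    ```python
    # CAPM cost of equity with the parameters from the text.
    unlevered_beta = 1.1
    debt_equity = 1.0        # long-term target D/E ratio
    tax_rate = 0.28
    market_premium = 0.055
    liquidity_premium = 0.02
    risk_free = 0.05         # assumed; not given in the text

    # Hamada relevering (an assumption here, not stated in the article):
    levered_beta = unlevered_beta * (1 + (1 - tax_rate) * debt_equity)
    cost_equity = risk_free + levered_beta * market_premium + liquidity_premium

    print(f"levered beta: {levered_beta:.3f}")     # → 1.892
    print(f"cost of equity: {cost_equity:.1%}")    # → 17.4%
    ```
    
    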

    [Figure: cost of equity calculation]

    In the Monte Carlo simulation all data in the tables will be recalculated for every trial (simulation), and in the end produce the basis for estimating the probability distributions for the variables. This approach will in fact create a probability distribution for every variable in the profit and loss account as well as in the balance sheet.

    THE OPPORTUNITY COST OF DEBT

    It is assumed that the pre-tax debt interest rate can be calculated using risk adjusted return on capital (RAROC) as follows:

    Lenders Cost = L_C+L_L+L_A+L_RP

    L_C = Lenders Funding Cost (0.5%),
    L_L = Lenders Average Expected Loss (1.5%),
    L_A = Lenders Administration Cost (0.8%),
    L_RP= Lenders Risk Premium (0.5%).

    The parameters (and volatility) have to be estimated for the different types of debt involved. In this case there are two types: short-term with a maturity of four years and long-term with a maturity of ten years. The risk-free rates are taken from the implicit forward rates in the yield curve, and the lenders’ cost is set to 3.3%.
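    The RAROC decomposition above sums directly to the 3.3% used:

    ```python
    # RAROC components of the pre-tax debt rate margin (figures from the text).
    funding_cost = 0.005    # L_C: lenders' funding cost
    expected_loss = 0.015   # L_L: lenders' average expected loss
    administration = 0.008  # L_A: lenders' administration cost
    risk_premium = 0.005    # L_RP: lenders' risk premium

    lenders_cost = funding_cost + expected_loss + administration + risk_premium
    print(f"{lenders_cost:.1%}")  # → 3.3%
    ```
    
    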

    In every period the cost and value of debt are recalculated using the current rates for that maturity, ensuring use of the current (future) opportunity cost of debt.

    THE MARKET VALUE WEIGHTS

    By solving the simultaneous equations, we find the market value for each type of debt and equity:

    And the value weights:

    Multiplying the value weights by the respective rates and adding gives us the periodic most probable WACC rate:

    As can be seen from the table above, the rate varies slightly from year to year. The relatively small differences are mainly due to the low gearing in the forecast period.

    MONTE CARLO SIMULATION

    In the figure below we have shown the result from simulation of the company’s operations, and the resulting WACC for 2002. This shows that the expected value of WACC is 17.4%, compared with the most probable value of 18.9%. This indicates that the company will need more capital in the future, and that an increasing part will be financed by debt. A graph of the probability distributions for the yearly capital transactions (debt and equity) in the forecast period would have confirmed this.

    In the figure the red curve indicates the cumulative probability distribution for the value of WACC in this period and the blue columns the frequencies. By drawing horizontal lines on the probability axis (left), we can find confidence intervals for WACC. In this case there is only a 5% probability that WACC will be less than 15%, and a 95% probability that it will be less than 20%. So we can expect WACC for 2002 with 90% probability to fall between 15% and 20%. The variation is quite high  – with a coefficient of variation of 6.8 ((Coefficient of variation = 100*st.dev/mean)).
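    How such a confidence interval is read off a simulated distribution can be sketched as follows; the input distributions here are purely hypothetical stand-ins for the model’s yield curves, sales, prices and costs:

    ```python
    import random

    random.seed(42)

    def simulate_wacc():
        # Hypothetical stochastic inputs; the real model derives these
        # from yield curves, sales, prices, costs and investments.
        cost_debt = random.gauss(0.06, 0.005)
        cost_equity = random.gauss(0.19, 0.015)
        debt_share = min(max(random.gauss(0.15, 0.05), 0.0), 0.6)
        return (cost_debt * (1 - 0.28) * debt_share
                + cost_equity * (1 - debt_share))

    trials = sorted(simulate_wacc() for _ in range(10_000))

    # The 5th and 95th percentiles bound a 90% confidence interval.
    p05 = trials[int(0.05 * len(trials))]
    p95 = trials[int(0.95 * len(trials))]
    print(f"90% confidence interval for WACC: [{p05:.1%}, {p95:.1%}]")
    ```

    Drawing the horizontal lines on the cumulative curve, as described above, is exactly this percentile lookup on the sorted trials.
    
    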

    VALUATION

    The value of the company and the resulting value of equity can be calculated using either the free cash flow or the economic profit approach. Correctly done, both give the same value. This is the final test for consistency in the business model. The calculations are given in the tables below, and calculated as the value at end of every year in the forecast period.

    As usual, the market value of free cash flow is the discounted value of the yearly free cash flow in the forecast period, while the continuing value is the value of continued operation after the forecast period. All surplus cash is paid out as dividend, so there are no excess marketable securities.

    The company started operations in 2002 after having made the initial investments. The charge on capital is the WACC rate multiplied by the value of invested capital. In this case capital at the beginning of each period is used, but average capital or capital at the end could have been used with a suitable definition of the capital charge.
    Economic profit has been calculated by multiplying ROIC – WACC by invested capital, and the market value at any period is the net present value of future economic profit. The value of debt – the net present value of future debt payments – is equal for both methods.

    Using the same series of WACC rates when discounting the cash flows, both methods give the same value for both the company and the equity. This ensures that the calculations are both correct and consistent.

    Tore Olafsen and John Martin Dervå
