Monte Carlo – Strategy @ Risk

Tag: Monte Carlo

  • Simulation of balance sheet risk


    This entry is part 6 of 6 in the series Balance simulation


    As I wrote in the article about balance sheet risk, a company with covenants in its loan agreements may have to hedge balance sheet risk even though it is not optimal from a market risk perspective.

    But how can the company know which covenant to hedge?  Often a company will have more than one covenant, and hedging one of them may adversely impact another.  To answer the question it is necessary to calculate the effect of each hedging strategy, and the best way to do that is with a simulation model.  Such a model can give the answer by estimating the probability of breaching a covenant.

    Choosing a hedging strategy demands knowledge of which covenant is most at risk.  How likely is it that the company will face a breach?  As I described in the previous article:

    Which hedging strategy the company chooses depends on which covenant is most at risk.  There are inherent conflicts between the different hedging strategies, and therefore it is necessary to make a thorough assessment before implementing any such hedging strategy.

    In addition:

    If the company hedges gearing, the size of the equity will be more at risk [..]. And in addition, drawing a larger proportion of debt in the home (or functional) currency may imply an increase in economic risk.  [..] Hence, if the company does not have to hedge gearing it should hedge its equity.

    To analyse the impact of different strategies and to answer the questions above I have included simulation of currency rates in the example from the previous article:

    simulation model balance sheet risk

    The result of each strategy choice given a +/- 10% change in currency rates was shown in the previous article.  But that model cannot tell us how likely it is that the company will face a breach situation.  How large a change in currency rates can the company withstand?

    To look at this issue I have modelled the currency rates as follows:

    • Rates at the last day of every quarter from 31/12/2002 to 30/06/2013.  The reason for choosing these dates is of course that they are the dates on which the balance sheet is measured.  It does not matter if the currency rates are unproblematic on March 1st if they are problematic on March 31st, because that is the date when the books are closed for Q1 and the balance sheet is measured.
    • I have analysed the rates using @RISK in Excel, which can fit a probability distribution to the historical rates.  There are, of course, many methods for estimating currency rates and I will get back to that later.  But this method has a clear advantage: the basis is rates that have actually occurred.

    The closest fit to the data was a Laplace distribution ((RiskLaplace(μ,σ) specifies a Laplace distribution with the entered μ location and σ scale parameters. The Laplace distribution is sometimes called a “double exponential distribution” because it resembles two exponential distributions placed back to back, positioned with the entered location parameter.)) for EUR and a uniform distribution ((RiskUniform(minimum,maximum) specifies a uniform probability distribution with the entered minimum and maximum values. Every value across the range of the uniform distribution has an equal likelihood of occurrence.)) for USD against NOK.
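
    Outside @RISK, marginals like these are easy to sample with standard libraries. A minimal sketch using numpy – the location/scale and minimum/maximum parameters below are illustrative assumptions, not the fitted values from this analysis:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 10_000  # simulated quarter-end rates

# Illustrative parameters only -- the article does not state the fitted values.
eurnok = rng.laplace(loc=8.0, scale=0.5, size=n)  # peaked around 8, long tails
usdnok = rng.uniform(low=5.0, high=9.0, size=n)   # every level equally likely

print(f"EUR/NOK: mean {eurnok.mean():.2f}, "
      f"5%-95% range {np.percentile(eurnok, 5):.2f}-{np.percentile(eurnok, 95):.2f}")
print(f"USD/NOK: mean {usdnok.mean():.2f}")
```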

    [Figure: estimated distribution curves]

    It is always a good idea to ask yourself if the fitted result has a good story behind it.  Is it logical?  What we want is a good estimate of future currency rates.  If the logic is hard to see, we should go back and analyse more.  But in my opinion there is a good logic/story behind these estimates:

    • EUR against NOK is so-called mean-reverting, meaning that it normally reverts to a level of around 8 NOK +/- for 1 EUR.  Hence, the curve is peaked and has long tails.  We will most likely have to pay 8 NOK for 1 EUR, but the rate can move quite a bit away from the expected mean, both up and down.
    • USD is more unpredictable against NOK, and a uniform curve, with any level of USD/NOK equally likely, sounds like a good estimate.

    In addition to the probability curves for USD and EUR, an estimate of the correlation between them is needed.  I used the same historical data to calculate the historical correlation; on the end-of-quarter rates it has been 0,39.  A positive correlation means that the rates tend to move the same way – if one goes up, so does the other.  The reason is that it is the NOK that moves against both currencies.  That too is a reasonable assessment, I believe; history has shown it to be the case.
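
    One common way to reproduce such a correlation in a simulation is a Gaussian copula: draw correlated standard normals, map them to uniforms, and transform those to the fitted marginals. A minimal sketch, again with illustrative marginal parameters (@RISK handles this internally with rank-order correlation; the copula below is just one way to get a similar effect):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
n = 10_000
rho = 0.39  # historical end-of-quarter correlation

# Correlated standard normals...
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)

# ...mapped to uniforms (the copula), then to the fitted marginals.
u = stats.norm.cdf(z)
eurnok = stats.laplace.ppf(u[:, 0], loc=8.0, scale=0.5)  # illustrative parameters
usdnok = stats.uniform.ppf(u[:, 1], loc=5.0, scale=4.0)  # uniform on [5, 9]

# The realised correlation is close to, though not exactly, the target.
print(f"realised correlation: {np.corrcoef(eurnok, usdnok)[0, 1]:.2f}")
```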

    Now we have all the information needed to simulate how much at risk our (simple) balance sheet is to adverse currency movements.  And based on the simulation, the answer is: Quite a bit.

    I have modeled the following covenants:

    • Gearing < 1,5
    • Equity > 3 000
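
    Before looking at the results, here is a minimal sketch of how such breach probabilities can be estimated by Monte Carlo simulation. The opening balance, currency mix and distribution parameters are invented for illustration and are not the figures behind the results reported below:

```python
import numpy as np

rng = np.random.default_rng(seed=7)
n = 100_000

# Hypothetical opening balance in NOK -- invented for illustration.
assets_eur, assets_usd, assets_nok = 2_000.0, 1_000.0, 4_500.0
debt_nok = 4_200.0
eur0, usd0 = 8.0, 6.0  # rates at which the books were last measured

# Simulated quarter-end rates (illustrative parameters, cf. the fitted curves).
eur = rng.laplace(8.0, 0.5, n)
usd = rng.uniform(5.0, 9.0, n)

# Revalue the currency assets; all debt in NOK (the "original mix" strategy).
assets = assets_eur * eur / eur0 + assets_usd * usd / usd0 + assets_nok
equity = assets - debt_nok
gearing = np.where(equity > 0, debt_nok / equity, np.inf)

print(f"P(gearing >= 1,5):  {np.mean(gearing >= 1.5):.1%}")
print(f"P(equity < 3 000):  {np.mean(equity < 3_000.0):.1%}")
```

    Repeating the revaluation with debt drawn partly in EUR and USD would give the corresponding breach probabilities for the hedging strategies.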

    This is the result of the simulation:

    [Figure: Simulation results]

    Gearing is the covenant most at risk, as the tables/graphs show.  Both in the original mix (all debt in NOK) and if the company is hedging equity there is a high likelihood of breaching the gearing covenant.

    There is a 22% probability of breach in the first case (all debt in NOK) and a 23% probability in the second (equity hedge).  This is a rather high probability, considering that the NOK may move quite a bit, quite quickly.

    The equity covenant is less at risk and has more headroom.  There is a 13% probability of breach with all debt in NOK, but 0% should the company choose either of the two hedging strategies.  This is because currency loans reduce risk, regardless of whether the debt hedges the assets fully or only partially.

    Hence, based on this example it is easy to give advice to the company.  The company should hedge gearing by drawing debt in a mix of currencies reflecting its assets.  Reality is of course more complex than this example, but the mechanism will be the same.  And the need for an accurate decision criterion – the likelihood of breach – grows with the complexity of the business.

    [Figure: debt levels under different strategies]

    One thing that complicates the picture is the impact different strategies have on the company’s debt.  Debt levels may vary substantially, depending on the choice of strategy.

    If the company has to refinance some of its debt, and at the same time there is a negative impact on the value of the debt (weaker home currency), the refinancing need will be substantially higher than it would have been with local debt.  These, too, are answers the simulation model can provide.

    The answer to the question “How likely is it that the company will breach its covenants, and what are the consequences of strategic choices on key figures, debt and equity?” is something only a good simulation model can give.

    Originally published in Norwegian.

  • Big Issues Need Big Tools


    This entry is part 3 of 3 in the series What We Do

     

    You can always amend a big plan, but you can never expand a little one. I don’t believe in little plans. I believe in plans big enough to meet a situation which we can’t possibly foresee now. Harry S. Truman : American statesman (33rd US president: 1945-53)

    We believe you know your business best and will, in that capacity, implement the necessary resources, competence, tools and methods for running a successful and efficient organization.

    Still, issues related to uncertainty – whether in finance, stakeholders, production, purchasing or sales – have in most cases grown with the increasing complexity of the business environment. Excellent systems for a range of processes – consolidation, customer relationship management, accounting – have kept up with these increasingly complex environments, and so has your most important tool: people.

    But we believe you do not yet possess the best available method and tool for bringing people – competence, experience, economic/financial facts and assumptions – and economic/financial tools together.

    You know that your budgets, valuations, projections, estimates and scenario analyses are all made and presented with valuable information about uncertainty left out along the way. This may be because human experience of risk is hard to capture, because analyzing, understanding and projecting macro or micro risks is difficult, or because the tools were never designed to capture risk. The fact remains that most of the complex, big issues important to companies are decided on insufficient information, a portion of luck, gut feeling and beliefs in market turns/stability/cycles or other comforting assumptions shared by peers.

    Or you are restricted to giving guidelines, min/max orders and specifications to third-party experts whom you hope are better able to capture risk and potential in a narrow area of expertise. Regardless of whether this risk spreading or differentiation works, you need the best assistance in setting your guidelines and road-map, both for your internal and your external resources.

    Systems and methods ((A Skeptic’s Guide to Computer Models (Pdf, pp 25), by John D. Sterman. This paper is reprinted from Sterman, J. D. (1991). A Skeptic’s Guide to Computer Models. In Barney, G. O. et al., Managing a Nation: The Microcomputer Software Catalog. Boulder, CO: Westview Press, 209-229.)) are never better than human experience, knowledge and excellence. But if you want a method and tool that can capture the best of your existing decision-making process and bring it to a new level, you should look closer at a stochastic, complete P&L/balance simulation model for those big issues and big decisions.

    If you are not familiar with stochastic simulations and probability distributions, take a look at a report for the most likely outcome (Pdf, pp 32) from the simulations. Similar reports could have been made for the outcomes you would not have liked to see, giving a heads-up on the sources of downside risk, or for outcomes you would have loved to see, explaining the generators of upside possibilities.

    Endnotes

  • Stochastic Balance Simulation


    This entry is part 1 of 6 in the series Balance simulation

    Introduction

    Most companies have some sort of model describing the company’s operations. These are mostly used for budgeting, but in some cases also for forecasting cash flow and other important performance measures. Almost all are deterministic models based on single-value forecasts: the expected or average values of the input data – sales, costs, interest and currency rates etc. We know, however, that forecasts based on average values are on average wrong (Savage, 2002). In addition, deterministic models miss the important dimension of uncertainty – the dimension that reveals both the different risks facing the company and the opportunities they produce.

    In contrast, a stochastic model is calculated a large number of times with different values for the input variables, drawn from all possible values of the individual variables. Each run then gives one probable realization of future cash flow, of the company’s equity value, etc. With thousands of runs we can plot the relative frequencies of the calculated values:

    and thus, we have succeeded in generating the probability distribution for the company’s equity value. In insurance this type of technique is often called Dynamic Financial Analysis (DFA) which actually is a fitting name.
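
    The principle is easy to demonstrate with a toy model. The sketch below is not the S&R balance model – just a hypothetical single-period valuation with a handful of stochastic inputs – but it shows how repeated runs build up the distribution of equity value:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=42)
n_runs = 10_000

# Hypothetical inputs: units sold, price, EBITDA margin -- all stochastic.
sales = rng.normal(1_000.0, 150.0, n_runs)
price = rng.normal(12.0, 1.0, n_runs)
margin = rng.triangular(0.10, 0.18, 0.22, n_runs)

multiple = 6.0   # valuation multiple on EBITDA (fixed, for simplicity)
debt = 4_000.0

# One equity value per run; thousands of runs give the distribution.
equity_value = sales * price * margin * multiple - debt

plt.hist(equity_value, bins=80, density=True)
plt.xlabel("Equity value")
plt.ylabel("Relative frequency")
plt.show()
```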

    The Balance Simulation Model

    The main tool in the S&R toolbox is the balance model. The starting point is the company’s balance sheet, which is treated as the simulation’s opening balance. In the case of a greenfield project – new factories, power plants, airports, etc. built from scratch – the opening balance is empty.

    The successive balances are then built from the profit & loss account, by simulating the company’s operations through an EBITDA model mimicking real-life operations. Investments can be driven by demand (capacity calculations) or by investment programs giving the necessary or planned production capacity. Throughout the simulation the model will raise debt (short- and/or long-term) or equity (domestic or foreign) according to the financial strategy set out by the company and the difference between cash outflow and inflow, adjusted for the minimum cash level.

    Since this is a dynamic model, it will raise equity when losses occur and/or the maximum debt/equity ratio has been exceeded. On the other hand it will repay loans, pay dividends, repurchase shares or purchase marketable securities with excess cash (cash above operational needs) – all in line with the board’s shareholder strategy.

    The Ledger and Double-entry Bookkeeping

    The activity described in the EBITDA model – investments, purchase of raw materials, production, payment of wages, income from sales, payment of special taxes on investments etc. – is registered as transactions in the ledger, following a standard chart of accounts with double-entry bookkeeping. All financial transactions – loan repayments, cash movements, taxes paid and deferred, agio and disagio, etc. – are posted in the ledger in a similar fashion. Currently, approximately 400 accounts are in use.

    The Trial Balance and the Financial Statements

    The trial balance (post-closing) is compiled and checked for balance between total debits and total credits. The income statement is then prepared using the revenue and expense accounts from the trial balance, and the balance sheet is prepared from the asset and liability accounts by including net income with the other equity accounts – using the International Financial Reporting Standards (IFRS).

    The general purpose of producing the trial balance is to ensure that the entries in the ledger are mathematically correct. Bear in mind that every run in a simulation will produce a number of entries in the ledger and that they may differ not only in size but also in type, depending on the realized states of the company’s operations (see above). We therefore need to be sure that the final financial statements – for every run – are correctly produced, since they will be the basis for all further financial analysis of the company.

    There are of course other sources of error in bookkeeping – compensating errors, errors of omission, errors of principle etc. – but after many years of use, with millions of runs, we feel confident that the ledger and financial statements are produced correctly. The point is that serious problems need serious models.

    However there are more benefits to be had from simulating the ledger and trial balance:

    1. It increases the model’s transparency; the trial balance can be printed out and audited. Together with the model’s extensive reporting and error/consistency control, it is no longer a ‘black box’ to the user.
    2. It makes it easy to plug in new EBITDA models for other types of industry, giving an automated check for consistency with the main balance simulation model.
    3. It is used to ensure correct solving of all implicit equations in the model. The most obvious is the interest and bank balance equation (interest depends on the bank balance and the bank balance depends on the interest – see the sketch after this list), but others, like translation hedging and the limits set by the company’s financial strategy, create large and complicated systems of simultaneous equations.
    4. The trial balance changes from year to year are also used to ensure a correct year-to-year balance transition.
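
    As a minimal illustration of the implicit-equation point in item 3, the interest/bank-balance circularity can be solved by fixed-point iteration. This is a hypothetical one-period example; the actual model solves far larger simultaneous systems:

```python
def solve_interest(opening_balance: float, net_cash_flow: float,
                   rate: float, tol: float = 1e-9) -> tuple[float, float]:
    """Interest is earned on the average balance, but the closing balance
    depends on the interest received -- iterate until self-consistent."""
    interest = 0.0
    while True:
        closing = opening_balance + net_cash_flow + interest
        new_interest = rate * (opening_balance + closing) / 2.0
        if abs(new_interest - interest) < tol:
            return closing, new_interest
        interest = new_interest

closing, interest = solve_interest(opening_balance=100.0, net_cash_flow=20.0, rate=0.04)
print(f"closing balance {closing:.4f}, interest {interest:.4f}")
```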

    Financial Analysis, Financial Measures and Valuation

    Given the framework described above financial analysis can be performed and the expected value, variability and probability distributions for the different types of ratios; profitability, liquidity, activity, debt and equity etc. can be calculated and given as graphs. All important measures are calculated at least twice from different starting points to ensure consistency and correct solving of implicit equations.

    The following table shows the reconciliation of economic profit, initially calculated as (ROIC – WACC) multiplied by invested capital:
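
    In standard notation (ours, not necessarily the model’s), the two starting points that must reconcile are:

```latex
\mathrm{EP}_t \;=\; (\mathrm{ROIC}_t-\mathrm{WACC}_t)\cdot \mathrm{IC}_{t-1}
\;=\; \mathrm{NOPLAT}_t-\mathrm{WACC}_t\cdot \mathrm{IC}_{t-1},
\qquad \mathrm{ROIC}_t=\frac{\mathrm{NOPLAT}_t}{\mathrm{IC}_{t-1}}
```

    Computing economic profit both ways, for every run and every period, is one of the consistency checks referred to below.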

    The motivation for doing all these consistency controls – in all nearly one hundred – lies in previous experience with cash flow/valuation models written in Excel. Their level of detail is more often than not so low that there is no way to establish whether they are right or wrong.

    More interesting than the ratios are the yearly distributions for EBITDA, EBIT, NOPLAT, profit (loss) for the period, free cash flow, economic profit, ROIC, WACC, debt, equity, equity value etc., giving a visual picture of the uncertainties and risks the company faces:

    Financial analysis is the conversion of financial data into useful information for decision making. Virtually any use of financial statements or other financial data for some purpose is therefore financial analysis, and it is the primary focus of accounting and finance. Financial analysis can be internal (e.g., decision analysis by a company using internal data to understand or improve management and operating results) or external (e.g., comprehensive analysis for the purposes of commercial lending, mergers and acquisitions, or investment activities). The key is how to analyze the available data to make correct decisions.

     

    Input

    As input the model needs parameter values and operational data. The parameter values fall into groups such as:

    1. Parameters describing investors’ preferences: market risk premium etc.
    2. Parameters describing the company’s financial strategy; Leverage, Long/Short-term Debt ratio, Expected Foreign/ Domestic Debt Ratio, Economic Depreciation, Maximum Dividend Pay-out Ratio, Translation Hedging Strategy etc.
    3. Parameters describing the economic regime under which it operates: Taxes, Depreciation Scheme etc.
    4. Opening Balance etc.

    Since the model has to produce stochastic forecasts of interest and exchange rates, it needs, for every currency involved (including lower and upper 5% probability limits):

    1. the yield curve,
    2. expected yearly inflation,
    3. depending on the forecast method(s) chosen for the exchange rates: the different currencies’ expected risk premiums or real exchange rates etc.

    Since there is a large number of parameters, they are usually read from an Excel template, but the program will, if necessary, ask for missing parameter values or report inconsistent ones.

    The company’s operations are best described through an EBITDA model, even if prices, costs, production coefficients and their variability can be read from an Excel template. A dedicated EBITDA model will always give the opportunity for a more detailed and in some cases more complex description of the operations, including forecast and demand models, ‘exotic’ taxes, real-options strategies etc.

    Output

    S@R has set out to create models that can give answers to both deterministic and stochastic questions; the tables will answer most deterministic issues, while graphs must be used to answer the risk- and uncertainty-related questions:


    1. In all, 27 different reports with more than 70 pages describing the operations and their economics.
    2. In addition, probability distributions for all input and output variables are produced.

    Use

    Dedicated EBITDA models are linked to a holistic balance simulation, taking into account all important factors describing the company. The basis is a real balance simulation model – not a simple cash flow forecast model.

    Both the deterministic and the stochastic balance simulation can be set up in two different ways:
    1. by using an EBITDA model to describe the company’s operations, or
    2. by using coefficients of fabrication (e.g. kg of flour per 1000 loaves of bread) as direct input to the balance model.

    The first approach implies setting up a dedicated EBITDA model describing the company’s performance and uncertainty, but entails a higher degree of effort from both the company and S@R.

    The use of coefficients of fabrication and their variations is a low-effort (low-cost) alternative, using the internal accounting as its basis. This will in many cases give a ‘good enough’ description of the company – its risks and opportunities. The data needed to describe the company’s economic environment (taxes, interest rates etc.) are the same in both alternatives.

    In some cases we have used both approaches for the same client, using the last approach for smaller daughter companies with production structures differing from the main companies.
    The second approach can also be considered as an introduction and stepping stone to a more holistic EBITDA model.
    What problems do we solve?

    • The aim, regardless of approach, is to quantify not only the company’s single and aggregated risks but also its potential, making the company capable of detailed planning and of executing earlier and more apt actions against risk factors.
    • This will improve budget stability through greater insight into cost-side risks and income-side potentials. This is achieved by an active budget-forecast process; the control-adjustment cycle will teach the company to target realistic budgets – with better stability and increased company value as a result.
    • Experience shows that the mere act of quantifying uncertainty throughout the company – and, through modelling, describing the interactions and their effects on profit – in itself over time reduces total risk and increases profitability.
    • This is most clearly seen when effort is put into correctly evaluating the effects of strategies, projects and investments on the enterprise. The best way to do this is to compare strategies by analyzing each strategy’s risks and potential, and to select the alternative that is stochastically dominant given the company’s chosen risk profile.
    • Our aim is therefore to transform enterprise risk management from merely safeguarding enterprise value to contributing to the increase and maximization of the firm’s value within its feasible set of possibilities.

    Strategy@Risk takes advantage of a programming language developed and used for financial risk simulation. We have used this language for over 25 years and have developed a series of simulation models for industry, banks and financial institutions.

    One of the language’s strengths is its ability to solve implicit equations in multiple dimensions. For the specific problems we seek to solve this is a necessity, providing the degrees of freedom needed to formulate the approach to each problem.

    The Strategy@Risk tools have highly advanced properties:

    • Using models written in dedicated financial simulation language (with code and data separated; see The risk of spreadsheet errors).
    • Solving implicit systems of equations giving unique WACC calculated for every period ensuring that “Free Cash Flow” always equals “Economic Profit” value.
    • Programs and models in “windows end-user” style.
    • Extended test for consistency in input, calculations and results.
    • Transparent reporting of assumptions and results.

    References

    Savage, Sam L. “The Flaw of Averages”, Harvard Business Review, November 2002, pp. 20-21

    Mukherjee, Mukherjee (2003). Financial Accounting. New York: Harper Perennial, ISBN 9780070581555.

  • Public Works Projects


    This entry is part 2 of 4 in the series The fallacies of scenario analysis

     

    It always takes longer than you expect, even when you take into account Hofstadter’s Law. (Hofstadter, 1999)

    In public works and large scale construction or engineering projects – where uncertainty mostly (only) concerns cost, a simplified scenario analysis is often used.

    Costing Errors

    An excellent study carried out by Flyvbjerg, Holm and Buhl (2002) addresses the serious questions surrounding the chronic costing errors in public works projects. The purpose was to identify the typical deviations from budget and the specifics of the major causes of these deviations:

    The main findings from the study reported in their article – all highly significant and most likely conservative – are as follows:

    In 9 out of 10 transportation infrastructure projects, costs are underestimated. For a randomly selected project, the probability of actual costs being larger than estimated costs is  0.86. The probability of actual costs being lower than or equal to estimated costs is only 0.14. For all project types, actual costs are on average 28% higher than estimated costs.

    Cost underestimation:

    – exists across 20 nations and 5 continents: it appears to be a global phenomenon;
    – has not decreased over the past 70 years: no improvement in cost estimate accuracy;
    – cannot be excused by error: it seems best explained by strategic misrepresentation, i.e. the planned, systematic distortion or misstatement of facts in the budget process (Jones, Euske, 1991).

    Demand Forecast Errors

    The demand forecasts only add more error to the final equation (Flyvbjerg, Holm, Buhl, 2005):

    • 84 percent of rail passenger forecasts are wrong by more than ±20 percent.
    • 50 percent of road traffic forecasts are wrong by more than ±20 percent.
    • Errors in traffic forecasts are found in the 14 nations and 5 continents covered by the study.
    • Inaccuracy is constant for the 30-year period covered: no improvement over time.

    The Machiavellian Formulae

    Adding the cost and demand errors to other uncertain effects, we get:

    Machiavelli’s Formulae:
    Overestimated revenues + Overvalued development effects – Underestimated cost – Undervalued environmental impact = Project Approval (Flyvbjerg, 2007)

    Cost Projections

    Transportation infrastructure projects do not appear to be more prone to cost underestimation than are other types of large projects like: power plants, dams, water distribution, oil and gas extraction, information technology systems, aerospace systems, and weapons systems.

    All of the findings above should be considered forms of risk. As has been shown in cost engineering research, poor risk analysis accounts for many project cost overruns.
    Two components of error in the cost estimate can easily be identified (Bertisen, 2008):

    • Economic components: these errors are the result of incorrectly forecasted exchange rates, inflation rates of unit prices, fuel prices, or other economic variables affecting the realized nominal cost. Many of these variables have positively skewed distributions, which feeds through to positive skewness in the total cost distribution.
    • Engineering components: these relate to errors both in estimating unit prices and in the required quantities. There may also be an over- or underestimation of the contingency needed to capture excluded items. Cost and quantity errors are limited on the downside, but there is no limit on the upside. For many cost and quantity items there is also a small probability of a “catastrophic event” that would dramatically increase costs or quantities.

    When combining these factors the result is likely to be a positively skewed cost distribution, with many small and large underrun and overrun deviations (from the most likely value) joined by a few very large or catastrophic overrun deviations.

    Since the total cost (distribution) is positively skewed, expected cost can be considerably higher than the calculated most likely cost.
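
    The gap between the most likely and the expected cost is easy to demonstrate. A minimal sketch, assuming a purely illustrative lognormal total-cost distribution:

```python
import numpy as np

rng = np.random.default_rng(seed=3)

# Illustrative positively skewed total-cost distribution (lognormal, median 100).
cost = rng.lognormal(mean=np.log(100.0), sigma=0.4, size=1_000_000)

mode = 100.0 * np.exp(-0.4**2)  # analytic mode (most likely cost) of a lognormal
print(f"most likely cost ~ {mode:.0f}")
print(f"expected cost    ~ {cost.mean():.0f}")             # noticeably higher
print(f"P(cost > most likely cost) = {np.mean(cost > mode):.0%}")
```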

    We will keep these findings as a backdrop when we examine the Norwegian Ministry of Finance’s guidelines for assessing risk in public works (Ministry of Finance, 2008, pp 3), where total uncertainty is set equal to the sum of systematic and unsystematic uncertainty:

    Interpreting the guidelines, we find the following assumptions and advice:

    1. Unsystematic risk cancels out in large portfolios of projects.
    2. All systematic risk is perfectly correlated with the business cycle.
    3. Total cost is approximately normally distributed.

    Since total risk is equal to the sum of systematic and unsystematic risk, unsystematic risk will, by the 2nd assumption, comprise all uncertainty not explained by the business cycle – that is, all uncertainty in the planning, mass calculations and production of the project.

    It is usually in these tasks that a project’s inherent risks are later revealed. Based on the studies above it is reasonable to believe that unsystematic risk has a skewed distribution located in its entirety on the positive part of the cost axis, i.e. it will not cancel out even in a portfolio of projects.

    The 2nd assumption, that all systematic risk is perfectly correlated with the business cycle, is a convenient one. It allows a simple summation of percentiles (10%/90%) across all cost variables to arrive at the total cost percentiles (see the previous post in this series).

    The effect of this assumption is that the risk model degenerates into one with only a single stochastic variable; everything else can be calculated from the outcomes of the “business cycle” distribution.

    Now, we know that delivery times, quality and prices for all equipment, machinery and raw materials depend on the activity level in all countries demanding or producing the same items. So even if there existed a “business cycle” for every item (and a measure for it), these cycles would not necessarily be perfectly synchronised – which disproves the assumption.

    The 3rd assumption implies either that all individual cost distributions are “near normal” or that they are independent and identically-distributed with finite variance, so that the central limit theorem can be applied.

    However, the individual cost distributions will be the product of unit price, exchange rate and quantity, so even if each element in the multiplication has a normal distribution, the product will not.
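
    This is quick to verify by simulation. A minimal sketch with illustrative parameters:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=5)
n = 1_000_000

# Cost item = unit price * exchange rate * quantity; each factor normal (illustrative).
price = rng.normal(100.0, 15.0, n)
fx = rng.normal(8.0, 0.8, n)
quantity = rng.normal(1_000.0, 150.0, n)

cost = price * fx * quantity
# Each factor is symmetric (skewness ~ 0), yet the product is positively skewed.
print(f"skewness of the product: {stats.skew(cost):.2f}")
```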

    Invoking the central limit theorem is also a no-go: since the cost elements are, by the 2nd assumption, perfectly correlated, they cannot be independent.

    All experience and every study conclude that the total cost distribution is not normal. The cost distribution is evidently positively skewed with fat tails, whereas the normal distribution is symmetric with thin tails.

    Our concerns about the wisdom of the 3rd assumption were confirmed in 2014; see The implementation of the Norwegian Governmental Project Risk Assessment Scheme and the following articles.

    The solution to all this is to establish a proper simulation model for every large project and do the Monte Carlo simulation necessary to establish the total cost distribution, and then calculate the risks involved.

    “If we arrive, as our forefathers did, at the scene of battle inadequately equipped, incorrectly trained and mentally unprepared, then this failure will be a criminal one because there has been ample warning” — (Elliott-Bateman, 1967)

    References

    Bertisen, J. and Davis, Graham A. (2008). “Bias and Error in Mine Project Capital Cost Estimation”. Engineering Economist, 01-APR-08.

    Elliott-Bateman, M. (1967). Defeat in the East: the mark of Mao Tse-tung on war. London: Oxford University Press.

    Flyvbjerg, Bent (2007). “Truth and Lies about Megaprojects”, Inaugural speech, Delft University of Technology, September 26.

    Flyvbjerg, Bent, Mette K. Skamris Holm, and Søren L. Buhl (2002), “Underestimating Costs in Public Works Projects: Error or Lie?” Journal of the American Planning Association, vol. 68, no. 3, 279-295.

    Flyvbjerg, Bent, Mette K. Skamris Holm, and Søren L. Buhl (2005), “How (In)accurate Are Demand Forecasts in Public Works Projects?” Journal of the American Planning Association, vol. 71, no. 2, 131-146.

    Hofstadter, D., (1999). Gödel, Escher, Bach. New York: Basic Books

    Jones, L.R. and Euske, K.J. (1991). “Strategic Misrepresentation in Budgeting”. Journal of Public Administration Research and Theory, 1(4), 437-460.

    Ministry of Finance (Norway) (2008). Systematisk usikkerhet [Systematic uncertainty]. Retrieved July 3, 2009, from The Concept research programme Web site: http://www.ivt.ntnu.no/bat/pa/forskning/Concept/KS-ordningen/Dokumenter/Veileder%20nr%204%20Systematisk%20usikkerhet%2011_3_2008.pdf

  • The fallacies of Scenario analysis


    This entry is part 1 of 4 in the series The fallacies of scenario analysis

     

    Scenario analysis is often used in company valuation – with high, low and most likely scenarios to estimate the value range and expected value. A common definition seems to be:

    Scenario analysis is a process of analyzing possible future events or series of actions by considering alternative possible outcomes (scenarios). The analysis is designed to allow improved decision-making by allowing consideration of outcomes and their implications.

    Actually this definition covers at least two different types of analysis:

    1. Alternative scenario analysis; in politics or geo-politics, scenario analysis involves modeling the possible alternative paths of a social or political environment and possibly diplomatic and war risks – “rehearsing the future”,
    2. Scenario analysis; a number of versions of the underlying mathematical problem are created to model the uncertain factors in the analysis.

    The first addresses “wicked” problems: ill-defined, ambiguous and associated with strong moral, political and professional issues. Since they are strongly stakeholder-dependent, there is often little consensus about what the problem is, let alone how to resolve it (Rittel & Webber, 1973).

    The second covers “tame” problems: problems that have well-defined and stable problem statements and belong to a class of similar problems which are all solved in the same way (Conklin, 2001). Tame, however, does not mean simple – a tame problem can be very technically complex.

    Scenario analysis in the latter sense is a compromise between computationally complex stochastic models (the S&R approach) and overly simplistic, often unrealistic deterministic models. Each scenario is a limited representation of the uncertain elements, and one sub-problem is generated for each scenario.

    Best-case/worst-case scenario analysis
    With risky assets, the actual cash flows can be very different from expectations. At the minimum, we can estimate the cash flows if everything works to perfection – a best-case scenario – and if nothing does – a worst-case scenario.

    In practice, each input into asset value is set to its best (or worst) possible outcome and the cash flows estimated with those values.

    Thus, when valuing a firm, the revenue growth rate, operating margin etc. are set at their highest possible levels while interest rates etc. are set at their lowest, and then the best-case scenario value is computed.

    The question now is whether this really is the best (or worst) value – or, if say the 95% (5%) percentile is chosen for each input, will that give the 95% (5%) percentile for the firm’s value?

    Let’s say that in the first case – (X + Y) – we want to calculate entity value by adding the ‘NPV of market value of FCF’ (X) and the ‘NPV of continuing value’ (Y). Both are stochastic variables; X is positive while Y can be positive or negative.  In the second case – (X – Y) – we want to calculate the value of equity by subtracting the value of debt (Y) from the entity value (X). Both X and Y are stochastic, positive variables.

    From statistics we know that for the joint distribution of (X ± Y) the expected value E(X ± Y) is E(X) ± E(Y), and that Var(X ± Y) is Var(X) + Var(Y) ± 2Cov(X,Y). Already from the expression for the joint variance we can see that percentile-by-percentile addition will not necessarily give the right answer, although the expected value will be the same.

    We can demonstrate this by calculating a number of percentiles for two independent normal distributions (with Cov(X,Y) = 0, to keep it simple), adding (subtracting) them, and plotting the result (red line) together with the same percentiles from the joint distribution – the blue line for (X + Y) and the green line for (X – Y).
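
    The same point can be made numerically. A minimal sketch with illustrative parameters – the percentile-by-percentile sum matches the percentile of the sum only at the centre of the distribution:

```python
import numpy as np

rng = np.random.default_rng(seed=11)
n = 1_000_000

# Two independent normals, as in the text (Cov(X,Y) = 0; parameters illustrative).
x = rng.normal(100.0, 20.0, n)
y = rng.normal(50.0, 15.0, n)

for p in (5, 50, 95):
    naive = np.percentile(x, p) + np.percentile(y, p)  # percentile-by-percentile sum
    joint = np.percentile(x + y, p)                    # percentile of the sum
    print(f"{p:>2}%: sum of percentiles {naive:6.1f} vs percentile of sum {joint:6.1f}")
```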

    [Figure: joint-distrib-1]

    As we can see, the lines for X + Y coincide only at the expected value, and the deviation increases as we move out on the tails. For X – Y the deviation is even more pronounced:

    [Figure: joint-distrib-2]

    Plotting the deviation from the joint distribution as a percentage of X ± Y demonstrates very large relative deviations as we move out on the tails, and shows that the sign of the operator totally changes the direction of the deviations:

    [Figure: pct_difference]

    Add to this, a valuation analysis with a large number of:

    1. both correlated and auto-correlated stochastic variables,
    2. complex calculations,
    3. simultaneous equations,

    and there is no way of finding out where you are on the probability distribution – unless you do a complete Monte Carlo simulation. It is like being out in the woods at night without a map and compass – you know you are in the woods but not where.

    Some advocate scenario analysis to measure risk on an asset, using the difference between the best case and the worst case. Based on the above, this can only be a very bad idea, since risk in the sense of loss is connected to the left tail, where the deviation from the joint distribution can be expected to be largest. This brings us to the next post in the series.

    References

    Rittel, H., and Webber, M. (1973). Dilemmas in a General Theory of Planning. Policy Sciences, Vol. 4, pp 155-169. Elsevier Scientific Publishing Company, Inc: Amsterdam.

    Conklin, Jeff (2001). Wicked Problems. Retrieved April 28, 2009, from CogNexus Institute Web site: http://www.cognexus.org/wpf/wickedproblems.pdf

     

  • Airport Simulation


    This entry is part 1 of 4 in the series Airports

     

    The basic building block in airport simulation is the passenger (Pax) forecast. This is the basis for subsequent estimation of aircraft movements (ATM), investment in terminal buildings and airside installations, all traffic charges, tax free sales etc. In short it is the basic determinant of the airport’s economics.

    The forecast model is usually based on a logarithmic relation between Pax, GDP and airfare price movement. ((Manual on Air Traffic Forecasting. ICAO, 2006)), ((Howard, George P. et al. Airport Economic Planning. Cambridge: MIT Press, 1974.))

    There has been a large number of studies over time and across the world on Air Travel Demand Elasticities, a good survey is given in a Canadian study ((Gillen, David W.,William G. Morrison, Christopher Stewart . “Air Travel Demand Elasticities: Concepts, Issues and Measurement.” 24 Feb 2009 http://www.fin.gc.ca/consultresp/Airtravel/airtravStdy_-eng.asp)).
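
    A minimal sketch of such a constant-elasticity (log-log) model, fitted by ordinary least squares; the data are made up for illustration and the estimated elasticities are not those of the studies cited:

```python
import numpy as np

# ln(Pax) = a + b*ln(GDP) + c*ln(AirFare): b and c are the demand elasticities.
# All data below are made up for illustration.
gdp = np.array([100.0, 102.0, 105.0, 104.0, 107.0, 110.0])
fare = np.array([80.0, 78.0, 77.0, 79.0, 75.0, 74.0])
pax = np.array([1.00, 1.05, 1.12, 1.08, 1.16, 1.25]) * 1e6

X = np.column_stack([np.ones(len(gdp)), np.log(gdp), np.log(fare)])
(a, b, c), *_ = np.linalg.lstsq(X, np.log(pax), rcond=None)
print(f"GDP elasticity ~ {b:.2f}, air fare elasticity ~ {c:.2f}")
```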

    In a recent project for a European airport – aimed at establishing an EBITDA model capable of simulating the risk in its economic operations – we embedded the Pax forecast models in the EBITDA model. Since the seasonal variations in traffic are very pronounced, and since the cycles are reversed for domestic and international traffic, a good forecast model should attempt to forecast the seasonal variations for the different groups of travellers.

    [Figure: int_dom-pax]

    In the following graph we have done just that, by adding seasonal factors to the forecast model based on the relation between Pax and changes in GDP and air fare cost. We have, however, accepted that neither is the model specification complete, nor are the seasonal factors fixed and constant. We therefore apply Monte Carlo simulation, using the estimation and forecast errors as the stochastic parts. In the figure the green line indicates the 95% limit, the blue line the mean value and the red line the 5% limit. Thus, with 90% probability, the number of monthly Pax will fall within these limits.

    [Figure: pax]

    From the graph we can clearly see the effects of the estimation and forecast “errors”, and the fact that it is international travel that increases most as GDP increases (the summer effect).
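
    A minimal sketch of how such bands can be constructed – a made-up seasonal point forecast perturbed by estimation and forecast errors, with the 5%/95% percentiles computed across the simulated paths (all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(seed=2)
n_sims, horizon = 5_000, 24  # months ahead

# Made-up point forecast: slow growth with a yearly seasonal swing.
months = np.arange(horizon)
season = 1.0 + 0.25 * np.sin(2 * np.pi * (months % 12) / 12)
point = 1e6 * (1.01 ** (months / 12)) * season

# Stochastic parts: estimation error (one draw per path) and forecast error.
coef_err = rng.normal(1.0, 0.03, (n_sims, 1))
month_err = rng.normal(1.0, 0.05, (n_sims, horizon))
paths = point * coef_err * month_err

lower = np.percentile(paths, 5, axis=0)    # red line in the figure
mean = paths.mean(axis=0)                  # blue line
upper = np.percentile(paths, 95, axis=0)   # green line
print(f"month 12: {lower[11]:,.0f} / {mean[11]:,.0f} / {upper[11]:,.0f}")
```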

    As an increase in GDP is not exactly imminent at this point in time, we supply the following graph, displaying the effects of different scenarios for growth in GDP and air fare cost.

    [Figure: pax-gdp-and-price]

    References