Simulation modeling – Strategy @ Risk

Tag: Simulation modeling

  • The implementation of the Norwegian Governmental Project Risk Assessment scheme


    This entry is part 1 of 2 in the series The Norwegian Governmental Project Risk Assessment Scheme

    Introduction

    In Norway all public investment projects with an expected budget exceeding NOK 750 million have to undergo quality assurance ((The hospital sector has its own QA scheme.)). The oil and gas sector, and state-owned companies with responsibility for their own investments, are exempt.

    The quality assurance scheme ((See The Norwegian University of Science and Technology (NTNU): The Concept Research Programme)) consists of two parts: Quality assurance of the choice of concept – QA1 (Norwegian: KS1) ((The one-page description of QA1 (Norwegian: KS1) has been taken from: NTNU’s Concept Research Programme)) and Quality assurance of the management base and cost estimates, including uncertainty analysis for the chosen project alternative – QA2 (Norwegian: KS2) ((The one-page description of QA2 (Norwegian: KS2) has been taken from: NTNU’s Concept Research Programme)).

    This scheme is similar to many other countries’ efforts to create better cost estimates for public projects. One such example is the Washington State Department of Transportation’s Cost Risk Assessment (CRA) and Cost Estimate Validation Process (CEVP®) (WSDOT, 2014).

    One of the main purposes of QA2 is to set a cost frame for the project. This cost frame is to be approved by the government and is usually set to the 85th percentile (P85) of the estimated cost distribution. The cost frame for the responsible agency is usually set to the 50th percentile (P50). The difference between P50 and P85 is set aside as a contingency reserve for the project. These are reserves that ideally should remain unused.

    The Norwegian TV program “Brennpunkt”, an investigative program produced by the state television channel NRK, put the spotlight on the effects of this scheme ((The article also contains the data used here)):

    The investigation concluded that the Ministry of Finance’s quality assurance scheme had not resulted in reduced project cost overruns and that the process as such had been very costly.

    This conclusion has of course been challenged.

    The total cost of doing the risk assessments of the 85 projects was estimated at approx. NOK 400 million, or more than $60 million. In many cases the cost of the quality assurance of the choice of concept comes in addition, a cost that is probably much higher.

    The Data

    The data was assembled during the investigation and consists of six sets, of which five give the P50 and P85 percentiles. The last set gives data on 29 projects finished before the QA2 regime was implemented (the data used in this article can be found as an XLSX file here):

    The P85 and P50 percentiles

    The first striking feature of the data is the close relation between the P85 and P50 percentiles:

    In the graph above we have only used 83 of the 85 projects with known P50 and P85. The two that are omitted are large military projects. If they had been included, all the details in the graph would have disappeared. We will treat these two projects separately later in the article.

    A regression gives the relationship between P85 and P50 as:

    P85 = (1.1001 ± 0.0113) * P50, with R = 0.9970

    The regression gives an exceptionally good fit. Even if the graph shows some projects deviating from the regression line, most fall on or close to the line.

    With 83 projects this can’t be coincidental, even if the data represents a wide variety of government projects, spanning from railways and roads to military hardware like tanks and missiles.
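    The coefficient and its standard error above come from a least-squares regression of P85 on P50 through the origin. A minimal sketch of that calculation, using hypothetical placeholder data in place of the 83 projects (the real figures are in the XLSX file referenced above):

```python
import numpy as np

# Hypothetical P50/P85 pairs in NOK million - placeholders for the 83 projects
p50 = np.array([120., 450., 800., 1500., 2300.])
p85 = np.array([131., 497., 885., 1643., 2541.])

b = np.sum(p50 * p85) / np.sum(p50**2)   # slope of a regression through the origin
resid = p85 - b * p50
se_b = np.sqrt(np.sum(resid**2) / (len(p50) - 1) / np.sum(p50**2))  # std. error of slope
r = np.corrcoef(p50, p85)[0, 1]
print(f"P85 = ({b:.4f} +/- {se_b:.4f}) * P50, R = {r:.4f}")
```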

    The Project Cost Distribution

    There is not much else to be inferred about the type of cost distribution from the graph. We do not know whether those percentiles came from fitted distributions or from estimated pdfs. This close relationship, however, leads us to believe that the individual projects’ cost distributions are taken from the same family of distributions.

    If this family of distributions is a two-parameter distribution, we can use the known P50 and P85 ((Most two-parameter families have sufficient flexibility to fit the P50 and P85 percentiles.)) percentiles to fit a number of distributions to the data.

    This use of quantiles to estimate the parameters of an a priori distribution has been described as “quantile maximum probability estimation” (Heathcote et al., 2004). It can be done by fitting a number of different a priori distributions and then comparing the summed log-likelihoods of the resulting best fits for each distribution, to find the “best” family of distributions.

    Using this we anticipate finding cost distributions with the following properties:

    1. Nonsymmetrical, with a short left and a long right tail, i.e. positively skewed, looking something like the distribution below (taken from a real-life project):

    2. The left tail we would expect to be short after the project has been run through the full QA1 and QA2 process. After two such encompassing processes we would believe that most, even if not all, possible avenues for cost reduction and grounds for miscalculation have been researched and exhausted – leaving little room for cost reduction by chance.

    3. The right tail we would expect to be long, taking into account the possibility of adverse price movements, implementation problems, adverse events etc., and thus the possibility of higher costs. This is where the project risk lies and where budget overruns are born.

    4. The middle part should be quite steep, indicating low volatility around the “most probable cost”.

    Estimating the Projects’ Cost Distribution

    To simplify, we will assume that the above relation between P50 and P85 holds, and that it can be used to describe the resulting cost distribution from the projects’ QA2 risk assessment work. We will hence use the P85/P50 ratio ((If cost is normally distributed: C ∼ N (m, s²), then Z = C/m ∼ N (1, s²/m²). If cost is gamma distributed: C ∼ Γ (a, λ), then Z = C/m ∼ Γ (a, m*λ).)) to study the cost distributions. This implies that we are looking for a family of distributions that has P(X < 1) = 0.5 and P(X < 1.1) = 0.85, and is positively skewed. This change of scale will not change the shape of the density function, but simply scale the graph horizontally.

    Fortunately, the MD Anderson Cancer Center has a program – Parameter Solver ((The software can be downloaded from: https://biostatistics.mdanderson.org/SoftwareDownload/SingleSoftware.aspx?Software_Id=6 )) – that can solve for the distribution parameters given the P50 and P85 percentiles (Cook, 2010). We can then use this to find the distributions that can replicate the P50 and P85 percentiles.
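    The same percentile matching can be sketched in a few lines of Python with SciPy – a hedged stand-in for Parameter Solver, not its actual algorithm. For the Normal family the parameters follow directly; for the Gamma family a one-dimensional root search over the shape parameter suffices, since the scale can always be chosen so the median lands on P50:

```python
from scipy.stats import gamma, norm
from scipy.optimize import brentq

P50, P85 = 1.0, 1.1

# Normal: the median equals the mean, so mu = P50 and sigma follows directly
mu = P50
sigma = (P85 - P50) / norm.ppf(0.85)
print(f"Normal: mu = {mu:.4f}, sigma = {sigma:.4f}")

# Gamma: for a given shape k, the scale that puts the median at P50 is
# P50 / gamma.ppf(0.5, k); search for the k whose 85th percentile hits P85
def p85_gap(k):
    theta = P50 / gamma.ppf(0.5, k)
    return gamma.ppf(0.85, k, scale=theta) - P85

k = brentq(p85_gap, 1.0, 1000.0)       # the P85/P50 ratio falls as k grows
theta = P50 / gamma.ppf(0.5, k)
print(f"Gamma: shape = {k:.1f}, scale = {theta:.5f}")
print(f"Skewness = {2 / k**0.5:.3f}")  # 2/sqrt(shape): near zero, i.e. near symmetric
```

    The fitted Gamma shape parameter comes out large, which is exactly why the fitted distribution turns out nearly symmetric, as discussed below.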

    We find that distributions from the Normal, Log Normal, Gamma, Inverse Gamma and Weibull families will fit the percentiles. All the distributions, however, are close to being symmetric, with the exception of the Weibull distribution, which has a left tail. A left tail in a budgeted cost distribution usually indicates over-budgeting with the aim of looking good after the project has been finished. We do not think that this would have passed the QA2 process – so we don’t think that it has been used.

    We believe that it is most likely that the distributions used are of the Normal, Gamma or of the Gamma derivative Erlang ((The Erlang distribution is a Gamma distribution with integer shape parameter.)) type, due to their convolution properties. That is, sums of independent variables having one of these particular distributions (for the Gamma and Erlang case, with a common rate parameter) come from the same distribution family. This makes it possible to simplify risk models of the cost-only variety by just summing up the parameters ((For the Normal, Gamma and Erlang distributions this implies summing up the parameters of the individual cost elements’ distributions: If X and Y are normally distributed: X ∼ N (a, b²) and Y ∼ N (d, e²) and X is independent of Y, then Z = X + Y is N (a + d, b² + e²), and if k is a strictly positive constant then Z = k*X is N (k*a, k²*b²). If X and Y are gamma distributed: X ∼ Γ (a, λ) and Y ∼ Γ (b, λ) and X is independent of Y, then X + Y is Γ (a + b, λ), and if k is a strictly positive constant then k*X is Γ (a, λ/k).)) of the cost elements to calculate the parameters of the total cost distribution.
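    A quick Monte Carlo check of the Gamma convolution property, with hypothetical shape parameters standing in for individual cost elements:

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(42)
shapes = [4.0, 7.0, 12.0]   # hypothetical cost-element shape parameters
theta = 2.0                 # common scale, i.e. a common rate for every element

# Sum of independent Gamma variables with a common scale ...
total = sum(rng.gamma(a, theta, 100_000) for a in shapes)

# ... is Gamma with the summed shape and the same scale (closed form, no simulation)
print(np.percentile(total, 85))                   # Monte Carlo estimate of total cost P85
print(gamma.ppf(0.85, sum(shapes), scale=theta))  # closed-form total cost P85
```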

    This has the benefit of giving the closed form for the total cost distribution, compared to Monte Carlo simulation where the closed form of the distribution, if it exists, can only be found through the exercise we have done here.

    This property can as well be a trap, as the adding up of cost items quickly gives the distribution of the sum symmetrical properties before it finally ends up as a Normal distribution ((The Central Limit Theorem gives the error in a normal approximation to the gamma distribution as n^(-1/2) as the shape parameter n grows large. For large k the gamma distribution X ∼ Γ (k, θ) converges to a normal distribution with mean µ = k*θ and variance s² = k*θ². In practice it will approach a normal distribution for shape parameters > 10.)).

    The figures in the graph below give the shapes of the Gamma and Normal distributions with the percentiles P50 = 1 and P85 = 1.1:

    The Normal distribution is symmetric, and the Gamma distribution is also, for all practical purposes, symmetric. We can therefore conclude that the distributions for total project cost used in the 83 projects have been symmetric or close to symmetric distributions.
    This result is quite baffling; it is difficult to understand why the project cost distributions should be symmetric.

    The only economic explanation has to be that the expected costs of the projects are estimated with such precision that any positive or negative deviations are mere flukes and chance outside foreseeability, and thus not included in the risk calculations.

    But is this possible?

    The two Large Military Projects

    The two projects omitted from the regression above, new fighter planes and frigates, have P85/P50 ratios of 1.19522 and 1.04543, compared to the regression estimate of 1.1001 for the 83 other projects. They are, however, not atypical; others among the 83 projects have both smaller (1.0310) and larger (1.3328) values for the P85/P50 ratio. Their sheer size, however, with P85 values of 68 and 18 billion NOK respectively, gives them too high a weight in a joint regression with the other projects.

    Nevertheless, the same comments made above for the other 83 projects apply to these two projects. A regression with the projects included would have given the relationship between P85 and P50 as:

    P85 = (1.1751 ± 0.0106) * P50, with R = 0.9990.

    And as shown in the graph below:

    This graph again depicts the surprisingly low variation in all the projects’ P85/P50 ratios:

    The ratios for all 85 projects have, in point of fact, a coefficient of variation of only 4.7% and a standard deviation of 0.052.

    Conclusions

    The Norwegian quality assurance scheme is obviously a large step in the direction of reduced budget overruns in public projects. (See: Public Works Projects)

    Even if the final risk calculation somewhat misses the probable project cost distribution, the exercises described in the quality assurance scheme will heighten both risk awareness and the understanding of uncertainty, all contributing to the common goal: reduced budget under- and overruns and reduced project cost.

    It is nevertheless important that all elements in the quality assurance process capture the project uncertainties correctly, describing each project’s specific uncertainty and its possible effects on project cost and implementation (See: Project Management under Uncertainty).

    From what we have found – widespread use of symmetric cost distributions, and possibly the same type of distribution across the projects – we are a little doubtful about the methods used for the risk calculations. The grounds for this are shown in the next two tables:

    The skewness ((The skewness is equal to two divided by the square root of the shape parameter.)) given in the table above depends only on the shape parameter. The Gamma distribution will approach a normal distribution when the shape parameter is larger than ten. In this case all the projects’ cost distributions approach a normal distribution – that is, a symmetric distribution with zero skewness.

    To us, this indicates that the projects’ cost distributions reflect the engineers’ normal calculation “errors” more than the real risk of budget deviations due to implementation risk.

    The kurtosis (excess kurtosis) indicates the form of the peak of the distribution. Normal distributions have zero kurtosis (mesokurtic) while distributions with a high peak have a positive kurtosis (leptokurtic).

    It is stated in the QA2 that the uncertainty analysis shall have “special focus on … Event uncertainties represented by a binary probability distribution”. If this part had been implemented, we would have expected at least more flat-topped curves (platykurtic) with negative kurtosis or, better, not only unimodal distributions. It is hard to see traces of this in the material.

    So, what can we so far deduce that the Norwegian government gets from the effort it spends on risk assessment of its projects?

    First, since the cost distributions most probably are symmetric or near symmetric, expected cost will probably not differ significantly from the initial project cost estimate (the engineering estimate) adjusted for reserves and risk margins. We will, however, need more data to substantiate this further.

    Second, the P85 percentile could have been found by multiplying the P50 percentile by 1.1. Finding the probability distribution for the projects’ cost has, for the purpose of establishing the P85 cost figures, been unnecessary.

    Third, the effect of event uncertainties seems to be missing.

    Fourth, with such a variety of projects, it seems strange that the distributions for total project cost end up being so similar. There has to be a difference in project risk between building a road and building a new opera house.

    Based on these findings it is pertinent to ask what went wrong in the implementation of QA2. The idea is sound, but the result is somewhat disappointing.

    The reason for this may be that the risk calculations are done simply by assigning probability distributions to the “aggregated and adjusted” engineering cost estimates, and not by developing a proper simulation model for the project, taking into consideration uncertainties in all factors like quantities, prices, exchange rates, project implementation etc.

    We will come back in a later post to the question of whether the risk assessment nevertheless reduces budget under- and overruns.

    References

    Cook, John D. (2010), Determining distribution parameters from quantiles. http://www.johndcook.com/quantiles_parameters.pdf

    Heathcote, A., Brown, S. & Cousineau, D. (2004). QMPE: estimating Lognormal, Wald, and Weibull RT distributions with a parameter-dependent lower bound. Behavior Research Methods, Instruments, & Computers, 36, pp. 277-290.

    Washington State Department of Transportation (WSDOT), (2014), Project Risk Management Guide, Nov 2014. http://www.wsdot.wa.gov/projects/projectmgmt/riskassessment

    Endnotes

  • Uncertainty modeling


    This entry is part 2 of 3 in the series What We Do

    Prediction is very difficult, especially about the future.
    Niels Bohr, Danish physicist (1885–1962)

    Strategy @ Risk’s models provide the possibility to study risk and uncertainties related to operational activities (cost, prices, suppliers, markets, sales channels etc.), financial issues (interest rate risk, exchange rate risk, translation risk, taxes etc.), strategic issues (investments in new or existing activities, valuation, M&As etc.) and a wide range of budgeting purposes.

    All economic activities have an inherent volatility that is an integrated part of their operations. This means that whatever you do, some uncertainty will always remain.

    The aim is to estimate the economic impact that such critical uncertainty may have on corporate earnings at risk. This will add a third dimension – probability – to all forecasts, giving new insight: the ability to deal with uncertainties in an informed way, and thus benefits beyond ordinary spreadsheet exercises.

    The results from these analyses can be presented in the form of B/S and P&L looking at the coming one to five years (short term) or five to fifteen years (long term), showing the impacts on e.g. equity value, company value, operating income etc., with the purpose of:

    • Improve predictability in operating earnings and their expected volatility
    • Improve budgeting processes, predicting budget deviations and their probabilities
    • Evaluate alternative strategic investment options at risk
    • Identify and benchmark investment portfolios and their uncertainty
    • Identify and benchmark individual business units’ risk profiles
    • Evaluate equity values and enterprise values and their uncertainty in M&A processes, etc.

    Methods

    To be able to add uncertainty to financial models, we also have to add more complexity. This complexity is inevitable, but in our case, it is desirable and it will be well managed inside our models.

    People say they want models that are simple, but what they really want is models with the necessary features – that are easy to use. If something is complex but well designed, it will be easy to use – and this holds for our models.

    Most companies have some sort of model describing the company’s operations. They are mostly used for budgeting, but in some cases also for forecasting cash flow and other important performance measures. Almost all are deterministic models based on expected or average values of input data: sales, cost, interest and currency rates etc.

    We know, however, that forecasts based on average values are on average wrong. In addition, deterministic models will miss the important uncertainty dimension that reveals both the different risks facing the company and the opportunities they bring forth.

    S@R has set out to create models that can give answers to both deterministic and stochastic questions, by linking dedicated EBITDA models to holistic balance simulation taking into account all important factors describing the company. The basis is a real balance simulation model – not a simple cash flow forecast model.

    Both the deterministic and stochastic balance simulations can be set up in two different ways:

    1. by using an EBITDA model to describe the company’s operations, or
    2. by using coefficients of fabrication (e.g. kg of flour per 1000 loaves of bread) as direct input to the balance model – the ‘short cut’ method.

    The first approach implies setting up a dedicated EBITDA subroutine to the balance model. This will give detailed answers to a broad range of questions about markets, capacity-driven investments, operational performance and uncertainty, but entails a higher degree of effort from both the company and S@R. This is a tool for long-term planning and strategy development.

    The second (‘the short cut’) uses coefficients of fabrication and their variations, and is a low-effort (cost) alternative, usually using the internal accounting as its basis. This will in many cases give a ‘good enough’ description of the company – its risks and opportunities. It can be based on existing investment and market plans. The data needed for the company’s economic environment (taxes, interest rates etc.) will be the same in both alternatives:

    The ‘short cut’ approach is especially suited for quick appraisals of M&A cases where time and data are limited and where one wishes to limit efforts in an initial stage. Later, the data and assumptions can be augmented for much more sophisticated analyses within the same ‘short cut’ framework. In this way analyses can be successively built in the direction the previous studies suggested.

    This also makes it a good tool for short-term (3-5 year) analysis and even for budget assessment. Since it will use a limited number of variables – usually fewer than twenty – to describe the operations, it is easy to maintain and operate. The variables describing financial strategy and the economic environment come in addition, but will be easy to obtain.

    Used in budgeting, it will give the opportunity to evaluate budget targets, their probable deviation from the expected result, and the probable upside or downside given the budget target (the upside/downside ratio).
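    A minimal sketch of how such an upside/downside ratio might be read off a set of simulated results (all numbers here are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
results = rng.normal(100, 15, 10_000)  # hypothetical simulated 'yearly result'
target = 110                           # budget target

p_below = np.mean(results < target)                  # probability of missing budget
upside = np.mean(np.maximum(results - target, 0))    # expected gain beyond target
downside = np.mean(np.maximum(target - results, 0))  # expected shortfall below target
print(f"P(result < target) = {p_below:.2f}, upside/downside = {upside / downside:.2f}")
```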

    Done this way, analyses can be run for subsidiaries across countries, translating the P&L and Balance to any currency for benchmarking, investment appraisals, risk and opportunity assessments etc. The final uncertainty distributions can then be ‘aggregated’ to show global risk for the mother company.

    An interesting feature is the model’s ability to start simulations with an empty opening balance. This can be used to assess divisions that do not have an independent balance, since the model will call for equity/debt etc. based on a target ratio, according to the simulated production and sales and the necessary investments. Questions about further investment in divisions or product lines can be studied this way.

    Since every run (500 to 1000 in a simulation) produces a complete P&L and Balance, the uncertainty curve (distribution) for any financial metric like ‘yearly result’, ‘free cash flow’, ‘economic profit’, ‘equity value’, ‘IRR’ or ‘translation gain/loss’ etc. can be produced.

    In some cases we have used both approaches for the same client, using the last approach for smaller daughter companies with production structures differing from the parent company’s.
    The second approach can also be considered as an introduction and stepping stone to a more holistic Ebitda model.

    Time and effort

    The workload for the client is usually limited to a small team of people (1 to 3 persons) acting as project leaders and principal contacts, ensuring that all necessary information describing value and risks for the client’s operations can be collected as a basis for modeling and calculations. However, the type of data will have to be agreed upon depending on the scope of analysis.

    Very often key people from the controller group will be adequate for this work, and if they don’t have the direct knowledge they usually know whom to ask. The work for this team, depending on the scope and choice of method (see above), can vary in effective time from a few days to a couple of weeks, but this can be stretched from three to four weeks to the same number of months.

    For S@R the time frame will depend on the availability of key personnel from the client and the availability of data. The second alternative can take from one to three weeks of normal work; the first alternative can take three to six months for more complex models. The total time will also depend on the number of analyses that need to be run and the types of reports that have to be delivered.

    [Figure: S@R_ValueSim]

    Selecting strategy

    Models like this are excellent for selection and assessment of strategies. Since we can find the probability distribution for equity value, changes in it brought about by different strategies will form a basis for selection or adjustment of the current strategy. Models including real option strategies are a natural extension of these simulation models:

    If one strategy’s curve lies to the right of and below all other feasible strategies’ curves, it is the stochastically dominant one. If the curves cross, further calculations need to be done before a stochastically dominant or preferable strategy can be found:
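    A minimal sketch of a first-order stochastic dominance check on two sets of simulated equity values (the data here is hypothetical; a real comparison would use the simulation output):

```python
import numpy as np

def dominates(a, b, grid_points=200):
    """First-order stochastic dominance: a dominates b if a's empirical CDF
    lies at or below b's everywhere (its curve is to the right and under)."""
    grid = np.linspace(min(a.min(), b.min()), max(a.max(), b.max()), grid_points)
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / a.size
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / b.size
    return bool(np.all(cdf_a <= cdf_b))

rng = np.random.default_rng(5)
strat_b = rng.normal(100, 20, 10_000)  # hypothetical simulated equity values
strat_a = strat_b + 15                 # a strategy shifted clearly to the right
print(dominates(strat_a, strat_b))     # True: strategy A dominates B
```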

    Types of problems we aim to address:

    The effects of uncertainties on the P&L and Balance, and the effects of the Board’s strategies (market, hedging etc.) on future P&L and Balance sheets, evaluating:

    • Market position and potential for growth
    • Effects of tax and capital cost
    • Strategies
    • Business units, country units or product lines – capital allocation – compare risk, opportunity and expected profitability
    • Valuations, capital cost and debt requirements, individually and effect on company
    • The future cash-flow volatility of company and the individual BU’s
    • Investments, M&A actions, their individual value, necessary commitments and impact on company
    • Etc.

    The aim, regardless of approach, is to quantify not only the company’s single and aggregated risks, but also the potential, thus making the company capable of performing detailed planning and of executing earlier and more apt actions against uncertain factors.

    Used in budgeting, this will improve budget stability through higher insight into cost-side risks and income-side potentials. This is achieved by an active budget-forecast process; the control-adjustment cycle will teach the company to better target realistic budgets – with better stability and increased company value as a result.

    This is most clearly seen when effort is put into correctly evaluating the effects of strategies, projects and investments on the enterprise. The best way to do this is by comparing and choosing strategies by analyzing the individual strategies’ risks and potential – and selecting the alternative that is stochastically dominant given the company’s chosen risk profile.

    A severe depression like that of 1920-1921 is outside the range of probability. –The Harvard Economic Society, 16 November 1929

  • Big Issues Need Big Tools


    This entry is part 3 of 3 in the series What We Do

     

    You can always amend a big plan, but you can never expand a little one. I don’t believe in little plans. I believe in plans big enough to meet a situation which we can’t possibly foresee now. Harry S. Truman, American statesman (33rd US president, 1945-53)

    We believe you know your business best and will in your capacity implement the necessary resources, competence, tools and methods for running a successful and efficient organization.

    Still, issues related to uncertainty – whether in finance, stakeholders, production, purchasing or sales – have in most cases increased due to a more complex business environment. Excellent systems for a range of processes (consolidation, customer relationship management, accounting) have kept up with increasingly complex environments, and so has your most important tool – people.

    But we believe you do not possess the best available method and tool for bringing together people – their competence and experience – with economic/financial facts, assumptions and economic/financial tools.

    You know that your budgets, valuations, projections, estimates and scenario analyses are all made and presented with valuable information regarding uncertainty left out along the way. The cause may be that human experience related to risk is hard to capture, that analyzing, understanding and projecting macro or micro risks is hard, or that the tools are not designed to capture risk. It is a fact that most complex, big issues important for companies are decided on insufficient information, a portion of luck, gut feeling and beliefs in market turns, stability, cycles or other comforting assumptions shared by peers.

    Or you are restricted to giving guidelines, min/max orders and specifications, and to trusting third-party experts whom you hope are better capable of capturing risk and potential in a narrow area of expertise. Regardless of whether this risk spreading or differentiation works, you need the best assistance in setting your guidelines and road map, both for your internal and your external resources.

    Systems and methods ((A Skeptic’s Guide to Computer Models (Pdf, pp 25), by John D. Sterman. This paper is reprinted from Sterman, J. D. (1991). A Skeptic’s Guide to Computer Models. In Barney, G. O. et al., Managing a Nation: The Microcomputer Software Catalog. Boulder, CO: Westview Press, 209-229.)) are never better than human experience, knowledge and excellence, but if you want a method and tool that can capture the best of your existing decision-making process and bring it to a new level, you should look closer at a stochastic, complete P&L/balance simulation model for those big issues and big decisions.

    If you are not familiar with stochastic simulations and probability distributions, take a look at a report for the most likely outcome (Pdf, pp 32) from the simulations. Similar reports could have been made for the outcomes you would not have liked to see, giving a heads-up on the sources of downside risk, or for outcomes you would have loved to see, explaining the generators of upside possibilities.

    Endnotes

  • Stochastic Balance Simulation


    This entry is part 1 of 6 in the series Balance simulation

    Introduction

    Most companies have some sort of model describing the company’s operations. They are mostly used for budgeting, but in some cases also for forecasting cash flow and other important performance measures. Almost all are deterministic models based on single-value forecasts: the expected or average values of the input data (sales, cost, interest and currency rates etc.). We know, however, that forecasts based on average values are on average wrong (Savage, 2002). In addition, deterministic models will miss the important dimension of uncertainty that gives both the different risks facing the company and the opportunities they produce.

    In contrast, a stochastic model will be calculated a large number of times, with the values of the input variables drawn from their possible ranges. Each run will then give a probable realization of future cash flow, of the company’s equity value etc. With thousands of runs we can plot the relative frequencies of the calculated values:

    and thus we have succeeded in generating the probability distribution for the company’s equity value. In insurance this type of technique is often called Dynamic Financial Analysis (DFA), which actually is a fitting name.
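    A toy sketch of the idea (not the S@R model itself): draw uncertain inputs, compute an equity value per run, and plot the relative frequencies. All distributions and figures here are hypothetical:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
N = 5000  # number of simulation runs

sales  = rng.lognormal(mean=np.log(100), sigma=0.15, size=N)  # uncertain sales
margin = rng.normal(0.12, 0.03, size=N)                       # uncertain EBITDA margin
wacc   = rng.normal(0.08, 0.01, size=N)                       # uncertain discount rate
debt   = 40.0

ebitda = sales * margin
equity_value = ebitda / wacc - debt  # perpetuity shortcut, for illustration only

plt.hist(equity_value, bins=60, density=True)
plt.xlabel("Equity value"); plt.ylabel("Relative frequency")
plt.show()
```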

    The Balance Simulation Model

    The main tool in the S@R toolbox is the balance model. The starting point is the company’s balance sheet, which is treated as the simulation’s opening balance. In the case of a greenfield project – new factories, power plants, airports, etc. built from scratch – the opening balance is empty.

    The successive balances are then built from the Profit & Loss, by simulation of the company’s operations through an EBITDA model mimicking the real-life operations. Investments can be driven by demand (capacity calculations) or by investment programs giving the necessary or planned production capacity. The model will, throughout the simulation, raise debt (short and/or long term) or equity (domestic or foreign) according to the financial strategy set out by the company and the difference between cash outflow and inflow, adjusted for the minimum cash level.

    Since this is a dynamic model, it will raise equity when losses occur and/or the maximum debt/equity ratio has been exceeded. On the other hand, it will repay loans, pay dividends, repurchase shares or place excess cash (above operational needs) in marketable securities – all in line with the board’s shareholder strategy.
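    A stylized sketch of such a funding rule (an assumption for illustration, not the model’s actual logic): cover a cash shortfall with debt up to the maximum debt/equity ratio, and with equity beyond that:

```python
def fund_shortfall(cash, min_cash, equity, debt, max_de):
    """Raise debt up to the max D/E ratio, then equity, to restore minimum cash."""
    need = max(min_cash - cash, 0.0)             # shortfall versus the cash floor
    headroom = max(max_de * equity - debt, 0.0)  # remaining debt capacity
    new_debt = min(need, headroom)
    new_equity = need - new_debt                 # remainder raised as equity
    return debt + new_debt, equity + new_equity, cash + need

debt, equity, cash = fund_shortfall(cash=-20.0, min_cash=10.0,
                                    equity=100.0, debt=160.0, max_de=2.0)
print(debt, equity, cash)  # 190.0 100.0 10.0: the 30 needed fits inside debt headroom
```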

    The ledger and Double-entry Bookkeeping

    The activity described in the EBITDA model – investments, purchase of raw materials, production, payment of wages, income from sales, payment of special taxes on investments etc. – is registered as transactions in the ledger, following a standard chart of accounts with double-entry bookkeeping. In a similar fashion, all financial transactions – loan repayments, cash, taxes paid and deferred, agio and disagio, etc. – are posted in the ledger. Currently, approximately 400 accounts are in use.
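    A minimal sketch of double-entry posting (the account names are illustrative, not the model’s actual chart of accounts): every transaction debits one account and credits another, so the ledger always balances:

```python
from collections import defaultdict

ledger = defaultdict(float)

def post(debit_account, credit_account, amount):
    ledger[debit_account] += amount   # debit side
    ledger[credit_account] -= amount  # credit side (signed convention)

post("Raw materials", "Accounts payable", 500.0)  # purchase on credit
post("Accounts payable", "Bank", 500.0)           # later payment of the invoice
post("Bank", "Sales revenue", 900.0)              # cash sale

# Trial balance check: total debits equal total credits
assert abs(sum(ledger.values())) < 1e-9
```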

    The Trial Balance and the Financial Statements

    The trial balance (Post-Closing) is compiled and checked for balance between total debts and total credits. The income statement is then prepared using revenue and expense accounts from the trial balance and the balance sheet is prepared from the asset and liability accounts by including net income with the other equity accounts – using the International Financial Reporting Standards (IFRS).

    The general purpose of producing the trial balance is to ensure that the entries in the ledger are mathematically correct. Bear in mind that every run in a simulation will produce a number of entries in the ledger, and that they might differ not only in size but also in type, depending on the realized states of the company’s operations (see above). We therefore need to be sure that the final financial statements – for every run – are correctly produced, since they will be the basis for all further financial analysis of the company.

    There are of course other sources of errors in bookkeeping – compensating errors, errors of omission, errors of principle etc. – but after many years of use, with millions of runs, we feel confident that the ledger and financial statements are produced correctly. The point is that serious problems need serious models.

    However there are more benefits to be had from simulating the ledger and trial balance:

    1. It increases the model’s transparency; the trial balance can be printed out and audited. Together with the model’s extensive reporting and error/consistency control, it is no longer a ‘black box’ to the user.
    2. It makes it easy to plug in new EBITDA models for other types of industry, giving an automated check for consistency with the main balance simulation model.
    3. It is used to ensure correct solving of all implicit equations in the model. The most obvious is of course the interest and bank balance equation (interest depends on the bank balance and the bank balance depends on the interest – see the sketch after this list), but others, like translation hedging and limits set by the company’s financial strategy, create large and complicated systems of simultaneous equations.
    4. The trial balance changes from year to year are also used to ensure correct year to year balance transition.
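    A stylized sketch of the interest/bank-balance simultaneity mentioned in item 3, resolved by fixed-point iteration (the model’s actual equation system is of course much larger):

```python
def solve_interest(opening, net_flow, rate, tol=1e-10):
    """Interest is earned on the average balance, but the closing balance
    already includes that interest - iterate until the two agree."""
    interest = 0.0
    while True:
        closing = opening + net_flow + interest
        new_interest = rate * 0.5 * (opening + closing)  # interest on average balance
        if abs(new_interest - interest) < tol:
            return new_interest, closing
        interest = new_interest

interest, closing = solve_interest(opening=100.0, net_flow=20.0, rate=0.04)
print(f"interest = {interest:.6f}, closing balance = {closing:.6f}")
```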

    Financial Analysis, Financial Measures and Valuation

    Given the framework described above, financial analysis can be performed and the expected values, variability and probability distributions for the different types of ratios (profitability, liquidity, activity, debt and equity etc.) can be calculated and given as graphs. All important measures are calculated at least twice, from different starting points, to ensure consistency and correct solving of implicit equations.

    The following table shows the reconciliation of Economic Profit, initially calculated from (ROIC - WACC) multiplied by invested capital:

    The motivation for doing all these consistency controls – in all nearly one hundred – lies in previous experience with cash flow/valuation models written in Excel. The level of detail is more often than not so low that there is no way to establish whether they are right or wrong.

    More interesting than ratios are the yearly distributions for EBITDA, EBIT, NOPLAT, profit (loss) for the period, free cash flow, economic profit, ROIC, WACC, debt, equity and equity value etc., giving a visual picture of the uncertainties and risks the company faces:

    Financial analysis is the conversion of financial data into useful information for decision making. Therefore, virtually any use of financial statements or other financial data for some purpose is financial analysis, and it is the primary focus of accounting and finance. Financial analysis can be internal (e.g., decision analysis by a company using internal data to understand or improve management and operating results) or external (e.g., comprehensive analysis for the purposes of commercial lending, mergers and acquisitions or investment activities). The key is how to analyze the available data to make correct decisions.

     

    Input

    As input the model needs parameter values and operational data. The parameter values fall into several groups:

    1. Parameters describing investors’ preferences; market risk premium etc.
    2. Parameters describing the company’s financial strategy; Leverage, Long/Short-term Debt ratio, Expected Foreign/ Domestic Debt Ratio, Economic Depreciation, Maximum Dividend Pay-out Ratio, Translation Hedging Strategy etc.
    3. Parameters describing the economic regime under which it operates: Taxes, Depreciation Scheme etc.
    4. Opening Balance etc.

    Since the model has to produce stochastic forecasts of interest and exchange rates, it will need, for every currency involved (including lower and upper 5% probability limits):

    1. The yield curves,
    2. Expected yearly inflation,
    3. Depending on the forecast method(s) chosen for the exchange rates: the different currencies’ expected risk premiums or real exchange rates etc.

    Since there is a large number of parameters, they are usually read from an Excel template, but the program will if necessary ask for missing parameter values or report inconsistent ones.

    The company’s operations are best described through an EBITDA model, even if prices, costs, production coefficients and their variability can be read from an Excel template. A dedicated EBITDA model will always give the opportunity to provide a more detailed and in some cases more complex description of the operations, and to include forecast and demand models, ‘exotic’ taxes, real options strategies etc.

    Output

    S@R has set out to create models that can give answers to both deterministic and stochastic questions: the tables will answer most deterministic issues, while graphs must be used to answer the risk- and uncertainty-related questions:

    [Table 6]

    1. In all, 27 different reports with more than 70 pages describing operations and the economics of operations.
    2. In addition, the probability distributions for all input and output variables are produced.

    Use

    This is done by linking dedicated EBITDA models to holistic balance simulation, taking into account all important factors describing the company. The basis is a real balance simulation model – not a simple cash flow forecast model.

    Both the deterministic and stochastic balance simulations can be set up in two different ways:
    1. by using an EBITDA model to describe the company’s operations, or
    2. by using coefficients of fabrication (e.g. kg of flour per 1000 loaves of bread) as direct input to the balance model.

    The first approach implies setting up a dedicated EBITDA subroutine to the balance model. This will give detailed answers to a broad range of questions about operational performance and uncertainty, but entails a higher degree of effort from both the company and S@R.

    The use of coefficients of fabrication and their variations is a low-effort (cost) alternative, using the internal accounting as its basis. This will in many cases give a ‘good enough’ description of the company – its risks and opportunities. The data needed for the company’s economic environment (taxes, interest rates etc.) will be the same in both alternatives.

    In some cases we have used both approaches for the same client, using the last approach for smaller daughter companies with production structures differing from the parent company’s.
    The second approach can also be considered as an introduction and stepping stone to a more holistic EBITDA model.
    What problems do we solve?

    • The aim, regardless of approach, is to quantify not only the company’s single and aggregated risks, but also the potential, thus making the company capable of performing detailed planning and of executing earlier and more apt actions against risk factors.
    • This will improve budget stability through higher insight into cost-side risks and income-side potentials. This is achieved by an active budget-forecast process; the control-adjustment cycle will teach the company to better target realistic budgets – with better stability and increased company value as a result.
    • Experience shows that the mere act of quantifying uncertainty throughout the company – and, through modeling, describing the interactions and their effects on profit – in itself over time reduces total risk and increases profitability.
    • This is most clearly seen when effort is put into correctly evaluating the effects of strategies, projects and investments on the enterprise. The best way to do this is by comparing and choosing strategies by analyzing the individual strategies’ risks and potential – and selecting the alternative that is stochastically dominant given the company’s chosen risk profile.
    • Our aim is therefore to transform enterprise risk management from only safeguarding enterprise value to contributing to the increase and maximization of the firm’s value within the firm’s feasible set of possibilities.

    Strategy@Risk takes advantage of a programming language developed and used for financial risk simulation. We have used this language for over 25 years, and have developed a series of simulation models for industry, banks and financial institutions.

    The language has as one of its strengths the ability to solve implicit equations in multiple dimensions. For the specific problems we seek to solve, this is a necessity that provides the necessary degrees of freedom in formulating our approach to problems.

    The Strategy@Risk tools have highly advanced properties:

    • Using models written in dedicated financial simulation language (with code and data separated; see The risk of spreadsheet errors).
    • Solving implicit systems of equations giving unique WACC calculated for every period ensuring that “Free Cash Flow” always equals “Economic Profit” value.
    • Programs and models in “windows end-user” style.
    • Extended test for consistency in input, calculations and results.
    • Transparent reporting of assumptions and results.

    References

    Savage, Sam L. “The Flaw of Averages”, Harvard Business Review, November 2002, pp. 20-21

    Mukherjee, M. (2003). Financial Accounting. New York: Harper Perennial. ISBN 9780070581555.

  • A short presentation of S@R


    This entry is part 1 of 4 in the series A short presentation of S@R

     

    My general view would be that you should not take your intuitions at face value; overconfidence is a powerful source of illusions. Daniel Kahneman (“Strategic decisions: when,” 2010)

    Most companies have some sort of model describing the company’s operations. They are mostly used for budgeting, but in some cases also for forecasting cash flow and other important performance measures. Almost all are deterministic models based on expected or average values of input data: sales, cost, interest and currency rates etc. We know, however, that forecasts based on average values are on average wrong. In addition, deterministic models will miss the important uncertainty dimension that gives both the different risks facing the company and the opportunities they produce.

    S@R has set out to create models (See Pdf: Short presentation of S@R) that can give answers to both deterministic and stochastic questions, by linking dedicated EBITDA models to holistic balance simulation taking into account all important factors describing the company. The basis is a real balance simulation model – not a simple cash flow forecast model.

    [Figure: Generic simulation model]

    Both the deterministic and stochastic balance simulations can be set up in two different ways:

    1. by using an EBITDA model to describe the company’s operations, or
    2. by using coefficients of fabrication as direct input to the balance model.

    The first approach implies setting up a dedicated EBITDA subroutine to the balance model. This will give detailed answers to a broad range of questions about operational performance and uncertainty, but entails a higher degree of effort from both the company and S@R.

    The use of coefficients of fabrication and their variations is a low-effort (cost) alternative, using the internal accounting as its basis. This will in many cases give a ‘good enough’ description of the company – its risks and opportunities. The data needed for the company’s economic environment (taxes, interest rates etc.) will be the same in both alternatives.

    [Figure: EBITDA model]

    In some cases we have used both approaches for the same client, using the last approach for smaller daughter companies with production structures differing from the parent company’s.
    The second approach can also be considered as an introduction and stepping stone to a more holistic EBITDA model.

    What problems do we solve?

    • The aim, regardless of approach, is to quantify not only the company’s single and aggregated risks, but also the potential, thus making the company capable of performing detailed planning and of executing earlier and more apt actions against risk factors.
    • This will improve budget stability through higher insight into cost-side risks and income-side potentials. This is achieved by an active budget-forecast process; the control-adjustment cycle will teach the company to better target realistic budgets – with better stability and increased company value as a result.
    • Experience shows that the mere act of quantifying uncertainty throughout the company – and, through modelling, describing the interactions and their effects on profit – in itself over time reduces total risk and increases profitability.
    • This is most clearly seen when effort is put into correctly evaluating the effects of strategies, projects and investments on the enterprise. The best way to do this is by comparing and choosing strategies by analysing the individual strategies’ risks and potential – and selecting the alternative that is stochastically dominant given the company’s chosen risk-profile.
    • Our aim is therefore to transform enterprise risk management from only safeguarding enterprise value to contributing to the increase and maximization of the firm’s value within the firm’s feasible set of possibilities.

    References

    Strategic decisions: when can you trust your gut? (2010). McKinsey Quarterly, March.

  • Airport Simulation


    This entry is part 1 of 4 in the series Airports

     

    The basic building block in airport simulation is the passenger (Pax) forecast. This is the basis for the subsequent estimation of aircraft movements (ATM), investment in terminal buildings and airside installations, all traffic charges, tax-free sales etc. In short, it is the basic determinant of the airport’s economics.

    The forecast model is usually based on a logarithmic relation between Pax, GDP and airfare price movement. ((Manual on Air Traffic Forecasting. ICAO, 2006)), ((Howard, George P. et al. Airport Economic Planning. Cambridge: MIT Press, 1974.))

    There has been a large number of studies over time and across the world on air travel demand elasticities; a good survey is given in a Canadian study ((Gillen, David W., William G. Morrison, Christopher Stewart. “Air Travel Demand Elasticities: Concepts, Issues and Measurement.” 24 Feb 2009 http://www.fin.gc.ca/consultresp/Airtravel/airtravStdy_-eng.asp)).

    In a recent project for a European airport – aimed at establishing an EBITDA model capable of simulating risk in its economic operations – we embedded the Pax forecast models in the EBITDA model. Since the seasonal variations in traffic are very pronounced, and since the cycles are reversed for domestic and international traffic, a good forecast model should attempt to forecast the seasonal variations for the different groups of travellers.

    [Figure: int_dom-pax]

    In the following graph we have done just that, by adding seasonal factors to the forecast model based on the relation between Pax and changes in GDP and airfare cost. We have, however, accepted the fact that neither is the model specification complete, nor are the seasonal factors fixed and constant. We therefore apply Monte Carlo simulation, using estimation and forecast errors as the stochastic parts. In the figure the green lines indicate the 95% limit, the blue the mean value and the red the 5% limit. Thus, with 90% probability, the number of monthly Pax will fall within these limits.

    [Figure: pax]

    From the graph we can clearly see the effects of estimation and forecast “errors” and the fact that it is international travel that increases most as GDP increases (summer effect).
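    A hedged sketch of such a simulation: a log-linear Pax model with monthly seasonal factors, where the elasticities, seasonal amplitudes and error variances are all hypothetical placeholders for the estimated values:

```python
import numpy as np

rng = np.random.default_rng(7)
runs, months = 5000, 24
beta_gdp, beta_fare = 1.4, -0.8  # hypothetical demand elasticities
season = np.tile(0.2 * np.sin(np.linspace(0, 2 * np.pi, 12, endpoint=False)), 2)

gdp  = rng.normal(0.02 / 12, 0.01, size=(runs, months))  # monthly GDP growth draws
fare = rng.normal(0.00, 0.02, size=(runs, months))       # airfare cost changes
eps  = rng.normal(0.00, 0.05, size=(runs, months))       # estimation/forecast error

log_pax = (np.log(1e6) + season
           + beta_gdp * np.cumsum(gdp, axis=1)
           + beta_fare * np.cumsum(fare, axis=1) + eps)
pax = np.exp(log_pax)

# The red (5%), blue (mean) and green (95%) lines of the figure above
p5, p50, p95 = np.percentile(pax, [5, 50, 95], axis=0)
```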

    As an increase in GDP at this point in time is not exactly imminent, we supply the following graph, displaying the effects of different scenarios for growth in GDP and airfare cost.

    [Figure: pax-gdp-and-price]

    References