

    Budgeting Revisited

    This entry is part 2 of 2 in the series Budgeting

     

    Introduction

Budgeting is one area that is well suited for Monte Carlo simulation. Budgeting involves personal judgments about the future values of a large number of variables like sales, prices, wages, downtime, error rates, exchange rates etc. – variables that describe the nature of the business.

Everyone who has been involved in a budgeting process knows that it is an exercise in uncertainty; however, it is seldom described in this way, and even more seldom is uncertainty actually calculated as an integrated part of the budget.

    Good budgeting practices are structured to minimize errors and inconsistencies, drawing in all the necessary participants to contribute their business experience and the perspective of each department. Best practice in budgeting entails a mixture of top-down guidelines and standards, combined with bottom-up individual knowledge and experience.

    Excel, the de facto tool for budgeting, is a powerful personal productivity tool. Its current capabilities, however, are often inadequate to support the critical nature of budgeting and forecasting. There will come a point when a company’s reliance on spreadsheets for budgeting leads to severely ineffective decision-making, lost productivity and lost opportunities.

Spreadsheets can accommodate many tasks – but, over time, some of the models running in Excel may grow too big for the spreadsheet application. Programming a spreadsheet model often requires embedded assumptions and complex macros, creating opportunities for formula errors and broken links between workbooks.

    It is common for spreadsheet budget models and their intricacies to be known and maintained by a single person who becomes a vulnerability point with no backup. And there are other maintenance and usage issues:

    A.    Spreadsheet budget models are difficult to distribute and even more difficult to collect and consolidate.
    B.    Data confidentiality is almost impossible to maintain in spreadsheets, which are not designed to hide or expose data based upon each user’s role.
C.    Financial statements are usually not fully integrated, leaving little basis for decision making.

    These are serious drawbacks for corporate governance and make the audit process more difficult.

These are a few of the many reasons why we use a dedicated simulation language for our models – one that specifically does not mix data and code.

    The budget model

In practice, budgeting can be performed at different levels:
    1.    Cash Flow
    2.    EBITDA
    3.    EBIT
    4.    Profit or
    5.    Company value.

The most efficient level is EBITDA, since taxes, depreciation and amortization are mostly given in the short term. This is also the level where consolidation of subsidiary companies is most easily achieved. An EBITDA model describing the firm's operations can in turn be used as a subroutine for more detailed and encompassing analysis through P&L and balance simulation.

The aim will then be to estimate the firm's equity value and its probability distribution. This can again be used for strategy selection etc.

    Forecasting

In today's fast-moving and highly uncertain markets, forecasting has become the single most important element of the budget process.

Forecasting, or predictive analytics, can best be described as statistical modeling enabling the prediction of future events or results, using present and past information and data.

    1. Forecasts must integrate both external and internal cost and value drivers of the business.
    2. Absolute forecast accuracy (i.e. small confidence intervals) is less important than the insight about how current decisions and likely future events will interact to form the result.
    3. Detail does not equal accuracy with respect to forecasts.
    4. The forecast is often less important than the assumptions and variables that underpin it – those are the things that should be traced to provide advance warning.
5.  Never rely on single point or scenario forecasting.

All uncertainty about market sizes, market shares, costs and prices, interest rates, exchange rates, taxes etc. – and their correlations – will finally end up contributing to the uncertainty in the firm's budget forecasts.

    The EBITDA model

The EBITDA model has to be detailed enough to capture all important cost and value drivers, but simple enough to be easy to update with new data and assumptions.

Input to the model can come from different sources: any internal reporting system or spreadsheet. The easiest way to communicate with the model is by using Excel spreadsheet templates.

Such templates will be pre-defined in the sense that the information the model needs is in a pre-determined place in the workbook. This makes it easy if the budgets for subsidiary companies are reported (and consolidated) in a common system (e.g. SAP) and can be dumped into an Excel spreadsheet. If the budgets are communicated directly to the head office or parent company, they can be read directly by the model.

    Standalone models and dedicated subroutines

    We usually construct our EBITDA models so that they can be used both as a standalone model and as a subroutine for balance simulation. The model can then be used both for short term budgeting and long-term EBITDA forecasting and simulation and for short/long term balance forecasting and simulation. This means that the same model can be efficiently reused in different contexts.
Rolling budgets and forecasts

The EBITDA model can be constructed to give rolling forecasts based on updated monthly or quarterly values, taking into consideration the seasonality of the operations. This will give new forecasts (a new budget) for the remainder of the year and/or the next twelve months. By forecasts we again mean the probability distributions for the budget variables.

Even if the variables have not changed, the fact that we move towards the end of the year will reduce the uncertainty of the end-year results and also of the forecast for the next twelve months.

    Uncertainty

The most important part of budgeting with Monte Carlo simulation is the assessment of the uncertainty in the budgeted (forecasted) cost and value drivers. This uncertainty is given as the most likely value (usually the budget figure) and the interval within which the value is assessed, with a high degree of confidence (approx. 95%), to fall.

We will then use these lower and upper limits (5% and 95%) for sales, prices and other budget items, and the budget values, as indicators of the shape of the probability distributions for the individual budget items. Together they describe the range and uncertainty in the EBITDA forecasts.

This gives us the opportunity to simulate (Monte Carlo) a number of possible outcomes – by a large number of runs of the model, usually 1000 – of net revenue, operating expenses and finally EBITDA. This again will give us their probability distributions.
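As a minimal sketch of this step – in Python, with invented budget figures, and for simplicity treating the reported 5%/95% limits as the endpoints of triangular distributions (a production model would match the percentiles properly and model the correlation between the items):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1000  # number of simulation runs, as in the text

# Illustrative budget assessments: (lower limit, most likely, upper limit).
net_revenue = rng.triangular(900.0, 1000.0, 1150.0, N)
operating_expenses = rng.triangular(700.0, 780.0, 900.0, N)

ebitda = net_revenue - operating_expenses  # one EBITDA value per run

print(f"Expected EBITDA: {ebitda.mean():.1f}")
print("5%-95% interval:", np.percentile(ebitda, [5, 95]).round(1))
```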

Most managers and their staff have, based on experience, a good grasp of the range in which the values of their variables will fall. It is not based on any precise computation but is a reasonable assessment by knowledgeable persons. Selecting the budget value, however, is more difficult. Should it be the “mean” or the “most likely value”, or should the manager just delegate fixing of the values to the responsible departments?

Now we know that the budget values might be biased for a number of reasons – most simply by bonus schemes etc. – and that budgets based on average assumptions are wrong on average.

This is therefore where the individual manager's intent and culture will be manifested, and it is here the greatest learning effect for both the managers and the parent company will be, as under-budgeting and overconfidence will stand out as excessively large deviations from the model-calculated expected value (the probability-weighted average over the interval).

    Output

The output from the Monte Carlo simulation will be in the form of graphs that put all runs in the simulation together to form the cumulative distribution for the operating expenses (red line):

In the figure we have computed the frequencies of observed (simulated) values for operating expenses (blue frequency plot) – the x-axis gives the operating expenses and the left y-axis the frequency. By summing up from left to right we can compute the cumulative probability curve. The s-shaped curve (red) gives, for every point, the probability (on the right y-axis) of having operating expenses less than the corresponding point on the x-axis. The shape of this curve and its range on the x-axis give us the uncertainty in the forecasts.

    A steep curve indicates little uncertainty and a flat curve indicates greater uncertainty.  The curve is calculated from the uncertainties reported in the reporting package or templates.
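A sketch of how such a chart can be produced from the simulated values – here in Python with matplotlib, using an illustrative triangular distribution in place of the real model output:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
expenses = rng.triangular(700.0, 780.0, 900.0, 1000)  # simulated op. expenses

fig, ax1 = plt.subplots()
ax1.hist(expenses, bins=30, color="tab:blue", alpha=0.6)  # frequency plot
ax1.set_xlabel("Operating expenses")
ax1.set_ylabel("Frequency")

# Empirical cumulative probability curve (the red s-shaped line).
ax2 = ax1.twinx()
x = np.sort(expenses)
ax2.plot(x, np.arange(1, x.size + 1) / x.size, color="tab:red")
ax2.set_ylabel("Cumulative probability")
plt.show()
```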

Large uncertainties in the reported variables will contribute to the overall uncertainty in the EBITDA forecast, and thus to a flatter curve – and vice versa. If the reported uncertainty in sales and prices has a marked downside and the costs a marked upside, the resulting EBITDA distribution might very well have a portion on the negative side of the x-axis – that is, with some probability the EBITDA might end up negative.

In the figure below the lines give the expected EBITDA and the budget value. The expected EBITDA can be found by drawing a horizontal line from the 0.5 (50%) point on the y-axis to the curve, and a vertical line from this point on the curve to the x-axis. This point gives us the EBITDA value where there is a 50% probability of falling below and 50% of ending above (strictly the median, which for a fairly symmetric distribution approximates the expected value).

The second set of lines gives the budget figure and the probability that the result will end up lower than budget. In this case there is almost a 100% probability that it will be much lower than management has expected.

This distribution's location on the EBITDA axis (x-axis) and its shape give a large amount of information about what we can expect of possible results and their probability.

The following figure, which gives the EBIT distributions for a number of subsidiaries, exemplifies this. One will most probably never earn money (grey), three are cash cows (blue, green and brown) and the last (red) can earn a lot of money:

    Budget revisions and follow up

Normally – if something extraordinary does not happen – we would expect both the budget and the actual EBITDA to fall somewhere in the region of the expected value. We do, however, have to expect some deviation both from budget and from expected value due to the nature of the industry. Keeping in mind the possibility of unanticipated events, or events “outside” the subsidiary's budget responsibilities but affecting the outcome, this implies that:

• Having the actual result deviate from budget is not necessarily a sign of bad budgeting.
• Having the result close to or on budget is not necessarily a sign of good budgeting.

    However:

•  Large deviations between budget and actual result need looking into – especially if the deviation from the expected value is also large.
• A large deviation between budget and expected value can imply either that the limits are set “wrong” or that the budgeted EBITDA does not reflect the downside risk or upside opportunity expressed by the limits.

Another way of looking at the distributions is by the probability of having the actual result below budget – that is, how far off the budget ended up. In the graph below, country #1's budget came out with a 72% probability of the actual result being below budget. It turned out that there was only a 36% probability of the result being lower than the actual figure. The length of the bars thus indicates the budget discrepancies.

For country #2 it is the other way around: the probability of having had a result lower than the final result is 88%, while the budgeted figure had a 63% probability of having been too low. In this case the market was seriously misjudged.

In the following we have measured the deviation of the actual result both from the budget values and from the expected values. In the figures, the left axis gives the deviation from expected value and the bottom axis the deviation from budget value.

1.  If the deviation for a country falls in the upper right quadrant, the deviations are positive for both budget and expected value – and the country is overachieving.
2. If the deviation falls in the lower left quadrant, the deviations are negative for both budget and expected value – and the country is underachieving.
3. If the deviation falls in the upper left quadrant, the deviation is negative for budget and positive for expected value – and the country is overachieving but has had too high a budget.

With a left-skewed EBITDA distribution there should not be any observations in the lower right quadrant; those will only occur when the distribution is skewed to the right – and then there will not be any observations in the upper left quadrant:

As the managers get more experienced in assessing the uncertainty they face, we see that the budget figures come more in line with the expected values, and that the intervals given are shorter and better oriented.

If the budget is in line with the expected value given the described uncertainty, the upside potential ratio should be approximately one. A high value indicates a potential for higher EBITDA, and vice versa. Using this measure we can numerically describe management's budgeting behavior:

    Rolling budgets

If the model is set up to give rolling forecasts of the budget EBITDA as new – in this case monthly – data arrive, we will get successive forecasts as in the figure below:

As data for new months are received, the curves get steeper since the uncertainty is reduced. From the squares on the lines indicating expected values we see that the value is slowly moving to the right – towards higher EBITDA values.

    We can of course also use this for long term forecasting as in the figure below:

As should now be evident, the EBITDA Monte Carlo model has multiple fields of use, and all of them increase management's possibilities for control and foresight, giving ample opportunity for prudent planning for the future.

     

     


    Forecasting sales and forecasting uncertainty

    This entry is part 1 of 4 in the series Predictive Analytics

     

    Introduction

There is a large number of methods used for forecasting, ranging from judgmental (expert forecasting etc.) through expert systems and time series to causal methods (regression analysis etc.).

Most are used to give single-point forecasts, or at most single-point forecasts for a limited number of scenarios. In the following we will take a look at the uselessness of such single-point forecasts.

As an example we will use a simple forecast ‘model’ for the net sales of a large multinational company. It turns out that there is a good linear relation between the company's yearly net sales in million euro and the growth rate (%) in world GDP:

with a correlation coefficient R = 0.995. The relation thus accounts for almost 99% of the variation in the sales data. The observed data are given as green dots in the graph below, and the regression as the green line. The ‘model’ gives expected sales as a constant 1638M plus 53M in increased (or decreased) sales per percentage point increase (or decrease) in world GDP growth:

The International Monetary Fund (IMF), which kindly provided the historical GDP growth rates, also gives forecasts for the expected future World GDP growth rate (WEO, April 2012) – for the next five years. When we put these forecasts into the ‘model’ we end up with forecasts for net sales for 2012 to 2016, as depicted by the yellow dots in the graph above.

    So mission accomplished!  …  Or is it really?

We know that the probability of getting a single-point forecast right is zero, even when assuming that the forecast of the GDP growth rate is correct – so the forecasts we have so far will certainly be wrong, but how wrong?

    “Some even persist in using forecasts that are manifestly unreliable, an attitude encountered by the future Nobel laureate Kenneth Arrow when he was a young statistician during the Second World War. When Arrow discovered that month-long weather forecasts used by the army were worthless, he warned his superiors against using them. He was rebuffed. “The Commanding General is well aware the forecasts are no good,” he was told. “However, he needs them for planning purposes.” (Gardner & Tetlock, 2011)

    Maybe we should take a closer look at possible forecast errors, input data and the final forecast.

    The prediction band

Given the regression we can calculate a forecast band for future observations of sales, given forecasts of the future GDP growth rate. That is, the region where we with a certain probability will expect new values of net sales to fall. In the graph below the green area gives the 95% forecast band:

Since the variance of the predictions increases the further new forecasts for the GDP growth rate lie from the mean of the sample values (used to compute the regression), the band will widen as we move to either side of this mean. The band will also widen with decreasing correlation (R) and sample size (the number of observations the regression is based on).

So even if the fit to the data is good, our regression is based on a very small sample, giving plenty of room for prediction errors. In fact, the 95% prediction interval for 2012, with an expected GDP growth rate of 3.5%, is net sales of 1824M plus/minus 82M. Even so, the interval is still only approx. 9% of the expected value.
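A sketch of this calculation in Python with statsmodels; the historical data points are hypothetical stand-ins, since the article's sample is not reproduced here:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical stand-ins for the historical sample (net sales in M euro
# vs. world GDP growth in %), built around the article's fitted relation.
gdp_growth = np.array([2.6, 3.1, 3.6, 4.3, 4.9, 5.2, 2.8, -0.6, 5.1, 3.9])
rng = np.random.default_rng(1)
net_sales = 1638 + 53 * gdp_growth + rng.normal(0, 35, gdp_growth.size)

fit = sm.OLS(net_sales, sm.add_constant(gdp_growth)).fit()

# 95% prediction interval for a new observation at 3.5% GDP growth.
x_new = np.array([[1.0, 3.5]])  # [intercept, GDP growth]
frame = fit.get_prediction(x_new).summary_frame(alpha=0.05)
print(frame[["mean", "obs_ci_lower", "obs_ci_upper"]])
```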

Now we have shown that the model gives good forecasts, calculated the prediction interval(s), and shown that the expected relative error(s) will with high probability be small!

    So the mission is finally accomplished!  …  Or is it really?

The forecasts we have made are based on forecasts of future world GDP growth rates – but how certain are those?

    The GDP forecasts

Forecasting the future growth in GDP for any country is at best difficult, and much more so for the GDP growth of the entire world. The IMF has therefore supplied the baseline forecasts with a fan chart ((The Inflation Report Projections: Understanding the Fan Chart, by Erik Britton, Paul Fisher and John Whitley, BoE Quarterly Bulletin, February 1998, pages 30-37.)) picturing the uncertainty in their estimates.

This fan chart ((Figure 1.12 from: World Economic Outlook (April 2012), International Monetary Fund, ISBN 9781616352462)) shows as blue colored bands the uncertainty around the WEO baseline forecast, with 50, 70, and 90 percent confidence intervals ((As shown, the 70 percent confidence interval includes the 50 percent interval, and the 90 percent confidence interval includes the 50 and 70 percent intervals. See Appendix 1.2 in the April 2009 World Economic Outlook for details.)):

There is also another band on the chart, implied but unseen, indicating a 10% chance of something “unpredictable”. The fan chart thus covers only 90% of the IMF's estimates of the future probable growth rates.

    The table below shows the actual figures for the forecasted GDP growth (%) and the limits of the confidence intervals:

          Lower                   Baseline        Upper
          90%     70%     50%                50%     70%     90%

2012      2.5     2.9     3.1       3.5      3.8     4.0     4.3

2013      2.1     2.8     3.3       4.1      4.8     5.2     5.9

The IMF has the following comments on the figures:

    “Risks around the WEO projections have diminished, consistent with market indicators, but they remain large and tilted to the downside. The various indicators do not point in a consistent direction. Inflation and oil price indicators suggest downside risks to growth. The term spread and S&P 500 options prices, however, point to upside risks.”

Our approximation of the distribution that could have produced the fan chart for 2012, as given in the World Economic Outlook for April 2012, is shown below:

This distribution has mean 3.43%, standard deviation 0.54, minimum 1.22 and maximum 4.70 – it is skewed with a left tail. The distribution thus also encompasses the implied but unseen band in the chart.

    Now we are ready for serious forecasting!

    The final sales forecasts

By employing the same technique that we used to calculate the forecast band, we can by Monte Carlo simulation compute the 2012 distribution of net sales forecasts, given the distribution of GDP growth rates and using the expected variance of the differences between the regression forecasts and new observations. The figure below describes the forecast process:

We are, however, not only using the 90% interval for the GDP growth rate or the 95% forecast band, but the full range of the distributions. The final forecasts of net sales are given as a histogram in the graph below:

This distribution of forecasted net sales has mean sales of 1820M, standard deviation 81, minimum sales of 1590M and maximum sales of 2055M – and it is slightly skewed with a left tail.
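A sketch of the simulation in Python: the GDP growth distribution is approximated by a beta distribution moment-matched to the fan-chart figures above, and the residual standard error is an illustrative stand-in:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 10_000

# GDP growth distribution approximating the 2012 fan chart: a beta on
# [1.22, 4.70], moment-matched to mean ~3.43 and sd ~0.54 (left-skewed).
g = 1.22 + (4.70 - 1.22) * rng.beta(5.5, 3.2, N)

# The article's regression: sales = 1638 + 53 * growth. The residual
# standard error (42) is an assumed stand-in for the prediction error.
sales = 1638 + 53 * g + rng.normal(0, 42, N)

print(f"mean {sales.mean():.0f}M")
print("5%, 50%, 95% quantiles:", np.percentile(sales, [5, 50, 95]).round(0))
```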

    So what added information have we got from the added effort?

Well, we now know that there is only a 20% probability of net sales being lower than 1755M, and a 20% probability of them being above 1890M. The interval from 1755M to 1890M in net sales will then with 60% probability contain the actual sales in 2012 – see the graph below giving the cumulative sales distribution:

We also know that we with 90% probability will see actual net sales in 2012 between 1720M and 1955M. But most important is that we have visualized the uncertainty in the sales forecasts, and that contingency planning for both low and high sales should be performed.

    An uncertain past

The Bank of England's fan chart from 2008 showed a wide range of possible futures, but it also showed the uncertainty about where we were then – note that the black line showing National Statistics data for the past has probability bands around it:

This indicates that the values for past GDP growth rates are uncertain (stochastic) or contain measurement errors. This of course also holds for the IMF historic growth rates, but the IMF does not supply this type of information.

If the growth rates can be considered stochastic, the results above will still hold, provided the conditional distribution for net sales given the GDP growth rate still fulfills the standard assumptions for using regression methods. If not, other methods of estimation must be considered.

    Black Swans

    But all this uncertainty was still not enough to contain what was to become reality – shown by the red line in the graph above.

    How wrong can we be? Often more wrong than we like to think. This is good – as in useful – to know.

    “As Donald Rumsfeld once said: it’s not only what we don’t know – the known unknowns – it’s what we don’t know we don’t know.”

While statistical methods may lead us to a reasonable understanding of some phenomenon, that does not always translate into an accurate practical prediction capability. When that is the case, we find ourselves talking about risk – the likelihood that some unfavorable or favorable event will take place. Risk assessment is then necessitated, and we are left only with probabilities.

    A final word

Sales forecast models are an integrated part of our enterprise simulation models – as part of the models' predictive analytics. Predictive analytics can be described as statistical modeling enabling the prediction of future events or results ((in this case, the probability distribution of future net sales)), using present and past information and data.

In today's fast-moving and highly uncertain markets, forecasting has become the single most important element of the management process. The ability to quickly and accurately detect changes in key external and internal variables, and adjust tactics accordingly, can make all the difference between success and failure:

    1. Forecasts must integrate both external and internal drivers of business and the financial results.
    2. Absolute forecast accuracy (i.e. small confidence intervals) is less important than the insight about how current decisions and likely future events will interact to form the result.
    3. Detail does not equal accuracy with respect to forecasts.
    4. The forecast is often less important than the assumptions and variables that underpin it – those are the things that should be traced to provide advance warning.
5. Never rely on single point or scenario forecasting.

The forecasts are usually done in three stages: first by forecasting the market for the particular product(s), then the firm's market share(s), ending up with a sales forecast. If the firm has activities in different geographic markets, the exercise has to be repeated in each market, keeping in mind the correlation between markets:

    1. All uncertainty about the different market sizes, market shares and their correlation will finally end up contributing to the uncertainty in the forecast for the firm’s total sales.
    2. This uncertainty combined with the uncertainty from other forecasted variables like interest rates, exchange rates, taxes etc. will eventually be manifested in the probability distribution for the firm’s equity value.

The ‘model’ we have been using in the example has never been tested out of sample. Its usefulness as a forecast model is therefore still debatable.

    References

Gardner, D. & Tetlock, P. (2011). Overcoming Our Aversion to Acknowledging Our Ignorance, http://www.cato-unbound.org/2011/07/11/dan-gardner-and-philip-tetlock/overcoming-our-aversion-to-acknowledging-our-ignorance/

    World Economic Outlook Database, April 2012 Edition; http://www.imf.org/external/pubs/ft/weo/2012/01/weodata/index.aspx


     

     


    Introduction to Simulation Models

    This entry is part 4 of 6 in the series Balance simulation

     

Simulation models set out to mimic real-life company operations – that is, to describe the transformation of raw materials and labor into finished products – in such a way that they can be used as support for strategic decision making.

    A full simulation model will usually consist of two separate models:

    1. an EBITDA model that describes the particular firm’s operations and
2. a generic P&L and Balance simulation model (PL&B).

     

     

    The EBITDA model ladder

Both the deterministic and stochastic balance simulations can be approached as a ladder with two steps, where the first is especially well suited as an introduction to risk simulation and the second gives a full-blown risk analysis. In these successive steps the EBITDA calculations will be based on:

1. financial information only, using coefficients of fabrication and unit prices (e.g. kg flour per 1000 bread and cost of flour per kg, etc.) as direct input to the balance model – the direct method – and
    2. EBITDA models to give a detailed technical description of the company’s operations.

The first step, using coefficients of fabrication and their variations, gives a low-effort (cost) alternative, usually using the internal accounts as its basis. In many cases this will give a ‘good enough’ description of the company – its risks and opportunities. It can be based on existing investment and market plans. The data needed for the company's economic environment (taxes, interest rates etc.) will be the same in both alternatives.

This step is especially well suited as an introduction to risk simulation and the art of communicating risk and uncertainty throughout the firm. It can also profitably be used in cases where time and data are limited, and where one wishes to limit the effort in an initial stage. Data and assumptions can later be augmented for much more sophisticated analyses within the same framework. This way the analysis can be successively built in the direction the previous studies suggest.

The second step implies setting up a dedicated EBITDA subroutine to the balance model. This can then give detailed answers to a broad range of questions about markets, capacity-driven investments, operational performance and uncertainty, but entails a higher degree of effort from both the company and S@R. This is a tool for long-term planning and strategy development.

The EBITDA model can be both a stand-alone model and a subroutine to the PL&B model. The stand-alone EBITDA model can be used to study the firm's operations in detail, and how different operational strategies will or can affect the EBITDA outcomes and their distribution.

When connected to the PL&B model it will act as a subroutine, giving the necessary information to produce the P&L and ultimately the balance sheet – and their outcome distributions.

    This gives great flexibility in model formulations and the opportunity to fit models to different industries and accommodate for the data available.

    P&L and Balance simulation

The generic PL&B model – based on the IFRS standard – can be used for a wide range of business activities. It both:

    1. describes the firm’s financial environment (taxes, interest rates, currency etc.) and
    2. acts as a testing bed for financial strategies (hedging, translation risk, etc.)

Since S@R has set out to create models that can give answers to both deterministic and stochastic questions, the PL&B model is a real balance simulation model – not a simple cash flow forecasting model.

Since every run in the simulation produces a complete P&L and balance sheet, uncertainty curves (distributions) can be produced for any financial metric, like ‘yearly result’, ‘free cash flow’, ‘economic profit’, ‘equity value’, ‘IRR’ or ‘translation gain/loss’ etc.

    People say they want models that are simple, but what they really want is models with the necessary features – that are easy to use. If something is complex but well designed, it will be easy to use – and this holds for our models.

The results from these analyses can be presented in different forms, from detailed traditional financial reports to graphs describing the range of possible outcomes for all items in the P&L and balance sheet (+ much more), looking at the coming one to five years (short term) or five to fifteen years (long term), and showing the impacts on e.g. equity value, company value, operating income etc.

The goal is to find the distribution for the firm's equity value, which will incorporate all the uncertainty facing the firm.

    This uncertainty gives both shape and location of the equity value distribution, and this is what we – if possible – are aiming to change:

    1. reducing downside risk by reducing the left tail (blue curve)
    2. increasing expected company value by moving the curve to the right (green curve)
    3. increasing the upside potential by  increasing the right tail (red curve) etc.

     

    The Data

    To be able to simulate the operations we need to put into the model all variables that will affect the firm’s net yearly result. Most of these will be collected by S@R from outside sources like central banks, local authorities and others, but some will have to be collected from the firm.

The production and firm-specific variables are related to everyday activities in the firm. Their historic values can be collected from internal accounts or from production reports. Someone in the procurement, production or sales department will have the records – and almost always the controllers. The rest will be variables inside the domain of the CEO and the company treasurer.

The variables fall into five groups:

i.      general variables describing the firm's financial environment,
ii.      variables describing the firm's strategy,
iii.      general variables used for forecasting purposes,
iv.      direct problem-related variables and
v.      the firm-specific:
a.  production coefficients and
b.  cost of raw materials and labor-related variables.

The first group will contain – for all countries either delivering raw materials or buying the finished product(s) – variables like taxes, spot exchange rates etc. For the firm's domestic country it will in addition contain variables like VAT rates, taxes on investments and dividend income, depreciation rates and methods, initial tax allowances, overdraft interest rates etc.

    The second group will contain variables like: minimum cash levels, debt distribution on short and long term loans and currencies, hedge ratios, targeted leverage, economic depreciation etc.

    The third group will contain variables needed for forecasting purposes: yield curves, inflation forecasts, GDP forecasts etc. The expected values and their 5 % and 95 % probability limits will be used to forecast exchange rates, interest rates, demand etc. They will be collected by S@R.

    The fourth group will contain variables related to sales forecasts: yearly air temperature profiles (and variation) for forecasting beer sales and yearly water temperature profiles (and variation) for forecasting increase in biomass in fish farming.

The fifth group will contain variables that specify the production and its costs. They will vary according to the type of operations, e.g.: operating rate (%), max days of production, tool maintenance (h per 10,000 units), error rate (errors per 1000 units), waste (% of weight of produced unit), cycle time (units per min), number of machines per shift (#), concession density (kg per m3), feed rates (%), mortality rates (%) etc. These variables specify the production, and they will be stochastic in the sense that they are not constant but vary inside a given – theoretical or historical – range.

To simulate the costs of production we use the coefficients of fabrication and their unit costs. Both the coefficients and their unit costs will always be of a stochastic nature, and they can vary with capacity utilization: energy per unit produced (kWh/unit) and energy price (cost per kWh), malt use (kg per hectoliter) and malt price (per kg), maximum takeoff weight (ton) and takeoff charge (per ton), specific consumption of wood (m3/Adt) and cost of round wood (per m3), etc.
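As a minimal sketch, using the brewery example above with invented figures – the cost per run is the product of a stochastic coefficient of fabrication and a stochastic unit cost:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 1000

# Illustrative stochastic coefficient of fabrication and its unit cost:
# malt use (kg per hectoliter) and malt price (per kg), as (low, mode, high).
malt_per_hl = rng.triangular(15.5, 16.0, 17.0, N)
malt_price = rng.triangular(0.45, 0.50, 0.60, N)
volume_hl = 250_000  # assumed planned yearly production

malt_cost = malt_per_hl * malt_price * volume_hl  # one cost figure per run
print(f"Expected malt cost: {malt_cost.mean():,.0f}")
print("5%-95% interval:", np.percentile(malt_cost, [5, 95]).round(0))
```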

    The uncertainty (and risk) stemming from all groups of variables will be propagated through the P&L and down to the Balance, ending up as volatility in the equity distribution.

The aim is to estimate the economic impact that such uncertainty may have on corporate earnings at risk. This will add a third dimension – probability – to all forecasts, give new insight, and provide the ability to deal with uncertainties in an informed way – and thus benefits above ordinary spreadsheet exercises.

    Methods

    To be able to add uncertainty to financial models, we also have to add more complexity. This complexity is inevitable, but in our case, it is desirable and it will be well managed inside our models.

    Most companies have some sort of model describing the company’s operations. They are used mostly for budgeting, but in some cases also for forecasting cash flow and other important performance measures.

If the client already has spreadsheet models describing the operations, we can build on these. There is no reason to reinvent what has already been done – thus saving time and resources that can be better utilized in other phases of the project.

We know, however, that forecasts based on average values are on average wrong. In addition, deterministic models miss the important uncertainty dimension that reveals both the different risks facing the company and the opportunities they bring forth.

An interesting feature is the model's ability to start simulations with an empty opening balance. This can be used to assess divisions that do not have an independent balance, since the model will call for equity/debt etc. based on a target ratio, according to the simulated production and sales and the necessary investments. Questions about further investment in divisions or product lines can be studied this way.

In some cases we have used both approaches for the same client, using the last approach for smaller subsidiary companies with production structures differing from the main company's.

    The first approach can also be considered as an introduction and stepping-stone to a more complete EBITDA model and detailed simulations.

    Time and effort

The workload for the client is usually limited to a small team of people (1 to 3 persons) acting as project leaders and principal contacts, ensuring that all necessary information describing the value and risks of the client's operations can be collected as a basis for modeling and calculations. However, the type of data will have to be agreed upon, depending on the scope of the analysis.

Very often, key people from the controller group will be adequate for this work, and if they do not have the direct knowledge, they usually know whom to ask. The work for this team, depending on the scope and choice of method (see above), can vary in effective time from a few days to a couple of weeks, but this can be stretched from three to four weeks to the same number of months – depending on the scope of the project.

For S@R, the period will depend on the availability of key personnel from the client and the availability of data. It can take from one to three weeks of normal work for the first alternative, to three to six months for the second alternative with more complex models. The total time will also depend on the number of analyses that need to be run and the type of reports that have to be delivered.

The team's participation in the project also makes communication of the results up or down in the organization simpler. Since the input data is collected by templates, this gives the responsible departments and persons ownership of the assumptions, data and results. These templates thus visualize the flow of data through the organization and the interdependence between the departments – facilitating the communication of risk and of the different strategies both reviewed and selected.

No knowledge of or expertise in uncertainty calculations or statistical methods is required on the client's side. The team will, through ‘osmosis’, acquire the necessary knowledge. Usually the team finds this an exciting experience.


    Plans based on average assumptions ……

    This entry is part 3 of 4 in the series The fallacies of scenario analysis

     

    The Flaw of Averages states that: Plans based on the assumption that average conditions will occur are usually wrong. (Savage, 2002)

    Many economists use what they believe to be most likely ((Most likely estimates are often made in-house based on experience and knowledge about their operations.)) or average values ((Forecasts for many types of variable can be bought from suppliers of ‘consensus forecasts’.))  (Timmermann, 2006) (Gavin & Pande, 2008) as input for the exogenous variables in their spreadsheet calculations.

    We know however that:

1. the probability for any variable to have an outcome equal to any of these values is close to zero,
2. and the probability of having outcomes for all the (exogenous) variables in the spreadsheet model equal to their averages is virtually zero.

    So why do they do it? They obviously lack the necessary tools to calculate with uncertainty!

    But if a small deviation from the most likely value is admissible, how often will the use of a single estimate like the most probable value be ‘correct’?

    We can try to answer that by looking at some probability distributions that may represent the ‘mechanism’ generating some of these variables.

Let's assume that we are entering a market with a new product. We know of course the maximum upper and lower limits of our future possible market share, but not the actual number, so we guess it to be the average value = 0.5. Since we have no prior knowledge, we have to assume that the market share is uniformly distributed between zero and one:

If we then plan sales and production for a market share between 0.4 and 0.5, we would out of a hundred trials only have guessed the market share correctly 13 times. In fact, we would have overestimated the market share 31 times and underestimated it 56 times.
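A minimal sketch reproducing this experiment (the exact counts depend on the random draws; in expectation they are 10, 40 and 50 out of 100):

```python
import numpy as np

rng = np.random.default_rng(11)
share = rng.uniform(0.0, 1.0, 100)  # 100 trials, share uniform on [0, 1]

correct = ((share >= 0.4) & (share <= 0.5)).sum()  # inside the planned range
over = (share < 0.4).sum()   # we overestimated the actual share
under = (share > 0.5).sum()  # we underestimated it
print(correct, over, under)  # one random outcome, like the 13/31/56 above
```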

Let's assume a production process where the acceptable deviation from some fixed measurement is 0.5 mm, and where the actual deviation has a normal distribution with expected deviation equal to zero, but with a standard deviation of one:

Using the average deviation to calculate the expected error rate will falsely lead us to believe it to be zero, while it in fact in the long run will be approx. 62%.
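The long-run error rate follows directly from the normal distribution; a two-line check (the analytic value under these assumptions is about 62%):

```python
from scipy.stats import norm

# Deviation X ~ N(0, 1); acceptable deviation is +/- 0.5 mm.
error_rate = 2.0 * (1.0 - norm.cdf(0.5))
print(f"Long-run error rate: {error_rate:.1%}")  # ~61.7%
```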

Let's assume that we have a contract for drilling a tunnel, and that the cost will depend on the hardness of the rock to be drilled. The contract states that we will have to pay a minimum of $0.5M and a maximum of $2M, with the most likely cost being $1M. The contract and our imperfect knowledge of the geology make us assume the cost distribution to be triangular:

Using the average ((The bin containing the average in the histogram.)) as an estimate for the expected cost will give a correct answer in only 14 out of 100 trials, with the cost being lower in 45 and higher in 41.
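A sketch of the tunnel example; the ‘correct’ bin half-width is an assumption, since the text only says ‘the bin containing the average’:

```python
import numpy as np

rng = np.random.default_rng(5)
cost = rng.triangular(0.5, 1.0, 2.0, 100)  # $M, 100 trials

mean_cost = (0.5 + 1.0 + 2.0) / 3.0  # triangular mean, ~$1.17M
half_bin = 0.05                      # assumed half-width of the 'correct' bin
print("correct:", (np.abs(cost - mean_cost) <= half_bin).sum(),
      "lower:", (cost < mean_cost - half_bin).sum(),
      "higher:", (cost > mean_cost + half_bin).sum())
```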

Now, let's assume that we are performing deep-sea drilling for oil and that we have a single estimate for the cost of $500M. However, we expect the cost deviation to be distributed as in the figure below, with a typically small negative cost deviation and on average a small positive deviation:

So, for all practical purposes, this is considered a low-economic-risk operation. What we would have failed to do is to look at the tails of the cost deviation distribution, which turns out to be Cauchy distributed with long tails, including the possibility of catastrophic events:

    The event far out on the right tail might be considered a Black Swan (Taleb, 2007), but as we now know they happen from time to time.
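A sketch illustrating why Cauchy tails are dangerous – the location and scale are invented, but the qualitative behavior (no finite mean or variance, occasional extreme draws) is generic:

```python
import numpy as np

rng = np.random.default_rng(9)
# Illustrative Cauchy cost deviations ($M): location 0, scale 5.
dev = 5.0 * rng.standard_cauchy(10_000)

# No finite mean or variance: a few draws dominate everything.
print("median / 99% / 99.9% quantiles:", np.percentile(dev, [50, 99, 99.9]).round(1))
print("worst single outcome:", dev.max().round(0))  # the 'catastrophic' tail
```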

So, even more important than the fact that using a single estimate will prove you wrong most of the time, is that it will also obscure what you do not know – the risk of being wrong.

Don't worry about the average; worry about how large the variations are, how frequently they occur and why they exist. (Fung, 2010)

    Rather than “Give me a number for my report,” what every executive should be saying is “Give me a distribution for my simulation.”(Savage, 2002)

    References

Gavin, W. T. & Pande, G. (2008). FOMC Consensus Forecasts, Federal Reserve Bank of St. Louis Review, May/June 2008, 90(3, Part 1), pp. 149-63.

Fung, K. (2010). Numbers Rule Your World. New York: McGraw-Hill.

Savage, S. L. (2002). The Flaw of Averages. Harvard Business Review, (November), 20-21.

Savage, S. L. & Danziger, J. (2009). The Flaw of Averages. New York: Wiley.

Taleb, N. (2007). The Black Swan. New York: Random House.

Timmermann, A. (2006). An Evaluation of the World Economic Outlook Forecasts, IMF Working Paper WP/06/59, www.imf.org/external/pubs/ft/wp/2006/wp0659.pdf



    Budgeting

    This entry is part 1 of 2 in the series Budgeting

     

Budgeting is one area that is well suited for Monte Carlo simulation. Budgeting involves personal judgments about the future values of a large number of variables like sales, prices, wages, downtime, error rates, exchange rates etc. – variables that describe the nature of the business.

Everyone who has been involved in a budgeting process knows that it is an exercise in uncertainty; however, it is seldom described in this way, and even more seldom is uncertainty actually calculated as an integrated part of the budget.

Admittedly, a number of large public building projects are calculated this way, but more often than not the aim is only to calculate some percentile (usually the 85th) as the expected budget cost.

Most managers and their staff have, based on experience, a good grasp of the range in which the values of their variables will fall. A manager's subjective probability describes his personal judgement about how likely a particular event is to occur. It is not based on any precise computation but is a reasonable assessment by a knowledgeable person. Selecting the budget value, however, is more difficult. Should it be the “mean” or the “most likely value”, or should the manager just delegate fixing of the values to the responsible departments?

Now we know that the budget values might be biased for a number of reasons – most simply by bonus schemes etc. – and that budgets based on average assumptions are wrong on average ((Savage, Sam L. “The Flaw of Averages”, Harvard Business Review, November (2002): 20-21.)).

When judging probability, people can locate the source of the uncertainty either in their environment or in their own imperfect knowledge ((Kahneman, D. & Tversky, A. “On the psychology of prediction.” Psychological Review 80 (1973): 237-251.)). When assessing uncertainty, people tend to underestimate it – effects often called overconfidence and hindsight bias.

    Overconfidence bias concerns the fact that people overestimate how much they actually know: when they are p percent sure that they have predicted correctly, they are in fact right on average less than p percent of the time ((Keren G.  “Calibration and probability judgments: Conceptual and methodological issues”. Acta Psychologica 77(1991): 217-273.)).

    Hindsight bias concerns the fact that people overestimate how much they would have known had they not possessed the correct answer: events which are given an average probability of p percent before they have occurred, are given, in hindsight, probabilities higher than p percent ((Fischhoff B.  “Hindsight=foresight: The effect of outcome knowledge on judgment under uncertainty”. Journal of Experimental Psychology: Human Perception and Performance 1(1975) 288-299.)).

We will, however, not endeavor to ask for the managers' full subjective probabilities; we will only ask for the range of possible values (5-95%) and their best guess of the most likely value. We will then use this to generate an appropriate log-normal distribution for sales, prices etc. For investments we will use triangular distributions to avoid long tails. Where most likely values are hard to guesstimate, we will use rectangular (uniform) distributions.
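A sketch of how such distributions can be generated from the assessed limits; matching a log-normal to the 5% and 95% limits is one simple choice (the most likely value could additionally be matched to the mode):

```python
import numpy as np

def lognormal_from_limits(low, high, size, rng):
    """Log-normal matched to the assessed 5% and 95% limits (a sketch;
    the most likely value could additionally be matched to the mode)."""
    z = 1.6449  # standard normal 95% quantile
    mu = 0.5 * (np.log(low) + np.log(high))
    sigma = (np.log(high) - np.log(low)) / (2.0 * z)
    return rng.lognormal(mu, sigma, size)

rng = np.random.default_rng(2)
sales = lognormal_from_limits(900.0, 1200.0, 1000, rng)  # sales, prices etc.
invest = rng.triangular(40.0, 50.0, 65.0, 1000)          # investments
margin = rng.uniform(0.05, 0.15, 1000)  # rectangular when no best guess
print(np.percentile(sales, [5, 50, 95]).round(0))  # limits are recovered
```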

We will then proceed as if the distributions were known (Keynes):

[Under uncertainty] there is no scientific basis on which to form any calculable probability whatever. We simply do not know. Nevertheless, the necessity for action and for decision compels us as practical men to do our best to overlook this awkward fact and to behave exactly as we should if we had behind us a good Benthamite calculation of a series of prospective advantages and disadvantages, each multiplied by its appropriate probability waiting to be summed. ((John Maynard Keynes, “The General Theory of Employment”, Quarterly Journal of Economics (1937).))


The data collection can easily be embedded in the ordinary budget process, by asking the managers to set the lower and upper 5% values for all variables determining the budget, and assuming that the budget figures are the most likely values.

This gives us the opportunity to simulate (Monte Carlo) a number of possible outcomes – usually 1000 – of net revenue, operating expenses and finally EBIT(DA).

In this case the budget was optimistic, with a ca 84% probability of an outcome below it and only a ca 16% probability of an outcome above. The accounts also proved it to be high, with the final (actual) EBIT falling closer to the expected value. In our experience the expected value is a better estimator of the final result than the budgeted EBIT.

However, the most important part of this exercise is the shape of the cumulative distribution curve for EBIT. The shape gives a good picture of the uncertainty the company faces in the year to come: a flat curve indicates more uncertainty, both in the budget forecast and in the final result, than a steeper curve.

Wisely used, the curve (distribution) can be used both to inform stakeholders about the risk being faced and to make contingency plans foreseeing adverse events.

Having the probability distributions for net revenue and operating expenses, we can calculate and plot the managers' perceived uncertainty by using coefficients of variation.
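A minimal sketch with illustrative distributions; the coefficient of variation (standard deviation divided by mean) makes the uncertainty in the two budget lines directly comparable:

```python
import numpy as np

rng = np.random.default_rng(4)
net_revenue = rng.triangular(900.0, 1000.0, 1150.0, 1000)       # illustrative
operating_expenses = rng.triangular(700.0, 780.0, 900.0, 1000)  # illustrative

def cv(x):  # coefficient of variation: relative perceived uncertainty
    return x.std() / x.mean()

print(f"CV net revenue:        {cv(net_revenue):.1%}")
print(f"CV operating expenses: {cv(operating_expenses):.1%}")
```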

In our material we find, on average, twice as much uncertainty in the forecasts for net revenue as for operating expenses.

As many managers set budget values above the expected value, they are exposed to downside risk. We can measure this risk by the Upside Potential Ratio: the expected return above the budget value per unit of downside risk. It can be found using the upper and lower partial moments calculated at the budget value.
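A sketch of the calculation, with invented outcomes; the ratio is the first-order upper partial moment over the square root of the second-order lower partial moment, both taken at the budget value:

```python
import numpy as np

def upside_potential_ratio(outcomes, budget):
    """Expected return above budget per unit of downside risk: the
    first-order upper partial moment divided by the square root of the
    second-order lower partial moment, both taken at the budget value."""
    diff = outcomes - budget
    upside = np.maximum(diff, 0.0).mean()
    downside = np.sqrt((np.minimum(diff, 0.0) ** 2).mean())
    return upside / downside

rng = np.random.default_rng(8)
ebit = rng.triangular(150.0, 220.0, 260.0, 10_000)  # illustrative outcomes
print(f"UPR at budget 230: {upside_potential_ratio(ebit, 230.0):.2f}")
```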
