
Category: Other topics

  • Inventory Management: Is profit maximization right for you?


    This entry is part 2 of 4 in the series Predictive Analytics

     

    Introduction

In the following we will exemplify how sales forecasts can be used to set inventory levels in single-level or multilevel warehousing. By inventory we mean a stock or store of goods: finished goods, raw materials, purchased parts, and retail items. Since the problem discussed is the same for both production and inventory, the two terms will be used interchangeably.

Good inventory management is essential to the successful operation of most organizations, both because of the amount of money the inventory represents and because of the impact that inventories have on daily operations.

An inventory can serve many purposes, among them the ability:

    1. to support independence of operations,
2. to meet both anticipated demand and variation in demand,
    3. to decouple components of production and allow flexibility in production scheduling and
    4. to hedge against price increases, or to take advantage of quantity discounts.

The many advantages of stock keeping must however be weighed against the costs of keeping the inventory. This can best be described as the “too much/too little problem”: order too much and inventory is left over, or order too little and sales are lost.

This can be a single-period (a onetime purchasing decision) or a multi-period problem, involving a single warehouse or geographically dispersed multilevel warehousing. The task can then be to minimize the organization’s total cost, maximize the level of customer service, minimize ‘loss’, maximize profit, etc.

Whatever the purpose, the calculation will have to be based on knowledge of the sales distribution. In addition, sales will usually have a seasonal variance, creating a balancing act between production, logistics and warehousing costs. In the example given below the sales forecast will have to be viewed as a periodic forecast (month, quarter, etc.).

We have intentionally selected a ‘simple’ problem to highlight the optimization process and the properties of the optimal solution. The latter is seldom described in the standard texts.

    The News-vendor problem

The news-vendor is facing a onetime purchasing decision: to maximize expected profit, the order quantity Q should be set so that the expected loss on the Qth unit equals the expected gain on the Qth unit:

I.  Co * F(Q) = Cu * (1 - F(Q)), where

    Co = The cost of ordering one more unit than what would have been ordered if demand had been known – or the increase in profit enjoyed by having ordered one fewer unit,

    Cu = The cost of ordering one fewer unit than what would have been ordered if demand had been known  – or the increase in profit enjoyed by having ordered one more unit, and

F(Q) = the probability of demand being less than or equal to Q. By rearranging terms in the above equation we find:

II.  F(Q) = Cu/(Co + Cu)

This ratio is often called the critical ratio (CR). The usual way of solving this is to assume that demand is normally distributed, giving Q as:

III.  Q = m + z * s, where z = (Q - m)/s is standard normally distributed (zero mean and unit variance), m is expected demand and s its standard deviation.

Demand, unfortunately, rarely has a normal distribution, and to make things worse we usually don’t know the exact distribution at all. We can only ‘find’ it by Monte Carlo simulation, and thus have to find the Q satisfying equation (I) by numerical methods.
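A minimal sketch of this numerical approach in Python, using an assumed right-skewed lognormal as a stand-in for the simulated demand distribution (all parameters are illustrative, not the example’s actual figures):

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for the simulated demand distribution: a right-skewed
# lognormal with mean around 2000 units (an assumption for illustration).
demand = rng.lognormal(mean=7.5, sigma=0.45, size=100_000)

co = 1.0                        # cost of ordering one unit too many
markup = 300.0                  # product markup in percent
cu = co * (1 + markup / 100)    # cost of ordering one unit too few (equation IV)

cr = cu / (co + cu)             # critical ratio (equation II), here 0.8
q = np.quantile(demand, cr)     # the Q satisfying F(Q) = CR, found numerically

print(f"critical ratio = {cr:.2f}, optimal order quantity = {q:.0f} units")
```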

    For the news-vendor the inventory level should be set to maximize profit given the sales distribution. This implies that the cost of lost sales will have to be weighed against the cost of adding more to the stock.

    If we for the moment assume that all these costs can be regarded as fixed and independent of the inventory level, then the product markup (% of cost) will determine the optimal inventory level:

IV.  Cu = Co * (1 + Markup/100)

In the example given here the critical ratio is approx. 0.8. The question then is whether the inventory level indicated by that critical ratio always will be the best for the organization.
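Combining equations II and IV shows where the 0.8 comes from. With the 300% markup used in the example below:

F(Q) = Cu/(Co + Cu) = Co * (1 + 300/100) / (Co + Co * (1 + 300/100)) = 4/5 = 0.8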

    Expected demand

The following graph indicates the news-vendor’s demand distribution. Expected demand is 2096 units ((Median demand is 1819 units and the demand lies most typically in the range of 1500 to 2000 units.)), but the distribution is heavily skewed to the right ((The demand distribution has a skewness of 0.78, with a coefficient of variation of 0.45, a lower quartile of 1432 units and an upper quartile of 2720 units.)), so there is a possibility of demand exceeding the expected demand:

    By setting the product markup – in the example below it is set to 300% – we can calculate profit and loss based on the demand forecast.

    Profit and Loss (of opportunity)

    In the following we have calculated profit and loss as:

    Profit = sales less production costs of both sold and unsold items
    Loss = value of lost sales (stock-out) and the cost of having produced and stocked more than can be expected to be sold
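A sketch of how such a scan over inventory levels can be simulated, reusing the assumed demand distribution from the sketch above (the cost figures are illustrative, so the resulting levels will not reproduce the article’s numbers):

```python
import numpy as np

rng = np.random.default_rng(7)
demand = rng.lognormal(mean=7.5, sigma=0.45, size=100_000)  # assumed demand

cost = 1.0
price = cost * (1 + 300 / 100)        # unit price with a 300% markup

levels = np.arange(500, 5001, 10)     # candidate inventory levels
exp_profit, exp_loss = [], []

for q in levels:
    sales = np.minimum(demand, q)     # we can never sell more than we stock
    # Profit: sales revenue less production costs of both sold and unsold items.
    exp_profit.append((price * sales - cost * q).mean())
    # Loss: value of lost sales plus the cost of unsold items.
    lost_sales = price * (demand - sales)
    overstock = cost * (q - sales)
    exp_loss.append((lost_sales + overstock).mean())

exp_profit, exp_loss = np.array(exp_profit), np.array(exp_loss)
print("profit-maximizing level:", levels[exp_profit.argmax()])
print("loss-minimizing level:  ", levels[exp_loss.argmin()])
```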

The figure below indicates what will happen as we change the inventory level. We can see, as we successively move to higher levels (from left to right on the x-axis), that expected profit (blue line) will increase to a maximum of ¤4963 at a level of 2729 units:

    At that point we can expect to have some excess stock and in some cases also lost sales. But regardless, it is at this point that expected profit is maximized, so this gives the optimal stock level.

Since we include the costs of both sold and unsold items, the point giving maximum expected profit will be below the point minimizing expected loss – ¤1460 at a production level of 2910 units.

    Given the optimal inventory level (2729 units) we find the actual sales frequency distribution as shown in the graph below. At this level we expect an average sale of 1920 units – ranging from 262 to 2729 units ((Having a lower quartile of 1430 units and an upper quartile of 2714 units.)).

The graph shows that the distribution possesses two different modes ((The most common value in a set of observations.)) or two local maxima. This bimodality is created by the fact that the demand distribution is heavily skewed to the right, so that any demand exceeding 2729 units will imply 2729 units sold, with the rest as lost sales (realized sales = min(demand, 2729)).

This bimodality will of course be reflected in the distribution of realized profits. Bear in mind that the line (blue) giving maximum profit is an average of all realized profits during the Monte Carlo simulation, given the demand distribution and the selected inventory level. We can therefore expect realized profit both below and above this average (¤4963) – as shown in the frequency graph below:

Expected (average) profit is ¤4963, with a minimum of -¤1681 and a maximum of ¤8186; the range of realized profits is therefore a very large ¤9867 ((Having a lower quartile of ¤2991 and an upper quartile of ¤8129.)).

So even if we maximize profit, we must expect a large variation in realized profits; there is no way the original uncertainty in the demand distribution can be reduced or removed.

    Risk and Reward

Increased profit comes at a price: increased risk. The graph below describes the situation; the blue curve shows how expected profit increases with the production or inventory (service) level. The spread between the green and red curves indicates the band where actual profit will fall with 80% probability. As is clear from the graph, this band widens as we move to the right, indicating an increased upside (area up to the green line) but also an increased probability of a substantial downside (area down to the red line):

For some companies – depending on the shape of the demand distribution – concerns other than profit maximization might therefore be more important, like predictability of results (profit). The setting of inventory or production levels should accordingly be viewed as an element of the board’s risk assessment.

On the other hand, the uncertainty band around loss will decrease as the service level increases. This lies in the fact that loss due to lost sales diminishes as the service level increases, and that the high markup easily covers the cost of over-production.

Thus a strategy of ‘loss’ minimization will falsely give a sense of ‘risk minimization’, while in reality it increases the uncertainty of future realized profit.

    Product markup

The optimal stock or production level will be a function of the product markup. A high markup will give room for a higher level of unsold items, while a low markup will necessitate a focus on cost reduction and the acceptance of stock-outs:

The relation between markup (%) and the production level is quadratic ((Markup (%) = 757.5 - 0.78 * production level + 0.00023 * production level².)), implying that the markup will have to be increasingly higher the further out on the right tail we fix the production level.

    The Optimal inventory (production) level

If we put it all together we get the chart below. Here the green curve is the cumulative sales distribution, giving the probability of each level of sales, and the brown curve the optimal stock or production level given the markup.

The optimal stock level is then found by drawing a line from the markup axis (right y-axis) to the curve for the optimal stock level, and down to the x-axis, giving the stock level. By continuing the line from the markup axis to the probability axis (left y-axis), we find the probability of stock-out (1 - the cumulative probability) and the probability of having a stock level in excess of demand:

By using the sales distribution we can find the optimal stock/production level given the markup. This would not have been possible with single-point sales forecasts, which could have ended up almost anywhere on the curve for forecasted sales.

Even if a single-point forecast managed to find expected sales – as mean, mode or median – it would have given wrong answers about the optimal stock/production level, since the shape of the sales distribution would have been unknown.

In this case, with the sales distribution having a right tail, the level would have been too low – or, with a low markup, too high. With a left-skewed sales distribution the result would have been the other way around: the level would have been too high, and with a low markup probably too low.

    In the case of multilevel warehousing, the above analyses have to be performed on all levels and solved as a simultaneous system.

    The state of affairs at the point of maximum

To have the full picture of the state of affairs at the point of maximum, we have to take a look at what we can expect of over- and under-production. At the level giving maximum expected profit we will on average have an underproduction of 168 units, ranging from zero to nearly 3000 ((Having a coefficient of variation of almost 250%.)). On the face of it this could easily be interpreted as having set the level too low, but as we shall see, that is not the case.

Since we have a high markup, lost sales will weigh heavily in the profit maximization, and as a result we can expect to have unsold items in our stock at the end of the period. On average we will have a little over 800 units left in stock, ranging from zero to nearly 2500. The lower quartile is 14 units and the upper 1300 units, so in 75% of the cases we will have an overproduction of less than 1300 units. However, in 25% of the cases the overproduction will be in the range of 1300 to 2500 units.

Even with the possibility of ending the period with a large number of unsold units, the strategy of profit maximization will on average give the highest profit – however, as we have seen, with a very high level of uncertainty about the actual profit being realized.

Now, since a lower inventory level in this case will reduce expected profit only by a small amount while narrowing the band of possible outcomes by a substantial amount, other strategies giving more predictability for the actual result should be considered.

  • Budgeting Revisited


    This entry is part 2 of 2 in the series Budgeting

     

    Introduction

Budgeting is one area that is well suited for Monte Carlo simulation. Budgeting involves personal judgments about the future values of a large number of variables: sales, prices, wages, downtime, error rates, exchange rates, etc. – variables that describe the nature of the business.

Everyone who has been involved in a budgeting process knows that it is an exercise in uncertainty; however, it is seldom described in this way, and even more seldom is uncertainty actually calculated as an integrated part of the budget.

    Good budgeting practices are structured to minimize errors and inconsistencies, drawing in all the necessary participants to contribute their business experience and the perspective of each department. Best practice in budgeting entails a mixture of top-down guidelines and standards, combined with bottom-up individual knowledge and experience.

    Excel, the de facto tool for budgeting, is a powerful personal productivity tool. Its current capabilities, however, are often inadequate to support the critical nature of budgeting and forecasting. There will come a point when a company’s reliance on spreadsheets for budgeting leads to severely ineffective decision-making, lost productivity and lost opportunities.

Spreadsheets can accommodate many tasks – but, over time, some of the models running in Excel may grow too big for the spreadsheet application. Programming in a spreadsheet model often requires embedded assumptions and complex macros, creating opportunities for formula errors and broken links between workbooks.

    It is common for spreadsheet budget models and their intricacies to be known and maintained by a single person who becomes a vulnerability point with no backup. And there are other maintenance and usage issues:

    A.    Spreadsheet budget models are difficult to distribute and even more difficult to collect and consolidate.
    B.    Data confidentiality is almost impossible to maintain in spreadsheets, which are not designed to hide or expose data based upon each user’s role.
    C.    Financial statements are usually not fully integrated leaving little basis for decision making.

    These are serious drawbacks for corporate governance and make the audit process more difficult.

These are a few of the many reasons why we use a dedicated simulation language for our models – one that specifically does not mix data and code.

    The budget model

    In practice budgeting can be performed on different levels:
    1.    Cash Flow
    2.    EBITDA
    3.    EBIT
    4.    Profit or
    5.    Company value.

The most efficient is the EBITDA level, since taxes, depreciation and amortization are mostly given in the short term. This is also the level where consolidation of subsidiaries is most easily achieved. An EBITDA model describing the firm’s operations can in turn be used as a subroutine for more detailed and encompassing analysis through P&L and balance sheet simulation.

The aim will then be to estimate the firm’s equity value and its probability distribution. This can in turn be used for strategy selection etc.

    Forecasting

In today’s fast-moving and highly uncertain markets, forecasting has become the single most important element of the budget process.

Forecasting, or predictive analytics, can best be described as statistical modeling enabling the prediction of future events or results, using present and past information and data.

    1. Forecasts must integrate both external and internal cost and value drivers of the business.
    2. Absolute forecast accuracy (i.e. small confidence intervals) is less important than the insight about how current decisions and likely future events will interact to form the result.
    3. Detail does not equal accuracy with respect to forecasts.
    4. The forecast is often less important than the assumptions and variables that underpin it – those are the things that should be traced to provide advance warning.
5. Never rely on single-point or scenario forecasting.

All uncertainty about the market sizes, market shares, costs and prices, interest rates, exchange rates, taxes etc. – and their correlation – will finally end up contributing to the uncertainty in the firm’s budget forecasts.

    The EBITDA model

The EBITDA model has to be detailed enough to capture all important cost and value drivers, but simple enough to be easy to update with new data and assumptions.

Input to the model can come from different sources: any internal reporting system or spreadsheet. The easiest way to communicate with the model is by using Excel spreadsheet templates.

Such templates will be pre-defined in the sense that the information the model needs is in a pre-determined place in the workbook. This makes it easy if the budgets for subsidiaries are reported (and consolidated) in a common system (e.g. SAP) and can be dumped into an Excel spreadsheet. If the budgets are communicated directly to the head office or the parent company, they can be read directly by the model.

    Standalone models and dedicated subroutines

    We usually construct our EBITDA models so that they can be used both as a standalone model and as a subroutine for balance simulation. The model can then be used both for short term budgeting and long-term EBITDA forecasting and simulation and for short/long term balance forecasting and simulation. This means that the same model can be efficiently reused in different contexts.
Rolling budgets and forecasts

The EBITDA model can be constructed to give rolling forecasts based on updated monthly or quarterly values, taking into consideration the seasonality of the operations. This will give new forecasts (a new budget) for the remainder of the year and/or the next twelve months. By forecasts we again mean the probability distributions for the budget variables.

Even if the variables have not changed, the fact that we move towards the end of the year will reduce the uncertainty of the end-year results, and also of the forecast for the next twelve months.

    Uncertainty

    The most important part of budgeting with Monte Carlo simulation is assessment of the uncertainty in the budgeted (forecasted) cost and value drivers. This uncertainty is given as the most likely value (usually the budget figure) and the interval where it is assessed with a high degree of confidence (approx. 95%) to fall.

We will then use these lower and upper limits (5% and 95%) for sales, prices and other budget items, together with the budget values, as indicators of the shape of the probability distributions for the individual budget items. Together they describe the range and uncertainty in the EBITDA forecasts.

This gives us the opportunity to simulate (Monte Carlo) a number of possible outcomes – by a large number of runs of the model, usually 1000 – of net revenue, operating expenses and finally EBITDA. This again will give us their probability distributions.
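A sketch of that simulation step; here, as a simplification of the fitting described above, each budget item is drawn from a triangular distribution spanning its lower limit, budget value and upper limit (all figures invented):

```python
import numpy as np

rng = np.random.default_rng(11)
n = 1000  # number of model runs, as in the text

# Invented budget items as (lower limit, budget value, upper limit).
sales_volume = rng.triangular(9_000, 10_000, 12_500, n)   # units
price        = rng.triangular(9.0, 10.0, 10.5, n)         # per unit
unit_cost    = rng.triangular(6.5, 7.0, 8.0, n)           # per unit
fixed_cost   = rng.triangular(18_000, 20_000, 24_000, n)

net_revenue = sales_volume * price
op_expenses = sales_volume * unit_cost + fixed_cost
ebitda = net_revenue - op_expenses

print(f"expected EBITDA: {ebitda.mean():,.0f}")
print(f"90% interval: {np.quantile(ebitda, 0.05):,.0f} to {np.quantile(ebitda, 0.95):,.0f}")
```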

Most managers and their staff have, based on experience, a good grasp of the range in which the values of their variables will fall. It is not based on any precise computation, but is a reasonable assessment by knowledgeable persons. Selecting the budget value, however, is more difficult. Should it be the “mean” or the “most likely value”, or should the manager just delegate the fixing of the values to the responsible departments?

Now, we know that the budget values might be biased for a number of reasons – most simply by bonus schemes etc. – and that budgets based on average assumptions are wrong on average.

This is therefore where the individual manager’s intent and culture will be manifested, and it is here the greatest learning effect for both the managers and the parent company will be, as under-budgeting and overconfidence will stand out as excessively large deviations from the model-calculated expected value (the probability-weighted average over the interval).

    Output

The output from the Monte Carlo simulation will be in the form of graphs that put all the runs of the simulation together to form the cumulative distribution for, e.g., the operating expenses (red line):

In the figure we have computed the frequencies of observed (simulated) values for operating expenses (blue frequency plot) – the x-axis gives the operating expenses and the left y-axis the frequency. By summing up from left to right we can compute the cumulative probability curve. The s-shaped curve (red) gives for every point the probability (on the right y-axis) of having operating expenses less than the corresponding point on the x-axis. The shape of this curve and its range on the x-axis give us the uncertainty in the forecasts.

    A steep curve indicates little uncertainty and a flat curve indicates greater uncertainty.  The curve is calculated from the uncertainties reported in the reporting package or templates.

    Large uncertainties in the reported variables will contribute to the overall uncertainty in the EBITDA forecast and thus to a flatter curve and contrariwise. If the reported uncertainty in sales and prices has a marked downside and the costs a marked upside the resulting EBITDA distribution might very well have a portion on the negative side on the x-axis – that is, with some probability the EBITDA might end up negative.

In the figure below the lines give the expected EBITDA and the budget value. The expected EBITDA can be found by drawing a horizontal line from the 0.5 (50%) point on the y-axis to the curve, and a vertical line from this point on the curve to the x-axis. This point gives us the expected EBITDA value – the point where there is a 50% probability of having a value of EBITDA below it, and 100% - 50% = 50% of having it above.
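Reading values off the simulated distribution is simple percentile arithmetic; a self-contained sketch (the EBITDA sample and budget figure are invented):

```python
import numpy as np

rng = np.random.default_rng(11)
ebitda = rng.normal(10_000, 2_500, 1000)  # stand-in for the simulated sample

expected_ebitda = np.quantile(ebitda, 0.50)   # the 50% point on the red curve
budget = 14_000                               # assumed budget figure
p_below = (ebitda < budget).mean()            # probability of ending below budget

print(f"expected EBITDA: {expected_ebitda:,.0f}")
print(f"P(result < budget) = {p_below:.0%}")
```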

The second set of lines gives the budget figure and the probability that the result will end up lower than budget. In this case it is almost 100% probable that it will be much lower than management had expected.

This distribution’s location on the EBITDA axis (x-axis) and its shape give a large amount of information about what we can expect of possible results and their probability.

The following figure, giving the EBIT distributions for a number of subsidiaries, exemplifies this. One will most probably never earn money (grey), three are cash cows (blue, green and brown), and the last (red) can earn a lot of money:

    Budget revisions and follow up

Normally – if something extraordinary does not happen – we would expect both the budget and the actual EBITDA to fall somewhere in the region of the expected value. We have, however, to expect some deviation both from budget and from expected value due to the nature of the industry. Bearing in mind the possibility of unanticipated events, or events “outside” the subsidiary’s budget responsibilities but affecting the outcome, this implies that:

• Having the actual result deviating from budget is not necessarily a sign of bad budgeting.
• Having the result close to or on budget is not necessarily a sign of good budgeting.

    However:

• Large deviations between budget and actual result need looking into – especially if the deviation from expected value is also large.
• Large deviations between budget and expected value can imply either that the limits are set “wrong” or that the budget EBITDA does not reflect the downside risk or upside opportunity expressed by the limits.

Another way of looking at the distributions is by the probability of having the actual result below budget – that is, how far off the budget ended up. In the graph below, country #1’s budget came out with a 72% probability of the actual result being below budget. It turned out that the actual figure would have been lower with only 36% probability. The length of the bars thus indicates the budget discrepancies.

For country #2 it is the other way around: the probability of having had a result lower than the final result is 88%, while the budgeted figure had a 63% probability of having been too low. In this case the market was seriously misjudged.

In the following we have measured the deviation of the actual result both from the budget values and from the expected values. In the figures, the left axis gives the deviation from expected value and the bottom axis the deviation from budget value.

1. If the deviation for a country falls in the upper right quadrant, the deviations are positive for both budget and expected value – and the country is overachieving.
2. If the deviation falls in the lower left quadrant, the deviations are negative for both budget and expected value – and the country is underachieving.
3. If the deviation falls in the upper left quadrant, the deviation is negative for budget and positive for expected value – and the country is overachieving but had too high a budget.

With a left-skewed EBITDA distribution there should not be any observations in the lower right quadrant; that will only happen when the distribution is skewed to the right – and then there will not be any observations in the upper left quadrant:

As the managers get more experienced in assessing the uncertainty they face, we see that the budget figures come more in line with the expected values, and that the intervals given are shorter and better oriented.

If the budget is in line with the expected value given the described uncertainty, the upside potential ratio should be approximately one. A high value indicates a potential for higher EBITDA and vice versa. Using this measure we can numerically describe management’s budgeting behavior:

    Rolling budgets

If the model is set up to give rolling forecasts of the budget EBITDA as new – in this case monthly – data arrive, we will get successive forecasts as in the figure below:

As data for new months are received, the curve gets steeper, since the uncertainty is reduced. From the squares on the lines indicating expected value, we see that the value is moving slowly to the right, towards higher EBITDA values.

    We can of course also use this for long term forecasting as in the figure below:

As should now be evident, the EBITDA Monte Carlo model has multiple fields of use, all of which increase management’s possibilities for control and foresight, giving ample opportunity for prudent planning for the future.

     

     

  • Forecasting sales and forecasting uncertainty


    This entry is part 1 of 4 in the series Predictive Analytics

     

    Introduction

There are a large number of methods used for forecasting, ranging from judgmental (expert forecasting etc.) through expert systems and time series to causal methods (regression analysis etc.).

Most are used to give a single-point forecast, or at most single-point forecasts for a limited number of scenarios. We will in the following take a look at the un-usefulness of such single-point forecasts.

As an example we will use a simple forecast ‘model’ for the net sales of a large multinational company. It turns out that there is a good linear relation between the company’s yearly net sales in million euro and the growth rate (%) in world GDP:

with a correlation coefficient R = 0.995. The relation thus accounts for almost 99% of the variation in the sales data. The observed data are given as green dots in the graph below, and the regression as the green line. The ‘model’ gives expected sales as a constant 1638M, plus 53M in increased (or decreased) sales per percentage point increase (or decrease) in world GDP growth:

The International Monetary Fund (IMF), which kindly provided the historical GDP growth rates, also gives forecasts for the expected future change in the world GDP growth rate (WEO, April 2012) for the next five years. When we put these forecasts into the ‘model’ we end up with forecasts for net sales for 2012 to 2016, as depicted by the yellow dots in the graph above.

    So mission accomplished!  …  Or is it really?

We know that the probability of getting a single-point forecast right is zero, even when assuming that the forecast of the GDP growth rate is correct – so the forecasts we have so far will certainly be wrong, but how wrong?

    “Some even persist in using forecasts that are manifestly unreliable, an attitude encountered by the future Nobel laureate Kenneth Arrow when he was a young statistician during the Second World War. When Arrow discovered that month-long weather forecasts used by the army were worthless, he warned his superiors against using them. He was rebuffed. “The Commanding General is well aware the forecasts are no good,” he was told. “However, he needs them for planning purposes.” (Gardner & Tetlock, 2011)

    Maybe we should take a closer look at possible forecast errors, input data and the final forecast.

    The prediction band

Given the regression, we can calculate a forecast band for future observations of sales given forecasts of the future GDP growth rate; that is, the region where we, with a certain probability, will expect new values of net sales to fall. In the graph below the green area gives the 95% forecast band:

Since the variance of the predictions increases the further the new forecasts for the GDP growth rate lie from the mean of the sample values (used to compute the regression), the band will widen as we move to either side of this mean. The band will also widen with decreasing correlation (R) and sample size (the number of observations the regression is based on).

So even if the fit to the data is good, our regression is based on a very small sample, giving plenty of room for prediction errors. In fact, a 95% prediction interval for 2012, with an expected GDP growth rate of 3.5%, is net sales of 1824M plus/minus 82M. Even so, the interval is still only approx. 9% of the expected value.
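A sketch of how such a prediction band can be computed with statsmodels; the eight history points are invented stand-ins generated to be roughly consistent with the stated regression, not the company’s actual data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
gdp_growth = np.array([2.3, 4.2, 5.3, 5.4, 2.8, -0.6, 5.2, 3.9])  # %, invented
net_sales = 1638 + 53 * gdp_growth + rng.normal(0, 10, gdp_growth.size)

X = sm.add_constant(gdp_growth)
fit = sm.OLS(net_sales, X).fit()

# 95% prediction interval for new observations at forecasted growth rates.
new_growth = np.array([3.5, 4.1])
X_new = np.column_stack([np.ones_like(new_growth), new_growth])
frame = fit.get_prediction(X_new).summary_frame(alpha=0.05)
print(frame[["mean", "obs_ci_lower", "obs_ci_upper"]])
```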

    Now we have shown that the model gives good forecasts, calculated the confidence interval(s) and shown that the expected relative error(s) with high probability will be small!

    So the mission is finally accomplished!  …  Or is it really?

The forecasts we have made are based on forecasts of future world GDP growth rates – but how certain are they?

    The GDP forecasts

    Forecasting the future growth in GDP for any country is at best difficult and much more so for the GDP growth for the entire world. The IMF has therefore supplied the baseline forecasts with a fan chart ((  The Inflation Report Projections: Understanding the Fan Chart By Erik Britton, Paul Fisher and John Whitley, BoE Quarterly Bulletin, February 1998, pages 30-37.)) picturing the uncertainty in their estimates.

This fan chart ((Figure 1.12 from: World Economic Outlook (April 2012), International Monetary Fund, ISBN 9781616352462.)) shows as blue colored bands the uncertainty around the WEO baseline forecast with 50, 70, and 90 percent confidence intervals ((As shown, the 70 percent confidence interval includes the 50 percent interval, and the 90 percent confidence interval includes the 50 and 70 percent intervals. See Appendix 1.2 in the April 2009 World Economic Outlook for details.)):

There is also another band on the chart, implied but unseen, indicating a 10% chance of something “unpredictable”. The fan chart thus covers only 90% of the IMF’s estimates of the future probable growth rates.

    The table below shows the actual figures for the forecasted GDP growth (%) and the limits of the confidence intervals:

Year | Lower 90% | Lower 70% | Lower 50% | Baseline | Upper 50% | Upper 70% | Upper 90%
2012 |    2.5    |    2.9    |    3.1    |   3.5    |    3.8    |    4.0    |    4.3
2013 |    2.1    |    2.8    |    3.3    |   4.1    |    4.8    |    5.2    |    5.9

The IMF has the following comments on the figures:

    “Risks around the WEO projections have diminished, consistent with market indicators, but they remain large and tilted to the downside. The various indicators do not point in a consistent direction. Inflation and oil price indicators suggest downside risks to growth. The term spread and S&P 500 options prices, however, point to upside risks.”

Our approximation of the distribution that could have produced the fan chart for 2012, as given in the World Economic Outlook for April 2012, is shown below:

This distribution has mean 3.43%, standard deviation 0.54, minimum 1.22 and maximum 4.70 – it is skewed with a left tail. The distribution thus also encompasses the implied but unseen band in the chart.

    Now we are ready for serious forecasting!

    The final sales forecasts

By employing the same technique that we used to calculate the forecast band, we can by Monte Carlo simulation compute the 2012 distribution of net sales forecasts, given the distribution of GDP growth rates and using the expected variance of the differences between the regression forecasts and new observations. The figure below describes the forecast process:

We are, however, not only using the 90% interval for the GDP growth rate or the 95% forecast band, but the full range of the distributions. The final forecasts of net sales are given as a histogram in the graph below:
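A sketch of this final step; the skew-normal parameters are an assumed calibration roughly matching the stated moments of the GDP distribution (mean 3.43%, sd 0.54, left tail), and the residual standard deviation is likewise an illustrative assumption:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 100_000

# Assumed left-skewed GDP growth distribution for 2012.
gdp = stats.skewnorm.rvs(a=-4, loc=4.09, scale=0.85, size=n, random_state=rng)

# Propagate through the regression (coefficients as stated in the text)
# and add prediction noise around the regression line.
resid_sd = 40  # illustrative assumption
sales = 1638 + 53 * gdp + rng.normal(0, resid_sd, n)

print(f"mean {sales.mean():.0f}M, sd {sales.std():.0f}M")
print("90% interval (M):", np.quantile(sales, [0.05, 0.95]).round(0))
```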

This distribution of forecasted net sales has mean sales of 1820M, standard deviation 81, minimum sales of 1590M and maximum sales of 2055M – and it is slightly skewed with a left tail.

    So what added information have we got from the added effort?

Well, we now know that there is only a 20% probability of net sales being lower than 1755M, and a 20% probability of them being above 1890M. The interval from 1755M to 1890M in net sales will then with 60% probability contain the actual sales in 2012 – see the graph below, giving the cumulative sales distribution:

We also know that we will, with 90% probability, see actual net sales in 2012 between 1720M and 1955M. But most important is that we have visualized the uncertainty in the sales forecasts, and that contingency planning for both low and high sales should be performed.

    An uncertain past

    The Bank of England’s fan chart from 2008 showed a wide range of possible futures, but it also showed the uncertainty about where we were then – see that the black line showing National Statistics data for the past has probability bands around it:

This indicates that the values for past GDP growth rates are uncertain (stochastic) or contain measurement errors. This of course also holds for the IMF’s historical growth rates, but the IMF does not supply this type of information.

If the growth rates can be considered stochastic, the results above will still hold, provided the conditional distribution for net sales given the GDP growth rate still fulfills the standard assumptions for using regression methods. If not, other methods of estimation must be considered.

    Black Swans

    But all this uncertainty was still not enough to contain what was to become reality – shown by the red line in the graph above.

    How wrong can we be? Often more wrong than we like to think. This is good – as in useful – to know.

    “As Donald Rumsfeld once said: it’s not only what we don’t know – the known unknowns – it’s what we don’t know we don’t know.”

While statistical methods may lead us to a reasonable understanding of some phenomenon, that does not always translate into an accurate practical prediction capability. When that is the case, we find ourselves talking about risk – the likelihood that some unfavorable or favorable event will take place. Risk assessment is then necessitated, and we are left only with probabilities.

    A final word

Sales forecast models are an integrated part of our enterprise simulation models – as part of the models’ predictive analytics. Predictive analytics can be described as statistical modeling enabling the prediction of future events or results ((In this case the probability distribution of future net sales.)), using present and past information and data.

In today’s fast-moving and highly uncertain markets, forecasting has become the single most important element of the management process. The ability to quickly and accurately detect changes in key external and internal variables and adjust tactics accordingly can make all the difference between success and failure:

    1. Forecasts must integrate both external and internal drivers of business and the financial results.
    2. Absolute forecast accuracy (i.e. small confidence intervals) is less important than the insight about how current decisions and likely future events will interact to form the result.
    3. Detail does not equal accuracy with respect to forecasts.
    4. The forecast is often less important than the assumptions and variables that underpin it – those are the things that should be traced to provide advance warning.
5. Never rely on single-point or scenario forecasting.

The forecasts are usually done in three stages: first forecasting the market for the particular product(s), then the firm’s market share(s), and ending up with a sales forecast. If the firm has activities in different geographic markets, the exercise has to be repeated in each market, having in mind the correlation between markets:

    1. All uncertainty about the different market sizes, market shares and their correlation will finally end up contributing to the uncertainty in the forecast for the firm’s total sales.
    2. This uncertainty combined with the uncertainty from other forecasted variables like interest rates, exchange rates, taxes etc. will eventually be manifested in the probability distribution for the firm’s equity value.

The ‘model’ we have been using in the example has never been tested out of sample. Its usefulness as a forecast model is therefore still debatable.

    References

    Gardner, D & Tetlock, P., (2011), Overcoming Our Aversion to Acknowledging Our Ignorance, http://www.cato-unbound.org/2011/07/11/dan-gardner-and-philip-tetlock/overcoming-our-aversion-to-acknowledging-our-ignorance/

    World Economic Outlook Database, April 2012 Edition; http://www.imf.org/external/pubs/ft/weo/2012/01/weodata/index.aspx


     

     

  • “How can you be better than us understand our business risk?”


This is a question we often hear, and the simple answer is that we don’t! But by using our methods and models we can use your knowledge in such a way that it can be systematically measured and accumulated throughout the business, and be presented in easy-to-understand graphs to management and the board.

    The main reason for this lies in how we can treat uncertainties ((Variance is used as measure of uncertainty or risk.)) in the variables and in the ability to handle uncertainties stemming from variables from different departments simultaneously.

Risk is usually compartmentalized in “silos” and regarded as proprietary to the department – not as a risk correlated or co-moving with other risks in the company, caused by common underlying events influencing their outcome:

    When Queen Elizabeth visited the London School of Economics in autumn 2008 she asked why no one had foreseen the crisis. The British Academy Forum replied to the Queen in a letter six months later. Included in the letter was the following:

    One of our major banks, now mainly in public ownership, reputedly had 4000 risk managers. But the difficulty was seeing the risk to the system as a whole rather than to any specific financial instrument or loan (…) they frequently lost sight of the bigger picture ((The letter from the British Academy to the Queen is available at: http://media.ft.com/cms/3e3b6ca8-7a08-11de-b86f-00144feabdc0.pdf)).

To be precise, we are actually not simulating risk in and of itself; risk is just a by-product of simulating a company’s financial and operational (economic) activities. Since the variables describing these activities are of a stochastic nature, which is to say contain uncertainty, all variables in the P&L and balance sheet will contain uncertainty. As such, they can best be described by the shape of their frequency distribution – found after thousands of simulations. And it is the shape of these distributions that describes the uncertainty in the variables.

    Most ERM activities are focused on changing the left or downside tail – the tail that describes what normally is called risk.

    We however are also interested in the right tail or upside tail, the tail that describes possible outcomes increasing company value. Together they depict the uncertainty the company faces:

S@R thus treats company risk holistically, by modeling risks (uncertainty) as parts of the overall operational and financial activities. We are thus able to “add up” the risks to a consolidated level.

    Having the probability distribution for e.g. the company’s equity value gives us the opportunity to apply risk measures to describe the risk facing the shareholders or the risk added or subtracted by different strategies like investments or risk mitigation tools.

Since this can’t be done with ordinary addition ((The variance of the sum of two stochastic variables is the sum of their variances plus twice the covariance between them.)) (or subtraction), we have to use Monte Carlo simulation.

The value added by this is:

    1.  A method for assessing changes in strategy; investments, new markets, new products etc.
    2. A heightening of risk awareness in management across an organization’s diverse businesses.
    3. A consistent measure of risk allowing executive management and board reporting and response across a diverse organization.
    4. A measure of risk (including credit and market risk) for the organization that can be compared with capital required by regulators, rating agencies and investors.
    5. A measure of risk by organization unit, product, channel and customer segment which allows risk adjusted returns to be assessed, and scarce capital to be rationally allocated.
    6.  A framework from which the organization can decide its risk mitigation requirements rationally.
    7. A measure of risk versus return that allows businesses and in particular new businesses (including mergers and acquisitions) to be assessed in terms of contribution to growth in shareholder value.

    The independent risk experts are often essential for consistency and integrity. They can also add value to the process by sharing risk and risk management knowledge gained both externally and elsewhere in the organization. This is not just a measurement exercise, but an investment in risk management culture.

    Forecasting

All business planning is built on forecasts of market sizes, market shares, prices and costs. They are usually given as low, mean and high scenarios, without specifying the relationship between the variables. It is easy to show that when you combine such forecasts you can end up very wrong ((https://www.strategy-at-risk.com/2009/05/04/the-fallacies-of-scenario-analysis/)). However, the 5%, 50% and 95% values from the scenarios can be used to produce a probability distribution for each variable, and the simultaneous effect of these distributions can be calculated using Monte Carlo simulation, giving for instance the probability distribution for profit or cash flow from that market. This can again be used to consolidate the company’s cash flow or profit etc.
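A sketch of that construction; as a simplification, the low/mean/high scenario values are used directly as the endpoints and mode of triangular distributions (all market figures invented):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

# Invented (low, mean, high) scenario values for one market.
market_size  = rng.triangular(80_000, 100_000, 140_000, n)  # units
market_share = rng.triangular(0.10, 0.15, 0.18, n)
unit_margin  = rng.triangular(40.0, 50.0, 55.0, n)          # per unit

profit = market_size * market_share * unit_margin

print(f"expected profit: {profit.mean():,.0f}")
print("5% / 50% / 95%:", np.quantile(profit, [0.05, 0.50, 0.95]).round(0))
```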

    Controls and Mitigation

Controls and mitigation play a significant part in reducing the likelihood of a risk event, or the amount of loss should one occur. They do, however, have a material cost. One of the drivers of measuring risk is to support a more rational analysis of the costs and benefits of controls and mitigation.
The result after controls and mitigation becomes the final, or residual, risk distribution for the company.

    Distributing Diversification Benefits

    At each level of aggregation within a business diversification benefits accrue, representing the capacity to leverage the risk capital against a larger range of non-perfectly correlated risks. How should these diversification benefits be distributed to the various businesses?

This is not an academic matter, as the residual risk capital ((Bodoff, N. M., Capital Allocation by Percentile Layer, Variance, Vol. 3, Issue 1, Casualty Actuarial Society, pp. 13-30, http://www.variancejournal.org/issues/03-01/13.pdf; Erel, Isil, Myers, Stewart C. and Read, James, Capital Allocation (May 28, 2009), Fisher College of Business Working Paper No. 2009-03-010, available at SSRN: http://ssrn.com/abstract=1411190 or http://dx.doi.org/10.2139/ssrn.1411190.)) attributed to each business segment is critical in determining its shareholder value creation and thus its strategic worth to the enterprise. Getting this wrong could lead the organization to discourage its better value-creating segments and encourage ones that dissipate shareholder value.

    The simplest is the pro-rata approach which distributes the diversification benefits on a pro-rata basis down the various segment hierarchies (organizational unit, product, customer segment etc.).

A more correct approach, which can be built into the Monte Carlo simulation, is the contributory method, which takes into account the extent to which a segment of the organization’s business is correlated with, or contrary to, the major risks that make up the company’s overall risk. This rewards counter-cyclical businesses and others that diversify the company’s risk profile.
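One common way to implement the contributory method is a covariance-based (Euler) allocation, sketched below with invented, correlated segment results; the counter-cyclical segment is rewarded with a smaller share of the risk capital:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 50_000

# Invented, correlated results for three business segments (segment C is
# negatively correlated with A, i.e. counter-cyclical).
cov = np.array([[100.0,  60.0, -30.0],
                [ 60.0, 100.0,  10.0],
                [-30.0,  10.0, 100.0]])
segments = rng.multivariate_normal([50.0, 80.0, 30.0], cov, size=n)
total = segments.sum(axis=1)

# Euler/covariance allocation: segment i carries Cov(X_i, total)/Var(total)
# of the total risk, so the allocated risks sum exactly to the total risk.
centered = segments - segments.mean(axis=0)
t_centered = (total - total.mean())[:, None]
betas = (centered * t_centered).mean(axis=0) / total.var()
allocated = betas * total.std()

print("stand-alone risks:", segments.std(axis=0).round(2))
print("allocated risks:  ", allocated.round(2), " sum:", allocated.sum().round(2))
```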

    Aggregation with market & credit risk

For many parts of an organization there may be no market or credit risk; for areas such as sales and manufacturing, operational and business risk covers all of their risks.

    But at the company level the operational and business risk needs to be integrated with market and credit risk to establish the overall measure of risk being run by the company. And it is this combined risk capital measure that needs to be apportioned out to the various businesses or segments to form the basis for risk adjusted performance measures.

    It is not enough just to add the operational, credit and market risks together. This would over count the risk – the risk domains are by no means perfectly correlated, which a simple addition would imply. A sharp hit in one risk domain does not imply equally sharp hits in the others.

    Yet they are not independent either. A sharp economic downturn will affect credit and many operational risks and probably a number of market risks as well.

    The combination of these domains can be handled in a similar way to correlations within operational risk, provided aggregate risk distributions and correlation factors can be estimated for both credit and market risk.

    Correlation risk

Markets that are part of the same sector or group are usually very highly correlated, or move together. Correlation risk is the risk associated with having several positions in too many similar markets. By using Monte Carlo simulation as described above, this risk can be calculated and added to the company’s risk distribution, which will take part in forming the company’s yearly profit or equity value distribution. And this is the information that management and the board will need.

    Decision making

The distribution for equity value (see above) can then be used for decision purposes. By making changes to the assumptions about the variables’ distributions (low, medium and high values), production capacities etc., the new equity distribution can be compared with the old to find the changes created by the changes in assumptions:

    A versatile tool

    This is not only a tool for C-level decision-making but also for controllers, treasury, budgeting etc.:

The results from these analyses can be presented in the form of B/S and P&L statements looking one to five years (short-term) or five to fifteen years (long-term) ahead, showing the impacts on e.g. equity value, company value, operating income etc., with the purpose of:

• Improve predictability of operating earnings and their expected volatility
• Improve budgeting processes and predict budget deviations
• Evaluate alternative strategic investment options
• Identify and benchmark investment portfolios and their uncertainty
• Identify and benchmark individual business units’ risk profiles
• Evaluate equity values and enterprise values and their uncertainty in M&A processes, etc.

If you always have a picture of what really can happen, you are forewarned and thus forearmed against adverse events, and better prepared to take advantage of favorable events.

From Indexed: Go on, look behind the curtain ((From Indexed: http://thisisindexed.com/2012/02/go-on-look-behind-the-curtain/))


  • Be prepared for a bumpy ride


Imagine you’re nicely settled down in your airline seat on a transatlantic flight – comfortable, with a great feeling. Then the captain comes on, welcomes everybody on board, and continues: “It’s the first time I fly this type of machine, so wish me luck!” Still feeling great? ((Inspired by an article from BTS: http://www.bts.com/news-insights/strategy-execution-blog/Why_are_Business_Simulations_so_Effective.aspx))

Running a company in today’s interconnected and volatile world has become extremely complicated – surely far more so than flying an airliner. You probably don’t have all the indicators, dashboard systems and controls of a flight deck. And business conditions are likely to change far more than flight conditions ever will. Today we live with information overload, data streaming at us almost everywhere we turn. How can we cope? How do we make smart decisions?

Pilots train over and over again. They spend hour after hour in flight simulators before being allowed to sit as co-pilots on a real passenger flight. Fortunately for us passengers, flight hours normally pass by, day after day, without much excitement. Time to hit the simulator again and train for engine fires, damaged landing gear, landing on water, passenger evacuation etc. – becoming both mentally and practically prepared to manage the worst.

Why aren’t we running business simulations to the same extent? Accounting, financial modeling and budgeting are more art than science, many times founded on theories from the last century (not to mention Pacioli’s Italian accounting from 1494). While the theory of behavioural economics progresses, we must use the best tools we can get to better understand financial risks and opportunities, and how to improve and refine value creation – the true job we’re set to do.

How is it done? Like Einstein – seeking simplicity, as far as it goes; finding out which pieces of information are most crucial to the success and survival of the business. For major corporations these can be drawn down from the hundreds to some twenty key variables. (These variables are not set in stone once and for all, but need to be redefined in accordance with the business situation we foresee in the near future.)

At Allevo our focal point is on Risk Governance at large, helping organisations implement Enterprise Risk Management (ERM) frameworks and processes, and specifically assisting boards and executive management in exercising their risk oversight duties. Fundamental to good risk management practice is to understand and articulate the organisation’s (i.e. the board’s) appetite for risk. Without understanding the appetite and tolerance levels for various risks it’s hard to measure, aggregate and prioritize them. How much are we willing to spend on new ventures and opportunities? How much can we afford to lose? How do we calculate the trade-offs?

    There are two essential elements of Risk Appetite: risk capacity and risk capability.

By risk capacity we mean the financial ability to take on new opportunities with their inherent risks (i.e. availability of cash and funding across the strategy period). By risk capability is meant the non-financial resources of the organisation: do we have the knowledge and resources to take on new ventures? Cash and funding are fundamental and come first.

Do executive management and the board really understand the strengths and vulnerabilities hiding in the balance sheet or in the P&L account? Many may have a gut feeling, mostly the CFO and the treasury department. But shouldn’t the executive team and the board (including the Audit Committee, and the Risk Committee if there is one) also really know?

At Allevo we have aligned with Strategy@Risk Ltd to do business simulations. They have experience from all kinds of industries, especially process industries, where they have even helped optimize manufacturing processes. They have simulated airports and flight patterns for a whole country. For companies with high levels of raw material and commodity risk they simulate optimum hedging strategies. But their main contribution, in our opinion, is their ability to simulate your organisation’s balance sheet and P&L accounts. They have created a simulation tool that can be applied to a whole corporation. It needs only to be adjusted to your specific operations and business environments, which is done through interviews and a few workshops with your own people who have the best knowledge of your business (operations, finances, markets, strategy etc.).

    When the key variables have been identified, it’s time to run the first Monte Carlo simulations to find out if the model fits with recent actual experiences and otherwise feels reliable.

No model can ever predict the future. What we want to do is to find the key strengths and weaknesses in your operations and in your balance sheet. By running sensitivity analyses we can first of all understand which the key variables are. We want to focus on what’s important, and leave alone those variables that have little effect on outcomes.

Now, it’s time for the most important part: considering how the selected variables can vary and interact over time. The future contains an inconceivable number of different outcomes ((There are probably more different futures than ways of dealing 52 playing cards. Don’t you think? Well, there are “only” 80,658,175,170,943,878,571,660,636,856,403,766,975,289,505,440,883,277,824,000,000,000,000 ways to shuffle a deck of 52 cards (8.1 x 10^67). What does that say about budgeting with discrete numbers?)). The question is: how can we achieve the outcomes that we desire, and avoid the ones that we dread the most?

Running 10,000 simulations (i.e. closing each and every annual account over 10,000 years), we can stop the simulation when reaching a desired level of outcome and investigate the position of the key variables. Likewise, when nasty results appear, we stop again and record the underlying position of each variable.

The simulations generate an 80-page standard report (which, once again, can feel like information overload). But once you’ve got a feeling for the sensitivity of the business, you can instead do specific “what if?” analyses of scenarios of special interest to yourself, the executive team or the board.

Finally, the model calculates the probability distribution of the organisation’s Enterprise Value going forward. The key for any business is to grow Enterprise Value.

    Simulations show how the likelihood of increasing or losing value varies with different strategies. This part of the simulation tool could be extremely important in strategy selection.

    If you wish to go into more depth on how simulations can support you and your organisation, please visit

    www.allevo.se or www.strategy-at-risk.com

There you’ll find a great depth of material to choose from; or call us directly and we’ll schedule a quick on-site presentation.

    Have a good flight, and …

    Happy landing!

  • M&A: When two plus two is five or three or …


    When two plus two is five (Orwell, 1949)

    Introduction

Mergers & Acquisitions (M&A) are a way for companies to expand rapidly, much faster than organic growth – that is, growth coming from existing businesses – would have allowed. M&As have for decades been a trillion-dollar business, but empirical studies report that a significant proportion must be considered failures.

The conventional wisdom is that the majority of deals fail to add shareholder value to the acquiring company. According to this research, only 30-50% of deals are considered to be successful (see Bruner, 2002).

If most deals fail, why do companies keep doing them? Is it because they think the odds won’t apply to them, or are executives more concerned with extending their influence and growing the company (empire building) than with increasing shareholder value?

    Many writers argue that these are the main reasons driving the M&A activities, with the implication that executives are basically greedy (because their compensation is often tied to the size of the company) – or incompetent.

To be able to create shareholder value, the M&A must give rise to some form of synergy. Synergy is the ability of the merged companies to generate higher shareholder value (wealth) than the standalone entities; that is, the whole will be greater than the sum of its parts.

For many of the observed M&As, however, the opposite has been true – value has been destroyed; the whole has turned out to be less than the sum of its parts (dysergy).

    “When asked to name just one big merger that had lived up to expectations, Leon Cooperman, former co-chairman of Goldman Sachs’ Investment Policy Committee, answered: I’m sure there are success stories out there, but at this moment I draw a blank.” (Sirower, 1997)

The “apparent” M&A failures have also been attributed to both methodological and measurement problems, the argument being that evidence – such as cost savings or revenue enhancements brought by the M&A – is difficult to obtain after the fact. This might also apply to some of the success stories.

What is surprising in most (all?) of the studies of M&A successes and failures is the lack of understanding of the stochastic nature of business activities. For any company it is impossible to estimate its equity value with certainty; the best we can do is to estimate a range of values and the probability that the true value will fall inside this range. The merger of two companies amplifies this, and the discussion of possible synergies or dysergies can only be understood in the context of randomness (stochasticity) ((See: the IFA.com – Probability Machine, Galton Board, Randomness and Fair Price Simulator, Quincunx at http://www.youtube.com/watch?v=AUSKTk9ENzg)).


    The M&A cases

Let’s assume that we have two companies, A and B, that are proposed merged. We have the equity value (shareholder value) distribution for each of the two companies, and can calculate the equity distribution for the merged company. Company A’s value is estimated to be in the range of 0 to 150M with expected value 90M. Company B’s value is estimated to be in the range of -40 to 200M with expected value 140M. (See figure below.)

If we merge the two companies, assuming no synergy or dysergy, we get the value (shareholder) distribution shown by the green curve in the figure. The merged company will have a value in the range of 65 to 321M, with an expected value of 230M. Since there is no synergy/dysergy, no value has been created or destroyed by the merger.
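A sketch of the no-synergy case; the two value distributions below are assumed stand-ins (scaled beta distributions) matching only the stated ranges and expected values, not the actual shapes in the figure:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Company A: range 0 to 150M, expected value 90M (beta(3,2) has mean 0.6).
value_a = 150 * rng.beta(3, 2, n)
# Company B: range -40 to 200M, expected value 140M (beta(3,1) has mean 0.75).
value_b = -40 + 240 * rng.beta(3, 1, n)

merged = value_a + value_b   # merger with no synergy or dysergy

print(f"expected merged value: {merged.mean():.0f}M")   # about 230M
print("90% interval:", np.quantile(merged, [0.05, 0.95]).round(0))
```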

For company B, no value would be added by the merger if A was bought at a price equal to or higher than A’s expected value. If A was bought at a price less than expected value, then there is a probability that the wealth of company B’s shareholders will increase – but even then it is not certain. Any increase in wealth for company B’s shareholders will be at the expense of company A’s shareholders, and vice versa.

    Case 1

If we assume that there is a “connection” between the companies, such that an increase in one company’s revenues will also increase the revenues of the other, we have a synergy that can be exploited.

This situation is depicted in the figure below. The green curve gives the case with no synergy, and the blue the case described above. The difference between them is the synergy created by the merger. The synergy at the dotted line is the synergy we can expect, but it might turn out to be higher if revenues are high – and even negative (dysergy) if revenues are low.

If we produce a frequency diagram of the sizes of the possible synergies, it will look like the diagram below. Bear in mind that the average synergy value is not the value we necessarily will find, but the average over all possible synergy values.

    Case 2

If we assume that the “connection” between the companies is such that a reduction in one of the company’s revenue streams will reduce total production costs, we again have a synergy that can be exploited.
This situation is depicted in the figure below. The green curve gives the case with no synergy, and the red the case described above. The difference between them is again the synergy created by the merger. The synergy at the dotted line is the synergy we can expect, but it might turn out to be higher if revenues are lower – and even negative (dysergy) if revenues are high.

In this case, the merger acts as a hedge against revenue losses, at the cost of part of the upside created by the merger. This should not deter the participants from a merger, since there is only a 30% probability that this will happen.

The graph above again gives the frequency diagram for the sizes of the possible synergies. Bear in mind that the average synergy value is not the value we necessarily will find, but the average over all possible synergy values.

    Conclusion

The elusiveness of synergies in many M&A cases can be explained by the natural randomness in business activities. The fact that a merger can give rise to large synergies does not guarantee that it will occur, only that there is a probability that it will occur. Spreadsheet exercises in valuation can lead to disaster if the stochastic nature of the companies involved is not taken into account. And basing the pricing of the M&A candidate on expected synergies is pure foolishness.

    References

    Bruner, Robert F. (2002), Does M&A Pay? A Survey of Evidence for the Decision-Maker. Journal of Applied Finance, Vol. 12, No. 1. Available at SSRN: http://ssrn.com/abstract=485884

    Orwell, George (1949). Nineteen Eighty-Four. A novel. London: Secker & Warburg.

Aristotle, Metaphysica: “The whole is more than the sum of its parts.”

     

    Sirower, M. (1997) The Synergy Trap: How Companies Lose the Acquisition Game. New York. The Free Press.