
Tag: Monte Carlo simulation

  • Inventory management – Some effects of risk pooling


    This entry is part 3 of 4 in the series Predictive Analytics

    Introduction

    The newsvendor described in the previous post has decided to branch out, placing newsboys at strategic corners in the neighborhood. He will first consider three locations, but has six in his sights.

    The question to be pondered is how many newspapers he should order for these three locations, and the possible effects on profit and risk (Eppen, 1979) and (Chang & Lin, 1991).

    He assumes that the demand distribution he experienced at the first location will also apply to the two others, and that all locations (points of sale) can be served from a centralized inventory. For the sake of simplicity he further assumes that all points of sale can be restocked instantly (i.e. with zero lead time) at zero cost, if necessary or advantageous, by shipment from one of the other locations, and that demand at the different locations will be uncorrelated. The individual points of sale will initially have a working stock, but will have no need of safety stock.

    In short, this is equivalent to having one inventory serve newspaper sales generated by three (or six) copies of the original demand distribution:

    The aggregated demand distribution for the three locations is still positively skewed (0.32), but much less so than the original (0.78), and has a lower coefficient of variation – 27% against 45% for the original ((The quartile variation has been reduced by 37%.)):

    The demand variability has thus been substantially reduced by this risk pooling ((We distinguish between ten main types of risk pooling that may reduce total demand and/or lead time variability (uncertainty): capacity pooling, central ordering, component commonality, inventory pooling, order splitting, postponement, product pooling, product substitution, transshipments, and virtual pooling. (Oeser, 2011)))  and the question now is how this will influence the vendor’s profit.
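
    The size of this effect is easy to verify by simulation. Below is a minimal sketch in Python: since the demand distribution is not given in closed form in the post, a lognormal matched to roughly the same mean (2096 units) and coefficient of variation (45%) is used as a stand-in.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)

    # Lognormal stand-in for the single-location demand (mean ~2096, CV ~45%).
    cv = 0.45
    sigma = np.sqrt(np.log(1 + cv**2))
    mu = np.log(2096) - 0.5 * sigma**2

    # Pool n independent, uncorrelated locations and measure the aggregate.
    for n in (1, 3, 6):
        pooled = rng.lognormal(mu, sigma, (100_000, n)).sum(axis=1)
        print(n, round(pooled.std() / pooled.mean(), 3), round(stats.skew(pooled), 3))
    ```

    For uncorrelated demands the coefficient of variation falls as 1/√n (45% → ~26% → ~18%), and the skewness also falls with n; the exact skewness values depend on the shape of the demand distribution, which the lognormal stand-in only approximates.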

    Profit and Inventory level with Risk Pooling

    As in the previous post we have calculated profit and loss as:

    Profit = sales less production costs of both sold and unsold items
    Loss = value of lost sales (stock-out) and the cost of having produced and stocked more than can be expected to be sold

    The figure below indicates what will happen as we change the inventory level. We can see as we successively move to higher levels (from left to right on the x-axis) that expected profit (blue line) will increase to a point of maximum – ¤16541 at a level of 7149 units:

    Compared to the point of maximum profit for a single warehouse (profit ¤4963 at a level of 2729 units, see the previous post), this risk pooling has increased the vendor's profit by 11.1% while reducing his inventory by 12.7%. Centralization of the three inventories has thus been a successful operational hedge ((Risk pooling can be considered a form of operational hedging – risk mitigation using operational instruments.)) for our newsvendor, mitigating some, but not all, of the demand uncertainty.

    Since this risk mitigation was a success the newsvendor wants to calculate the possible benefits from serving six newsboys at different locations from the same inventory.

    Under the same assumptions, it turns out that this gives an even better result, with an increase in profit of almost 16% and at the same time reducing the inventory by 15%:

    The inventory ‘centralization’ has then both increased profit and reduced inventory level compared to a strategy with inventories held at each location.

    Centralizing inventory (inventory pooling) in a two-echelon supply chain may thus reduce costs and increase profits for the newsvendor carrying the inventory, but the individual newsboys may lose profits due to the pooling. On the other hand, the newsvendor will certainly lose profit if he allows the newsboys to decide the level of their own inventory and the centralized inventory.

    One of the reasons behind this conflict of interest is that both the newsvendor and the newsboys will benefit one-sidedly from shifting the demand risk onto the other party, even though overall performance may suffer as a result (Kemahlioğlu-Ziya, 2004) and (Anupindi & Bassok, 1999).

    In real life, the actual risk-pooling effect will depend on the correlation between the locations' demands. A positive correlation reduces the effect, while a negative correlation increases it. If all locations were perfectly positively correlated the effect would be zero, while a correlation coefficient of minus one would maximize it.
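
    This correlation effect can be sketched with the same lognormal stand-in, using a Gaussian copula to impose a common pairwise correlation between the three locations (for three locations that common correlation cannot go below −1/2, so −0.45 is used here as the near-extreme case):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    cv = 0.45
    sigma = np.sqrt(np.log(1 + cv**2))
    mu = np.log(2096) - 0.5 * sigma**2

    def pooled_cv(rho, n=3, size=200_000):
        """CV of total demand over n locations whose underlying normals share
        a common pairwise correlation rho (a Gaussian copula over lognormals)."""
        cov = np.full((n, n), rho) + (1.0 - rho) * np.eye(n)
        z = rng.multivariate_normal(np.zeros(n), cov, size)
        demand = np.exp(mu + sigma * z).sum(axis=1)
        return demand.std() / demand.mean()

    for rho in (-0.45, 0.0, 0.5, 1.0):
        print(rho, round(pooled_cv(rho), 3))
    ```

    At ρ = 1 the pooled CV equals the single-location CV of 45% (no pooling benefit); the benefit grows as ρ falls toward its lower bound.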

    The third effect

    The third direct effect of risk pooling is the reduced variability of expected profit. We plot the profit variability, measured by its coefficient of variation (CV) ((The coefficient of variation is defined as the ratio of the standard deviation to the mean – also known as unitized risk.)), for the three strategies discussed above: one single inventory (warehouse), three inventories centralized versus held singly, and six inventories centralized versus held singly.

    The graph below depicts the situation. The three curves show the CV for corporate profit under the three alternatives, and the vertical lines the point of maximum profit for each alternative.

    The slope of each curve shows the profit's sensitivity to changes in the inventory level, and the curve's location shows each strategy's impact on the predictability of realized profit.

    A single-warehouse strategy (blue) clearly gives much less ability to predict future profit than the 'six centralized warehouses' strategy (purple), while 'three centralized warehouses' (green) falls somewhere in between:

    So, in addition to reduced costs and increased profits, centralization also gives a more predictable result and lower sensitivity to the inventory level – and hence greater leeway in the practical application of different inventory-planning policies.

    Summary

    We have thus shown, through Monte Carlo simulation, that the benefits of pooling increase with the number of locations, and that the benefits of risk pooling can be calculated without knowing the closed form ((In mathematics, an expression is said to be in closed form if it can be expressed analytically in terms of a finite number of certain "well-known" functions.)) of the demand distribution.

    Since we do not need the closed form of the demand distribution, we are not limited to low demand variability or forced to accept the possibility of negative demand (as with the Normal distribution). Expanding the scope of the analysis to include stochastic supply, supply disruptions, information sharing, localization of inventory, etc. is a natural extension of this method ((We will return to some of these issues in later posts.)).

    This opens the way for robust and efficient methods and techniques for solving problems in inventory management unrestricted by the form of the demand distribution – and, best of all, results given as graphs are more easily communicated to all parties than purely mathematical descriptions of the solutions.

    References

    Anupindi, R. & Bassok, Y. (1999). Centralization of stocks: Retailers vs. manufacturer. Management Science, 45(2), 178–191. doi: 10.1287/mnsc.45.2.178, accessed 09/12/2012.

    Chang, P.-L. & Lin, C.-T. (1991). Centralized effect on expected costs in a multi-location newsboy problem. Journal of the Operational Research Society of Japan, 34(1), 87–92.

    Eppen, G. D. (1979). Effects of centralization on expected costs in a multi-location newsboy problem. Management Science, 25(5), 498–501.

    Kemahlioğlu-Ziya, E. (2004). Formal methods of value sharing in supply chains. PhD thesis, School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA, July 2004. http://smartech.gatech.edu/bitstream/1853/4965/1/kemahlioglu ziya_eda_200407_phd.pdf, accessed 09/12/2012.

    Oeser, G. (2011). Methods of Risk Pooling in Business Logistics and Their Application. Europa-Universität Viadrina Frankfurt (Oder). URL: http://opus.kobv.de/euv/volltexte/2011/45, accessed 09/12/2012.


  • Inventory Management: Is profit maximization right for you?


    This entry is part 2 of 4 in the series Predictive Analytics

     

    Introduction

    In the following we will exemplify how sales forecasts can be used to set inventory levels in single or multilevel warehousing. By inventory we mean a stock or store of goods: finished goods, raw materials, purchased parts, and retail items. Since the problem discussed is the same for both production and inventory, the two terms will be used interchangeably.

    Good inventory management is essential to the successful operation of most organizations, both because of the amount of money the inventory represents and because of the impact that inventories have on daily operations.

    An inventory can serve many purposes, among them the ability:

    1. to support independence of operations,
    2. to meet both anticipated demand and variation in demand,
    3. to decouple components of production and allow flexibility in production scheduling and
    4. to hedge against price increases, or to take advantage of quantity discounts.

    The many advantages of stock-keeping must, however, be weighed against the costs of keeping the inventory. This is best described as the "too much/too little problem": order too much and inventory is left over; order too little and sales are lost.

    This can be a single-period (a one-time purchasing decision) or a multi-period problem, involving a single warehouse or geographically dispersed multilevel warehousing. The task can then be to minimize the organization's total cost, maximize the level of customer service, minimize 'loss', or maximize profit.

    Whatever the purpose, the calculation will have to be based on knowledge of the sales distribution. In addition, sales will usually have a seasonal variation, creating a balancing act between production, logistics and warehousing costs. In the example given below, the sales forecast will have to be viewed as a periodic forecast (month, quarter, etc.).

    We have intentionally selected a 'simple' problem to highlight the optimization process and the properties of the optimal solution. The latter is seldom described in the standard texts.

    The News-vendor problem

    The news-vendor faces a one-time purchasing decision: to set the order quantity Q so as to maximize expected profit, which happens where the expected loss on the Qth unit equals the expected gain on the Qth unit:

    I.  Co * F(Q) = Cu * (1-F(Q)) , where

    Co = The cost of ordering one more unit than what would have been ordered if demand had been known – or the increase in profit enjoyed by having ordered one fewer unit,

    Cu = The cost of ordering one fewer unit than what would have been ordered if demand had been known  – or the increase in profit enjoyed by having ordered one more unit, and

    F(Q) = the probability that demand q is less than or equal to Q. By rearranging terms in the above equation we find:

    II. F(Q) = Cu/(Co + Cu)

    This ratio is often called the critical ratio (CR). The usual way of solving this is to assume that demand is normally distributed, giving Q as:

    III. Q = m + z*s, where z = (Q − m)/s is normally distributed with zero mean and variance equal to one (z is thus the standard normal quantile corresponding to the critical ratio).

    Demand, unfortunately, rarely has a normal distribution, and to make things worse we usually do not know the exact distribution at all. We can only 'find' it by Monte Carlo simulation, and thus have to find the Q satisfying equation (I) by numerical methods.
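
    A minimal sketch of that numerical search is given below. The demand sample is a lognormal stand-in (the actual demand distribution comes from the Monte Carlo forecast and is not given in closed form), and Co and Cu are taken as the textbook overage and underage costs of an item bought at cost and sold at cost plus markup – both assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Lognormal stand-in for the simulated demand (mean ~2096 units, CV ~0.45).
    cv = 0.45
    sigma = np.sqrt(np.log(1 + cv**2))
    mu = np.log(2096) - 0.5 * sigma**2
    demand = rng.lognormal(mu, sigma, 100_000)

    cost = 1.0
    markup = 300.0                        # percent of cost
    price = cost * (1 + markup / 100)

    # Expected profit at stock level q: revenue on min(demand, q), cost on all q.
    def expected_profit(q):
        return np.mean(price * np.minimum(demand, q) - cost * q)

    levels = np.arange(500, 6000, 10)
    q_star = levels[int(np.argmax([expected_profit(q) for q in levels]))]

    # Cross-check: the critical-ratio solution F(Q*) = Cu/(Co + Cu), here with
    # Co = cost and Cu = price - cost (textbook overage/underage assumptions).
    cr = (price - cost) / price
    print(q_star, np.quantile(demand, cr))
    ```

    The simulated argmax and the critical-ratio quantile agree to within the grid resolution; with a different loss accounting (for instance valuing a lost sale at its full price) the critical ratio, and hence Q, shifts accordingly.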

    For the news-vendor the inventory level should be set to maximize profit given the sales distribution. This implies that the cost of lost sales will have to be weighed against the cost of adding more to the stock.

    If we for the moment assume that all these costs can be regarded as fixed and independent of the inventory level, then the product markup (% of cost) will determine the optimal inventory level:

    IV. Cu = Co * (1 + Markup/100)

    In the example given here the critical ratio is approx. 0.8. The question then is whether the inventory level indicated by that critical ratio will always be the best for the organization.

    Expected demand

    The following graph shows the news-vendor's demand distribution. Expected demand is 2096 units ((Median demand is 1819 units, and demand most typically lies in the range of 1500 to 2000 units.)), but the distribution is heavily skewed to the right ((The demand distribution has a skewness of 0.78, a coefficient of variation of 0.45, a lower quartile of 1432 units and an upper quartile of 2720 units.)), so there is a considerable possibility of demand exceeding expected demand:

    By setting the product markup – in the example below it is set to 300% – we can calculate profit and loss based on the demand forecast.

    Profit and Loss (of opportunity)

    In the following we have calculated profit and loss as:

    Profit = sales less production costs of both sold and unsold items
    Loss = value of lost sales (stock-out) and the cost of having produced and stocked more than can be expected to be sold

    The figure below indicates what will happen as we change the inventory level. We can see, as we successively move to higher levels (from left to right on the x-axis), that expected profit (blue line) will increase to a point of maximum – ¤4963 at a level of 2729 units:

    At that point we can expect to have some excess stock and in some cases also lost sales. But regardless, it is at this point that expected profit is maximized, so this gives the optimal stock level.

    Since we include the costs of both sold and unsold items, the point giving maximum expected profit will be below the point minimizing expected loss – ¤1460 at a production level of 2910 units.

    Given the optimal inventory level (2729 units) we find the actual sales frequency distribution as shown in the graph below. At this level we expect an average sale of 1920 units – ranging from 262 to 2729 units ((Having a lower quartile of 1430 units and an upper quartile of 2714 units.)).

    The graph shows that the distribution has two different modes ((The most common values in a set of observations.)), or two local maxima. This bimodality arises because the demand distribution is heavily skewed to the right, so that any demand exceeding 2729 units results in exactly 2729 units sold, with the rest as lost sales.

    This bimodality will of course be reflected in the distribution of realized profits. Keep in mind that the line (blue) giving maximum profit is an average of all profits realized during the Monte Carlo simulation, given the demand distribution and the selected inventory level. We can therefore expect realized profit both below and above this average (¤4963) – as shown in the frequency graph below:

    Expected (average) profit is ¤4963, with a minimum of –¤1681 and a maximum of ¤8186; the range of realized profits ((With a lower quartile of ¤2991 and an upper quartile of ¤8129.)) is therefore very large: ¤9867.

    So even if we maximize profit, we must expect a large variation in realized profits; there is no way that the original uncertainty in the demand distribution can be reduced or removed.

    Risk and Reward

    Increased profit comes at a price: increased risk. The graph below describes the situation; the blue curve shows how expected profit increases with the production or inventory (service) level. The spread between the green and red curves indicates the band within which actual profit will fall with 80% probability. As is clear from the graph, this band widens as we move to the right, indicating an increased upside (the area up to the green line) but also an increased probability of a substantial downside (the area down to the red line):

    For some companies – depending on the shape of the demand distribution – concerns other than profit maximization might therefore be more important, such as predictability of results (profit). The setting of inventory or production levels should accordingly be viewed as an element of the board's risk assessment.

    On the other hand, the uncertainty band around loss will decrease as the service level increases. This is because loss due to lost sales diminishes as the service level increases, and the high markup easily covers the cost of over-production.

    Thus a strategy of 'loss' minimization will falsely give a sense of 'risk minimization', while in reality it increases the uncertainty of future realized profit.

    Product markup

    The optimal stock or production level will be a function of the product markup. A high markup gives room for a higher level of unsold items, while a low markup necessitates a focus on cost reduction and the acceptance of stock-outs:

    The relation between markup (%) and production level is quadratic ((Markup (%) = 757.5 – 0.78 * production level + 0.00023 * production level².)), implying that the markup will have to be increasingly higher the further out on the right tail we fix the production level.
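
    This relation can be traced with the same machinery as in the earlier sketch (same lognormal stand-in demand and textbook cost assumptions), by re-running the stock-level search for a range of markups:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Same lognormal stand-in demand as in the earlier sketch.
    cv = 0.45
    sigma = np.sqrt(np.log(1 + cv**2))
    mu = np.log(2096) - 0.5 * sigma**2
    demand = rng.lognormal(mu, sigma, 100_000)

    cost = 1.0
    levels = np.arange(500, 6000, 10)

    for markup in (50, 100, 200, 300, 500):   # percent of cost
        price = cost * (1 + markup / 100)
        profits = [np.mean(price * np.minimum(demand, q) - cost * q) for q in levels]
        print(markup, levels[int(np.argmax(profits))])
    ```

    The optimal level climbs ever more steeply into the right tail as the markup grows, which is the pattern the quadratic fit above describes.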

    The Optimal inventory (production) level

    If we put it all together we get the chart below, in which the green curve is the cumulative sales distribution, giving the probability of each level of sales, and the brown curve the optimal stock or production level given the markup.

    The optimal stock level is then found by drawing a line from the markup axis (right y-axis) to the curve for the optimal stock level, and down to the x-axis, giving the stock level. By continuing the line from the markup axis to the probability axis (left y-axis) we find the probability of stock-out (one minus the cumulative probability) and the probability of having a stock level in excess of demand:

    By using the sales distribution we can find the optimal stock/production level given the markup. This would not have been possible with a single-point sales forecast, which could have ended up almost anywhere on the curve for forecasted sales.

    Even if a single-point forecast managed to hit expected sales – as mean, mode or median – it would have given wrong answers about the optimal stock/production level, since the shape of the sales distribution would have been unknown.

    In this case, with a right-tailed sales distribution, the level would have been set too low – or, with a low markup, too high. With a left-skewed sales distribution the result would have been the other way around: the level would have been set too high, and with a low markup probably too low.

    In the case of multilevel warehousing, the above analyses have to be performed on all levels and solved as a simultaneous system.

    The state of affairs at the point of maximum

    To have the full picture of the state of affairs at the point of maximum, we have to look at what we can expect of over- and under-production. At the level giving maximum expected profit we will on average have an underproduction of 168 units, ranging from zero to nearly 3000 ((With a coefficient of variation of almost 250%.)). On the face of it this could easily be interpreted as having set the level too low, but as we shall see, that is not the case.

    Since we have a high markup, lost sales weigh heavily in the profit maximization, and as a result we can expect to have unsold items in stock at the end of the period. On average we will have a little over 800 units left in stock, ranging from zero to nearly 2500. The lower quartile is 14 units and the upper 1300 units, so in 75% of the cases we will have an overproduction of less than 1300 units. In the remaining 25% of the cases the overproduction will be in the range of 1300 to 2500 units.

    Even with the possibility of ending the period with a large number of unsold units, the strategy of profit maximization will on average give the highest profit – though, as we have seen, with a very high level of uncertainty about the profit actually realized.

    Now, since a lower inventory level in this case will reduce profit only by a small amount while lowering the confidence limit by a substantial amount, other strategies giving more predictable results should be considered.

  • Budgeting Revisited


    This entry is part 2 of 2 in the series Budgeting

     

    Introduction

    Budgeting is one area well suited for Monte Carlo simulation. Budgeting involves personal judgments about the future values of a large number of variables – sales, prices, wages, downtime, error rates, exchange rates, etc. – variables that describe the nature of the business.

    Everyone who has been involved in a budgeting process knows that it is an exercise in uncertainty; however, it is seldom described in this way, and even more seldom is uncertainty actually calculated as an integrated part of the budget.

    Good budgeting practices are structured to minimize errors and inconsistencies, drawing in all the necessary participants to contribute their business experience and the perspective of each department. Best practice in budgeting entails a mixture of top-down guidelines and standards, combined with bottom-up individual knowledge and experience.

    Excel, the de facto tool for budgeting, is a powerful personal productivity tool. Its current capabilities, however, are often inadequate to support the critical nature of budgeting and forecasting. There will come a point when a company’s reliance on spreadsheets for budgeting leads to severely ineffective decision-making, lost productivity and lost opportunities.

    Spreadsheets can accommodate many tasks, but over time some of the models running in Excel may grow too big for the spreadsheet application. Programming in a spreadsheet model often involves embedded assumptions and complex macros, creating opportunities for formula errors and broken links between workbooks.

    It is common for spreadsheet budget models and their intricacies to be known and maintained by a single person who becomes a vulnerability point with no backup. And there are other maintenance and usage issues:

    A.    Spreadsheet budget models are difficult to distribute and even more difficult to collect and consolidate.
    B.    Data confidentiality is almost impossible to maintain in spreadsheets, which are not designed to hide or expose data based upon each user’s role.
    C.    Financial statements are usually not fully integrated leaving little basis for decision making.

    These are serious drawbacks for corporate governance and make the audit process more difficult.

    These are a few of the many reasons why we use a dedicated simulation language for our models – one that specifically does not mix data and code.

    The budget model

    In practice budgeting can be performed on different levels:
    1.    Cash Flow
    2.    EBITDA
    3.    EBIT
    4.    Profit or
    5.    Company value.

    The most efficient level is EBITDA, since taxes, depreciation and amortization are, in the short term, mostly given. This is also the level at which consolidation of daughter companies is most easily achieved. An EBITDA model describing the firm's operations can in turn be used as a subroutine for more detailed and encompassing analysis through P&L and balance sheet simulation.

    The aim will then be to estimate the firm's equity value and its probability distribution. This can in turn be used for strategy selection, etc.

    Forecasting

    In today's fast-moving and highly uncertain markets, forecasting has become the single most important element of the budget process.

    Forecasting, or predictive analytics, can best be described as statistical modeling enabling the prediction of future events or results, using present and past information and data.

    1. Forecasts must integrate both external and internal cost and value drivers of the business.
    2. Absolute forecast accuracy (i.e. small confidence intervals) is less important than the insight about how current decisions and likely future events will interact to form the result.
    3. Detail does not equal accuracy with respect to forecasts.
    4. The forecast is often less important than the assumptions and variables that underpin it – those are the things that should be traced to provide advance warning.
    5. Never rely on single-point or scenario forecasting.

    All uncertainty about market sizes, market shares, costs and prices, interest rates, exchange rates, taxes, etc. – and their correlations – will finally end up contributing to the uncertainty in the firm's budget forecasts.

    The EBITDA model

    The EBITDA model has to be detailed enough to capture all important cost and value drivers, but simple enough to be easy to update with new data and assumptions.

    Input to the model can come from different sources: any internal reporting system or spreadsheet. The easiest way to communicate with the model is through Excel spreadsheet templates.

    Such templates are pre-defined in the sense that the information the model needs sits in a pre-determined place in the workbook. This makes things easy if the budgets for daughter companies are reported (and consolidated) in a common system (e.g. SAP) and can be 'dumped' into an Excel spreadsheet. If the budgets are communicated directly to the head office or the mother company, they can be read directly by the model.

    Standalone models and dedicated subroutines

    We usually construct our EBITDA models so that they can be used both as standalone models and as subroutines for balance simulation. The model can then be used for short-term budgeting, for long-term EBITDA forecasting and simulation, and for short- and long-term balance forecasting and simulation. This means that the same model can be efficiently reused in different contexts.

    Rolling budgets and forecasts

    The EBITDA model can be constructed to give rolling forecasts based on updated monthly or quarterly values, taking into consideration the seasonality of the operations. This will give new forecasts (a new budget) for the remainder of the year and/or the next twelve months. By forecasts we again mean the probability distributions for the budget variables.

    Even if the variables have not changed, the fact that we move toward the end of the year will reduce the uncertainty of the end-of-year results and also of the forecast for the next twelve months.

    Uncertainty

    The most important part of budgeting with Monte Carlo simulation is the assessment of the uncertainty in the budgeted (forecasted) cost and value drivers. This uncertainty is given as the most likely value (usually the budget figure) and the interval within which it is assessed, with a high degree of confidence (approx. 95%), to fall.

    We then use these lower and upper limits (5% and 95%) for sales, prices and other budget items, together with the budget values, as indicators of the shape of the probability distributions for the individual budget items. Together they describe the range of, and uncertainty in, the EBITDA forecasts.

    This gives us the opportunity to simulate (Monte Carlo) a number of possible outcomes – by a large number of runs of the model, usually 1000 – for net revenue, operating expenses and finally EBITDA, which in turn gives us their probability distributions.
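
    The mechanics can be sketched in a few lines of Python. The distribution family used to interpolate between the lower limit, budget value and upper limit is not specified above, so a triangular distribution (treating the 5%/95% limits as the range, for simplicity) stands in for it, and the budget items and figures are hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N = 1000  # number of simulation runs, as above

    # Each budget item: (lower limit, most likely / budget value, upper limit).
    # The triangular shape is a stand-in; the limits are treated as the range.
    def draw(low, mode, high):
        return rng.triangular(low, mode, high, N)

    net_revenue = draw(90.0, 110.0, 140.0)   # hypothetical figures, in ¤M
    op_expenses = draw(60.0, 70.0, 95.0)
    ebitda = net_revenue - op_expenses

    budget = 40.0                            # hypothetical budget EBITDA
    print("expected EBITDA:", ebitda.mean())
    print("P(EBITDA < budget):", (ebitda < budget).mean())
    ```

    Sorting the simulated values and plotting the cumulative fraction below each value reproduces the S-shaped curve described below.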

    Most managers and their staff have, based on experience, a good grasp of the range within which the values of their variables will fall. This is not based on any precise computation, but is a reasonable assessment by knowledgeable persons. Selecting the budget value, however, is more difficult. Should it be the "mean" or the "most likely value", or should the manager just delegate the fixing of the values to the responsible departments?

    Now, we know that the budget values might be biased for a number of reasons – most simply by bonus schemes etc. – and that budgets based on average assumptions are wrong on average.

    This is therefore where the individual manager's intent and culture will be manifested, and it is here that the greatest learning effect for both the managers and the mother company lies, as under-budgeting and overconfidence will stand out as excessively large deviations from the model-calculated expected value (the probability-weighted average over the interval).

    Output

    The output from the Monte Carlo simulation comes in the form of graphs that put all the runs in the simulation together to form the cumulative distribution for operating expenses (red line):

    In the figure we have computed the frequencies of observed (simulated) values for operating expenses (blue frequency plot) – the x-axis gives the operating expenses and the left y-axis the frequency. By summing up from left to right we can compute the cumulative probability curve. The S-shaped curve (red) gives, for every point, the probability (on the right y-axis) of having operating expenses less than the corresponding point on the x-axis. The shape of this curve, and its range on the x-axis, gives us the uncertainty in the forecasts.

    A steep curve indicates little uncertainty and a flat curve indicates greater uncertainty.  The curve is calculated from the uncertainties reported in the reporting package or templates.

    Large uncertainties in the reported variables will contribute to the overall uncertainty in the EBITDA forecast, and thus to a flatter curve, and vice versa. If the reported uncertainty in sales and prices has a marked downside, and the costs a marked upside, the resulting EBITDA distribution might very well have a portion on the negative side of the x-axis – that is, with some probability the EBITDA might end up negative.

    In the figure below the lines mark the expected EBITDA and the budget value. The expected EBITDA can be found by drawing a horizontal line from the 0.5 (50%) point on the y-axis to the curve, and a vertical line from this point on the curve to the x-axis. This gives us the expected EBITDA value – the point with a 50% probability of EBITDA falling below it, and 100% − 50% = 50% of falling above.

    The second set of lines gives the budget figure and the probability that the result will end up lower than budget. In this case there is almost a 100% probability that the result will be much lower than management expected.

    This distribution's location on the EBITDA axis (x-axis), and its shape, give a large amount of information about what we can expect of possible results and their probability.

    The following figure, which gives the EBIT distributions for a number of subsidiaries, exemplifies this. One will most probably never earn money (grey), three are cash cows (blue, green and brown), and the last (red) can earn a lot of money:

    Budget revisions and follow up

    Normally – if nothing extraordinary happens – we would expect both the budget and the actual EBITDA to fall somewhere in the region of the expected value. We do, however, have to expect some deviation both from budget and from expected value, due to the nature of the industry. Bearing in mind the possibility of unanticipated events, or events "outside" the subsidiary's budget responsibility that still affect the outcome, this implies that:

    • An actual result deviating from budget is not necessarily a sign of bad budgeting.
    • A result close to or on budget is not necessarily a sign of good budgeting.

    However:

    • Large deviations between budget and actual result need looking into – especially if the deviation from expected value is also large.
    • A large deviation between budget and expected value can imply either that the limits are set "wrong" or that the budget EBITDA does not reflect the downside risk or upside opportunity expressed by the limits.

    Another way of looking at the distributions is through the probability of having the actual result below budget – that is, how far off the budget ended up. In the graph below, country #1's budget had a 72% probability of the actual result coming in below budget, while it turned out that the actual figure had only a 36% probability of being lower. The length of the bars thus indicates the budget discrepancies.

    For country #2 it is the other way around: the probability of a result lower than the final result was 88%, while the budgeted figure had a 63% probability of being too low. In this case the market was seriously misjudged.

    In the following we have measured the deviation of the actual result both from the budget values and from the expected values. In the figures, the left axis gives the deviation from expected value and the bottom axis the deviation from budget value.

    1. If the deviation for a country falls in the upper right quadrant, the deviations are positive for both budget and expected value – and the country is overachieving.
    2. If the deviation falls in the lower left quadrant, the deviations are negative for both budget and expected value – and the country is underachieving.
    3. If the deviation falls in the upper left quadrant, the deviation is negative for budget and positive for expected value – and the country is overachieving but has had too high a budget.

    With a left-skewed EBITDA distribution there should not be any observations in the lower right quadrant; that will only happen when the distribution is skewed to the right – and then there will not be any observations in the upper left quadrant:

    As the managers get more experienced in assessing the uncertainty they face, we see that the budget figures come more into line with the expected values, and that the intervals given are shorter and better oriented.

    If the budget is in line with the expected value given the described uncertainty, the upside potential ratio should be approximately one. A higher value indicates a potential for higher EBITDA, and vice versa. Using this measure we can numerically describe management's budgeting behavior:

    Rolling budgets

    If the model is set up to give rolling forecasts of the budget EBITDA as new – in this case monthly – data arrive, we will get successive forecasts as in the figure below:

    As data for new months are received, the curve gets steeper, since the uncertainty is reduced. From the squares on the lines indicating expected value, we see that the value is moving slowly to the right, toward higher EBITDA values.
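
    A sketch of the mechanism, with a hypothetical monthly plan and a hypothetical 15% relative uncertainty per remaining month: as months are realized, fewer months remain stochastic and the full-year distribution narrows.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Hypothetical monthly EBITDA plan (¤M) with a seasonal profile.
    season = np.array([6, 6, 7, 8, 9, 11, 12, 11, 9, 8, 7, 6], dtype=float)

    def year_forecast(actuals, n_runs=1000):
        """Full-year EBITDA distribution given the months realized so far."""
        k = len(actuals)
        remaining = rng.normal(season[k:], 0.15 * season[k:], (n_runs, 12 - k))
        return np.sum(actuals) + remaining.sum(axis=1)

    for k in (0, 3, 6, 9):
        f = year_forecast(season[:k] * 1.02)  # pretend actuals ran 2% over plan
        print(f"{k:2d} months in: {f.mean():6.1f} +/- {f.std():4.1f}")
    ```

    The shrinking standard deviation with each realized month is the steepening of the cumulative curve described above.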

    We can of course also use this for long term forecasting as in the figure below:

    As should now be evident, the EBITDA Monte Carlo model has multiple fields of use, all of which increase management's possibilities for control and foresight, giving ample opportunity for prudent planning for the future.

     

     

  • Forecasting sales and forecasting uncertainty


    This entry is part 1 of 4 in the series Predictive Analytics

     

    Introduction

    There is a large number of methods used for forecasting, ranging from judgmental methods (expert forecasting etc.) through expert systems and time series to causal methods (regression analysis etc.).

    Most are used to give single-point forecasts, or at most single-point forecasts for a limited number of scenarios. In the following we will take a look at the uselessness of such single-point forecasts.

    As an example we will use a simple forecast 'model' for the net sales of a large multinational company. It turns out that there is a good linear relation between the company's yearly net sales in million euro and the growth rate (%) in world GDP:

    with a correlation coefficient R = 0.995. The relation thus accounts for almost 99% of the variation in the sales data. The observed data are given as green dots in the graph below, and the regression as the green line. The 'model' puts expected sales at a constant 1638M plus 53M in increased (or decreased) sales per percentage point increase (or decrease) in world GDP growth:

    The International Monetary Fund (IMF), which kindly provided the historical GDP growth rates, also gives forecasts for the expected future world GDP growth rate (WEO, April 2012) for the next five years. When we put these forecasts into the 'model' we end up with forecasts of net sales for 2012 to 2016, depicted by the yellow dots in the graph above.

    So mission accomplished!  …  Or is it really?

    We know that the probability of getting a single-point forecast exactly right is zero, even when assuming that the forecast of the GDP growth rate is correct – so the forecasts we have so far will certainly be wrong; but how wrong?

    “Some even persist in using forecasts that are manifestly unreliable, an attitude encountered by the future Nobel laureate Kenneth Arrow when he was a young statistician during the Second World War. When Arrow discovered that month-long weather forecasts used by the army were worthless, he warned his superiors against using them. He was rebuffed. “The Commanding General is well aware the forecasts are no good,” he was told. “However, he needs them for planning purposes.” (Gardner & Tetlock, 2011)

    Maybe we should take a closer look at possible forecast errors, input data and the final forecast.

    The prediction band

    Given the regression, we can calculate a forecast band for future observations of sales, given forecasts of the future GDP growth rate – that is, the region where we, with a certain probability, expect new values of net sales to fall. In the graph below the green area gives the 95% forecast band:

    Since the variance of the predictions increases the further a new forecast of the GDP growth rate lies from the mean of the sample values (used to compute the regression), the band widens as we move away from this mean. The band also widens with decreasing correlation (R) and sample size (the number of observations the regression is based on).

    So even if the fit to the data is good, our regression is based on a very small sample, leaving plenty of room for prediction errors. In fact, a 95% prediction interval for 2012, with an expected GDP growth rate of 3.5%, is net sales of 1824M plus/minus 82M. Even so, the interval is still only approx. 9% of the expected value.
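
    For completeness, here is how such a prediction band can be computed with standard tools. The seven (GDP growth, net sales) observations below are hypothetical stand-ins, since the post does not reproduce the underlying data:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)

    # Hypothetical sample standing in for the (world GDP growth %, net sales M)
    # observations behind the regression above.
    g = np.array([1.5, 2.4, 2.9, 3.4, 3.9, 4.4, 5.1])
    sales = 1638 + 53 * g + rng.normal(0, 30, g.size)

    res = sm.OLS(sales, sm.add_constant(g)).fit()

    # 95% prediction interval for a new observation at a forecast growth of 3.5%.
    new_x = np.array([[1.0, 3.5]])           # [constant, GDP growth]
    frame = res.get_prediction(new_x).summary_frame(alpha=0.05)
    print(frame[["mean", "obs_ci_lower", "obs_ci_upper"]])
    ```

    The obs_ci columns give the prediction (forecast) band for a new observation, which is wider than the confidence band for the regression line itself.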

    Now we have shown that the model gives good forecasts, calculated the confidence interval(s) and shown that the expected relative error(s) with high probability will be small!

    So the mission is finally accomplished!  …  Or is it really?

    The forecasts we have made are based on forecasts of future world GDP growth rates – but how certain are those?

    The GDP forecasts

    Forecasting the future growth in GDP for any country is difficult at best, and much more so for the GDP growth of the entire world. The IMF has therefore supplied the baseline forecasts with a fan chart ((The Inflation Report Projections: Understanding the Fan Chart, by Erik Britton, Paul Fisher and John Whitley, BoE Quarterly Bulletin, February 1998, pages 30–37.)) picturing the uncertainty in its estimates.

    This fan chart ((Figure 1.12 from: World Economic Outlook (April 2012), International Monetary Fund, ISBN 9781616352462.)) shows, as blue colored bands, the uncertainty around the WEO baseline forecast, with 50, 70, and 90 percent confidence intervals ((As shown, the 70 percent confidence interval includes the 50 percent interval, and the 90 percent confidence interval includes the 50 and 70 percent intervals. See Appendix 1.2 in the April 2009 World Economic Outlook for details.)):

    There is also another band on the chart, implied but unseen, indicating a 10% chance of something "unpredictable". The fan chart thus covers only 90% of the IMF's estimates of the future probable growth rates.

    The table below shows the actual figures for the forecasted GDP growth (%) and the limits of the confidence intervals:

             Lower                    Baseline             Upper
             90%     70%     50%                  50%     70%     90%
    2012     2.5     2.9     3.1      3.5         3.8     4.0     4.3
    2013     2.1     2.8     3.3      4.1         4.8     5.2     5.9

    The IMF has the following comments to the figures:

    “Risks around the WEO projections have diminished, consistent with market indicators, but they remain large and tilted to the downside. The various indicators do not point in a consistent direction. Inflation and oil price indicators suggest downside risks to growth. The term spread and S&P 500 options prices, however, point to upside risks.”

    Our approximation of the distribution that could have produced the fan chart for 2012, as given in the World Economic Outlook for April 2012, is shown below:

    This distribution has mean 3.43%, standard deviation 0.54, minimum 1.22 and maximum 4.70 – it is skewed with a left tail. The distribution thus also encompasses the implied but unseen band in the chart.

    Now we are ready for serious forecasting!

    The final sales forecasts

    By employing the same technique that we used to calculate the forecast band, we can by Monte Carlo simulation compute the 2012 distribution of net sales forecasts, given the distribution of GDP growth rates and the expected variance of the differences between the regression forecasts and new observations. The figure below describes the forecast process:

    We are, however, not using only the 90% interval for the GDP growth rate, or the 95% forecast band, but the full range of the distributions. The final forecasts of net sales are given as a histogram in the graph below:

    This distribution of forecasted net sales has mean sales of 1820M, standard deviation 81M, minimum sales of 1590M and maximum sales of 2055M – and it is slightly skewed with a left tail.
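
    A sketch of that simulation step, with assumed parameters: a skew-normal (shape −4, an assumption) standardized and rescaled to the fan-chart mean (3.43) and standard deviation (0.54) stands in for the GDP growth distribution, the regression line is the one above (1638 + 53·g), and the prediction error is drawn as normal noise with a standard deviation of about 76M, chosen so that the overall spread matches the quoted 81M:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    N = 100_000

    # Left-skewed GDP growth rates for 2012, rescaled to mean 3.43 and sd 0.54.
    raw = stats.skewnorm.rvs(-4, size=N, random_state=rng)
    g = 3.43 + 0.54 * (raw - raw.mean()) / raw.std()

    # Net sales = regression forecast + prediction error (sd ~76M, assumed).
    sales = 1638 + 53 * g + rng.normal(0, 76, N)

    print(round(sales.mean()), round(sales.std()))           # ~1820, ~81
    print(np.quantile(sales, [0.05, 0.20, 0.80, 0.95]).round())
    ```

    The 20%/80% and 5%/95% quantiles of such a run correspond to the intervals discussed below.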

    So what added information have we got from the added effort?

    Well, we now know that there is only a 20% probability of net sales below 1755M, and likewise only 20% above 1890M. The interval from 1755M to 1890M in net sales will then contain the actual 2012 sales with 60% probability – see the graph below giving the cumulative sales distribution:

    We also know that we will, with 90% probability, see actual net sales in 2012 between 1720M and 1955M. But most important is that we have visualized the uncertainty in the sales forecasts, and that contingency planning for both low and high sales should be performed.

    An uncertain past

    The Bank of England's fan chart from 2008 showed a wide range of possible futures, but it also showed the uncertainty about where we were then – note that the black line showing National Statistics data for the past has probability bands around it:

    This indicates that the values for past GDP growth rates are uncertain (stochastic), or contain measurement errors. This of course also holds for the IMF's historical growth rates, but the IMF does not supply this type of information.

    If the growth rates can be considered stochastic, the results above will still hold, provided the conditional distribution for net sales given the GDP growth rate still fulfills the standard assumptions for using regression methods. If not, other methods of estimation must be considered.

    Black Swans

    But all this uncertainty was still not enough to contain what was to become reality – shown by the red line in the graph above.

    How wrong can we be? Often more wrong than we like to think. This is good – as in useful – to know.

    “As Donald Rumsfeld once said: it’s not only what we don’t know – the known unknowns – it’s what we don’t know we don’t know.”

    While statistical methods may lead us to a reasonable understanding of some phenomenon, that does not always translate into an accurate practical prediction capability. When that is the case, we find ourselves talking about risk – the likelihood that some unfavorable or favorable event will take place. Risk assessment is then necessitated, and we are left only with probabilities.

    A final word

    Sales forecast models are an integrated part of our enterprise simulation models – as part of the models' predictive analytics. Predictive analytics can be described as statistical modeling enabling the prediction of future events or results ((In this case the probability distribution of future net sales.)), using present and past information and data.

    In today's fast-moving and highly uncertain markets, forecasting has become the single most important element of the management process. The ability to quickly and accurately detect changes in key external and internal variables, and adjust tactics accordingly, can make all the difference between success and failure:

    1. Forecasts must integrate both external and internal drivers of business and the financial results.
    2. Absolute forecast accuracy (i.e. small confidence intervals) is less important than the insight about how current decisions and likely future events will interact to form the result.
    3. Detail does not equal accuracy with respect to forecasts.
    4. The forecast is often less important than the assumptions and variables that underpin it – those are the things that should be traced to provide advance warning.
    5. Never rely on single-point or scenario forecasting.

    The forecasts are usually done in three stages: first by forecasting the market for the particular product(s), then the firm's market share(s), ending up with a sales forecast. If the firm has activities in different geographic markets, the exercise has to be repeated in each market, bearing in mind the correlations between markets:

    1. All uncertainty about the different market sizes, market shares and their correlation will finally end up contributing to the uncertainty in the forecast for the firm’s total sales.
    2. This uncertainty combined with the uncertainty from other forecasted variables like interest rates, exchange rates, taxes etc. will eventually be manifested in the probability distribution for the firm’s equity value.

    The 'model' we have been using in the example has never been tested out of sample. Its usefulness as a forecast model is therefore still debatable.

    References

    Gardner, D. & Tetlock, P. (2011). Overcoming Our Aversion to Acknowledging Our Ignorance. http://www.cato-unbound.org/2011/07/11/dan-gardner-and-philip-tetlock/overcoming-our-aversion-to-acknowledging-our-ignorance/

    World Economic Outlook Database, April 2012 Edition; http://www.imf.org/external/pubs/ft/weo/2012/01/weodata/index.aspx


     

     

  • Corn and ethanol futures hedge ratios


    This entry is part 2 of 2 in the series The Bio-ethanol crush margin

     

    A large literature discusses hedging techniques, hedging models and statistical refinements to the OLS model that we will use in the following. For a comprehensive review, see "Futures hedge ratios: a review" (Chen et al., 2003).

    We are here looking for hedge models and hedge-ratio estimation techniques that are "good enough" and that can fit into valuation models using Monte Carlo simulation.

    The ultimate purpose is to study hedging strategies, using P&L and balance simulation to forecast the probability distribution of the company's equity value. By comparing the distributions for the different strategies, we will be able to select the hedging strategy that best fits the board's risk appetite/risk aversion and at the same time "maximizes" the company value.

    Everything should be made as simple as possible, but not simpler. – Einstein, Reader’s Digest. Oct. 1977.

    To use futures contracts for hedging we have to understand the objective: a futures contract serves as a price-fixing mechanism. In their simplest form, futures prices are prices set today to be paid in the future for goods. If properly designed and implemented, hedge profits will offset the loss from an adverse price move. In like fashion, hedge losses will also eliminate the effects of a favorable price change. Ultimately, the success of any hedge program rests on the implementation of a correctly sized futures position.

    The minimum variation hedge

    This is often referred to as the volatility-minimizing hedge for one unit of exposure. It can be found by minimizing the variance of the hedge payoff at maturity.

    For an ideal hedge, we would like the change in the futures price (ΔF) to match as exactly as possible the change in the value of the asset (ΔS) we wish to hedge, i.e.:

    ΔS = ΔF

    The expected payoff from the hedge will be equal to the value of the cash position at maturity plus the payoff of the hedge (Johnson, 1960) or:

    E(H) = X_S [E(S_2) − S_1] + X_F [E(F_2) − F_1]

    with spot position X_S, a short futures position X_F, current spot price S_1, expected spot price at maturity E(S_2), current futures price F_1, and expected futures price E(F_2) – excluding transaction costs.

    What we want is to find the value of the futures position that reduces the variability of price changes to the lowest possible level.

    The minimum-variance hedge ratio is then defined as the number of futures per unit of the spot asset that will minimize the variance of the hedged portfolio returns.

    The variance of the hedged portfolio return is ((The variance of the un-hedged position is Var(U) = X_S² Var(ΔS).)):

    Var(H) = X_S² Var(ΔS) + X_F² Var(ΔF) + 2 X_S X_F Cov(ΔS, ΔF)

    where Var(ΔS) is the variance of the spot price change, Var(ΔF) the variance of the futures price change, and Cov(ΔS, ΔF) the covariance between the spot and futures price changes. Letting h = X_F/X_S represent the proportion of the spot position hedged, the minimum value of Var(H) can then be found ((By minimizing Var(H) as a function of h.)) as:

    h* = Cov(ΔS, ΔF)/Var(ΔF), or equivalently: h* = Corr(ΔS, ΔF) σ(ΔS)/σ(ΔF), where σ(·) denotes a standard deviation,

    and Corr(ΔS, ΔF) is the correlation between the spot and futures price changes, assuming that X_S is exogenously determined or fixed.

    Estimating the hedge coefficient

    It is also possible to estimate the optimal hedge ratio (h*) using regression analysis. The basic equation is:

    ΔS = a + h ΔF + ε

    with ε as the change in the spot price not explained by the regression model. Since basic OLS regression of this equation estimates the value of h* as:

    h* = Cov(ΔS, ΔF)/Var(ΔF)

    we can use this regression to find the hedge ratio that minimizes the variance of the hedge payoff – one of the reasons this objective function is so appealing. ((Note that other, very different objective functions could have been chosen.))

    We can then use the coefficient of determination, R², as an estimate of the percentage reduction in the variability of changes in the value of the cash position from holding the hedged position – the hedge effectiveness (Ederington, 1979) ((Not taking into account variation margins etc.)).
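
    In practice the estimation is a one-line regression. A sketch with synthetic daily price changes (the corn and ethanol series themselves are not reproduced here), where the true hedge ratio behind the data is 0.9:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(11)

    # Synthetic daily price changes: spot follows futures imperfectly (basis noise).
    dF = rng.normal(0.0, 1.0, 250)               # futures price changes
    dS = 0.9 * dF + rng.normal(0.0, 0.4, 250)    # spot price changes

    res = sm.OLS(dS, sm.add_constant(dF)).fit()
    h_star = res.params[1]       # slope = minimum-variance hedge ratio h*
    print(h_star, res.rsquared)  # R^2 estimates the hedge effectiveness
    ```

    The slope recovers Cov(ΔS, ΔF)/Var(ΔF), and R² the fraction of spot-price-change variance removed by the hedge.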

    The basis

    Basis is defined as the difference between the spot price (S) and the futures price (F). When the expected change in the futures price equals the expected change in the spot price, the optimal variance-minimizing strategy is to set h* = 1. However, in most futures markets the futures price does not perfectly parallel the spot price, introducing an element of basis risk that directly affects the hedging decision.

    A negative basis is called contango and a positive basis backwardation:

    1. When the spot price increases by more than the futures price, the basis increases; this is said to "strengthen the basis" (when unexpected, this is favorable for a short hedge and unfavorable for a long hedge).
    2. When the futures price increases by more than the spot price, the basis declines; this is said to "weaken the basis" (when unexpected, this is favorable for a long hedge and unfavorable for a short hedge).

    There will usually be a different basis for each contract.

    The number of futures contracts

    The variance minimizing number of futures contracts N* will be:

    N* = h* X_S/Q_F

    where Q_F is the size of one futures contract. Since futures contracts are marked to market every day, daily losses are debited and daily gains credited to the parties' accounts – settlement variations – i.e. the contracts are in effect closed every day. The account will have to be replenished if it falls below the maintenance margin (a margin call); if the account is above the initial margin, withdrawals can be made.

    Ignoring the incremental income effects from investing variation margin gains (or borrowing to cover variation margin losses), we want the hedge to generate h*Delta F. Appreciating that there is an incremental effect, we want to accrue interest on a “tailed” hedge such that (Kawaller, 1997):

    h* ΔF = ĥ ΔF (1 + r)^n, or
    ĥ = h*/(1 + r)^n ≈ h*/(1 + r n/365) if the time to maturity is less than one year.

    Where:
    r = interest rate and
    n = number of days remaining to maturity of the futures contract.

    This amounts to adjusting the hedge by a present value factor. Tailing converts the futures position into a forward position. It negates the effect of daily resettlement, in which profits and losses are realized before the day the hedge is lifted.

    For constant interest rates, the tailed hedge (for h* < 1) rises over time to reach the exposure at the maturity of the hedge. Un-tailed, the hedge will over-hedge the exposure and increase the hedger's risk. Tailing the hedge is especially important when the interest rate is high and the time to maturity long.

    An appropriate interest rate would be one that reflects an average of the firm's cost of capital (WACC) and the rate it would earn on its investments (ROIC), both of which will be stochastic variables in the simulation. The first is relevant when the futures contracts generate losses, the second when they generate gains. In practice some average of these rates is used. ((See FAS 133 and later amendments.))

    There are traditionally two approaches to tailing:

    1. Re-balance the tail each day. The tailed hedge ratio is adjusted each day to maturity of the futures contract, with the adjustment declining each day until, at expiration, there is no adjustment.
    2. Use a constant (average) tail: ĥ = h*/(1 + 0.5 r N/365), where N is the original number of days remaining to maturity. In this shortcut the adjustment is made when the hedge is put on and is not changed. The hedge will start out too big and end up too small, but will on average be correct.

    For investors who trade actively, the first approach is more convenient; for inactive traders, the second is often used.
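
    The difference between the two rules can be sketched as follows, with an assumed hedge ratio, interest rate and maturity:

    ```python
    # Tailing a futures hedge: daily re-balanced vs. constant (average) tail.
    # All inputs are assumed for illustration.
    h_star = 0.95   # untailed minimum-variance hedge ratio
    r = 0.05        # annual interest rate
    N = 180         # days to maturity when the hedge is put on

    # 1. Re-balanced daily: the adjustment shrinks to zero as maturity approaches.
    daily = [h_star / (1 + r * n / 365) for n in range(N, 0, -1)]

    # 2. Constant tail, fixed once when the hedge is put on.
    constant = h_star / (1 + 0.5 * r * N / 365)

    print(daily[0], daily[-1], constant)  # starts below, ends near h*; average between
    ```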

    Since our models always incorporate stochastic interest rates, hedges discounted with the appropriate rates are calculated. This amounts to solving the set of stochastic simultaneous equations created by the hedge and the WACC/ROIC calculations, since the hedges will change their probability distributions. Note that the tailed hedge ratio will be a stochastic variable, and that minimizing the variance of the hedge will not necessarily maximize the company value. The value of ĥ that maximizes company value can only be found by simulation, given the board's risk appetite/risk aversion.

    The Spot and Futures Price movements

    At any time there are a number of futures contracts for the same commodity simultaneously being priced, the only difference between them being the delivery month. A continuous contract takes the individual contracts in the futures market and splices them together. The resulting continuous series ((The simplest method of splicing is to tack successive delivery months onto each other. Although the prices in the history are real, the chart will also preserve the price gaps that are present between expiring deliveries and those that replace them.)) allows us to study the price history in the market from a single chart. The following graphs show the price movements ((To avoid price-gap problems, many prefer to base analysis on adjusted contracts that eliminate roll-over gaps. There are two basic ways to adjust a series. Forward-adjusting begins with the true price for the first delivery and then adjusts each successive set of prices up or down, depending on whether the roll-over gap is positive or negative. Back-adjusting reverses the process: current prices are always real, but historical prices are adjusted up or down. This is often the preferred method, since the series will always show the latest actual price. However, no method produces a continuous price series satisfying all needs.)) for the spliced corn contracts C-2010U to 2011N and the spliced ethanol contracts EH-2010U to 2011Q.

    In the graphs the spot price is given by the blue line and the corresponding futures price by the red line.

For the corn futures we can see that there is a difference between the spot and the futures price – the basis ((The reasons for the price difference are transportation costs between delivery locations, storage costs and availability, and variations between local and worldwide supply and demand of a given commodity. In any event, this difference plays an important part in what you actually pay for the commodity when you hedge.)) – but the price movements of the futures follow the spot price closely, and vice versa.

The spliced contracts for bioethanol are a little different from the corn contracts. The delivery location is the same and the curves lie very close to each other. There are, however, other differences.

    The regression – the futures assay

The selected futures contracts give us five parallel samples of the relation between the corn spot and futures prices, and six of the relation between the ethanol spot and futures prices. For every day in the period 8/2/2010 to 7/14/2011 we have from one to five observations of the corn relation (five replications), and from 8/5/2010 to 8/3/2011 from one to twelve observations of the ethanol relation. Since we follow a set of contracts, the number of daily observations of the corn futures prices starts at five (twelve for the ethanol futures) and ends at one as the contracts mature. We could of course also have selected a sample giving an equal number of observations every day.

    There are three likely models which could be fit:

    1. Simple regression on the individual data points,
2. Simple regression on the daily means, and
    3. Weighted regression on the daily means using the number of observations as the weight.

    When the number of daily observations is equal all three models will have the same parameter estimates. The weighted and individual regressions will always have the same parameter estimates, but when the sample sizes are unequal these will be different from the unweighted means regression. Whether the weighted or unweighted model should be used when the number of daily observations is unequal will depend on the situation.

    Since we now have replications of the relation between spot and the futures price we have the opportunity to test for lack of fit from the straight line model.

In our case this approach has a small drawback. We are looking for the regression of the spot price changes on the price changes in the futures contract; this model, however, gives us the inverse – the regression of the price changes in the futures contract on the changes in the spot price. Inverting the slope of that regression, which is what we are after, will in general not give the correct answer (Thonnard, 2006). So we will use this approach (model #3) to test for linearity, and then model #1 with all the data to estimate the slope.
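For concreteness, here is a sketch of how the two regressions might be run in Python with pandas and statsmodels. The data frame below is a synthetic stand-in; in the real analysis df would hold the observed daily price changes per contract:

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for the observed data: one row per contract per day,
# with columns 'date', 'd_spot' (spot change) and 'd_fut' (futures change).
rng = np.random.default_rng(0)
dates = pd.date_range('2010-08-02', periods=240, freq='B').repeat(5)
d_spot = rng.normal(0.0, 0.002, dates.size)
df = pd.DataFrame({'date': dates, 'd_spot': d_spot,
                   'd_fut': d_spot + rng.normal(0.0, 0.0003, dates.size)})

# Model #3: weighted regression of futures changes on spot changes, run on
# the daily means and weighted by the number of observations per day.
daily = df.groupby('date').agg(d_spot=('d_spot', 'mean'),
                               d_fut=('d_fut', 'mean'),
                               n=('d_fut', 'size'))
model3 = sm.WLS(daily['d_fut'], sm.add_constant(daily['d_spot']),
                weights=daily['n']).fit()

# Model #1: simple regression on all individual data points, now with the
# futures change as the regressor, so the slope estimates h* directly.
model1 = sm.OLS(df['d_spot'], sm.add_constant(df['d_fut'])).fit()
h_star = model1.params['d_fut']    # the un-tailed hedge ratio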

    Ideally we would like to find stable (efficient) hedge ratios in the sense that they can be used for more than one hedge and over a longer period of time, thus greatly simplifying the workload for ethanol producing companies.

    All prices, both spot and futures in the following, have been converted from $/gallon (ethanol) or $/bushel (corn) to $/kg.

    The Corn hedge ratio

    The analysis of variance table (ANOVA) for the weighted regression of the changes in the corn futures prices on the changes in corn spot prices (model#3):

The analysis of variance cautions us that the lack of fit to a linear model across all contracts is significant. However, the sum of squares due to lack of fit is very small compared to the sum of squares due to linearity, so we will regard the changes in the futures prices as generated by a linear function of the changes in the spot prices, and the hedge ratios found as efficient. In the figure below the circles give the daily means of the contracts and the line the weighted regression on these means:

    Nevertheless, this linear model will have to be monitored closely as further data becomes available.

    The result from the parameter estimation using simple regression (model#1) is given in the table below:

    The relation is:

ΔS = 0.0001 + 1.0073·ΔF + ε

    Giving the un-tailed corn hedge ratio h* = 1.0073

    First, since the adjusted  R-square value (0.9838) is an estimate of the percentage reduction in the variability of changes in the value of the cash position from holding the hedged position, a hedge based on this regression coefficient (slope) should be highly effective.

The ratio of the variance of the hedged position to that of the un-hedged position is equal to 1 − R². The standard deviation of a hedged position based on this hedge ratio will thus be 12.7% of that of the unhedged position.

We have thus eliminated 98.4% of the variance – or 87.3% of the standard deviation – of the unhedged position. For a simple model like this, that can be considered a good result.
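In numbers: 1 − R² = 1 − 0.9838 = 0.0162, so 1.6% of the variance remains; taking the square root, √0.0162 ≈ 0.127, i.e. 12.7% of the standard deviation remains.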

    In the figure the thick black line gives the 95% confidence limits and the yellow area the 95% prediction limits. As we can see, the relationship between the daily price changes is quite tight thus promising the possibility of effective hedges.

Second, due to the differencing, the basis caused by the difference in delivery locations has disappeared, and even though the constant term is statistically significant, it is so small that it can, with little loss, be set to zero.

The R-square values would have been higher for the regressions on the means than for the regression above. This is because the total variability in the data is reduced by using means (note that the total degrees of freedom are reduced in the regressions on means). A regression on means will thus always suggest greater predictive ability than a regression on individual data, because it predicts mean values, not individual values.

    The Ethanol hedge ratio

    The analysis of variance table (ANOVA) for the weighted regression of the changes in the ethanol futures prices on the changes in ethanol spot prices (model#3):

    The analysis of variance again cautions us that the lack of fit to a linear model for all contracts is significant.  In this case it is approximately ten times higher than for the corn contracts.

However, the sum of squares due to lack of fit is still small compared to the sum of squares due to linearity, so we will regard the changes in the futures prices as generated by a close-to-linear function of the changes in the spot prices, and the hedge ratios found as “good enough”. In the figure below the circles give the daily means of the contracts and the line the weighted regression on these means:

    In this graph we can clearly see the deviation from a strictly linear model. The assumption of a linear model for the changes in ethanol spot and futures prices will have to be monitored very closely as further data becomes available.

    The result from the parameter estimation using simple regression (model#1) is given in the table below:

    The relation is:
ΔS = 1.0135·ΔF + ε

Giving the un-tailed ethanol hedge ratio h* = 1.0135

The adjusted R-square value (0.8105), estimating the percentage reduction in the variability of changes in the value of the cash position from holding the hedged position, is high even with the lack of linearity. A hedge based on this regression coefficient (slope) should therefore still be highly effective.

The standard deviation of a hedged position based on this hedge ratio will be 43.7% of that of the unhedged position (1 − R² = 0.1895, and √0.1895 ≈ 0.437), eliminating 81% of the variance. This is not as good as for the corn contracts, but it will still give a healthy reduction in the ethanol price risk facing the company.

As it turned out, we can use both of these estimated hedge ratios as a basis for strategy simulations, but one question remains unanswered: will this minimize the variance of the crush ratio?

    References

    Understanding Basis, Chicago Board of Trade, 2004.  http://www.gofutures.com/pdfs/Understanding-Basis.pdf

    http://www.cmegroup.com/trading/agricultural/files/AC-406_DDG_CornCrush_042010.pdf

    Bond, Gary E. (1984). “The Effects of Supply and Interest Rate Shocks in Commodity Futures Markets,” American Journal of Agricultural Economics, 66, pp. 294-301.

Chen, S., Lee, C. F. and Shrestha, K. (2003). “Futures hedge ratios: a review,” The Quarterly Review of Economics and Finance, 43, pp. 433-465.

Ederington, Louis H. (1979). “The Hedging Performance of the New Futures Markets,” Journal of Finance, 34, pp. 157-170.

Einstein, Albert (1923). Sidelights on Relativity (Geometry and Experience). E. P. Dutton & Co.

Figlewski, S., Lanskroner, Y. and Silber, W. L. (1991). “Tailing the Hedge: Why and How,” Journal of Futures Markets, 11, pp. 201-212.

Johnson, Leland L. (1960). “The Theory of Hedging and Speculation in Commodity Futures,” Review of Economic Studies, 27, pp. 139-151.

Kawaller, I. G. (1997). “Tailing Futures Hedges/Tailing Spreads,” The Journal of Derivatives, Vol. 5, No. 2, pp. 62-70.

Li, A. and Lien, D. D. (2003). “Futures Hedging Under Mark-to-Market Risk,” Journal of Futures Markets, Vol. 23, No. 4.

Myers, Robert J. and Thompson, Stanley R. (1989). “Generalized Optimal Hedge Ratio Estimation,” American Journal of Agricultural Economics, Vol. 71, No. 4, pp. 858-868.

Stein, Jerome L. (1961). “The Simultaneous Determination of Spot and Futures Prices,” American Economic Review, 51, pp. 1012-1025.

Thonnard, M. (2006). Confidence Intervals in Inverse Regression. Diss. Technische Universiteit Eindhoven, Department of Mathematics and Computer Science. Web. 5 Apr. 2013. <http://alexandria.tue.nl/extra1/afstversl/wsk-i/thonnard2006.pdf>.


  • The probability distribution of the bioethanol crush margin

    The probability distribution of the bioethanol crush margin

    This entry is part 1 of 2 in the series The Bio-ethanol crush margin

    A chain is no stronger than its weakest link.

    Introduction

    Producing bioethanol is a high risk endeavor with adverse price development and crumbling margins.

In the following we will illustrate some of the risks a bioethanol producer faces, using corn as feedstock. These risks, however, persist regardless of the feedstock and production process chosen, so the elements in the discussion below can be applied to any type of bioethanol production:

    1.    What average yield (kg ethanol per kg feedstock) can we expect?  And  what is the shape of the yield distribution?
    2.    What will the future price ratio of feedstock to ethanol be? And what volatility can we expect?

The crush margin ((The relationship between prices in the cash market is commonly referred to as the Gross Production Margin.)) measures the difference between the sales proceeds of the finished bioethanol and its feedstock ((It can also be considered as the production’s throughput: the rate at which the system converts raw materials to money. Throughput is net sales less variable cost, generally the cost of the most important raw materials. (See: Throughput Accounting))).

With current technology, one bushel of corn can be converted into approx. 2.75 gallons of ethanol and 17 pounds of DDG (distillers’ dried grains). The crush margin (or gross processing margin) is then:

    1. Crush margin = 0.0085 x DDG price + 2.8 x ethanol price – corn price

Since 65% to 75% of the variable cost in bioethanol production is the cost of corn, the crush margin is an important metric, especially since the margin must also cover all other expenses such as energy, electricity, interest, transportation and labor and, in the long term, the facility’s fixed costs.

    The following graph taken from the CME report: Trading the corn for ethanol crush, (CME, 2010) gives the margin development in 2009 and the first months of 2010:

This graph gives a good picture of the uncertainties that face the bioethanol producers, and can be a helpful tool when hedging purchases of corn and sales of the products ((The historical chart going back to APR 2005 is available at the CBOT web site.)).

    The Crush Spread, Crush Profit Margin and Crush Ratio

    There are a number of other ways to formulate the crush risk (CME, July 11. 2011):

    The CBOT defines the “Crush Spread” as the Estimated Gross Margin per Bushel of Corn. It is calculated as follows:

    2. Crush Spread = (Ethanol price per gallon X 2.8) – Corn price per bushel, or as

    3. Crush Profit margin = Ethanol price – (Corn price/2.8).

    Understanding these relationships is invaluable in trading ethanol stocks ((We will return to this in a later post.)).

By rearranging the crush spread equation, we can express the spread as its ratio to the product price (simplifying by keeping by-products like DDG out of the equation):

    4. Crush ratio = Crush spread/Ethanol price = y – p,

    Where: y = EtOH Yield (gal)/ bushel corn and p = Corn price/Ethanol price.
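To keep the four definitions apart, here is a minimal Python sketch; the DDG, ethanol and corn prices at the bottom are hypothetical placeholders, not market data:

DDG_LBS_PER_BU = 17       # pounds of DDG per bushel of corn
ETOH_GAL_PER_BU = 2.8     # CBOT conversion, gallons of ethanol per bushel

def crush_margin(ddg, etoh, corn):       # formula 1, DDG price in $/ton
    return DDG_LBS_PER_BU / 2000 * ddg + ETOH_GAL_PER_BU * etoh - corn

def crush_spread(etoh, corn):            # formula 2, $ per bushel
    return ETOH_GAL_PER_BU * etoh - corn

def crush_profit_margin(etoh, corn):     # formula 3, $ per gallon
    return etoh - corn / ETOH_GAL_PER_BU

def crush_ratio(etoh, corn):             # formula 4: y - p, dimensionless
    return ETOH_GAL_PER_BU - corn / etoh

print(crush_margin(ddg=150.0, etoh=2.50, corn=6.00))   # 2.275 $/bushel
print(crush_ratio(etoh=2.50, corn=6.00))               # 0.4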

    We will in the following look at the stochastic nature of y and p and thus the uncertainty in forecasting the crush ratio.

The crush spread, and thus the crush ratio, is calculated using data from the same period; they therefore give the result of an unhedged operation. Even if the production period is short – two to three days – it is possible to hedge both the corn and the ethanol prices. But to do that in a consistent and effective way we have to look into the inherent volatility of the operations.

    Ethanol yield

The ethanol yield is usually set to 2.682 gal/bushel corn, assuming 15.5% moisture. The yield is, however, a stochastic variable contributing to the uncertainty in the crush ratio forecasts. As only the starch in corn can be converted to ethanol, we need to know the content of extractable starch in a standard bushel of corn, corrected for normal loss and moisture. In the following we will lean heavily on the article “A Statistical Analysis of the Theoretical Yield of Ethanol from Corn Starch” by Tad W. Patzek (Patzek, 2006), which fits our purpose perfectly. All relevant references can be found in that article.

The aim of his article was to establish the mean extractable starch in hybrid corn and the mean highest possible yield of ethanol from starch. We, however, are also interested in the probability distributions of these variables, since no production company will ever experience the mean values (ensembles), and since the average return over time will always be less than the return calculated using ensemble means ((We will return to this in a later post.)) (Peters, 2010).

    The purpose of this exercise is after all to establish a model that can be used as support for decision making in regard to investment and hedging in the bioethanol industry over time.

    From (Patzek, 2006) we have that the extractable starch (%) can be described as approx. having a normal distribution with mean 66.18 % and standard deviation of 1.13:

    The nominal grain loss due to dirt etc. can also be described as approx. having a normal distribution with mean 3 % and a standard deviation of 0.7:

The probability distribution for the theoretical ethanol yield (kg/kg corn) can then be found by Monte Carlo simulation ((See formula #3 in (Patzek, 2006).)) as:

– having an approx. normal distribution with mean 0.364 kg EtOH/kg of dry grain and standard deviation of 0.007. On average we will need 2.75 kg of clean dry grain to produce one kilo, or 1.27 liter, of ethanol ((With a specific density of 0.787 kg/l.)).
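The simulation itself takes only a few lines of Python. In this sketch the conversion factor of 0.568 kg ethanol per kg starch is our own stand-in for formula #3 in (Patzek, 2006) (the theoretical stoichiometric conversion via glucose); the two input distributions are those given above:

import numpy as np

rng = np.random.default_rng(seed=1)
n = 100_000

starch = rng.normal(0.6618, 0.0113, n)   # extractable starch, fraction of dry grain
loss   = rng.normal(0.03, 0.007, n)      # nominal grain loss (dirt etc.), fraction
ETOH_PER_STARCH = 0.568                  # assumed kg EtOH per kg starch (stoichiometric)

yield_kg = starch * (1 - loss) * ETOH_PER_STARCH   # kg EtOH per kg clean dry grain

print(yield_kg.mean(), yield_kg.std())   # approx. 0.364 and 0.007, matching the text
print(1 / yield_kg.mean())               # about 2.75 kg of grain per kg of ethanol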

Since we now have a distribution for the ethanol yield (y) in kilos of ethanol per kilo of corn, we will in the following use price per kilo for both ethanol and corn, adjusted for the moisture in the corn:

We can also use this to find the EtOH yield starting with wet corn and using gal/bushel corn as the unit (Patzek, 2006):

giving as theoretical value a mean of 2.64 gal/wet bushel with a standard deviation of 0.05 – significantly lower than the “official” figure of 2.8 gal/wet bushel used in the CBOT calculations. More important to us, however, is the fact that we easily can get yields much lower than expected, and thus a real risk of lower earnings than expected. Keep in mind that to get a yield above 2.64 gallons of ethanol per bushel of corn, all steps in the process must continuously be at or close to their maximum efficiency – which with high probability never will happen.

    Corn and ethanol prices

Looking at the price developments since 2005, it is obvious that both corn and ethanol prices have a large variability ($/kg, dry corn):

The long term trends show a disturbing development, with decreasing ethanol prices, increasing corn prices and thus an increasing price ratio:

    “Risk is like fire: If controlled, it will help you; if uncontrolled, it will rise up and destroy you.”

    Theodore Roosevelt

    The unhedged crush ratio

    Since the crush ratio on average is:

Crush ratio = 0.364 – p, where:
0.364 = Average EtOH Yield (kg EtOH/kg of dry grain) and
p = Corn price/Ethanol price

The price ratio (p) thus has to be less than 0.364 for the crush ratio to be positive at the outset. As of January 2011 the price ratio has overstepped that threshold and has stayed above it for the first months of 2011.

To get a picture of the risk an unhedged bioethanol producer faces from normal variation in yield and forecasted variation in the price ratio alone, we will make a simple forecast for April 2011 using the historic time series information on trend and seasonal factors:

    The forecasted probability distribution for the April price ratio is given in the frequency graph below:

    This represents the price risk the producer will face. We find that the mean value for the price ratio will be 0.323 with a standard deviation of 0.043. By using this and the distribution for ethanol yield we can by Monte Carlo simulation forecast the April distribution for the crush ratio:

As we see, negative values for the crush ratio are well inside the range of possible outcomes:

The actual value of the average price ratio for April turned out to be 0.376, with a daily maximum of 0.384 and a minimum of 0.363. This implies that the April crush ratio with 90% probability would have been between -0.005 and -0.0199, with only the income from DDGs to cover the deficit and all other costs.
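For completeness, the crush ratio simulation above can be sketched in a few lines of Python, approximating both the yield and the forecasted price ratio with the normal distributions found earlier:

import numpy as np

rng = np.random.default_rng(seed=1)
n = 100_000

y = rng.normal(0.364, 0.007, n)    # EtOH yield, kg/kg dry grain (from above)
p = rng.normal(0.323, 0.043, n)    # forecasted April price ratio (from above)
crush = y - p                      # crush ratio = y - p

print(crush.mean(), crush.std())   # approx. 0.041 and 0.044
print((crush < 0).mean())          # probability of a negative crush ratio, ~0.17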

    Hedging the crush ratio

The distribution for the price ratio forecast above clearly points out the necessity of price ratio hedging (Johnson, 1960) and (Stein, 1961).
The time series chart above shows both a negative trend and seasonal variations in the price ratio. In the short run there is not much to be done about the trend, but in the longer run other feedstocks and better processes will probably change it (Shapouri et al., 2002).

However, what immediately stands out are the possibilities to exploit the seasonal fluctuations in both markets:

Ideally, raw material is purchased in the months when seasonal factors are low and ethanol sold in the months when they are high. In practice this is not fully possible; restrictions on manufacturing, warehousing, market presence, liquidity, working capital and costs set limits to the producer’s degrees of freedom (Dalgran, 2009).

Fortunately, there are a number of tools available in both the physical and financial markets to manage price risk: forwards and futures contracts, options, swaps, cash-forward, and index and basis contracts. All are available to producers who understand financial hedging instruments and are willing to participate in these markets. See: (Duffie, 1989), (Hull, 2003) and (Bjørk, 2009).

The objective is to change the shape of the margin distribution (red) from one having a large part of its left tail on the negative side of the margin axis to one resembling the green curve below, where the negative part has been removed but most of the upside (right tail) has been preserved. That is: eliminate negative margins, reduce variability, maintain the upside potential, and thus reduce the probability of operating at a net loss:

Even if the ideal solution does not exist, a large number of solutions combining instruments can provide satisfactory results. In principle it does not matter in which market these instruments exist, since the commodity and financial markets are interconnected. From a strategic standpoint, the purpose is to exploit fluctuations in the market to capture opportunities while mitigating unwanted risks (Mallory, et al., 2010).
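As a stylized illustration only – an option-like floor bought at (roughly) its fair value under the assumed distribution, not one of the actual market instruments above – the reshaping can be mimicked in a few lines of Python:

import numpy as np

rng = np.random.default_rng(seed=2)
margin = rng.normal(0.02, 0.05, 100_000)   # stylized unhedged margin distribution

floor, premium = 0.02, 0.02   # hypothetical margin floor and its (approx. fair) cost
hedged = np.maximum(margin, floor) - premium

print((margin < 0).mean())            # ~0.34: probability of a loss, unhedged
print((hedged < 0).mean())            # 0.0: the negative left tail is gone
print(margin.mean(), hedged.mean())   # expected margin is roughly unchanged
print(margin.std(), hedged.std())     # variability is clearly reduced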

    Strategic Risk Management

    To manage price risk in commodity markets is a complex topic. There are many strategic, economic and technical factors that must be understood before a hedging program can be implemented.

Since all hedging instruments have a cost, and since only ranges of future outcomes – not exact prices – can be forecast in the individual markets, both the costs and the effectiveness of a hedging program are uncertain.

In addition, the desired degree of protection has to be determined. Are we seeking to ensure only a positive margin, or a positive EBITDA, or a positive EBIT? With what probability, and at what cost?

    A systematic risk management process is required to tailor an integrated risk management program for each individual bioethanol plant:

The choice of instruments will define different strategies that affect company liquidity and working capital and, ultimately, company value. Since the effect of each of these strategies will be of a stochastic nature, it will only be possible to distinguish between them using the concept of stochastic dominance (see: selecting strategy).

Models that describe the business operations and their underlying risk can be a starting point for such an understanding. Linked to balance sheet simulation, they will provide invaluable support for decisions on the scope and timing of hedging programs.

It is only when the various hedging strategies are simulated through the balance sheet – so that the effect on equity value can be considered – that the best strategy with respect to cost and security level can be determined, and it is with this that S@R can help.

    References

    Bjørk, T.,(2009). Arbitrage Theory in Continuous Time. Oxford University Press, Oxford.

CME Group (2010). Trading the corn for ethanol crush,
http://www.cmegroup.com/trading/agricultural/corn-for-ethanol-crush.html

CME Group (July 11, 2011). Ethanol Outlook Report, http://cmegroup.barchart.com/ethanol/

Dalgran, R. A. (2009). Inventory and Transformation Hedging Effectiveness in Corn Crushing. Journal of Agricultural and Resource Economics 34 (1): 154-171.

    Duffie, D., (1989). Futures Markets. Prentice Hall, Englewood Cliffs, NJ.

    Hull, J. (2003). Options, Futures, and Other Derivatives (5th edn). Prentice Hall, Englewood Cliffs, N.J.

    Johnson, L., L., (1960). The Theory of Hedging and Speculation in Commodity Futures, Review of Economic Studies , XXVII, pp. 139-151.

    Mallory, M., L., Hayes, D., J., & Irwin, S., H. (2010). How Market Efficiency and the Theory of Storage Link Corn and Ethanol Markets. Center for Agricultural and Rural Development Iowa State University Working Paper 10-WP 517.

    Patzek, T., W., (2004). Sustainability of the Corn-Ethanol Biofuel Cycle, Department of Civil and Environmental Engineering, U.C. Berkeley, Berkeley, CA.

    Patzek, T., W., (2006). A Statistical Analysis of the Theoretical Yield of Ethanol from Corn Starch, Natural Resources Research, Vol. 15, No. 3.

    Peters, O. (2010). Optimal leverage from non-ergodicity. Quantitative Finance, doi:10.1080/14697688.2010.513338.

Shapouri, H., Duffield, J. A., & Wang, M. (2002). The Energy Balance of Corn Ethanol: An Update. U.S. Department of Agriculture, Office of the Chief Economist, Office of Energy Policy and New Uses. Agricultural Economic Report No. 814.

    Stein, J.L. (1961). The Simultaneous Determination of Spot and Futures Prices. American Economic Review, vol. 51, p.p. 1012-1025.
