
Blog

  • Forecasting sales and forecasting uncertainty

    Forecasting sales and forecasting uncertainty

    This entry is part 1 of 4 in the series Predictive Analytics

     

    Introduction

There are a large number of methods used for forecasting, ranging from judgmental (expert forecasting etc.) through expert systems and time series to causal methods (regression analysis etc.).

Most are used to give a single-point forecast, or at most single-point forecasts for a limited number of scenarios. In the following we will take a look at the limited usefulness of such single-point forecasts.

As an example we will use a simple forecast ‘model’ for net sales for a large multinational company. It turns out that there is a good linear relation between the company’s yearly net sales in million euro and the growth rate (%) in world GDP:

with a correlation coefficient R = 0.995. The relation thus accounts for almost 99% of the variation in the sales data. The observed data is given as green dots in the graph below, and the regression as the green line. The ‘model’ gives expected sales as a constant of 1638M plus 53M in increased (or decreased) sales per percentage point increase (or decrease) in world GDP growth:

The International Monetary Fund (IMF), which kindly provided the historical GDP growth rates, also gives forecasts for the expected future change in the world GDP growth rate (WEO, April 2012) – for the next five years. When we put these forecasts into the ‘model’ we end up with forecasts for net sales for 2012 to 2016, depicted by the yellow dots in the graph above.

    So mission accomplished!  …  Or is it really?

We know that the probability of getting a single-point forecast exactly right is zero, even when assuming that the forecast of the GDP growth rate is correct – so the forecasts we have so far will certainly be wrong, but how wrong?

    “Some even persist in using forecasts that are manifestly unreliable, an attitude encountered by the future Nobel laureate Kenneth Arrow when he was a young statistician during the Second World War. When Arrow discovered that month-long weather forecasts used by the army were worthless, he warned his superiors against using them. He was rebuffed. “The Commanding General is well aware the forecasts are no good,” he was told. “However, he needs them for planning purposes.” (Gardner & Tetlock, 2011)

    Maybe we should take a closer look at possible forecast errors, input data and the final forecast.

    The prediction band

Given the regression we can calculate a forecast band for future observations of sales, given forecasts of the future GDP growth rate. That is the region where we, with a certain probability, expect new values of net sales to fall. In the graph below the green area gives the 95% forecast band:

Since the variance of the predictions increases the further a new forecast for the GDP growth rate lies from the mean of the sample values (used to compute the regression), the band will widen as we move to either side of this mean. The band will also widen with decreasing correlation (R) and sample size (the number of observations the regression is based on).

So even if the fit to the data is good, our regression is based on a very small sample, giving plenty of room for prediction errors. In fact, the 95% prediction interval for 2012 net sales, with an expected GDP growth rate of 3.5%, is 1824M plus/minus 82M. Even so, the interval is still only approx. 9% of the expected value.
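For readers who want to reproduce this kind of band, the sketch below shows one way to compute a 95% prediction interval with ordinary least squares. The data points and the 3.5% growth forecast are illustrative placeholders consistent with the text, not the original series.

```python
import numpy as np
import statsmodels.api as sm

# Illustrative placeholder data - not the original observations.
gdp_growth = np.array([-0.6, 3.2, 3.9, 4.0, 5.2, 5.4])       # world GDP growth, %
net_sales  = np.array([1610, 1805, 1850, 1855, 1910, 1925])   # net sales, M euro

X = sm.add_constant(gdp_growth)            # sales = a + b * growth
fit = sm.OLS(net_sales, X).fit()

# 95% prediction interval for a new observation at 3.5% expected GDP growth.
x_new = sm.add_constant(np.array([3.5]), has_constant='add')
pred = fit.get_prediction(x_new).summary_frame(alpha=0.05)
print(pred[['mean', 'obs_ci_lower', 'obs_ci_upper']])
```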

    Now we have shown that the model gives good forecasts, calculated the confidence interval(s) and shown that the expected relative error(s) with high probability will be small!

    So the mission is finally accomplished!  …  Or is it really?

The forecasts we have made are based on forecasts of future world GDP growth rates, but how certain are those?

    The GDP forecasts

    Forecasting the future growth in GDP for any country is at best difficult and much more so for the GDP growth for the entire world. The IMF has therefore supplied the baseline forecasts with a fan chart ((  The Inflation Report Projections: Understanding the Fan Chart By Erik Britton, Paul Fisher and John Whitley, BoE Quarterly Bulletin, February 1998, pages 30-37.)) picturing the uncertainty in their estimates.

This fan chart ((Figure 1.12 from World Economic Outlook (April 2012), International Monetary Fund, ISBN 9781616352462)) shows as blue colored bands the uncertainty around the WEO baseline forecast with 50, 70, and 90 percent confidence intervals ((As shown, the 70 percent confidence interval includes the 50 percent interval, and the 90 percent confidence interval includes the 50 and 70 percent intervals. See Appendix 1.2 in the April 2009 World Economic Outlook for details.)):

There is also another band on the chart, implied but unseen, indicating a 10% chance of something “unpredictable”. The fan chart thus covers only 90% of the IMF’s estimates of the future probable growth rates.

    The table below shows the actual figures for the forecasted GDP growth (%) and the limits of the confidence intervals:

Year   Lower 90%   Lower 70%   Lower 50%   Baseline   Upper 50%   Upper 70%   Upper 90%
2012      2.5         2.9         3.1         3.5         3.8         4.0         4.3
2013      2.1         2.8         3.3         4.1         4.8         5.2         5.9

    The IMF has the following comments to the figures:

    “Risks around the WEO projections have diminished, consistent with market indicators, but they remain large and tilted to the downside. The various indicators do not point in a consistent direction. Inflation and oil price indicators suggest downside risks to growth. The term spread and S&P 500 options prices, however, point to upside risks.”

    Our approximation of the distribution that can have produced the fan chart for 2012 as given in the World Economic Outlook for April 2012 is shown below:

    This distribution has:  mean 3.43%, standard deviation 0.54, minimum 1.22 and maximum 4.70 – it is skewed with a left tail. The distribution thus also encompasses the implied but un-seen band in the chart.

    Now we are ready for serious forecasting!

    The final sales forecasts

By employing the same technique that we used to calculate the forecast band, we can by Monte Carlo simulation compute the 2012 distribution of net sales forecasts, given the distribution of GDP growth rates and the expected variance of the differences between the regression forecasts and new observations. The figure below describes the forecast process:

We are, however, not only using the 90% interval for the GDP growth rate or the 95% forecast band, but the full range of both distributions. The final forecasts of net sales are given as a histogram in the graph below:

    This distribution of forecasted net sales has:  mean sales 1820M, standard deviation 81, minimum sales 1590M and maximum sales 2055M – and it is slightly skewed with a left tail.
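A minimal sketch of the simulation step described above, assuming the fan-chart distribution for the 2012 GDP growth rate is approximated here by a normal distribution with the stated mean and standard deviation (the article uses a left-skewed approximation), and that the regression coefficients come from the ‘model’ above; the residual prediction error is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# 2012 GDP growth rate: normal stand-in for the skewed fan-chart distribution.
gdp_2012 = rng.normal(loc=3.43, scale=0.54, size=n)

# Regression 'model' from the text: net sales = 1638 + 53 * GDP growth (M euro),
# plus a prediction error term - its standard deviation is a placeholder here.
a, b, sigma_pred = 1638.0, 53.0, 40.0
sales_2012 = a + b * gdp_2012 + rng.normal(0.0, sigma_pred, size=n)

print("mean sales:", round(sales_2012.mean()))
print("60% interval:", np.percentile(sales_2012, [20, 80]).round())
print("90% interval:", np.percentile(sales_2012, [5, 95]).round())
```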

    So what added information have we got from the added effort?

Well, we now know that there is only a 20% probability that net sales will be lower than 1755M and a 20% probability that they will be above 1890M. The interval from 1755M to 1890M in net sales will then with 60% probability contain the actual sales in 2012 – see the graph below giving the cumulative sales distribution:

We also know that with 90% probability the actual net sales in 2012 will lie between 1720M and 1955M. But most important is that we have visualized the uncertainty in the sales forecasts, and that contingency planning for both low and high sales should be performed.

    An uncertain past

    The Bank of England’s fan chart from 2008 showed a wide range of possible futures, but it also showed the uncertainty about where we were then – see that the black line showing National Statistics data for the past has probability bands around it:

This indicates that the values for past GDP growth rates are uncertain (stochastic) or contain measurement errors. This of course also holds for the IMF historic growth rates, but the IMF does not supply this type of information.

If the growth rates can be considered stochastic, the results above will still hold, provided the conditional distribution for net sales given the GDP growth rate still fulfills the standard assumptions for using regression methods. If not, other methods of estimation must be considered.

    Black Swans

    But all this uncertainty was still not enough to contain what was to become reality – shown by the red line in the graph above.

    How wrong can we be? Often more wrong than we like to think. This is good – as in useful – to know.

    “As Donald Rumsfeld once said: it’s not only what we don’t know – the known unknowns – it’s what we don’t know we don’t know.”

While statistical methods may lead us to a reasonable understanding of some phenomenon, that does not always translate into an accurate practical prediction capability. When that is the case, we find ourselves talking about risk, the likelihood that some unfavorable or favorable event will take place. Risk assessment is then necessitated and we are left only with probabilities.

    A final word

Sales forecast models are an integrated part of our enterprise simulation models – as part of the models’ predictive analytics. Predictive analytics can be described as statistical modeling enabling the prediction of future events or results ((in this case the probability distribution of future net sales)), using present and past information and data.

In today’s fast-moving and highly uncertain markets, forecasting has become the single most important element of the management process. The ability to quickly and accurately detect changes in key external and internal variables and adjust tactics accordingly can make all the difference between success and failure:

    1. Forecasts must integrate both external and internal drivers of business and the financial results.
    2. Absolute forecast accuracy (i.e. small confidence intervals) is less important than the insight about how current decisions and likely future events will interact to form the result.
    3. Detail does not equal accuracy with respect to forecasts.
    4. The forecast is often less important than the assumptions and variables that underpin it – those are the things that should be traced to provide advance warning.
5. Never rely on single-point or scenario forecasting.

The forecasts are usually done in three stages: first by forecasting the market for the particular product(s), then the firm’s market share(s), ending up with a sales forecast. If the firm has activities in different geographic markets the exercise has to be repeated in each market, having in mind the correlation between markets:

    1. All uncertainty about the different market sizes, market shares and their correlation will finally end up contributing to the uncertainty in the forecast for the firm’s total sales.
    2. This uncertainty combined with the uncertainty from other forecasted variables like interest rates, exchange rates, taxes etc. will eventually be manifested in the probability distribution for the firm’s equity value.

The ‘model’ we have been using in the example has never been tested out of sample. Its usefulness as a forecast model is therefore still debatable.

    References

Gardner, D. & Tetlock, P. (2011). Overcoming Our Aversion to Acknowledging Our Ignorance. http://www.cato-unbound.org/2011/07/11/dan-gardner-and-philip-tetlock/overcoming-our-aversion-to-acknowledging-our-ignorance/

World Economic Outlook Database, April 2012 Edition. International Monetary Fund. http://www.imf.org/external/pubs/ft/weo/2012/01/weodata/index.aspx


  • “How can you be better than us understand our business risk?”

    “How can you be better than us understand our business risk?”

    This is a question we often hear and the simple answer is that we don’t! But by using our methods and models we can use your knowledge in such a way that it can be systematically measured and accumulated throughout the business and be presented in easy to understand graphs to the management and board.

    The main reason for this lies in how we can treat uncertainties ((Variance is used as measure of uncertainty or risk.)) in the variables and in the ability to handle uncertainties stemming from variables from different departments simultaneously.

Risk is usually compartmentalized in “silos” and regarded as proprietary to the department – not as a risk correlated or co-moving with other risks in the company, caused by common underlying events influencing their outcome:

    When Queen Elizabeth visited the London School of Economics in autumn 2008 she asked why no one had foreseen the crisis. The British Academy Forum replied to the Queen in a letter six months later. Included in the letter was the following:

    One of our major banks, now mainly in public ownership, reputedly had 4000 risk managers. But the difficulty was seeing the risk to the system as a whole rather than to any specific financial instrument or loan (…) they frequently lost sight of the bigger picture ((The letter from the British Academy to the Queen is available at: http://media.ft.com/cms/3e3b6ca8-7a08-11de-b86f-00144feabdc0.pdf)).

To be precise, we are actually not simulating risk in and of itself; risk is just a by-product of simulating a company’s financial and operational (economic) activities. Since the variables describing these activities are of a stochastic nature, which is to say contain uncertainty, all variables in the P&L and Balance sheet will contain uncertainty. They can as such best be described by the shape of their frequency distribution – found after thousands of simulations. And it is the shape of these distributions that describes the uncertainty in the variables.

    Most ERM activities are focused on changing the left or downside tail – the tail that describes what normally is called risk.

    We however are also interested in the right tail or upside tail, the tail that describes possible outcomes increasing company value. Together they depict the uncertainty the company faces:

S@R thus treats company risk holistically by modeling risks (uncertainty) as parts of the overall operational and financial activities. We are thus able to “add up” the risks – to a consolidated level.

    Having the probability distribution for e.g. the company’s equity value gives us the opportunity to apply risk measures to describe the risk facing the shareholders or the risk added or subtracted by different strategies like investments or risk mitigation tools.

Since this can’t be done with ordinary addition ((The variance of the sum of two stochastic variables is the sum of their variances plus twice the covariance between them.)) (or subtraction) we have to use Monte Carlo simulation.
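A small numerical illustration of the footnoted point, with made-up numbers: the variance of a sum depends on the covariance between the parts, which is why consolidated risk has to be simulated jointly rather than added up from stand-alone figures.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Two correlated result variables (e.g. two business units) - made-up figures.
mean = [100.0, 80.0]
cov  = [[25.0, 15.0],   # Var(X)=25, Var(Y)=16, Cov(X,Y)=15
        [15.0, 16.0]]
x, y = rng.multivariate_normal(mean, cov, size=n).T

# Var(X + Y) = Var(X) + Var(Y) + 2*Cov(X,Y) = 25 + 16 + 30 = 71, not 25 + 16 = 41.
print("simulated variance of the sum:", round((x + y).var(), 1))
```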

The value added by this is:

    1.  A method for assessing changes in strategy; investments, new markets, new products etc.
    2. A heightening of risk awareness in management across an organization’s diverse businesses.
    3. A consistent measure of risk allowing executive management and board reporting and response across a diverse organization.
    4. A measure of risk (including credit and market risk) for the organization that can be compared with capital required by regulators, rating agencies and investors.
    5. A measure of risk by organization unit, product, channel and customer segment which allows risk adjusted returns to be assessed, and scarce capital to be rationally allocated.
    6.  A framework from which the organization can decide its risk mitigation requirements rationally.
    7. A measure of risk versus return that allows businesses and in particular new businesses (including mergers and acquisitions) to be assessed in terms of contribution to growth in shareholder value.

    The independent risk experts are often essential for consistency and integrity. They can also add value to the process by sharing risk and risk management knowledge gained both externally and elsewhere in the organization. This is not just a measurement exercise, but an investment in risk management culture.

    Forecasting

All business planning is built on forecasts of market sizes, market shares, prices and costs. They are usually given as low, mean and high scenarios without specifying the relationship between the variables. It is easy to show that when you combine such forecasts you can end up very wrong (( https://www.strategy-at-risk.com/2009/05/04/the-fallacies-of-scenario-analysis/)). However, the 5%, 50% and 95% values from the scenarios can be used to produce a probability distribution for each variable, and the simultaneous effect of these distributions can be calculated using Monte Carlo simulation – giving, for instance, the probability distribution for profit or cash flow from that market, as sketched below. This can again be used to consolidate the company’s cash flow or profit etc.
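A minimal sketch of that idea, under the simplifying assumption that each scenario triple (5%, 50%, 95%) can be represented by a normal distribution fitted to those percentiles; the market numbers are made up, and a real model would also specify the correlations between the variables.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 10_000

def dist_from_scenarios(p05, p50, p95):
    """Fit a normal distribution to the 5%, 50% and 95% scenario values.
    (A skewed distribution would be used when p95 - p50 != p50 - p05.)"""
    sigma = (p95 - p05) / (2 * stats.norm.ppf(0.95))
    return stats.norm(loc=p50, scale=sigma)

# Made-up scenario values for one market.
market_size  = dist_from_scenarios(80_000, 100_000, 120_000).rvs(n, random_state=rng)
market_share = dist_from_scenarios(0.15, 0.20, 0.25).rvs(n, random_state=rng)
price = 12.0

sales = market_size * market_share * price
print("expected sales:", round(sales.mean()))
print("5% / 95% sales:", np.percentile(sales, [5, 95]).round())
```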

    Controls and Mitigation

Controls and mitigation play a significant part in reducing the likelihood of a risk event or the amount of loss should one occur. They do, however, have a material cost. One of the drivers of measuring risk is to support a more rational analysis of the costs and benefits of controls and mitigation.
The result after controls and mitigation becomes the final or residual risk distribution for the company.

    Distributing Diversification Benefits

    At each level of aggregation within a business diversification benefits accrue, representing the capacity to leverage the risk capital against a larger range of non-perfectly correlated risks. How should these diversification benefits be distributed to the various businesses?

This is not an academic matter, as the residual risk capital ((Bodoff, N. M., Capital Allocation by Percentile Layer, Volume 3 / Issue 1, Casualty Actuarial Society, pp. 13-30, http://www.variancejournal.org/issues/03-01/13.pdf

Erel, Isil, Myers, Stewart C. and Read, James, Capital Allocation (May 28, 2009). Fisher College of Business Working Paper No. 2009-03-010. Available at SSRN: http://ssrn.com/abstract=1411190 or http://dx.doi.org/10.2139/ssrn.1411190)) attributed to each business segment is critical in determining its shareholder value creation and thus its strategic worth to the enterprise. Getting this wrong could lead the organization to discourage its better value-creating segments and encourage ones that dissipate shareholder value.

    The simplest is the pro-rata approach which distributes the diversification benefits on a pro-rata basis down the various segment hierarchies (organizational unit, product, customer segment etc.).

A more correct approach, which can be built into the Monte Carlo simulation, is the contributory method, which takes into account the extent to which a segment of the organization’s business is correlated with or contrary to the major risks that make up the company’s overall risk. This rewards counter-cyclical businesses and others that diversify the company’s risk profile.

    Aggregation with market & credit risk

    For many parts of an organization there may be no market or credit risk – for areas, such as sales and manufacturing, operational and business risk covers all of their risks.

    But at the company level the operational and business risk needs to be integrated with market and credit risk to establish the overall measure of risk being run by the company. And it is this combined risk capital measure that needs to be apportioned out to the various businesses or segments to form the basis for risk adjusted performance measures.

It is not enough just to add the operational, credit and market risks together. This would overcount the risk – the risk domains are by no means perfectly correlated, which a simple addition would imply. A sharp hit in one risk domain does not imply equally sharp hits in the others.

    Yet they are not independent either. A sharp economic downturn will affect credit and many operational risks and probably a number of market risks as well.

    The combination of these domains can be handled in a similar way to correlations within operational risk, provided aggregate risk distributions and correlation factors can be estimated for both credit and market risk.

    Correlation risk

    Markets that are part of the same sector or group are usually very highly correlated or move together. Correlation risk is the risk associated with having several positions in too many similar markets. By using Monte Carlo simulation as described above this risk can be calculated and added to the company’s risks distribution that will take part in forming the company’s yearly profit or equity value distribution. And this is the information that the management and board will need.

    Decision making

The distribution for equity value (see above) can then be used for decision purposes. By making changes to the assumptions about the variables’ distributions (low, medium and high values) or production capacities etc., the new equity distribution can be compared with the old to find the changes created by the changes in assumptions etc.:

    A versatile tool

    This is not only a tool for C-level decision-making but also for controllers, treasury, budgeting etc.:

    The results from these analyses can be presented in form of B/S and P&L looking at the coming one to five (short-term) or five to fifteen years (long-term); showing the impacts to e.g. equity value, company value, operating income etc. With the purpose of:

• Improve predictability in operating earnings and its expected volatility
• Improve budgeting processes and the prediction of budget deviations
• Evaluate alternative strategic investment options
    • Identify and benchmark investment portfolios and their uncertainty
    • Identify and benchmark individual business units’ risk profiles
    • Evaluate equity values and enterprise values and their uncertainty in M&A processes, etc.

If you always have a picture of what really can happen, you are forewarned and thus forearmed against adverse events and better prepared to take advantage of favorable events. ((From Indexed: http://thisisindexed.com/2012/02/go-on-look-behind-the-curtain/))


  • Be prepared for a bumpy ride

    Be prepared for a bumpy ride

Imagine you’re nicely settled down in your airline seat on a transatlantic flight – comfortable, with a great feeling. Then the captain comes on, welcomes everybody on board and continues, “It’s the first time I fly this type of machine, so wish me luck!” Still feeling great? ((Inspired by an article from BTS: http://www.bts.com/news-insights/strategy-execution-blog/Why_are_Business_Simulations_so_Effective.aspx))

Running a company in today’s interconnected and volatile world has become extremely complicated; surely far more so than flying an airliner. You probably don’t have all the indicators, dashboard systems and controls found on a flight deck. And business conditions are likely to change far more than flight conditions ever will. Today we live with an information overload, data streaming at us almost everywhere we turn. How can we cope? How do we make smart decisions?

    Pilots train over and over again. They spend hour after hour in flight simulators before being allowed to sit as co-pilots on a real passenger flight. Fortunately, for us passengers, flight hours normally pass by, day after day, without much excitement. Time to hit the simulator again and train engine fires, damaged landing gear, landing on water, passenger evacuation etc. becoming both mentally and practically prepared to manage the worst.

Why aren’t we running business simulations to the same extent? Accounting, financial models and budgeting are more art than science, many times founded on theories from the last century. (Not to mention Pacioli’s Italian accounting from 1494.) While the theory of behavioural economics progresses, we must use the best tools we can get to better understand financial risks and opportunities and how to improve and refine value creation. The true job we’re set to do.

How is it done? Like Einstein – seeking simplicity, as far as it goes. Finding out which pieces of information are most crucial to the success and survival of the business. For major corporations these can be drawn down from the hundreds to some twenty key variables. (These variables are not set in stone once and for all, but need to be redefined in accordance with the business situation we foresee in the near future.)

At Allevo our focal point is on Risk Governance at large and helping organisations implement Enterprise Risk Management (ERM) frameworks and processes, specifically assisting boards and executive management to exercise their Risk Oversight duties. Fundamental to good risk management practice is to understand and articulate the organisation’s (i.e. the Board’s) appetite for risk. Without understanding the appetite and tolerance levels for various risks it’s hard to measure, aggregate and prioritize them. How much are we willing to spend on new ventures and opportunities? How much can we afford to lose? How do we calculate the trade-offs?

    There are two essential elements of Risk Appetite: risk capacity and risk capability.

By risk capacity we mean the financial ability to take on new opportunities with their inherent risks (i.e. availability of cash and funding across the strategy period). By risk capability is meant the non-financial resources of the organisation. Do we have the knowledge and resources to take on new ventures? Cash and funding is fundamental and comes first.

Do executive management and the board really understand the strengths and vulnerabilities hiding in the balance sheet or in the P&L account? Many may have a gut feeling, mostly the CFO and the treasury department. But shouldn’t the executive team and the board (including the Audit Committee, and the Risk Committee if there is one) also really know?

At Allevo we have aligned with Strategy@Risk Ltd to do business simulations. They have experience from all kinds of industries; especially process industries, where they have even helped optimize manufacturing processes. They have simulated airports and flight patterns for a whole country. For companies with high levels of raw material and commodity risk they simulate optimum hedging strategies. But their main contribution, in our opinion, is their ability to simulate your organisation’s balance sheet and P&L accounts. They have created a simulation tool that can be applied to a whole corporation. It needs only to be adjusted to your specific operations and business environments, which is done through interviews and a few workshops with your own people who have the best knowledge of your business (operations, finances, markets, strategy etc.).

    When the key variables have been identified, it’s time to run the first Monte Carlo simulations to find out if the model fits with recent actual experiences and otherwise feels reliable.

No model can ever predict the future. What we want to do is to find the key strengths and weaknesses in your operations and in your balance sheet. By running sensitivity analysis we can first of all understand which the key variables are. We want to focus on what’s important, and leave alone those variables that have little effect on outcomes.

Now, it’s time for the most important part. Considering how the selected variables can vary and interact over time. The future contains an inconceivable amount of different outcomes ((There are probably more different futures than ways of dealing 52 playing cards. Don’t you think? Well, there are only 80,658,175,170,943,878,571,660,636,856,403,766,975,289,505,440,883,277,824,000,000,000,000 ways to shuffle a deck of 52 cards (8.1 × 10^67). What does that say about budgeting with discrete numbers?)). The question is how can we achieve the outcomes that we desire and avoid the ones that we dread the most?

Running 10,000 simulations (i.e. closing each and every annual account over 10,000 years) we can stop the simulation when reaching a desired level of outcome and investigate the position of the key variables. Likewise, when nasty results appear, we stop again and record the underlying position of each variable.

    The simulations generate an 80-page standard report (which, once again, can feel like information overload). But once you’ve got a feeling for the sensitivity of the business you could instead do specific “what if?” analysis of scenarios of special interest to yourself, the executive team or to the board.

Finally, the model estimates the probability distribution of the organisation’s Enterprise Value going forward. The key for any business is to grow Enterprise Value.

    Simulations show how the likelihood of increasing or losing value varies with different strategies. This part of the simulation tool could be extremely important in strategy selection.

    If you wish to go into more depth on how simulations can support you and your organisation, please visit

    www.allevo.se or www.strategy-at-risk.com

There you’ll find a great depth of material to choose from; or call us directly and we’ll schedule a quick on-site presentation.

    Have a good flight, and …

    Happy landing!

  • M&A: When two plus two is five or three or …

    M&A: When two plus two is five or three or …

    When two plus two is five (Orwell, 1949)

    Introduction

Mergers & Acquisitions (M&A) are a way for companies to expand rapidly and much faster than organic growth – that is, growth coming from existing businesses – would have allowed. M&A’s have for decades been a trillion-dollar business, but empirical studies report that a significant proportion must be considered failures.

The conventional wisdom is that the majority of deals fail to add shareholder value to the acquiring company. According to this research, only 30-50% of deals are considered to be successful (see Bruner, 2002).

If most deals fail, why do companies keep doing them? Is it because they think the odds won’t apply to them, or are executives more concerned with extending their influence and growing the company (empire building) than with increasing shareholder value?

    Many writers argue that these are the main reasons driving the M&A activities, with the implication that executives are basically greedy (because their compensation is often tied to the size of the company) – or incompetent.

To be able to create shareholder value the M&A must give rise to some form of synergy. Synergy is the ability of the merged companies to generate higher shareholder value (wealth) than the standalone entities; that is, that the whole will be greater than the sum of its parts.

For many of the observed M&A’s, however, the opposite has been the truth – value has been destroyed; the whole has turned out to be less than the sum of its parts (dysergy).

    “When asked to name just one big merger that had lived up to expectations, Leon Cooperman, former co-chairman of Goldman Sachs’ Investment Policy Committee, answered: I’m sure there are success stories out there, but at this moment I draw a blank.” (Sirower, 1997)

The “apparent” M&A failures have also been attributed to both methodological and measurement problems, the argument being that evidence – such as cost savings or revenue enhancements brought by the M&A – is difficult to obtain after the fact. This might also apply to some of the success stories.

What is surprising in most (all?) of the studies of M&A successes and failures is the lack of understanding of the stochastic nature of business activities. For any company it is impossible to estimate its equity value with certainty; the best we can do is to estimate a range of values and the probability that the true value will fall inside this range. The merger of two companies amplifies this, and the discussion of possible synergies or dysergies can only be understood in the context of randomness (stochasticity) ((See: the IFA.com – Probability Machine, Galton Board, Randomness and Fair Price Simulator, Quincunx at http://www.youtube.com/watch?v=AUSKTk9ENzg)).


    The M&A cases

Let’s assume that we have two companies, A and B, that are proposed to be merged. We have the distribution of equity value (shareholder value) for both companies and can calculate the equity distribution for the merged company. Company A’s value is estimated to be in the range of 0 to 150M with expected value 90M. Company B’s value is estimated to be in the range of -40 to 200M with expected value 140M. (See figure below)

If we merge the two companies, assuming no synergy or dysergy, we get the value (shareholder) distribution shown by the green curve in the figure. The merged company will have a value in the range of 65 to 321M, with an expected value of 230M. Since there is no synergy/dysergy, no value has been created or destroyed by the merger.
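A minimal sketch of the no-synergy case: draw equity values for A and B from their stand-alone distributions and add them draw by draw. The beta distributions used here are rough stand-ins chosen only to match the ranges and expected values quoted in the text.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000

# Rough stand-ins for the stand-alone equity value distributions (M):
# A on [0, 150] with mean 90, B on [-40, 200] with mean 140.
value_a = 150 * rng.beta(3, 2, size=n)           # mean = 150 * 3/5 = 90
value_b = -40 + 240 * rng.beta(6, 2, size=n)     # mean = -40 + 240 * 6/8 = 140

# No synergy or dysergy: the merged value is simply the sum, draw by draw.
merged = value_a + value_b
print("expected merged value:", round(merged.mean()))          # ~230
print("90% interval:", np.percentile(merged, [5, 95]).round())
```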

For company B no value would be added by the merger if A was bought at a price equal to or higher than A’s expected value. If it was bought at a price less than expected value, then there is a probability that the wealth of company B’s shareholders will increase – but even then it is not a certainty. Any increase of wealth for the shareholders of company B will be at the expense of the shareholders of company A, and vice versa.

    Case 1

    If we assume that there is a “connection” between the companies, such that an increase in one of the company’s revenues also will increase the revenues in the other, we will have a synergy that can be exploited.

This situation is depicted in the figure below. The green curve gives the case with no synergy and the blue the case described above. The difference between them is the synergies created by the merger. The synergy at the dotted line is the synergy we can expect, but it might turn out to be higher if revenues are high, and even negative (dysergy) when revenues are low.

If we produce a frequency diagram of the sizes of the possible synergies it will look like the diagram below. Have in mind that the average synergy value is not the value we would expect to find, but the average of all possible synergy values.

    Case 2

If we assume that the “connection” between the companies is such that a reduction in one of the company’s revenue streams will reduce the total production costs, we again have a synergy that can be exploited.
This situation is depicted in the figure below. The green curve gives the case with no synergy and the red the case described above. The difference between them is again the synergies created by the merger. The synergy at the dotted line is the synergy we can expect, but it might turn out to be higher if revenues are lower, and even negative (dysergy) when revenues are high.

    In this case, the merger acts as a hedge against revenue losses at the cost of parts of the upside created by the merger. This should not deter the participants from a merger since there is only a 30 % probability that this will happen.

    The graph above again gives the frequency diagram for the sizes of the possible synergies. Have in mind that the average synergy value is not the value we would expect to find, but the average of all possible synergy values.

    Conclusion

The elusiveness of synergies in many M&A cases can be explained by the natural randomness in business activities. The fact that a merger can give rise to large synergies does not guarantee that it will occur, only that there is a probability that it will occur. Spreadsheet exercises in valuation can lead to disaster if the stochastic nature of the involved companies is not taken into account. And basing the pricing of the M&A candidate on expected synergies is pure foolishness.

    References

    Bruner, Robert F. (2002), Does M&A Pay? A Survey of Evidence for the Decision-Maker. Journal of Applied Finance, Vol. 12, No. 1. Available at SSRN: http://ssrn.com/abstract=485884

    Orwell, George (1949). Nineteen Eighty-Four. A novel. London: Secker & Warburg.

Aristotle, Metaphysica (“The whole is more than the sum of its parts”).

     

    Sirower, M. (1997) The Synergy Trap: How Companies Lose the Acquisition Game. New York. The Free Press.

  • Introduction to Simulation Models

    Introduction to Simulation Models

    This entry is part 4 of 6 in the series Balance simulation

     

Simulation models set out to mimic real-life company operations, that is, they describe the transformation of raw materials and labor into finished products in such a way that the model can be used as support for strategic decision making.

    A full simulation model will usually consist of two separate models:

    1. an EBITDA model that describes the particular firm’s operations and
2. a generic P&L and Balance simulation model (PL&B).

     

     

    The EBITDA model ladder

    Both the deterministic and stochastic balance simulation can be approached as a ladder with two steps, where the first is especially well suited as an introduction to risk simulation and the second gives a full blown risk analysis. In these successive steps the EBITDA calculations will be based on:

1. financial information only, by using coefficients of fabrication and unit prices (e.g. kg flour per 1000 bread and cost of flour per kg, etc.) as direct input to the balance model – the direct method – and
    2. EBITDA models to give a detailed technical description of the company’s operations.

The first step, using coefficients of fabrication and their variation, gives a low-effort (low-cost) alternative, usually using the internal accounts as a basis. In many cases this will give a ‘good enough’ description of the company – its risks and opportunities. It can be based on existing investment and market plans. The data needed for the company’s economic environment (taxes, interest rates etc.) will be the same in both alternatives.

This step is especially well suited as an introduction to risk simulation and the art of communicating risk and uncertainty throughout the firm. It can also profitably be used in cases where time and data are limited and where one wishes to limit efforts in an initial stage. Data and assumptions can later be augmented for much more sophisticated analyses within the same framework. This way the analysis can be successively built in the direction the previous studies suggested.

    The second step implies setting up a dedicated EBITDA subroutine to the balance model. This can then give detailed answers to a broad range of questions about markets, capacity driven investments, operational performance and uncertainty, but entails a higher degree of effort from both the company and S@R. This is a tool for long-term planning and strategy development.

The EBITDA model can be both a stand-alone model and a subroutine to the PL&B model. The stand-alone EBITDA model can be used to study the firm’s operations in detail and how different operational strategies will or can affect the EBITDA outcome and its distribution.

When connected to the PL&B model it will act as a subroutine, giving the necessary information to produce the P&L and ultimately the Balance – and their outcome distributions.

    This gives great flexibility in model formulations and the opportunity to fit models to different industries and accommodate for the data available.

    P&L and Balance simulation

The generic PL&B model – based on the IFRS standard – can be used for a wide range of business activities. It both:

    1. describes the firm’s financial environment (taxes, interest rates, currency etc.) and
    2. acts as a testing bed for financial strategies (hedging, translation risk, etc.)

Since S@R has set out to create models that can give answers to both deterministic and stochastic questions, the PL&B model is a real balance simulation model – not a simple cash flow forecasting model.

Since every run in the simulation produces a complete P&L and Balance, uncertainty curves (distributions) can be produced for any financial metric like ‘yearly result’, ‘free cash flow’, ‘economic profit’, ‘equity value’, ‘IRR’ or ‘translation gain/loss’ etc.

    People say they want models that are simple, but what they really want is models with the necessary features – that are easy to use. If something is complex but well designed, it will be easy to use – and this holds for our models.

    The results from these analyses can be presented in different forms from detailed traditional financial reports to graphs describing the range of possible outcomes for all items in the P&L and Balance (+ much more) looking at the coming one to five (short term) or five to fifteen years (long term) and showing the impacts to e.g. equity value, company value, operating income etc.

    The goal is to find the distribution for the firm’s equity value which will incorporate all uncertainty facing the firm.

    This uncertainty gives both shape and location of the equity value distribution, and this is what we – if possible – are aiming to change:

    1. reducing downside risk by reducing the left tail (blue curve)
    2. increasing expected company value by moving the curve to the right (green curve)
    3. increasing the upside potential by  increasing the right tail (red curve) etc.

     

    The Data

    To be able to simulate the operations we need to put into the model all variables that will affect the firm’s net yearly result. Most of these will be collected by S@R from outside sources like central banks, local authorities and others, but some will have to be collected from the firm.

The production and firm-specific variables are related to everyday activities in the firm. Their historic values can be collected from internal accounts or from production reports. Someone in the procurement, production or sales department will have their records, and almost always the controllers. The rest will be variables inside the domain of the CEO and the company treasurer.

    The variables fall in five groups:

i.      general variables describing the firm’s financial environment,
ii.      variables describing the firm’s strategy,
    iii.      general variables used for forecasting purposes,
    iv.      direct problem related variables and
    v.      the firm specific:
    a.  production coefficients  and
    b.  cost of raw materials and labor related variables.

The first group will contain – for all countries either delivering raw materials or buying the finished product(s) – variables like: taxes, spot exchange rates etc. For the firm’s domestic country it will in addition contain variables like: VAT rates, taxes on investments and dividend income, depreciation rates and method, initial tax allowances, overdraft interest rates etc.

    The second group will contain variables like: minimum cash levels, debt distribution on short and long term loans and currencies, hedge ratios, targeted leverage, economic depreciation etc.

    The third group will contain variables needed for forecasting purposes: yield curves, inflation forecasts, GDP forecasts etc. The expected values and their 5 % and 95 % probability limits will be used to forecast exchange rates, interest rates, demand etc. They will be collected by S@R.

    The fourth group will contain variables related to sales forecasts: yearly air temperature profiles (and variation) for forecasting beer sales and yearly water temperature profiles (and variation) for forecasting increase in biomass in fish farming.

The fifth group will contain variables that specify the production and costs of production. They will vary according to the type of operations, e.g.: operating rate (%), max days of production, tools maintenance (h per 10,000 units), error rate (errors per 1000 units), waste (% of weight of produced unit), cycle time (units per min), number of machines per shift (#), concession density (kg per m3), feed rates (%), mortality rates (%) etc. These variables specify the production and will be stochastic in the sense that they are not constant but vary inside a given – theoretical or historic – range.

To simulate costs of production we use the coefficients of fabrication and their unit costs. Both the coefficients and their unit costs will always be of a stochastic nature, and they can vary with capacity utilization: energy per unit produced (kWh/unit) and energy price (cost per kWh), malt use (kg per hectoliter), malt price (per kg), maximum takeoff weight (ton), takeoff charge (per ton), specific consumption of wood (m3/Adt), cost of round wood (per m3), etc.
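A minimal sketch of how one such cost line can be simulated from a stochastic coefficient of fabrication and a stochastic unit cost; the energy figures and triangular ranges are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 10_000
units_produced = 1_000_000

# Coefficient of fabrication and its unit cost, both stochastic (made-up ranges).
kwh_per_unit  = rng.triangular(0.9, 1.0, 1.2, size=n)     # energy per unit (kWh/unit)
price_per_kwh = rng.triangular(0.06, 0.08, 0.12, size=n)  # energy price (per kWh)

energy_cost = units_produced * kwh_per_unit * price_per_kwh
print("expected energy cost:", round(energy_cost.mean()))
print("5% / 95%:", np.percentile(energy_cost, [5, 95]).round())
```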

    The uncertainty (and risk) stemming from all groups of variables will be propagated through the P&L and down to the Balance, ending up as volatility in the equity distribution.

    The aim is to estimate the economic impact that such uncertainty may have on corporate earnings at risk. This will add a third dimension – probability – to all forecasts, give new insight, and the ability to deal with uncertainties in an informed way – and thus benefits above ordinary spread-sheet exercises.

    Methods

    To be able to add uncertainty to financial models, we also have to add more complexity. This complexity is inevitable, but in our case, it is desirable and it will be well managed inside our models.

    Most companies have some sort of model describing the company’s operations. They are used mostly for budgeting, but in some cases also for forecasting cash flow and other important performance measures.

If the client already has spreadsheet models describing the operations, we can build on these. There is no reason to reinvent what has already been done – thus saving time and resources that can be better utilized in other phases of the project.

We know, however, that forecasts based on average values are on average wrong. In addition, deterministic models miss the important uncertainty dimension that gives both the different risks facing the company and the opportunities they bring forth.

    An interesting feature is the models ability to start simulations with an empty opening balance. This can be used to assess divisions that do not have an independent balance since the model will call for equity/debt etc. based on a target ratio, according to the simulated production and sales and the necessary investments. Questions about further investment in divisions or product lines can be studied this way.

    In some cases, we have used both approaches for the same client, using the last approach for smaller daughter companies with production structures differing from the main companies.

    The first approach can also be considered as an introduction and stepping-stone to a more complete EBITDA model and detailed simulations.

    Time and effort

    The work load for the client is usually limited to a small team of people ( 1 to 3 persons) acting as project leaders and principal contacts, assuring that all necessary information, describing value and risks for the clients’ operations can be collected as basis for modeling and calculations. However, the type of data will have to be agreed upon depending on the scope of analysis.

    Very often, key people from the controller group will be adequate for this work and if they do not have the direct knowledge, they usually know whom to ask. The work for this team, depending on the scope and choice of method (see above) can vary in effective time from a few days to a couple of weeks, but this can be stretched from three to four weeks to the same number of months – depending on the scope of the project.

For S@R, the time needed will depend on the availability of key personnel from the client and the availability of data. It can take from one to three weeks of normal work for the first alternative, to three to six months for the second alternative with more complex models. The total time will also depend on the number of analyses that need to be run and the type of reports that have to be delivered.

The team’s participation in the project also makes communication of the results up or down in the organization simpler. Since the input data is collected by templates, this gives the responsible departments and persons ownership of assumptions, data and results. These templates thus visualize the flow of data through the organization and the interdependence between the departments – facilitating the communication of risk and the different strategies both reviewed and selected.

No knowledge of or expertise in uncertainty calculations or statistical methods is required from the client’s side. The team will through ‘osmosis’ acquire the necessary knowledge. Usually the team finds this an exciting experience.

  • Corn and ethanol futures hedge ratios

    Corn and ethanol futures hedge ratios

    This entry is part 2 of 2 in the series The Bio-ethanol crush margin

     

    A large amount of literature has been published discussing hedging techniques and a number of different hedging models and statistical refinements to the OLS model that we will use in the following. For a comprehensive review see “Futures hedge ratios: a review,” (Chen et al., 2003).

    We are here looking for hedge models and hedge ratio estimations techniques that are “good enough” and that can fit into valuation models using Monte Carlo simulation.

The ultimate purpose is to study hedging strategies, using P&L and Balance simulation to forecast the probability distribution for the company’s equity value. By comparing the distributions for the different strategies, we will be able to select the hedging strategy that best fits the board’s risk appetite / risk aversion and that at the same time “maximizes” the company value.

    Everything should be made as simple as possible, but not simpler. – Einstein, Reader’s Digest. Oct. 1977.

To use futures contracts for hedging we have to understand the objective: a futures contract serves as a price-fixing mechanism. In their simplest form, futures prices are prices set today to be paid in the future for goods. If properly designed and implemented, hedge profits will offset the loss from an adverse price move. In like fashion, hedge losses will also eliminate the effects of a favorable price change. Ultimately, the success of any hedge program rests on the implementation of a correctly sized futures position.

    The minimum variation hedge

This is often referred to as the volatility-minimizing hedge for one unit of exposure. It can be found by minimizing the variance of the hedge payoff at maturity.

    For an ideal hedge, we would like the change in the futures price (Delta F) to match as exactly as possible the change in the value of the asset (Delta S) we wish to hedge, i.e.:

    Delta S = Delta F

    The expected payoff from the hedge will be equal to the value of the cash position at maturity plus the payoff of the hedge (Johnson, 1960) or:

E(H) = X_S [E(S_2) - S_1] + X_F [E(F_2) - F_1]

With spot position X_S, a short futures market holding X_F, spot price S_1, expected spot price at maturity E(S_2), current futures contract price F_1 and expected futures price E(F_2) – excluding transaction costs.

    What we want is to find the value of the futures position that reduces the variability of price changes to the lowest possible level.

    The minimum-variance hedge ratio is then defined as the number of futures per unit of the spot asset that will minimize the variance of the hedged portfolio returns.

    The variance of the portfolio return is: ((The variance of the un-hedged position is: Var (U) =X^2_S Var (Delta S))):

Var (H) = X^2_S Var (Delta S) + X^2_F Var (Delta F) + 2 X_S X_F Covar (Delta S, Delta F)

Where Var (Delta S) is the variance of the change in the spot price, Var (Delta F) is the variance of the futures price change and Covar (Delta S, Delta F) the covariance between the spot and futures price changes. Letting h = X_F/X_S represent the proportion of the spot position hedged, the minimum value of Var (H) can then be found ((by minimizing Var (H) as a function of h)) as:

h* = {Covar (Delta S, Delta F)} / {Var (Delta F)}, or equivalently: h* = Corr (Delta S, Delta F) {SD (Delta S)} / {SD (Delta F)}, where SD denotes standard deviation.

Where Corr (Delta S, Delta F) is the correlation between the spot and futures price changes, assuming that X_S is exogenously determined or fixed.

    Estimating the hedge coefficient

    It is also possible to estimate the optimal hedge (h*) using regression analysis. The basic equation is:

    Delta S = a + h Delta F + varepsilon

    with varepsilon as the change in spot price not explained by the regression model. Since the basic OLS regression for this equation estimates the value of h* as:

    h*={Covar (Delta S, Delta F)} /{Var (Delta F)}

we can use this regression to find the hedge ratio that minimizes the variance of the hedge payoff. This is one of the reasons that this choice of objective function is so appealing. ((Note that other and very different objective functions could have been chosen.))

    We can then use the coefficient of determination, or R^2 , as an estimate of the percentage reduction in the variability of changes in the value of the cash position from holding the hedged position – the hedge effectiveness. (Ederington, 1979) ((Not taking into account variation margins etc.)).
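A minimal sketch of this estimation, assuming aligned series of spot and futures price changes are available; the two series below are simulated stand-ins, not market data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)

# Simulated stand-ins for daily spot and futures price changes.
d_futures = rng.normal(0.0, 6.0, size=250)
d_spot    = 0.9 * d_futures + rng.normal(0.0, 2.0, size=250)

# Delta S = a + h * Delta F + error
ols = sm.OLS(d_spot, sm.add_constant(d_futures)).fit()
h_star = ols.params[1]      # minimum-variance hedge ratio h*
r2 = ols.rsquared           # hedge effectiveness (Ederington)

print(f"h* = {h_star:.3f}, hedge effectiveness R^2 = {r2:.3f}")
```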

    The basis

Basis is defined as the difference between the spot price (S) and the futures price (F). When the expected change in the futures contract price is equal to the expected change in the spot price, the optimal variance-minimizing strategy is to set h* = 1. However, for most futures contract markets the futures price does not perfectly parallel the spot price, causing an element of basis risk to directly affect the hedging decision.

    A negative basis is called contango and a positive basis backwardation:

    1. When the spot price increases by more than the futures price, the basis increases and is said to “strengthen the basis” (when unexpected, this is favorable for a short hedge and unfavorable for a long hedge).
    2. When the futures price increases by more than the spot price, the basis declines and this is said to “weaken the basis” (when unexpected, this is favorable for a long hedge and unfavorable for a short hedge).

    There will usually be a different basis for each contract.

    The number of futures contracts

    The variance minimizing number of futures contracts N* will be:

    N*=h*{ X_S}/{Q_F}

Where Q_F is the size of one futures market contract. Since futures contracts are marked to market every day, daily losses are debited and daily gains credited to the parties’ accounts – settlement variations – i.e. the contracts are closed every day. The account will have to be replenished if it falls below the maintenance margin (margin call). If the account is above the initial margin, withdrawals can be made from the account.

    Ignoring the incremental income effects from investing variation margin gains (or borrowing to cover variation margin losses), we want the hedge to generate h*Delta F. Appreciating that there is an incremental effect, we want to accrue interest on a “tailed” hedge such that (Kawaller, 1997):

ĥ Delta F (1+r)^n = h* Delta F, or
ĥ = h*/(1+r)^n, or ĥ = h*/(1 + r n/365) if time to maturity is less than one year.

    Where:
    r = interest rate and
    n = number of days remaining to maturity of the futures contract.

    This amounts to adjusting the hedge by a present value factor. Tailing converts the futures position into a forward position. It negates the effect of daily resettlement, in which profits and losses are realized before the day the hedge is lifted.

For constant interest rates the tailed hedge (for h* < 1) rises over time to reach the exposure at the maturity of the hedge. Un-tailed, the hedge will over-hedge the exposure and increase the hedger’s risk. Tailing the hedge is especially important when the interest rate is high and the time to maturity long.
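A minimal sketch of sizing and tailing the hedge with the formulas above; the hedge ratio, exposure, contract size, interest rate and days to maturity are all made-up inputs.

```python
# Made-up inputs for illustration.
h_star = 0.92              # estimated minimum-variance hedge ratio
spot_position = 500_000    # units of the commodity to hedge
contract_size = 5_000      # units per futures contract
r, n_days = 0.05, 180      # interest rate and days remaining to maturity

# Tailed hedge ratio (maturity under one year) and number of contracts to sell.
h_tailed = h_star / (1 + r * n_days / 365)
n_contracts = h_tailed * spot_position / contract_size

print(f"tailed hedge ratio: {h_tailed:.3f}")
print(f"futures contracts: {n_contracts:.1f}")
```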

    An appropriate interest rate would be one that reflects the average of the firm’s weighted average cost of capital (WACC) and the rate it would earn on its investments (ROIC), both of which will be stochastic variables in the simulation. The first is relevant when the futures contracts generate losses, the second when they generate gains. In practice some average of these rates is used. ((See FAS 133 and later amendments.))
    There are traditionally two approaches to tailing:

    1. Re-balance the tail each day. The tailed hedge ratio is then adjusted each day to the maturity of the futures contract; the adjustment declines each day, until at expiration there is no adjustment.
    2. Use a constant (average) tail: ĥ = h*/(1 + 0.5*r*N/365), where N is the original number of days remaining to maturity. In this shortcut the adjustment is made when the hedge is put on and is not changed. The hedge will start out too large and end up too small, but will on average be correct.

    For active traders the first approach is the more convenient; for less active traders the second is often used. Both are sketched below.
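
    A minimal sketch of the two approaches, assuming a hypothetical interest rate and time to maturity (and using the corn hedge ratio estimated later in the article):

# Sketch of the two tailing approaches described above (hypothetical rate and maturity).
h_star = 1.0073        # un-tailed hedge ratio (the corn estimate below)
r = 0.05               # annual interest rate, an assumption
N_days = 180           # original number of days to maturity, an assumption

# 1. Re-balance the tail each day: the adjustment shrinks as maturity approaches.
def tailed_ratio(days_remaining, rate):
    return h_star / (1 + rate * days_remaining / 365)

daily_tails = [tailed_ratio(n, r) for n in range(N_days, -1, -30)]   # sampled every 30 days

# 2. Constant (average) tail, set once when the hedge is put on.
h_constant = h_star / (1 + 0.5 * r * N_days / 365)

print([round(h, 4) for h in daily_tails], round(h_constant, 4))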

    Since our models always incorporate stochastic interest rates, hedges discounted with the appropriate rates are calculated. This amounts to solving the set of stochastic simultaneous equations created by the hedge and the WACC/ROIC calculations, since the hedges will in turn change the probability distributions of WACC and ROIC. Note that the tailed hedge ratio will be a stochastic variable, and that minimizing the variance of the hedge will not necessarily maximize the company value. The value of ĥ that maximizes company value can only be found by simulation, given the board’s risk appetite / risk aversion.

    The Spot and Futures Price movements

    At any time there are a number of futures contracts for the same commodity simultaneously being priced. The only difference between them is the delivery month. A continuous contract takes the individual contracts in the futures market and splices them together. The resulting continuous series ((The simplest method of splicing is to tack successive delivery months onto each other. Although the prices in the history are real, the chart will also preserve the price gaps that are present between expiring deliveries and those that replace them.)) allows us to study the price history in the market from a single chart. The following graphs show the price movements ((To avoid price gap problems, many prefer to base the analysis on adjusted contracts that eliminate roll-over gaps. There are two basic ways to adjust a series.
    Forward-adjusting begins with the true price for the first delivery and then adjusts each successive set of prices up or down depending on whether the roll-over gap is positive or negative.
    Back-adjusting reverses the process. Current prices are always real but historical prices are adjusted up or down. This is often the preferred method, since the series will always show the latest actual price. However, there is no perfect method producing a continuous price series satisfying all needs.)) for the spliced corn contracts C-2010U to 2011N and the spliced ethanol contracts EH-2010U to 2011Q.

    In the graphs the spot price is given by the blue line and the corresponding futures price by the red line.

    For the corn futures, we can see that there is a difference between the spot and the futures price – the basis ((The reasons for the price difference are transportation costs between delivery locations, storage costs and availability, and variations between local and worldwide supply and demand of a given commodity. In any event, this difference in price plays an important part in what is actually being paid for the commodity when you hedge.)) – but that the price movements of the futures follow the spot price closely – or vice versa.

    The spliced contracts for bioethanol are a little different from the corn contracts. The delivery location is the same and the curves lie very close to each other. There are, however, other differences.

    The regression – the futures assay

    The selected futures contracts give us five parallel samples for the relation between the corn spot and futures price, and six for the relation between the ethanol spot and ethanol futures price. For every day in the period 8/2/2010 to 7/14/2011 we have from one to five observations of the corn relation (five replications), and from 8/5/2010 to 8/3/2011 we have from one to twelve observations of the ethanol relation. Since we follow a set of contracts, the number of daily observations of the corn futures prices starts with five (twelve for the ethanol futures) and ends with only one as the contracts mature. We could of course also have selected a sample giving an equal number of observations every day.

    There are three plausible models that could be fitted:

    1. Simple regression on the individual data points,
    2. Simple regression on the daily means, and
    3. Weighted regression on the daily means using the number of observations as the weight.

    When the number of daily observations is equal all three models will have the same parameter estimates. The weighted and individual regressions will always have the same parameter estimates, but when the sample sizes are unequal these will be different from the unweighted means regression. Whether the weighted or unweighted model should be used when the number of daily observations is unequal will depend on the situation.

    Since we now have replications of the relation between spot and the futures price we have the opportunity to test for lack of fit from the straight line model.

    In our case, using this approach has a small drawback. We are looking for the regression of the spot price changes on the price changes in the futures contract. This model, however, gives us the inverse: the regression of the price changes in the futures contract on the changes in the spot price. The inverse of the slope of this regression, which is what we are looking for, will in general not give the correct answer (Thonnard, 2006). So we will use this approach (model #3) to test for linearity, and then model #1 with all data for estimation of the slope.
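
    The sketch below, on hypothetical replicated data, illustrates the point made above: with unequal daily sample sizes the individual-data regression (model #1) and the weighted means regression (model #3) give the same slope, while the unweighted means regression (model #2) differs. As in the lack-of-fit analysis, the futures price changes are here regressed on the spot price changes; the hedge ratio itself is estimated the other way around.

# Sketch of the three candidate models on hypothetical replicated data:
# several futures contracts (replicates) are observed against one spot price change per day.
import numpy as np

# day -> (change in spot price, changes in futures prices for the contracts traded that day)
days = {
    1: (0.010, [0.010, 0.011, 0.009]),
    2: (-0.005, [-0.004, -0.005]),
    3: (0.007, [0.006]),
}

# Model #1: simple regression on the individual data points.
x_all = np.array([s for s, fs in days.values() for _ in fs])
y_all = np.array([f for s, fs in days.values() for f in fs])
b1, a1 = np.polyfit(x_all, y_all, 1)

# Model #2: simple (unweighted) regression on the daily means.
x_mean = np.array([s for s, fs in days.values()])
y_mean = np.array([np.mean(fs) for s, fs in days.values()])
b2, a2 = np.polyfit(x_mean, y_mean, 1)

# Model #3: regression on the daily means, weighted by the number of observations.
w = np.array([len(fs) for s, fs in days.values()], dtype=float)
xw, yw = np.average(x_mean, weights=w), np.average(y_mean, weights=w)
b3 = np.sum(w * (x_mean - xw) * (y_mean - yw)) / np.sum(w * (x_mean - xw) ** 2)

# With unequal daily sample sizes b1 equals b3, while b2 differs.
print(f"model #1 slope {b1:.4f}, model #2 slope {b2:.4f}, model #3 slope {b3:.4f}")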

    Ideally we would like to find stable (efficient) hedge ratios in the sense that they can be used for more than one hedge and over a longer period of time, thus greatly simplifying the workload for ethanol producing companies.

    All prices, both spot and futures in the following, have been converted from $/gallon (ethanol) or $/bushel (corn) to $/kg.

    The Corn hedge ratio

    The analysis of variance table (ANOVA) for the weighted regression of the changes in the corn futures prices on the changes in corn spot prices (model#3):

    The analysis of variance cautions us that the lack of fit to a linear model for all contracts is significant. However, the sum of squares due to lack of fit is very small compared to the sum of squares due to linearity, so we will regard the changes in the futures prices as generated by a linear function of the changes in the spot prices, and the hedge ratios found as efficient. In the figure below the circles give the daily means of the contracts and the line the weighted regression on these means:

    Nevertheless, this linear model will have to be monitored closely as further data becomes available.

    The result from the parameter estimation using simple regression (model#1) is given in the table below:

    The relation is:

    Delta S = 0.0001 + 1.0073 Delta F + varepsilon

    Giving the un-tailed corn hedge ratio h* = 1.0073

    First, since the adjusted  R-square value (0.9838) is an estimate of the percentage reduction in the variability of changes in the value of the cash position from holding the hedged position, a hedge based on this regression coefficient (slope) should be highly effective.

    The ratio of the variance of the hedged position to that of the un-hedged position is equal to 1-R^2, here about 1.6 %; the corresponding ratio of the standard deviations is 12.7 %.

    We have thus eliminated 87.3 % of the standard deviation (and over 98 % of the variance) of the unhedged position. For a simple model like this, that can be considered a good result.
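
    As a quick arithmetic check of these numbers (a sketch, using only the R-square value reported above):

# Relationship between R^2, the variance ratio and the standard deviation ratio.
r2 = 0.9838                        # adjusted R-square from the corn regression
variance_ratio = 1 - r2            # hedged/unhedged variance ratio  -> about 0.016
sd_ratio = variance_ratio ** 0.5   # hedged/unhedged standard deviation ratio -> about 0.127
print(variance_ratio, sd_ratio)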

    In the figure the thick black line gives the 95% confidence limits and the yellow area the 95% prediction limits. As we can see, the relationship between the daily price changes is quite tight thus promising the possibility of effective hedges.

    Second, due to the differencing, the basis caused by the difference in delivery locations has disappeared, and even though the constant term is significant, it is so small that it can with little loss be considered zero.

    The R-square values would have been higher for the regressions on the means than for the regression above. This is because the total variability in the data would have been reduced by using means (note that the total degrees of freedom are reduced for the regressions on means). A regression on the means will thus always suggest greater predictive ability than a regression on individual data, because it predicts mean values, not individual values.

    The Ethanol hedge ratio

    The analysis of variance table (ANOVA) for the weighted regression of the changes in the ethanol futures prices on the changes in ethanol spot prices (model#3):

    The analysis of variance again cautions us that the lack of fit to a linear model for all contracts is significant.  In this case it is approximately ten times higher than for the corn contracts.

    However, the sum of squares due to lack of fit is still small compared to the sum of squares due to linearity, so we will regard the changes in the futures prices as generated by a close to linear function of the changes in the spot prices, and the hedge ratios found as “good enough”. In the figure below the circles give the daily means of the contracts and the line the weighted regression on these means:

    In this graph we can clearly see the deviation from a strictly linear model. The assumption of a linear model for the changes in ethanol spot and futures prices will have to be monitored very closely as further data becomes available.

    The result from the parameter estimation using simple regression (model#1) is given in the table below:

    The relation is:
    Delta S = 1.0135 Delta F + varepsilon

    Giving the un-tailed ethanol hedge ratio h* = 1.0135

    The adjusted  R-square value (0.8105) estimating the percentage reduction in the variability of changes in the value of the cash position from holding the hedged position is high even with the “lack of linearity”. A hedge based on this regression coefficient (slope) should then still be highly effective.

    The standard deviation of a hedged position based on this hedge ratio will be 43.7 % of that of the unhedged position (the variance about 19 %). This is not as good as for the corn contracts, but it will still give a healthy reduction in the ethanol price risk facing the company.

    As it turned out, we can use both of these estimation methods for the hedge ratio as a basis for strategy simulations, but one question remains unanswered: will this minimize the variance of the crush ratio?

    References

    Understanding Basis, Chicago Board of Trade, 2004.  http://www.gofutures.com/pdfs/Understanding-Basis.pdf

    http://www.cmegroup.com/trading/agricultural/files/AC-406_DDG_CornCrush_042010.pdf

    Bond, Gary E. (1984). “The Effects of Supply and Interest Rate Shocks in Commodity Futures Markets,” American Journal of Agricultural Economics, 66, pp. 294-301.

    Chen, S. Lee, C.F. and Shrestha, K (2003) “Futures hedge ratios: a review,” The Quarterly Review of Economics and Finance, 43 pp. 433–465

    Ederington, Louis H. (1979). “The Hedging Performance of the New Futures Markets,” Journal of Finance, 34, pp. 157-70

    Einstein, Albert (1923). Sidelights on Relativity (Geometry and Experience). E. P. Dutton & Co.

    Figlewski, S., Lanskroner, Y. and Silber, W. L. (1991) “Tailing the Hedge: Why and How,” Journal of Futures Markets, 11: pp. 201-212.

    Johnson, Leland L. (1960). “The Theory of Hedging and Speculation in Commodity Futures,” Review of Economic Studies, 27, pp. 139-51.

    Kawaller, I. G. (1997). “Tailing Futures Hedges/Tailing Spreads,” The Journal of Derivatives, Vol. 5, No. 2, pp. 62-70.

    Li, A. and Lien, D. D. (2003) “Futures Hedging Under Mark-to-Market Risk,” Journal of Futures Markets, Vol. 23, No. 4.

    Myers Robert J. and Thompson Stanley R. (1989) “Generalized Optimal Hedge Ratio Estimation,” American Journal of Agricultural Economics, Vol. 71, No. 4, pp. 858-868.

    Stein, Jerome L. (1961). “The Simultaneous Determination of Spot and Futures Prices,” American Economic Review, 51, pp. 1012-25.

    Thonnard, M. (2006). Confidence Intervals in Inverse Regression. Diss. Technische Universiteit Eindhoven, Department of Mathematics and Computer Science. Web. 5 Apr. 2013. <http://alexandria.tue.nl/extra1/afstversl/wsk-i/thonnard2006.pdf>.

    Endnotes