
Category: Other topics

  • Introduction to Simulation Models


    This entry is part 4 of 6 in the series Balance simulation

     

Simulation models set out to mimic real-life company operations, describing the transformation of raw materials and labor into finished products in a way that can support strategic decision making.

    A full simulation model will usually consist of two separate models:

1. an EBITDA model that describes the particular firm’s operations and
2. a generic P&L and Balance simulation model (PL&B).

     

     

    The EBITDA model ladder

Both deterministic and stochastic balance simulation can be approached as a ladder with two steps, where the first is especially well suited as an introduction to risk simulation and the second gives a full-blown risk analysis. In these successive steps the EBITDA calculations will be based on:

1. financial information only, using coefficients of fabrication and unit prices (e.g. kg flour per 1,000 loaves of bread and cost of flour per kg, etc.) as direct input to the balance model – the direct method – and
2. EBITDA models that give a detailed technical description of the company’s operations.

The first step, using coefficients of fabrication and their variations, gives a low-effort (low-cost) alternative, usually with the internal accounting as its basis. This will often give a ‘good enough’ description of the company – its risks and opportunities. It can be based on existing investment and market plans. The data needed for the company’s economic environment (taxes, interest rates etc.) will be the same in both alternatives.

This step is especially well suited as an introduction to risk simulation and the art of communicating risk and uncertainty throughout the firm. It can also profitably be used where time and data are limited and where one wishes to limit efforts in an initial stage. Data and assumptions can later be augmented for much more sophisticated analyses within the same framework. This way the analysis can be built successively in the direction previous studies suggest.

    The second step implies setting up a dedicated EBITDA subroutine to the balance model. This can then give detailed answers to a broad range of questions about markets, capacity driven investments, operational performance and uncertainty, but entails a higher degree of effort from both the company and S@R. This is a tool for long-term planning and strategy development.

The EBITDA model can be both a stand-alone model and a subroutine to the PL&B model. The stand-alone EBITDA model can be used to study in detail the firm’s operations and how different operational strategies will or can affect EBITDA outcomes and their distribution.

When connected to the PL&B model it will act as a subroutine, giving the information necessary to produce the P&L and ultimately the Balance and their outcome distributions.

    This gives great flexibility in model formulations and the opportunity to fit models to different industries and accommodate for the data available.

    P&L and Balance simulation

The generic PL&B model – based on the IFRS standard – can be used for a wide range of business activities. It both:

1. describes the firm’s financial environment (taxes, interest rates, currency etc.) and
2. acts as a testing bed for financial strategies (hedging, translation risk, etc.)

Since S@R has set out to create models that can answer both deterministic and stochastic questions, the PL&B model is a real balance simulation model – not a simple cash flow forecast model.

Since every run in the simulation produces a complete P&L and Balance, uncertainty curves (distributions) can be produced for any financial metric, such as ‘yearly result’, ‘free cash flow’, ‘economic profit’, ‘equity value’, ‘IRR’ or ‘translation gain/loss’.
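
As a minimal sketch of this principle (in Python; the one-period “P&L” and all numbers are invented for illustration, not S@R’s model), repeated runs build up the empirical distribution of a metric:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_year(rng):
    # Hypothetical one-period P&L: stochastic volume, price and unit cost.
    volume = rng.normal(100_000, 10_000)       # units sold (assumed)
    price = rng.lognormal(np.log(12.0), 0.08)  # sales price per unit (assumed)
    unit_cost = rng.normal(8.0, 0.6)           # variable cost per unit (assumed)
    fixed_cost = 250_000
    return volume * (price - unit_cost) - fixed_cost  # yearly result

results = np.array([simulate_year(rng) for _ in range(10_000)])

# The empirical uncertainty curve for 'yearly result' across all runs:
for p in (5, 25, 50, 75, 95):
    print(f"{p:>2}th percentile: {np.percentile(results, p):,.0f}")
```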

    People say they want models that are simple, but what they really want is models with the necessary features – that are easy to use. If something is complex but well designed, it will be easy to use – and this holds for our models.

The results from these analyses can be presented in different forms, from detailed traditional financial reports to graphs describing the range of possible outcomes for all items in the P&L and Balance (+ much more), looking at the coming one to five years (short term) or five to fifteen years (long term) and showing the impact on e.g. equity value, company value, operating income etc.

    The goal is to find the distribution for the firm’s equity value which will incorporate all uncertainty facing the firm.

    This uncertainty gives both shape and location of the equity value distribution, and this is what we – if possible – are aiming to change:

    1. reducing downside risk by reducing the left tail (blue curve)
    2. increasing expected company value by moving the curve to the right (green curve)
3. increasing the upside potential by increasing the right tail (red curve) etc.

     

    The Data

    To be able to simulate the operations we need to put into the model all variables that will affect the firm’s net yearly result. Most of these will be collected by S@R from outside sources like central banks, local authorities and others, but some will have to be collected from the firm.

The production and firm-specific variables are related to the everyday activities of the firm. Their historic values can be collected from internal accounts or from production reports. Someone in the procurement, production or sales department will have these records – and almost always the controllers. The rest will be variables inside the domain of the CEO and the company treasurer.

    The variables fall in five groups:

i. general variables describing the firm’s financial environment,
ii. variables describing the firm’s strategy,
iii. general variables used for forecasting purposes,
iv. direct problem-related variables and
v. the firm-specific:
a. production coefficients and
b. cost of raw materials and labor-related variables.

The first group will contain – for all countries either delivering raw materials or buying the finished product(s) – variables like taxes, spot exchange rates etc. For the firm’s domestic country it will in addition contain variables like VAT rates, taxes on investments and dividend income, depreciation rates and method, initial tax allowances, overdraft interest rates etc.

    The second group will contain variables like: minimum cash levels, debt distribution on short and long term loans and currencies, hedge ratios, targeted leverage, economic depreciation etc.

    The third group will contain variables needed for forecasting purposes: yield curves, inflation forecasts, GDP forecasts etc. The expected values and their 5 % and 95 % probability limits will be used to forecast exchange rates, interest rates, demand etc. They will be collected by S@R.

    The fourth group will contain variables related to sales forecasts: yearly air temperature profiles (and variation) for forecasting beer sales and yearly water temperature profiles (and variation) for forecasting increase in biomass in fish farming.

The fifth group will contain variables that specify the production and its costs. They will vary according to the type of operations, e.g.: operating rate (%), max days of production, tools maintenance (h per 10,000 units), error rate (errors per 1,000 units), waste (% of weight of produced unit), cycle time (units per min), number of machines per shift (#), concession density (kg per m3), feed rates (%), mortality rates (%) etc. These variables specify the production and will be stochastic in the sense that they are not constant but vary inside a given – theoretical or historic – range.

To simulate costs of production we use the coefficients of fabrication and their unit costs. Both the coefficients and their unit costs will always be of a stochastic nature and can vary with capacity utilization: energy per unit produced (kWh/unit) and energy price (cost per kWh), malt use (kg per hectoliter), malt price (per kg), maximum takeoff weight (ton), takeoff charge (per ton), specific consumption of wood (m3/Adt), cost of round wood (per m3), etc.
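
A hedged sketch of the principle (all coefficients and prices below are invented, not from any client or industry source): both the coefficient of fabrication and its unit cost are drawn, and their product gives the cost distribution:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 10_000

# Hypothetical brewing example: coefficients and unit prices both stochastic.
malt_per_hl = rng.normal(17.0, 0.8, n)    # kg malt per hectoliter (assumed)
malt_price = rng.normal(0.45, 0.05, n)    # cost per kg malt (assumed)
energy_per_hl = rng.normal(9.0, 1.2, n)   # kWh per hectoliter (assumed)
energy_price = rng.normal(0.10, 0.02, n)  # cost per kWh (assumed)

cost_per_hl = malt_per_hl * malt_price + energy_per_hl * energy_price
print(f"mean cost/hl: {cost_per_hl.mean():.2f}, "
      f"90% interval: {np.percentile(cost_per_hl, 5):.2f}"
      f" - {np.percentile(cost_per_hl, 95):.2f}")
```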

    The uncertainty (and risk) stemming from all groups of variables will be propagated through the P&L and down to the Balance, ending up as volatility in the equity distribution.

The aim is to estimate the economic impact that such uncertainty may have on corporate earnings at risk. This will add a third dimension – probability – to all forecasts, give new insight, and the ability to deal with uncertainties in an informed way – benefits beyond ordinary spreadsheet exercises.

    Methods

    To be able to add uncertainty to financial models, we also have to add more complexity. This complexity is inevitable, but in our case, it is desirable and it will be well managed inside our models.

    Most companies have some sort of model describing the company’s operations. They are used mostly for budgeting, but in some cases also for forecasting cash flow and other important performance measures.

If the client already has spreadsheet models describing the operations, we can build on these. There is no reason to reinvent what has already been done – thus saving time and resources that can be better utilized in other phases of the project.

We know, however, that forecasts based on average values are on average wrong. In addition, deterministic models will miss the important uncertainty dimension that reveals both the different risks facing the company and the opportunities they bring forth.
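
A minimal illustration of why average inputs give biased answers whenever the model is nonlinear (a hypothetical capacity-constrained plant; all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(3)
capacity = 100.0
demand = rng.normal(100.0, 25.0, 100_000)  # stochastic demand, mean 100

sales_at_mean_demand = min(demand.mean(), capacity)            # about 100
mean_of_simulated_sales = np.minimum(demand, capacity).mean()  # about 90

print(sales_at_mean_demand, mean_of_simulated_sales)
# The plan based on average demand overstates expected sales:
# E[min(D, cap)] < min(E[D], cap) when demand can fall short of capacity.
```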

An interesting feature is the model’s ability to start simulations with an empty opening balance. This can be used to assess divisions that do not have an independent balance, since the model will call for equity/debt etc. based on a target ratio, according to the simulated production and sales and the necessary investments. Questions about further investment in divisions or product lines can be studied this way.

In some cases, we have used both approaches for the same client, using the last approach for smaller subsidiaries with production structures differing from the parent company’s.

    The first approach can also be considered as an introduction and stepping-stone to a more complete EBITDA model and detailed simulations.

    Time and effort

The workload for the client is usually limited to a small team of people (1 to 3 persons) acting as project leaders and principal contacts, assuring that all necessary information describing value and risks in the client’s operations can be collected as a basis for modeling and calculations. However, the type of data will have to be agreed upon depending on the scope of analysis.

Very often, key people from the controller group will be adequate for this work, and if they do not have the direct knowledge, they usually know whom to ask. The work for this team, depending on the scope and choice of method (see above), can vary in effective time from a few days to a couple of weeks, but the elapsed time can stretch from three to four weeks to the same number of months – depending on the scope of the project.

For S@R, the period will depend on the availability of key personnel from the client and the availability of data. It can take from one to three weeks of normal work for the first alternative, to three to six months for the second alternative with more complex models. The total time will also depend on the number of analyses that need to be run and the type of reports that have to be delivered.

The team’s participation in the project also makes communication of the results up or down in the organization simpler. Since the input data is collected by templates, the responsible departments and persons get ownership of assumptions, data and results. These templates thus visualize the flow of data through the organization and the interdependence between the departments – facilitating the communication of risk and of the different strategies both reviewed and selected.

No knowledge of or expertise in uncertainty calculations or statistical methods is required on the client’s side. The team will through ‘osmosis’ acquire the necessary knowledge. Usually the team finds this an exciting experience.

  • Corn and ethanol futures hedge ratios


    This entry is part 2 of 2 in the series The Bio-ethanol crush margin

     

A large literature discusses hedging techniques, hedging models and statistical refinements to the OLS model that we will use in the following. For a comprehensive review, see “Futures hedge ratios: a review” (Chen et al., 2003).

We are here looking for hedge models and hedge ratio estimation techniques that are “good enough” and that can fit into valuation models using Monte Carlo simulation.

The ultimate purpose is to study hedging strategies using P&L and Balance simulation to forecast the probability distribution for the company’s equity value. By comparing the distributions for the different strategies, we will be able to select the hedging strategy that best fits the board’s risk appetite/risk aversion and that at the same time “maximizes” the company value.

    Everything should be made as simple as possible, but not simpler. – Einstein, Reader’s Digest. Oct. 1977.

To use futures contracts for hedging we have to understand the objective: a futures contract serves as a price-fixing mechanism. In their simplest form, futures prices are prices set today to be paid in the future for goods. If properly designed and implemented, hedge profits will offset the loss from adverse price moves. In like fashion, hedge losses will also eliminate the effects of a favorable price change. Ultimately, the success of any hedge program rests on the implementation of a correctly sized futures position.

    The minimum variation hedge

    This is often referred to as – the volatility-minimizing hedge for one unit of exposure. It can be found by minimizing the variance of the hedge payoff at maturity.

For an ideal hedge, we would like the change in the futures price (ΔF) to match as exactly as possible the change in the value of the asset (ΔS) we wish to hedge, i.e.:

ΔS = ΔF

    The expected payoff from the hedge will be equal to the value of the cash position at maturity plus the payoff of the hedge (Johnson, 1960) or:

E(H) = X_S [E(S_2) − S_1] + X_F [E(F_2) − F_1]

with spot position X_S, a short futures market holding X_F, spot price S_1, expected spot price at maturity E(S_2), current futures contract price F_1 and expected futures price E(F_2) – excluding transaction costs.

    What we want is to find the value of the futures position that reduces the variability of price changes to the lowest possible level.

    The minimum-variance hedge ratio is then defined as the number of futures per unit of the spot asset that will minimize the variance of the hedged portfolio returns.

The variance of the portfolio return is ((The variance of the un-hedged position is: Var(U) = X_S² Var(ΔS).)):

Var(H) = X_S² Var(ΔS) + X_F² Var(ΔF) + 2 X_S X_F Covar(ΔS, ΔF)

where Var(ΔS) is the variance of the spot price change, Var(ΔF) the variance of the futures price change and Covar(ΔS, ΔF) the covariance between the spot and futures price changes. Letting h = X_F/X_S represent the proportion of the spot position hedged, the minimum value of Var(H) can then be found ((by minimizing Var(H) as a function of h)) as:

h* = Covar(ΔS, ΔF)/Var(ΔF), or equivalently: h* = Corr(ΔS, ΔF)·σ(ΔS)/σ(ΔF)

where Corr(ΔS, ΔF) is the correlation between the spot and futures price changes and σ denotes standard deviation, assuming that X_S is exogenously determined or fixed.

    Estimating the hedge coefficient

    It is also possible to estimate the optimal hedge (h*) using regression analysis. The basic equation is:

ΔS = a + h·ΔF + ε

with ε as the change in spot price not explained by the regression model. Since the basic OLS regression for this equation estimates the value of h* as:

h* = Covar(ΔS, ΔF)/Var(ΔF)

we can use this regression to find the solution that minimizes the objective function E(H). This is one of the reasons that the use of the objective function E(H) is so appealing. ((Note that other and very different objective functions could have been chosen.))

We can then use the coefficient of determination, R², as an estimate of the percentage reduction in the variability of changes in the value of the cash position from holding the hedged position – the hedge effectiveness (Ederington, 1979) ((Not taking into account variation margins etc.)).
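
A minimal sketch of both estimators (synthetic data with invented parameters; with real data, ΔS and ΔF would be the observed daily spot and futures price changes):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Synthetic daily price changes: futures and spot move closely together.
dF = rng.normal(0.0, 0.012, 250)                     # futures price changes
dS = 0.0001 + 1.0 * dF + rng.normal(0, 0.002, 250)   # spot price changes

# 1) Direct minimum-variance hedge ratio: h* = Cov(dS, dF) / Var(dF)
h_star = np.cov(dS, dF)[0, 1] / np.var(dF, ddof=1)

# 2) The same ratio as the slope of an OLS regression of dS on dF.
fit = sm.OLS(dS, sm.add_constant(dF)).fit()

print(f"h* (covariance): {h_star:.4f}")
print(f"h* (OLS slope):  {fit.params[1]:.4f}")       # identical by construction
print(f"hedge effectiveness (R^2): {fit.rsquared:.4f}")
```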

    The basis

Basis is defined as the difference between the spot price (S) and the futures price (F). When the expected change in the futures contract price is equal to the expected change in the spot price, the optimal variance-minimizing strategy is to set h* = 1. However, for most futures markets the futures price does not perfectly parallel the spot price, causing an element of basis risk to directly affect the hedging decision.

    A negative basis is called contango and a positive basis backwardation:

    1. When the spot price increases by more than the futures price, the basis increases and is said to “strengthen the basis” (when unexpected, this is favorable for a short hedge and unfavorable for a long hedge).
    2. When the futures price increases by more than the spot price, the basis declines and this is said to “weaken the basis” (when unexpected, this is favorable for a long hedge and unfavorable for a short hedge).

    There will usually be a different basis for each contract.

    The number of futures contracts

    The variance minimizing number of futures contracts N* will be:

N* = h*·X_S/Q_F

where Q_F is the size of one futures contract. Since futures contracts are marked to market every day, the daily losses are debited and daily gains credited to the parties’ accounts – settlement variations – i.e. the contracts are closed every day. The account will have to be replenished if it falls below the maintenance margin (margin call). If the account is above the initial margin, withdrawals can be made from the account.

Ignoring the incremental income effects from investing variation margin gains (or borrowing to cover variation margin losses), we want the hedge to generate h*·ΔF. Appreciating that there is an incremental effect, we want to accrue interest on a “tailed” hedge ĥ such that (Kawaller, 1997):

h*·ΔF = ĥ·ΔF·(1+r)^n, giving
ĥ = h*/(1+r)^n, or h*/(1 + r·n/365) if time to maturity is less than one year.

    Where:
    r = interest rate and
    n = number of days remaining to maturity of the futures contract.

    This amounts to adjusting the hedge by a present value factor. Tailing converts the futures position into a forward position. It negates the effect of daily resettlement, in which profits and losses are realized before the day the hedge is lifted.

For constant interest rates the tailed hedge (for h* < 1) rises over time to reach the exposure at the maturity of the hedge. Un-tailed, the hedge will over-hedge the exposure and increase the hedger’s risk. Tailing the hedge is especially important when the interest rate is high and the time to maturity long.

An appropriate interest rate would be one that reflects the average of the firm’s cost of capital (WACC) and the rate it would earn on its investments (ROIC), both of which will be stochastic variables in the simulation. The first is relevant when the futures contracts generate losses, the second when they generate gains. In practice some average of these rates is used. ((See FAS 133 and later amendments.))

There are traditionally two approaches to tailing:

    1. Re-balance the tail each day. In this case the tailed hedge ratio is adjusted each day to maturity of the futures contract. In this approach the adjustment declines each day, until at expiration there is no adjustment.
2. Use a constant tail (average): ĥ = h*/(1 + 0.5·r·N/365), where N is the original number of days remaining to maturity. In this shortcut, the adjustment is made at the time the hedge is put on and not changed. The hedge will start out too big and end too small, but will on average be correct.

For investors who trade actively, the first approach is more convenient; for inactive traders, the second is often used.
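
A small sketch putting the pieces together: the number of contracts N* and the two tailing variants (the example figures are invented; the 5,000-bushel contract size is only illustrative):

```python
def futures_contracts(h_star: float, spot_position: float, contract_size: float) -> float:
    """Variance-minimizing number of contracts: N* = h* * X_S / Q_F."""
    return h_star * spot_position / contract_size

def tailed_ratio(h_star: float, r: float, days_to_maturity: int) -> float:
    """Re-balanced tail (maturity under a year): h* / (1 + r*n/365)."""
    return h_star / (1 + r * days_to_maturity / 365)

def constant_tail_ratio(h_star: float, r: float, original_days: int) -> float:
    """Average (constant) tail, set once when the hedge is put on."""
    return h_star / (1 + 0.5 * r * original_days / 365)

# Hypothetical example: hedging 500,000 bushels with 5,000-bushel contracts.
h = 1.0073
print(futures_contracts(h, 500_000, 5_000))   # ~100.7 contracts, un-tailed
print(tailed_ratio(h, 0.05, 180))             # tail with 180 days remaining
print(constant_tail_ratio(h, 0.05, 180))      # constant-tail shortcut
```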

Since our models always incorporate stochastic interest rates, hedges discounted with the appropriate rates are calculated. This amounts to solving the set of stochastic simultaneous equations created by the hedge and the WACC/ROIC calculations, since the hedges will change their probability distributions. Note that the tailed hedge ratio will be a stochastic variable, and that minimizing the variance of the hedge will not necessarily maximize the company value. The value of ĥ that maximizes company value can only be found by simulation, given the board’s risk appetite/risk aversion.

    The Spot and Futures Price movements

    At any time there are a number of futures contracts for the same commodity simultaneously being priced. The only difference between them is the delivery month. A continuous contract takes the individual contracts in the futures market and splices them together. The resulting continuous series ((The simplest method of splicing is to tack successive delivery months onto each other. Although the prices in the history are real, the chart will also preserve the price gaps that are present between expiring deliveries and those that replace them.)) allows us to study the price history in the market from a single chart. The following graphs show the price movements ((To avoid price gap problems, many prefer to base analysis on adjusted contracts that eliminate roll-over gaps. There are two basic ways to adjust a series.
    Forward-adjusting works by beginning with the true price for the first delivery and then adjusting each successive set of prices up or down depending on whether the roll-over gap is positive or negative.
    Back-adjusting reverses the process. Current price are always real but historical prices are adjusted up or down. This is the often preferred method, since the series always will show the latest actual price. However, there is no perfect method producing a continuous price series satisfying all needs.)) for the spliced corn contracts C-2010U to 2011N and the spliced ethanol contracts EH-2010U to 2011Q.

    In the graphs the spot price is given by the blue line and the corresponding futures price by the red line.

    For the corn futures, we can see that there is a difference between the spot and the futures price – the basis ((The reasons for the price difference are transportation costs between delivery locations, storage costs and availability, and variations between local and worldwide supply and demand of a given commodity. In any event, this difference in price plays an important part in what is being actually pay for the commodity when you hedge.))  – but that the price movements of the futures follow the spot price closely or – vice versa.

The spliced contracts for bioethanol are a little different from the corn contracts. The delivery location is the same and the curves are juxtaposed very close to each other. There are however other differences.

    The regression – the futures assay

The selected futures contracts give us five parallel samples for the relation between the corn spot and futures price, and six for the relation between the ethanol spot and ethanol futures price. For every day in the period 8/2/2010 to 7/14/2011 we have from one to five observations of the corn relation (five replications), and from 8/5/2010 to 8/3/2011 we have one to twelve observations of the ethanol relation. Since we follow a set of contracts, the number of daily observations of the corn futures prices starts at five (twelve for the ethanol futures) and ends at only one as the contracts mature. We could of course also have selected a sample giving an equal number of observations every day.

    There are three likely models which could be fit:

1. Simple regression on the individual data points,
2. Simple regression on the daily means, and
3. Weighted regression on the daily means, using the number of observations as the weight.

    When the number of daily observations is equal all three models will have the same parameter estimates. The weighted and individual regressions will always have the same parameter estimates, but when the sample sizes are unequal these will be different from the unweighted means regression. Whether the weighted or unweighted model should be used when the number of daily observations is unequal will depend on the situation.

    Since we now have replications of the relation between spot and the futures price we have the opportunity to test for lack of fit from the straight line model.

In our case this approach has a small drawback. We are looking for the regression of the spot price changes against the price changes in the futures contract. This model however will give us the inverse: the regression of the price changes in the futures contract against the changes in spot price. The inverse of the slope of this regression, which is what we are looking for, will in general not give the correct answer (Thonnard, 2006). So we will use this approach (model #3) to test for linearity and then model #1 with all data for estimation of the slope.
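
A sketch of how models #1 and #3 can be set up, assuming a tidy data set with one row per contract observation per day (the column names are invented for illustration):

```python
import pandas as pd
import statsmodels.api as sm

# Assumed data: columns 'date', 'dS' (spot price change) and
# 'dF' (futures price change), one to several contracts per date.
def fit_models(df: pd.DataFrame):
    # Model #1: simple regression of dS on dF over all individual points;
    # its slope is the hedge ratio h*.
    m1 = sm.OLS(df["dS"], sm.add_constant(df["dF"])).fit()

    # Model #3: weighted regression on the daily means (here, as in the text,
    # dF regressed on dS), weighted by the number of observations per day.
    daily = df.groupby("date").agg(dS=("dS", "mean"),
                                   dF=("dF", "mean"),
                                   n=("dS", "size"))
    m3 = sm.WLS(daily["dF"], sm.add_constant(daily["dS"]),
                weights=daily["n"]).fit()
    return m1, m3

# Usage: m1, m3 = fit_models(df); h_star = m1.params["dF"]
```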

    Ideally we would like to find stable (efficient) hedge ratios in the sense that they can be used for more than one hedge and over a longer period of time, thus greatly simplifying the workload for ethanol producing companies.

    All prices, both spot and futures in the following, have been converted from $/gallon (ethanol) or $/bushel (corn) to $/kg.

    The Corn hedge ratio

    The analysis of variance table (ANOVA) for the weighted regression of the changes in the corn futures prices on the changes in corn spot prices (model#3):

The analysis of variance cautions us that the lack of fit to a linear model for all contracts is significant. However, the sum of squares due to this is very small compared to the sum of squares due to linearity – so we will regard the changes in the futures prices as generated by a linear function of the changes in the spot prices, and the hedge ratios found as efficient. In the figure below the circles give the daily means of the contracts and the line the weighted regression on these means:

    Nevertheless, this linear model will have to be monitored closely as further data becomes available.

    The result from the parameter estimation using simple regression (model#1) is given in the table below:

    The relation is:

ΔS = 0.0001 + 1.0073·ΔF + ε

    Giving the un-tailed corn hedge ratio h* = 1.0073

First, since the adjusted R-square value (0.9838) is an estimate of the percentage reduction in the variability of changes in the value of the cash position from holding the hedged position, a hedge based on this regression coefficient (slope) should be highly effective.

The ratio of the variance of the hedged position to that of the un-hedged position is equal to 1 − R². The variance of a hedged position based on this hedge ratio will be 12.7 % of the unhedged position.

We have thus eliminated 87.3 % of the variance of the unhedged position. For a simple model like this, that can be considered a good result.

    In the figure the thick black line gives the 95% confidence limits and the yellow area the 95% prediction limits. As we can see, the relationship between the daily price changes is quite tight thus promising the possibility of effective hedges.

Second, due to the differencing, the basis caused by the difference in delivery location has disappeared, and even if the constant term is significant, it is so small that it can with little loss be considered zero.

The R-square values would have been higher for the regressions on the means than for the regression above. This is because the total variability in the data would have been reduced by using means (note that the total degrees of freedom are reduced for the regressions on means). A regression on the means will thus always suggest greater predictive ability than a regression on individual data, because it predicts mean values, not individual values.

    The Ethanol hedge ratio

    The analysis of variance table (ANOVA) for the weighted regression of the changes in the ethanol futures prices on the changes in ethanol spot prices (model#3):

The analysis of variance again cautions us that the lack of fit to a linear model for all contracts is significant. In this case it is approximately ten times higher than for the corn contracts.

However, the sum of squares due to this is small compared to the sum of squares due to linearity – so we will regard the changes in the futures prices as generated by a close to linear function of the changes in the spot prices, and the hedge ratios found as “good enough”. In the figure below the circles give the daily means of the contracts and the line the weighted regression on these means:

    In this graph we can clearly see the deviation from a strictly linear model. The assumption of a linear model for the changes in ethanol spot and futures prices will have to be monitored very closely as further data becomes available.

    The result from the parameter estimation using simple regression (model#1) is given in the table below:

    The relation is:
ΔS = 1.0135·ΔF + ε

Giving the un-tailed ethanol hedge ratio h* = 1.0135

The adjusted R-square value (0.8105), estimating the percentage reduction in the variability of changes in the value of the cash position from holding the hedged position, is high even with the “lack of linearity”. A hedge based on this regression coefficient (slope) should then still be highly effective.

    The variance of a hedged position based on this hedge ratio will be 43.7 % of the unhedged position. It is not as good as for the corn contracts, but will still give a healthy reduction in the ethanol price risk facing the company.

As it turned out, we can use both of these estimation methods for the hedge ratio as a basis for strategy simulations, but one question remains unanswered: will this minimize the variance of the crush ratio?

    References

    Understanding Basis, Chicago Board of Trade, 2004.  http://www.gofutures.com/pdfs/Understanding-Basis.pdf

    http://www.cmegroup.com/trading/agricultural/files/AC-406_DDG_CornCrush_042010.pdf

    Bond, Gary E. (1984). “The Effects of Supply and Interest Rate Shocks in Commodity Futures Markets,” American Journal of Agricultural Economics, 66, pp. 294-301.

    Chen, S. Lee, C.F. and Shrestha, K (2003) “Futures hedge ratios: a review,” The Quarterly Review of Economics and Finance, 43 pp. 433–465

    Ederington, Louis H. (1979). “The Hedging Performance of the New Futures Markets,” Journal of Finance, 34, pp. 157-70

    Einstein, Albert (1923). Sidelights on Relativity (Geometry and Experience). P. Dutton., Co.

    Figlewski, S., Lanskroner, Y. and Silber, W. L. (1991) “Tailing the Hedge: Why and How,” Journal of Futures Markets, 11: pp. 201-212.

Johnson, Leland L. (1960). “The Theory of Hedging and Speculation in Commodity Futures,” Review of Economic Studies, 27, pp. 139-51.

Kawaller, I. G. (1997). “Tailing Futures Hedges/Tailing Spreads,” The Journal of Derivatives, Vol. 5, No. 2, pp. 62-70.

    Li, A. and Lien, D. D. (2003) “Futures Hedging Under Mark-to-Market Risk,” Journal of Futures Markets, Vol. 23, No. 4.

    Myers Robert J. and Thompson Stanley R. (1989) “Generalized Optimal Hedge Ratio Estimation,” American Journal of Agricultural Economics, Vol. 71, No. 4, pp. 858-868.

    Thonnard, M., (2006), Confidence Intervals in Inverse Regression. Diss. Technische Universiteit Eindhoven, Department of Mathematics and Computer Science, Web. 5 Apr. 2013. <http://alexandria.tue.nl/extra1/afstversl/wsk-i/thonnard2006.pdf>.

    Stein, Jerome L.  (1961). “The Simultaneous Determination of Spot and Futures Prices,” American Economic Review, 51, pp. 1012-25.


  • You only live once


    This entry is part 4 of 4 in the series The fallacies of scenario analysis

    You only live once, but if you do it right, once is enough.
    — Mae West

Let’s say that you are considering new investment opportunities for your company and that the sales department has guesstimated that the market for one of your products will most likely grow by a little less than 5 % per year. You then observe that the product already has a substantial market and that in fifteen years’ time it will nearly have doubled:

Building a new plant to accommodate this market growth will be a large investment, so you find that more information about the probability distribution of the product’s future sales is needed. Your sales department then “estimates” the market’s yearly growth to have a mean close to zero, a lower quartile of minus 5 % and an upper quartile of plus 7 %.

    Even with no market growth the investment is a tempting one since the market already is substantial and there is always a probability of increased market shares.

As quartiles are given, you rightly calculate that there is a 25 % probability that the growth will be above 7 %, but also a 25 % probability that it will be below minus 5 %. On the face of it, and with you being not too risk averse, this looks like a gamble worth taking.

Then you are informed that the distribution will be heavily left-skewed – opening up considerable downside risk. In fact it turns out that it looks like this:

    A little alarmed you order the sales department to come up with a Monte Carlo simulation giving a better view of the future possible paths of the market development.

They return with the graph below, giving the paths for the first ten runs in the simulation, with the blue line giving the average value and the green and red lines the 90 % and 10 % limits of the one thousand simulated outcomes:

The blue line is the yearly ensemble average ((A set of multiple predictions that are all valid at the same time. The term “ensemble” is often used in physics and physics-influenced literature. In probability theory literature the term probability space is more prevalent.

An ensemble provides reliable information on forecast uncertainties (e.g., probabilities) from the spread (diversity) amongst ensemble members.

Also see: Ensemble forecasting; a numerical prediction method used to generate a representative sample of the possible future states of a dynamic system. Ensemble forecasting is a form of Monte Carlo analysis: multiple numerical predictions are conducted using slightly different initial conditions that are all plausible given the past and current set of observations. Often used in weather forecasting.)); that is, the time series of averages of outcomes. The series shows a small decline in market size, but not at an alarming rate. The sales department’s advice is to go for the investment and try to conquer market shares.

You then note that the ensemble average implies that you are able to jump from path to path, and since each is a different realization of the future, that will not be possible – you only live once!

You again call the sales department, asking them to calculate each path’s average growth rate over time – using the geometric mean – and report the average of these averages to you. When you plot both the ensemble and the time averages you find quite a large difference between them:

    The time average shows a much larger market decline than the ensemble average.

It can be shown that the ensemble average will always overestimate the growth (Peters, 2010) and thus can falsely lead to wrong conclusions about the market development.
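
The effect is easy to reproduce. A minimal sketch (all parameters invented, but with the same flavor as above): multiplicative growth where the ensemble average holds steady while the typical path shrinks:

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_years = 1_000, 15

# Yearly growth factors: mean 1.0 (zero expected growth), sd 0.25 (assumed).
growth = rng.normal(1.0, 0.25, size=(n_paths, n_years)).clip(0.05)
paths = 100 * growth.cumprod(axis=1)   # market size along each path, start 100

ensemble_avg = paths[:, -1].mean()               # average across paths at t=15
time_avg_growth = np.exp(np.log(growth).mean())  # geometric mean growth factor

print(f"ensemble average end value:  {ensemble_avg:.1f}")        # near 100
print(f"median end value:            {np.median(paths[:, -1]):.1f}")  # well below
print(f"time-average growth factor:  {time_avg_growth:.4f}")     # < 1
```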

    If we look at the distribution of path end values we find that the lower quartile is 64 and the upper quartile is 118 with a median of 89:

It thus turns out that the process behind the market development is non-ergodic ((The term ergodic describes dynamical systems which have the same behavior averaged over time as averaged over space.)) or non-stationary ((Stationarity is a necessary, but not sufficient, condition for ergodicity.)). In the ergodic case the ensemble and time averages would have been equal, and the problem above would not have appeared.

    The investment decision that at first glance looked a simple one is now more complicated and can (should) not be decided based on market development alone.

    Since uncertainty increases the further we look into the future, we should never assume that we have ergodic situations. The implication is that in valuation or M&A analysis we should never use an “ensemble average” in the calculations, but always do a full simulation following each time path!

    References

    Peters, O. (2010). Optimal leverage from non-ergodicity. Quantitative Finance, doi:10.1080/14697688.2010.513338


  • The probability distribution of the bioethanol crush margin


    This entry is part 1 of 2 in the series The Bio-ethanol crush margin

    A chain is no stronger than its weakest link.

    Introduction

    Producing bioethanol is a high risk endeavor with adverse price development and crumbling margins.

In the following we will illustrate some of the risks the bioethanol producer faces, using corn as feedstock. However, these risks will persist regardless of the feedstock and production process chosen. The elements in the discussion below can therefore be applied to any and all types of bioethanol production:

1. What average yield (kg ethanol per kg feedstock) can we expect? And what is the shape of the yield distribution?
2. What will the future price ratio of feedstock to ethanol be? And what volatility can we expect?

The crush margin ((The relationship between prices in the cash market is commonly referred to as the Gross Production Margin.)) measures the difference between the sales proceeds of finished bioethanol and its feedstock ((It can also be considered as the production’s throughput: the rate at which the system converts raw materials to money. Throughput is net sales less variable cost, generally the cost of the most important raw materials. See: Throughput Accounting.)).

With current technology, one bushel of corn can be converted into approx. 2.75 gallons of ethanol and 17 pounds of DDG (distillers’ dried grains). The crush margin (or gross processing margin) is then:

1. Crush margin = 0.0085 × DDG price + 2.8 × ethanol price – corn price

Since 65 % to 75 % of the variable cost in bioethanol production is the cost of corn, the crush margin is an important metric, especially since the margin in addition must cover all other expenses like energy, electricity, interest, transportation, labor etc. – and, in the long term, the facility’s fixed costs.
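
A worked example of formula 1 (the prices are invented for illustration; units as in the CME formula: DDG in $/ton, where 17 lb = 0.0085 ton, ethanol in $/gallon, corn in $/bushel):

```python
def crush_margin(ddg_price: float, ethanol_price: float, corn_price: float) -> float:
    """Gross processing margin per bushel of corn."""
    return 0.0085 * ddg_price + 2.8 * ethanol_price - corn_price

# Hypothetical prices: DDG $180/ton, ethanol $2.40/gal, corn $6.50/bushel.
print(f"crush margin: ${crush_margin(180, 2.40, 6.50):.2f} per bushel")
# 0.0085*180 + 2.8*2.40 - 6.50 = 1.53 + 6.72 - 6.50 = $1.75 per bushel
```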

    The following graph taken from the CME report: Trading the corn for ethanol crush, (CME, 2010) gives the margin development in 2009 and the first months of 2010:

This graph gives a good picture of the uncertainties that face the bioethanol producers, and can be a helpful tool when hedging purchases of corn and sales of the products ((The historical chart going back to APR 2005 is available at the CBOT web site.)).

    The Crush Spread, Crush Profit Margin and Crush Ratio

    There are a number of other ways to formulate the crush risk (CME, July 11. 2011):

    The CBOT defines the “Crush Spread” as the Estimated Gross Margin per Bushel of Corn. It is calculated as follows:

2. Crush Spread = (Ethanol price per gallon × 2.8) – Corn price per bushel, or as

    3. Crush Profit margin = Ethanol price – (Corn price/2.8).

    Understanding these relationships is invaluable in trading ethanol stocks ((We will return to this in a later post.)).

    By rearranging the crush spread equation, we can express the spread as its ratio to the product price (simplifying by keeping bi-products like DDG etc. out of the equation):

    4. Crush ratio = Crush spread/Ethanol price = y – p,

where y = EtOH yield (gal/bushel corn) and p = corn price/ethanol price.

    We will in the following look at the stochastic nature of y and p and thus the uncertainty in forecasting the crush ratio.

The crush spread and thus the crush ratio are calculated using data from the same period. They therefore give the result of an unhedged operation. Even if the production period is short – two to three days – it will be possible to hedge both the corn and ethanol prices. But to do that in a consistent and effective way we have to look into the inherent volatility of the operations.

    Ethanol yield

The ethanol yield is usually set to 2.682 gal/bushel corn, assuming 15.5 % moisture. The yield is however a stochastic variable contributing to the uncertainty in the crush ratio forecasts. As only the starch in corn can be converted to ethanol, we need to know the content of extractable starch in a standard bushel of corn – corrected for normal loss and moisture. In the following we will lean heavily on the article “A Statistical Analysis of the Theoretical Yield of Ethanol from Corn Starch” by Tad W. Patzek (Patzek, 2006), which fits our purpose perfectly. All relevant references can be found in the article.

The aim of his article was to establish the mean extractable starch in hybrid corn and the mean highest possible yield of ethanol from starch. We however are also interested in the probability distributions for these variables – since no production company will ever experience the mean values (ensembles), and since the average return over time will always be less than the return using ensemble means ((We will return to this in a later post.)) (Peters, 2010).

    The purpose of this exercise is after all to establish a model that can be used as support for decision making in regard to investment and hedging in the bioethanol industry over time.

    From (Patzek, 2006) we have that the extractable starch (%) can be described as approx. having a normal distribution with mean 66.18 % and standard deviation of 1.13:

    The nominal grain loss due to dirt etc. can also be described as approx. having a normal distribution with mean 3 % and a standard deviation of 0.7:

The probability distribution for the theoretical ethanol yield (kg/kg corn) can then be found by Monte Carlo simulation ((See formula #3 in (Patzek, 2006).)) as:

– having an approx. normal distribution with mean 0.364 kg EtOH/kg of dry grain and standard deviation of 0.007. On average we will need 2.75 kg of clean dry grain to produce one kilo, or 1.27 liter, of ethanol ((With a specific density of 0.787 kg/l.)).

    Since we now have a distribution for ethanol yield (y) as kilo of ethanol per kilo of corn we will in the following use price per kilo both for ethanol and corn, adjusting for the moisture (natural logarithm of moisture in %) in corn:

    We can also use this to find the EtHO yield starting with wet corn and using gal/bushel corn as unit (Patzek, 2006):

giving as theoretical value a mean of 2.64 gal/wet bushel with a standard deviation of 0.05 – significantly lower than the “official” figure of 2.8 gal/wet bushel used in the CBOT calculations. More important to us, however, is the fact that we can easily get yields much lower than expected, and thus a real risk of lower earnings than expected. Bear in mind that to get a yield above 2.64 gallons of ethanol per bushel of corn, all steps in the process must continuously be at or close to their maximum efficiency – which with high probability never will happen.

    Corn and ethanol prices

    Looking at the price developments since 2005 it is obvious that both the corn and ethanol prices have a large variability ($/kg and dry corn):

The long-term trends show a disturbing development with decreasing ethanol prices, increasing corn prices and thus an increasing price ratio:

    “Risk is like fire: If controlled, it will help you; if uncontrolled, it will rise up and destroy you.”

    Theodore Roosevelt

    The unhedged crush ratio

    Since the crush ratio on average is:

Crush ratio = 0.364 – p, where:
0.364 = average EtOH yield (kg EtOH/kg of dry grain) and
p = corn price/ethanol price

The price ratio (p) has to be less than 0.364 for the crush ratio to be positive at the outset. As of January 2011 the price ratio has overstepped that threshold, and for the first months of 2011 it has stayed above it.

    To get a picture of the risk an unhedged bioethanol producer faces only from normal variation in yield and forecasted variation in the price ratio we will make a simple forecast for April 2011 using the historic time series information on trend and seasonal factors:

    The forecasted probability distribution for the April price ratio is given in the frequency graph below:

    This represents the price risk the producer will face. We find that the mean value for the price ratio will be 0.323 with a standard deviation of 0.043. By using this and the distribution for ethanol yield we can by Monte Carlo simulation forecast the April distribution for the crush ratio:
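
A sketch of that Monte Carlo step, combining the yield distribution above with the forecasted April price ratio (both treated, as an assumption, as independent normals):

```python
import numpy as np

rng = np.random.default_rng(9)
n = 100_000

y = rng.normal(0.364, 0.007, n)  # ethanol yield, kg EtOH per kg dry corn
p = rng.normal(0.323, 0.043, n)  # forecasted April corn/ethanol price ratio

crush_ratio = y - p
print(f"mean crush ratio: {crush_ratio.mean():.3f}")
print(f"P(crush ratio < 0): {(crush_ratio < 0).mean():.1%}")
# Under these assumed distributions, roughly one run in six
# ends with a negative crush ratio.
```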

As we see, negative values for the crush ratio are well inside the field of possible outcomes:

    The actual value of the average price ratio for April turned out to be 0.376 with a daily maximum of 0.384 and minimum of 0.363. This implies that the April crush ratio with 90 % probability would have been between -0.005 and -0.199, with only the income from DDGs to cover the deficit and all other costs.

    Hedging the crush ratio

The distribution for the price ratio forecast above clearly points out the necessity of price ratio hedging (Johnson, 1960) and (Stein, 1961).

The time series chart above shows both a negative trend and seasonal variations in the price ratio. In the short run there is nothing much to do about the trend, but in the longer run other feedstocks and better processes will probably change the trend development (Shapouri et al., 2002).

However, what immediately stands out are the possibilities to exploit the seasonal fluctuations in both markets:

Ideally, raw material is purchased in the months when seasonal factors are low and ethanol sold in the months when seasonal factors are high. In practice this is not fully possible; restrictions on manufacturing, warehousing, market presence, liquidity, working capital and costs set limits to the producer’s degrees of freedom (Dalgran, 2009).

    Fortunately, there are a number of tools in both the physical and financial markets available to manage price risks; forwards and futures contracts, options, swaps, cash-forward, and index and basis contracts. All are available for the producers who understand financial hedging instruments and are willing to participate in this market. See: (Duffie, 1989), (Hull, 2003) and (Bjørk, 2009).

The objective is to change the margin distribution’s shape (red) from having a large part of its left tail on the negative part of the margin axis to one resembling the green curve below, where the negative part has been removed but most of the upside (right tail) has been preserved – that is, to eliminate negative margins, reduce variability, maintain the upside potential and thus reduce the probability of operating at a net loss:

Even if the ideal solution does not exist, a large number of solutions through combinations of instruments can provide satisfactory results. In principle, it does not matter where these instruments exist, since the commodity and financial markets are interconnected. From a strategic standpoint, the purpose is to exploit fluctuations in the market to capture opportunities while mitigating unwanted risks (Mallory et al., 2010).

    Strategic Risk Management

    To manage price risk in commodity markets is a complex topic. There are many strategic, economic and technical factors that must be understood before a hedging program can be implemented.

Since all the hedging instruments have a cost, and since only ranges of future outcomes – not exact prices – can be forecasted in the individual markets, costs and effectiveness are uncertain.

    In addition, the degrees of desired protection have to be determined. Are we seeking to ensure only a positive margin, or a positive EBITDA, or a positive EBIT? With what probability and to what cost?

    A systematic risk management process is required to tailor an integrated risk management program for each individual bioethanol plant:

The choice of instruments will define different strategies that will affect company liquidity and working capital, and ultimately company value. Since the effect of each of these strategies will be of a stochastic nature, it will only be possible to distinguish between them using the concept of stochastic dominance (selecting strategy).

Models that can describe the business operations and underlying risk can be a starting point for such an understanding. Linked to balance simulation they will provide invaluable support to decisions on the scope and timing of hedging programs.

    It is only when the various hedging strategies are simulated through the balance so that the effect on equity value can be considered that the best strategy with respect to costs and security level can be determined – and it is with this that S@R can help.

    References

    Bjørk, T.,(2009). Arbitrage Theory in Continuous Time. Oxford University Press, Oxford.

CME Group, (2010). Trading the corn for ethanol crush,
http://www.cmegroup.com/trading/agricultural/corn-for-ethanol-crush.html

CME Group, (July 11, 2011). Ethanol Outlook Report, http://cmegroup.barchart.com/ethanol/

    Dalgran, R.,A., (2009) Inventory and Transformation Hedging Effectiveness in Corn Crushing. Journal of Agricultural and Resource Economics 34 (1): 154-171.

    Duffie, D., (1989). Futures Markets. Prentice Hall, Englewood Cliffs, NJ.

    Hull, J. (2003). Options, Futures, and Other Derivatives (5th edn). Prentice Hall, Englewood Cliffs, N.J.

    Johnson, L., L., (1960). The Theory of Hedging and Speculation in Commodity Futures, Review of Economic Studies , XXVII, pp. 139-151.

    Mallory, M., L., Hayes, D., J., & Irwin, S., H. (2010). How Market Efficiency and the Theory of Storage Link Corn and Ethanol Markets. Center for Agricultural and Rural Development Iowa State University Working Paper 10-WP 517.

    Patzek, T., W., (2004). Sustainability of the Corn-Ethanol Biofuel Cycle, Department of Civil and Environmental Engineering, U.C. Berkeley, Berkeley, CA.

    Patzek, T., W., (2006). A Statistical Analysis of the Theoretical Yield of Ethanol from Corn Starch, Natural Resources Research, Vol. 15, No. 3.

    Peters, O. (2010). Optimal leverage from non-ergodicity. Quantitative Finance, doi:10.1080/14697688.2010.513338.

    Shapouri,H., Duffield,J.,A., & Wang, M., (2002). The Energy Balance of Corn Ethanol: An Update. U.S. Department of Agriculture, Office of the Chief Economist, Office of Energy Policy and New Uses. Agricultural Economic Report No. 814.

    Stein, J.L. (1961). The Simultaneous Determination of Spot and Futures Prices. American Economic Review, vol. 51, p.p. 1012-1025.


• Modelling World 2011


     

     

S@R participated as a keynote speaker at the Modelling World conference held in London June 16. The theme was forecasting and decision making in an uncertain world. The event was organized by Local Transport Today Ltd. and covered a broad range of issues in transport modelling (conference programme as PDF file).

The presentation can be found as a PDF file here.

     

     

  • The tool that would improve everybody’s toolkit


Edge, which every year ((http://www.edge.org/questioncenter.html)) invites scientists, philosophers, writers, thinkers and artists to opine on a major question of the moment, asked this year: “What scientific concept would improve everybody’s cognitive toolkit?”

The questions are designed to provoke fascinating, yet inspiring answers, and are typically open-ended, such as: “What will change everything?” (2008), “What are you optimistic about?” (2007), and “How is the internet changing the way you think?” (last year’s question). Often these questions ((Since 1998.)) are turned into paperback books.

This year many of the 151 contributors pointed to risk and uncertainty in their answers. In the following we bring excerpts from some of the answers. We advise the interested reader to look up the complete answers:

    A Probability Distribution

    The notion of a probability distribution would, I think, be a most useful addition to the intellectual toolkit of most people.

    Most quantities of interest, most projections, most numerical assessments are not point estimates. Rather they are rough distributions — not always normal, sometimes bi-modal, sometimes exponential, sometimes something else.

    Related ideas of mean, median, and variance are also important, of course, but the simple notion of a distribution implicitly suggests these and weans people from the illusion that certainty and precise numerical answers are always attainable.

    JOHN ALLEN PAULOS, Professor of Mathematics, Temple University, Philadelphia.

    Randomness

    The First Law of Randomness: There is such a thing as randomness.
    The Second Law of Randomness: Some events are impossible to predict.
The Third Law of Randomness: Random events behave predictably in aggregate even if they’re not predictable individually.

    CHARLES SEIFE, Professor of Journalism, New York University; formerly journalist, Science magazine; Author, Proofiness: The Dark Arts of Mathematical Deception.

    The Uselessness of Certainty

Every knowledge, even the most solid, carries a margin of uncertainty. (I am very sure about my own name … but what if I just hit my head and got momentarily confused?) Knowledge itself is probabilistic in nature, a notion emphasized by some currents of philosophical pragmatism. Better understanding of the meaning of probability, and especially realizing that we never have, nor need, ‘scientifically proven’ facts, but only a sufficiently high degree of probability, in order to take decisions and act, would improve everybody’s conceptual toolkit.

    CARLO ROVELLI, Physicist, University of Aix-Marseille, France; Author, The First Scientist: Anaximander and the Nature of Science.

    Uncertainty

    Until we can quantify the uncertainty in our statements and our predictions, we have little idea of their power or significance. So too in the public sphere. Public policy performed in the absence of understanding quantitative uncertainties, or even understanding the difficulty of obtaining reliable estimates of uncertainties usually means bad public policy.

    LAWRENCE KRAUSS, Physicist, Foundation Professor & Director, Origins Project, Arizona State University; Author, A Universe from Nothing; Quantum Man: Richard Feynman’s Life in Science.

    Risk Literacy

    Literacy — the ability to read and write — is the precondition for an informed citizenship in a participatory democracy. But knowing how to read and write is no longer enough. The breakneck speed of technological innovation has made risk literacy as indispensable in the 21st century as reading and writing were in the 20th century. Risk literacy is the ability to deal with uncertainties in an informed way.

    GERD GIGERENZER, Psychologist; Director of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development in Berlin; Author, Gut Feelings.

    Living is fatal

    The ability to reason clearly in the face of uncertainty. If everybody could learn to deal better with the unknown, then it would improve not only their individual cognitive toolkit (to be placed in a slot right next to the ability to operate a remote control, perhaps), but the chances for humanity as a whole.

    SETH LLOYD, Quantum Mechanical Engineer, MIT; Author, Programming the Universe.

    Uncalculated Risk

    We humans are terrible at dealing with probability. We are not merely bad at it, but seem hardwired to be incompetent, in spite of the fact that we encounter innumerable circumstances every day which depend on accurate probabilistic calculations for our wellbeing. This incompetence is reflected in our language, in which the common words used to convey likelihood are “probably” and “usually” — vaguely implying a 50% to 100% chance. Going beyond crude expression requires awkwardly geeky phrasing, such as “with 70% certainty,” likely only to raise the eyebrow of a casual listener bemused by the unexpected precision. This blind spot in our collective consciousness — the inability to deal with probability — may seem insignificant, but it has dire practical consequences. We are afraid of the wrong things, and we are making bad decisions.

    GARRETT LISI, Independent Theoretical Physicist

    And there is more … much more at the Edge site