
Category: Balance sheet and P&L simulation

  • Introduction to Simulation Models

    Introduction to Simulation Models

    This entry is part 4 of 6 in the series Balance simulation

     

    Simulation models set out to mimic real-life company operations – the transformation of raw materials and labor into finished products – in such a way that they can be used as support for strategic decision making.

    A full simulation model will usually consist of two separate models:

    1. an EBITDA model that describes the particular firm’s operations and
    2. a generic P&L and Balance simulation model (PL&B).

     

     

    The EBITDA model ladder

    Both the deterministic and the stochastic balance simulation can be approached as a ladder with two steps, where the first is especially well suited as an introduction to risk simulation and the second gives a full-blown risk analysis. In these successive steps the EBITDA calculations will be based on:

    1. financial information only, using coefficients of fabrication and unit prices (e.g. kg flour per 1000 breads and cost of flour per kg, etc.) as direct input to the balance model – the direct method – and
    2. EBITDA models to give a detailed technical description of the company’s operations.

    The first step uses coefficients of fabrication and their variations to give a low-effort (cost) alternative, usually using the internal accounts as basis. In many cases this will give a ‘good enough’ description of the company – its risks and opportunities. It can be based on existing investment and market plans. The data needed for the company’s economic environment (taxes, interest rates etc.) will be the same in both alternatives.

    This step is especially well suited as an introduction to risk simulation and the art of communicating risk and uncertainty throughout the firm. It can also profitably be used where time and data are limited and where one wishes to limit efforts in an initial stage. Data and assumptions can later be augmented to much more sophisticated analyses within the same framework. This way the analysis can successively be built in the direction the previous studies suggest.

    The second step implies setting up a dedicated EBITDA subroutine to the balance model. This can then give detailed answers to a broad range of questions about markets, capacity driven investments, operational performance and uncertainty, but entails a higher degree of effort from both the company and S@R. This is a tool for long-term planning and strategy development.

    The EBITDA model can be both a stand-alone model and a subroutine to the PL&B model. The stand-alone EBITDA model can be used to study the firm’s operations in detail and how different operational strategies will or can affect the EBITDA outcomes and their distribution.

    When connected to the PL&B model it will act as a subroutine giving the necessary information to produce the P&L and ultimately the Balance – and their outcome distributions.

    This gives great flexibility in model formulations and the opportunity to fit models to different industries and accommodate for the data available.

    P&L and Balance simulation

    The generic PL&B model – based on the IFRS standard – can be used for a wide range of business activities. It both:

    1. describes the firm’s financial environment (taxes, interest rates, currency etc.) and
    2. acts as a testing bed for financial strategies (hedging, translation risk, etc.).

    Since S@R has set out to create models that can answer both deterministic and stochastic questions, the PL&B model is a real balance simulation model – not a simple cash flow forecast model.

    Since every run in the simulation produces a complete P&L and Balance, uncertainty curves (distributions) can be produced for any financial metric like ‘yearly result’, ‘free cash flow’, ‘economic profit’, ‘equity value’, ‘IRR’ or ‘translation gain/loss’.

    People say they want models that are simple, but what they really want is models with the necessary features – that are easy to use. If something is complex but well designed, it will be easy to use – and this holds for our models.

    The results from these analyses can be presented in different forms, from detailed traditional financial reports to graphs describing the range of possible outcomes for all items in the P&L and Balance (+ much more), looking at the coming one to five years (short term) or five to fifteen years (long term) and showing the impacts on e.g. equity value, company value, operating income etc.

    The goal is to find the distribution for the firm’s equity value which will incorporate all uncertainty facing the firm.

    This uncertainty gives both the shape and the location of the equity value distribution, and this is what we – if possible – are aiming to change (a small simulation sketch follows the list):

    1. reducing downside risk by reducing the left tail (blue curve)
    2. increasing expected company value by moving the curve to the right (green curve)
    3. increasing the upside potential by increasing the right tail (red curve) etc.
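
    A minimal Monte Carlo sketch of this idea in Python – all distributions and figures below are invented for illustration; in the real model every draw would come from a complete P&L and Balance simulation:

        import numpy as np

        rng = np.random.default_rng(42)
        N = 10_000  # number of simulation runs

        # Hypothetical equity-value outcomes for two strategies.
        current = rng.normal(loc=1000, scale=300, size=N)
        # A hedging strategy assumed to cut the left tail and shift the mean right.
        hedged = np.maximum(rng.normal(1050, 220, size=N), 400)

        for name, eq in [("current", current), ("hedged", hedged)]:
            p5, p50, p95 = np.percentile(eq, [5, 50, 95])
            print(f"{name:8s} mean={eq.mean():7.1f}  5%={p5:7.1f}  median={p50:7.1f}  95%={p95:7.1f}")

    Comparing the 5 % percentiles shows the reduced downside risk; comparing the means shows the shift in expected equity value.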

     

    The Data

    To be able to simulate the operations we need to put into the model all variables that will affect the firm’s net yearly result. Most of these will be collected by S@R from outside sources like central banks, local authorities and others, but some will have to be collected from the firm.

    The production and firm-specific variables are related to the firm’s everyday activities. Their historic values can be collected from internal accounts or from production reports. Someone in the procurement, production or sales department will have these records – and almost always the controllers. The rest will be variables inside the domain of the CEO and the company treasurer.

    The variables fall into five groups:

    i. general variables describing the firm’s financial environment,
    ii. variables describing the firm’s strategy,
    iii. general variables used for forecasting purposes,
    iv. direct problem-related variables and
    v. the firm-specific:
    a. production coefficients and
    b. cost of raw materials and labor-related variables.

    The first group will contain – for all countries either delivering raw materials or buying the finished product(s) – variables like taxes, spot exchange rates etc. For the firm’s domestic country it will in addition contain variables like VAT rates, taxes on investments and dividend income, depreciation rates and methods, initial tax allowances, overdraft interest rates etc.

    The second group will contain variables like: minimum cash levels, debt distribution on short and long term loans and currencies, hedge ratios, targeted leverage, economic depreciation etc.

    The third group will contain variables needed for forecasting purposes: yield curves, inflation forecasts, GDP forecasts etc. The expected values and their 5 % and 95 % probability limits will be used to forecast exchange rates, interest rates, demand etc. They will be collected by S@R.

    The fourth group will contain variables related to sales forecasts: yearly air temperature profiles (and variation) for forecasting beer sales and yearly water temperature profiles (and variation) for forecasting increase in biomass in fish farming.

    The fifth group will contain variables that specify the production and its costs. They will vary according to the type of operations, e.g.: operating rate (%), max days of production, tools maintenance (hours per 10,000 units), error rate (errors per 1,000 units), waste (% of weight of produced unit), cycle time (units per min), number of machines per shift (#), concession density (kg per m3), feed rates (%), mortality rates (%) etc. These variables specify the production and will be stochastic in the sense that they are not constant but vary inside a given – theoretical or historic – range.

    To simulate the costs of production we use the coefficients of fabrication and their unit costs. Both the coefficients and their unit costs will always be stochastic in nature, and they can vary with capacity utilization: energy per unit produced (kWh/unit) and energy price (cost per kWh), malt use (kg per hectoliter), malt price (per kg), maximum takeoff weight (ton), takeoff charge (per ton), specific consumption of wood (m3/Adt), cost of round wood (per m3), etc.
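
    As a small illustration, the simulated cost of one input is simply the (stochastic) coefficient of fabrication multiplied by its (stochastic) unit cost. A minimal sketch in Python – the bakery figures are pure assumptions:

        import numpy as np

        rng = np.random.default_rng(7)
        N = 10_000

        # Coefficient of fabrication: kg flour per 1000 breads (assumed triangular range).
        flour_per_1000 = rng.triangular(420, 450, 500, size=N)
        # Unit cost: flour price per kg (assumed normal).
        flour_price = rng.normal(0.55, 0.05, size=N)
        production = 80_000  # breads per year, held fixed here

        flour_cost = production / 1000 * flour_per_1000 * flour_price
        print(f"expected flour cost: {flour_cost.mean():,.0f}")
        print(f"90% interval: {np.percentile(flour_cost, 5):,.0f} to {np.percentile(flour_cost, 95):,.0f}")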

    The uncertainty (and risk) stemming from all groups of variables will be propagated through the P&L and down to the Balance, ending up as volatility in the equity distribution.

    The aim is to estimate the economic impact that such uncertainty may have on corporate earnings at risk. This will add a third dimension – probability – to all forecasts, give new insight and the ability to deal with uncertainties in an informed way – and thus benefits beyond ordinary spreadsheet exercises.

    Methods

    To be able to add uncertainty to financial models, we also have to add more complexity. This complexity is inevitable, but in our case, it is desirable and it will be well managed inside our models.

    Most companies have some sort of model describing the company’s operations. They are used mostly for budgeting, but in some cases also for forecasting cash flow and other important performance measures.

    If the client already has spreadsheet models describing the operations, we can build on these. There is no reason to reinvent what has already been done – thus saving time and resources that can be better utilized in other phases of the project.

    We know however that forecasts based on average values are on average wrong. In addition, deterministic models miss the important uncertainty dimension that reveals both the different risks facing the company and the opportunities they bring forth.

    An interesting feature is the model’s ability to start simulations with an empty opening balance. This can be used to assess divisions that do not have an independent balance, since the model will call for equity/debt etc. based on a target ratio, according to the simulated production and sales and the necessary investments. Questions about further investment in divisions or product lines can be studied this way.

    In some cases, we have used both approaches for the same client, using the last approach for smaller subsidiaries with production structures differing from the main company’s.

    The first approach can also be considered as an introduction and stepping-stone to a more complete EBITDA model and detailed simulations.

    Time and effort

    The workload for the client is usually limited to a small team (1 to 3 persons) acting as project leaders and principal contacts, assuring that all the necessary information describing value and risks in the client’s operations can be collected as basis for modeling and calculations. However, the type of data will have to be agreed upon depending on the scope of the analysis.

    Very often, key people from the controller group will be adequate for this work, and if they do not have the direct knowledge, they usually know whom to ask. The work for this team, depending on the scope and choice of method (see above), can vary in effective time from a few days to a couple of weeks, but this can be stretched over three to four weeks up to the same number of months – depending on the scope of the project.

    For S@R, the period will depend on the availability of key personnel from the client and the availability of data. It can take from one to three weeks of normal work for the first step, up to three to six months for the second step with more complex models. The total time will also depend on the number of analyses that need to be run and the type of reports that have to be delivered.

    The team’s participation in the project also makes communication of the results up or down in the organization simpler. Since the input data is collected by templates, the responsible departments and persons get ownership of assumptions, data and results. These templates thus visualize the flow of data through the organization and the interdependence between the departments – facilitating the communication of risk and of the different strategies both reviewed and selected.

    No knowledge of or expertise in uncertainty calculations or statistical methods is required on the client’s side. The team will through ‘osmosis’ acquire the necessary knowledge. Usually the team finds this an exciting experience.

  • Corn and ethanol futures hedge ratios

    Corn and ethanol futures hedge ratios

    This entry is part 2 of 2 in the series The Bio-ethanol crush margin

     

    A large amount of literature has been published discussing hedging techniques and a number of different hedging models and statistical refinements to the OLS model that we will use in the following. For a comprehensive review see “Futures hedge ratios: a review,” (Chen et al., 2003).

    We are here looking for hedge models and hedge ratio estimations techniques that are “good enough” and that can fit into valuation models using Monte Carlo simulation.

    The ultimate purpose is to study hedging strategies using P&L and Balance simulation to forecast the probability distribution for the company’s equity value. By comparing the distributions for the different strategies, we will be able to select the hedging strategy that best fits the board’s risk appetite/risk aversion and that at the same time “maximizes” the company value.

    Everything should be made as simple as possible, but not simpler. – Einstein, Reader’s Digest. Oct. 1977.

    To use futures contracts for hedging we have to understand the objective: a futures contract serves as a price-fixing mechanism. In their simplest form, futures prices are prices set today to be paid in the future for goods. If properly designed and implemented, hedge profits will offset the loss from an adverse price move. In like fashion, hedge losses will also eliminate the effects of a favorable price change. Ultimately, the success of any hedge program rests on the implementation of a correctly sized futures position.

    The minimum variation hedge

    This is often referred to as the volatility-minimizing hedge for one unit of exposure. It can be found by minimizing the variance of the hedge payoff at maturity.

    For an ideal hedge, we would like the change in the futures price (ΔF) to match as exactly as possible the change in the value of the asset (ΔS) we wish to hedge, i.e.:

    ΔS = ΔF

    The expected payoff from the hedge will be equal to the value of the cash position at maturity plus the payoff of the hedge (Johnson, 1960) or:

    E(H) = X_S [E(S2) - S1] + X_F [E(F2) - F1]

    with spot position X_S, a short futures market holding X_F, spot price S1, expected spot price at maturity E(S2), current futures contract price F1 and expected futures price E(F2) – excluding transaction costs.

    What we want is to find the value of the futures position that reduces the variability of price changes to the lowest possible level.

    The minimum-variance hedge ratio is then defined as the number of futures per unit of the spot asset that will minimize the variance of the hedged portfolio returns.

    The variance of the portfolio return is ((The variance of the un-hedged position is Var(U) = X_S² Var(ΔS).)):

    Var(H) = X_S² Var(ΔS) + X_F² Var(ΔF) + 2 X_S X_F Cov(ΔS, ΔF)

    where Var(ΔS) is the variance of the spot price change, Var(ΔF) the variance of the futures price change and Cov(ΔS, ΔF) the covariance between the spot and futures price changes. Letting h = X_F/X_S represent the proportion of the spot position hedged, the minimum value of Var(H) can then be found ((by minimizing Var(H) as a function of h)) as:

    h* = Cov(ΔS, ΔF) / Var(ΔF), or equivalently: h* = Corr(ΔS, ΔF) σ(ΔS)/σ(ΔF)

    where Corr(ΔS, ΔF) is the correlation between the spot and futures price changes and σ denotes the standard deviation, assuming that X_S is exogenously determined or fixed.
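
    The formula translates directly into code. A minimal sketch in Python, with toy price series – the numbers are invented for the example, not the contract data analyzed below:

        import numpy as np

        def min_variance_hedge_ratio(spot, futures):
            """h* = Cov(ΔS, ΔF) / Var(ΔF), from price-change series."""
            dS, dF = np.diff(spot), np.diff(futures)
            return np.cov(dS, dF)[0, 1] / np.var(dF, ddof=1)

        spot = np.array([5.90, 5.95, 6.02, 5.98, 6.10, 6.05])
        futures = np.array([6.00, 6.04, 6.12, 6.07, 6.20, 6.14])
        print(f"h* = {min_variance_hedge_ratio(spot, futures):.3f}")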

    Estimating the hedge coefficient

    It is also possible to estimate the optimal hedge (h*) using regression analysis. The basic equation is:

    ΔS = a + h ΔF + ε

    with ε as the change in spot price not explained by the regression model. Since the basic OLS regression for this equation estimates the value of h* as:

    h* = Cov(ΔS, ΔF) / Var(ΔF)

    we can use this regression to find the solution that minimizes the objective function E(H). This is one of the reasons that use of the objective function E(H) is so appealing. ((Note that other and very different objective functions could have been chosen.))

    We can then use the coefficient of determination, R², as an estimate of the percentage reduction in the variability of changes in the value of the cash position from holding the hedged position – the hedge effectiveness (Ederington, 1979) ((Not taking into account variation margins etc.)).
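
    A minimal sketch of this regression estimate in Python using statsmodels. The price changes are simulated here; the slope from the corn regression reported below is used as the assumed true value:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        dF = rng.normal(0, 0.02, size=250)                       # futures price changes
        dS = 0.0001 + 1.0073 * dF + rng.normal(0, 0.003, 250)    # spot price changes

        model = sm.OLS(dS, sm.add_constant(dF)).fit()
        h_star = model.params[1]  # slope = minimum-variance hedge ratio h*
        print(f"h* = {h_star:.4f}, R² = {model.rsquared:.4f}")
        # Hedge effectiveness: R² estimates the proportional variance reduction,
        # so the hedged standard deviation is sqrt(1 - R²) of the unhedged one.
        print(f"hedged std as share of unhedged: {np.sqrt(1 - model.rsquared):.3f}")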

    The basis

    Basis is defined as the difference between the spot price (S) and the futures price (F). When the expected change in the futures contract price is equal to the expected change in the spot price, the optimal variance-minimizing strategy is to set h* = 1. However, in most futures markets the futures price does not perfectly parallel the spot price, causing an element of basis risk to directly affect the hedging decision.

    A negative basis is called contango and a positive basis backwardation:

    1. When the spot price increases by more than the futures price, the basis increases and is said to “strengthen the basis” (when unexpected, this is favorable for a short hedge and unfavorable for a long hedge).
    2. When the futures price increases by more than the spot price, the basis declines and this is said to “weaken the basis” (when unexpected, this is favorable for a long hedge and unfavorable for a short hedge).

    There will usually be a different basis for each contract.

    The number of futures contracts

    The variance minimizing number of futures contracts N* will be:

    N* = h* X_S / Q_F

    where Q_F is the size of one futures market contract. Since futures contracts are marked to market every day, daily losses are debited and daily gains credited to the parties’ accounts – settlement variations – i.e. the contracts are closed every day. The account will have to be replenished if it falls below the maintenance margin (margin call). If the account is above the initial margin, withdrawals can be made.

    Ignoring the incremental income effects from investing variation margin gains (or borrowing to cover variation margin losses), we want the hedge to generate h* ΔF. Appreciating that there is an incremental effect, we want to accrue interest on a “tailed” hedge such that (Kawaller, 1997):

    h* ΔF = ĥ ΔF (1 + r)^n, or
    ĥ = h*/(1 + r)^n, or ĥ = h*/(1 + r n/365) if time to maturity is less than one year.

    Where:
    r = interest rate and
    n = number of days remaining to maturity of the futures contract.

    This amounts to adjusting the hedge by a present value factor. Tailing converts the futures position into a forward position. It negates the effect of daily resettlement, in which profits and losses are realized before the day the hedge is lifted.

    For constant interest rates the tailed hedge (for h* < 1) rises over time to reach the exposure at the maturity of the hedge. Un-tailed, the hedge will over-hedge the exposure and increase the hedger’s risk. Tailing the hedge is especially important when the interest rate is high and the time to maturity long.

    An appropriate interest rate would be one that reflects the average of the firm’s cost of capital (WACC) and the rate it would earn on its investments (ROIC), both of which will be stochastic variables in the simulation. The first would be relevant in cases where the futures contracts generate losses, the second where they generate gains. In practice some average of these rates is used. ((See FAS 133 and later amendments.))
    There are traditionally two approaches to tailing:

    1. Re-balance the tail each day. In this case the tailed hedge ratio is adjusted each day to maturity of the futures contract. In this approach the adjustment declines each day, until at expiration there is no adjustment.
    2. Use a constant tail (average): ĥ = h*/(1 + 0.5 r N/365), where N is the original number of days remaining to maturity. In this shortcut, the adjustment is made at the time the hedge is put on, and not changed. The hedge will start out too big and end too small, but will on average be correct (see the sketch below).
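
    A minimal sketch of both the tailing adjustment and the contract count in Python – the position, contract size, rate and maturity are assumed figures:

        def tailed_hedge_ratio(h_star, r, n_days):
            # ĥ = h*/(1 + r·n/365) for maturities under one year
            return h_star / (1 + r * n_days / 365)

        def n_contracts(h, spot_position, contract_size):
            # N* = h · X_S / Q_F
            return h * spot_position / contract_size

        # Example: hedging 500,000 bushels with 5,000-bushel corn contracts,
        # 90 days to maturity, 4% interest, h* from the corn regression below.
        h = tailed_hedge_ratio(1.0073, 0.04, 90)
        print(f"tailed h = {h:.4f}")
        print(f"contracts to sell: {n_contracts(h, 500_000, 5_000):.1f}")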

    For investors trading actively, the first approach is more convenient; for inactive traders, the second is often used.

    Since our models always incorporate stochastic interest rates, hedges discounted with the appropriate rates are calculated. This amounts to solving the set of stochastic simultaneous equations created by the hedge and the WACC/ROIC calculations since the hedges will change their probability distributions. Note that the tailed hedge ratio will be a stochastic variable, and that minimizing the variance of the hedge will not necessarily maximize the company value. The value of – ĥ – that maximizes company value can only be found by simulation given the board’s risk appetite / risk aversion.

    The Spot and Futures Price movements

    At any time there are a number of futures contracts for the same commodity simultaneously being priced. The only difference between them is the delivery month. A continuous contract takes the individual contracts in the futures market and splices them together. The resulting continuous series ((The simplest method of splicing is to tack successive delivery months onto each other. Although the prices in the history are real, the chart will also preserve the price gaps that are present between expiring deliveries and those that replace them.)) allows us to study the price history in the market from a single chart. The following graphs show the price movements ((To avoid price gap problems, many prefer to base analysis on adjusted contracts that eliminate roll-over gaps. There are two basic ways to adjust a series. Forward-adjusting works by beginning with the true price for the first delivery and then adjusting each successive set of prices up or down depending on whether the roll-over gap is positive or negative. Back-adjusting reverses the process. Current prices are always real but historical prices are adjusted up or down. This is often the preferred method, since the series will always show the latest actual price. However, there is no perfect method producing a continuous price series satisfying all needs.)) for the spliced corn contracts C-2010U to 2011N and the spliced ethanol contracts EH-2010U to 2011Q.

    In the graphs the spot price is given by the blue line and the corresponding futures price by the red line.

    For the corn futures, we can see that there is a difference between the spot and the futures price – the basis ((The reasons for the price difference are transportation costs between delivery locations, storage costs and availability, and variations between local and worldwide supply and demand of a given commodity. In any event, this difference in price plays an important part in what is actually paid for the commodity when you hedge.)) – but that the price movements of the futures follow the spot price closely, or vice versa.

    The spliced contracts for bioethanol are a little different from the corn contracts. The delivery location is the same and the curves are juxtaposed very close to each other. There are however other differences.

    The regression – the futures assay

    The selected futures contracts give us five parallel samples for the relation between the corn spot and futures price, and six for the relation between the ethanol spot and ethanol futures price. For every day in the period 8/2/2010 to 7/14/2011 we have from one to five observations of the corn relation (five replications), and from 8/5/2010 to 8/3/2011 we have one to twelve observations of the ethanol relation. Since we follow a set of contracts, the number of daily observations of the corn futures prices starts at five (twelve for the ethanol futures) and ends at only one as the contracts mature. We could of course also have selected a sample giving an equal number of observations every day.

    There are three likely models which could be fit:

    1. Simple regression on the individual data points,
    2. Simple regression on the daily means, and
    3. Weighted regression on the daily means using the number of observations as the weight.

    When the number of daily observations is equal all three models will have the same parameter estimates. The weighted and individual regressions will always have the same parameter estimates, but when the sample sizes are unequal these will be different from the unweighted means regression. Whether the weighted or unweighted model should be used when the number of daily observations is unequal will depend on the situation.

    Since we now have replications of the relation between spot and the futures price we have the opportunity to test for lack of fit from the straight line model.

    In our case this approach has a small drawback. We are looking for the regression of the spot price changes on the price changes in the futures contract. This model however will give us the inverse: the regression of the price changes in the futures contract on the changes in the spot price. The inverse of the slope of this regression, which is what we are looking for, will in general not give the correct answer (Thonnard, 2006). So we will use this approach (model #3) to test for linearity and then model #1 with all data for estimation of the slope.

    Ideally we would like to find stable (efficient) hedge ratios in the sense that they can be used for more than one hedge and over a longer period of time, thus greatly simplifying the workload for ethanol producing companies.

    All prices, both spot and futures in the following, have been converted from $/gallon (ethanol) or $/bushel (corn) to $/kg.

    The Corn hedge ratio

    The analysis of variance table (ANOVA) for the weighted regression of the changes in the corn futures prices on the changes in corn spot prices (model#3):

    The analysis of variance cautions us that the lack of fit to a linear model for all contracts is significant. However, the sum of squares due to this is very small compared to the sum of squares due to linearity – so we will regard the changes in the futures prices as generated by a linear function of the changes in the spot prices, and the hedge ratios found as efficient. In the figure below the circles give the daily means of the contracts and the line is the weighted regression on these means:

    Nevertheless, this linear model will have to be monitored closely as further data becomes available.

    The result from the parameter estimation using simple regression (model#1) is given in the table below:

    The relation is:

    ΔS = 0.0001 + 1.0073 ΔF + ε

    Giving the un-tailed corn hedge ratio h* = 1.0073

    First, since the adjusted R² value (0.9838) is an estimate of the percentage reduction in the variability of changes in the value of the cash position from holding the hedged position, a hedge based on this regression coefficient (slope) should be highly effective.

    The ratio of the variance of the hedged position to that of the un-hedged position is equal to 1 - R². The standard deviation of a hedged position based on this hedge ratio will thus be 12.7 % (√(1 - 0.9838)) of that of the unhedged position.

    We have thus eliminated 87.3 % of the standard deviation of the unhedged position. For a simple model like this, that can be considered a good result.

    In the figure the thick black line gives the 95% confidence limits and the yellow area the 95% prediction limits. As we can see, the relationship between the daily price changes is quite tight thus promising the possibility of effective hedges.

    Second, due to the differencing, the basis caused by the difference in delivery location has disappeared, and even if the constant term is significant, it is so small that it can with little loss be considered zero.

    The R² values would have been higher for the regressions on the means than for the regression above. This is because the total variability in the data would have been reduced by using means (note that the total degrees of freedom are reduced for the regressions on means). A regression on the means will thus always suggest greater predictive ability than a regression on individual data, because it predicts mean values, not individual values.

    The Ethanol hedge ratio

    The analysis of variance table (ANOVA) for the weighted regression of the changes in the ethanol futures prices on the changes in ethanol spot prices (model#3):

    The analysis of variance again cautions us that the lack of fit to a linear model for all contracts is significant.  In this case it is approximately ten times higher than for the corn contracts.

    However, the sum of squares due to this is small compared to the sum of squares due to linearity – so we will regard the changes in the futures prices as generated by a close to linear function of the changes in the spot prices, and the hedge ratios found as “good enough”. In the figure below the circles give the daily means of the contracts and the line is the weighted regression on these means:

    In this graph we can clearly see the deviation from a strictly linear model. The assumption of a linear model for the changes in ethanol spot and futures prices will have to be monitored very closely as further data becomes available.

    The result from the parameter estimation using simple regression (model#1) is given in the table below:

    The relation is:
    ΔS = 1.0135 ΔF + ε

    Giving the un-tailed ethanol hedge ratio h* = 1.0135

    The adjusted R² value (0.8105), estimating the percentage reduction in the variability of changes in the value of the cash position from holding the hedged position, is high even with the “lack of linearity”. A hedge based on this regression coefficient (slope) should then still be highly effective.

    The standard deviation of a hedged position based on this hedge ratio will be 43.7 % of that of the unhedged position. This is not as good as for the corn contracts, but will still give a healthy reduction in the ethanol price risk facing the company.

    As it turned out, we can use both of these estimation methods for the hedge ratio as basis for strategy simulations, but one question remains unanswered: will this minimize the variance of the crush ratio?

    References

    Understanding Basis, Chicago Board of Trade, 2004.  http://www.gofutures.com/pdfs/Understanding-Basis.pdf

    http://www.cmegroup.com/trading/agricultural/files/AC-406_DDG_CornCrush_042010.pdf

    Bond, G. E. (1984). “The Effects of Supply and Interest Rate Shocks in Commodity Futures Markets,” American Journal of Agricultural Economics, 66, pp. 294-301.

    Chen, S., Lee, C. F. and Shrestha, K. (2003). “Futures hedge ratios: a review,” The Quarterly Review of Economics and Finance, 43, pp. 433-465.

    Ederington, L. H. (1979). “The Hedging Performance of the New Futures Markets,” Journal of Finance, 34, pp. 157-170.

    Einstein, A. (1923). Sidelights on Relativity (Geometry and Experience). New York: E. P. Dutton & Co.

    Figlewski, S., Lanskroner, Y. and Silber, W. L. (1991). “Tailing the Hedge: Why and How,” Journal of Futures Markets, 11, pp. 201-212.

    Johnson, L. L. (1960). “The Theory of Hedging and Speculation in Commodity Futures,” Review of Economic Studies, 27, pp. 139-151.

    Kawaller, I. G. (1997). “Tailing Futures Hedges/Tailing Spreads,” The Journal of Derivatives, Vol. 5, No. 2, pp. 62-70.

    Li, A. and Lien, D. D. (2003). “Futures Hedging Under Mark-to-Market Risk,” Journal of Futures Markets, Vol. 23, No. 4.

    Myers, R. J. and Thompson, S. R. (1989). “Generalized Optimal Hedge Ratio Estimation,” American Journal of Agricultural Economics, Vol. 71, No. 4, pp. 858-868.

    Stein, J. L. (1961). “The Simultaneous Determination of Spot and Futures Prices,” American Economic Review, 51, pp. 1012-1025.

    Thonnard, M. (2006). Confidence Intervals in Inverse Regression. Diss. Technische Universiteit Eindhoven, Department of Mathematics and Computer Science. Web. 5 Apr. 2013. <http://alexandria.tue.nl/extra1/afstversl/wsk-i/thonnard2006.pdf>.

    Endnotes

  • Plans based on average assumptions ……

    Plans based on average assumptions ……

    This entry is part 3 of 4 in the series The fallacies of scenario analysis

     

    The Flaw of Averages states that: Plans based on the assumption that average conditions will occur are usually wrong. (Savage, 2002)

    Many economists use what they believe to be most likely ((Most likely estimates are often made in-house, based on experience and knowledge about the firm’s operations.)) or average values ((Forecasts for many types of variables can be bought from suppliers of ‘consensus forecasts’.)) (Timmermann, 2006; Gavin & Pande, 2008) as input for the exogenous variables in their spreadsheet calculations.

    We know however that:

    1. the probability for any variable to have an outcome equal to any of these values is close to zero,
    2. and the probability of having outcomes for all the (exogenous) variables in the spreadsheet model equal to their averages is virtually zero.

    So why do they do it? They obviously lack the necessary tools to calculate with uncertainty!

    But if a small deviation from the most likely value is admissible, how often will the use of a single estimate like the most probable value be ‘correct’?

    We can try to answer that by looking at some probability distributions that may represent the ‘mechanism’ generating some of these variables.

    Let’s assume that we are entering a market with a new product. We of course know the upper and lower limits of our future possible market share, but not the actual number, so we guess it to be the average value, 0.5. Since we have no prior knowledge, we have to assume that the market share is uniformly distributed between zero and one:

    If we then plan sales and production for a market share between 0.4 and 0.5, we would out of a hundred trials have guessed the market share correctly only 13 times. In fact we would have overestimated the market share 31 times and underestimated it 56 times.

    Let’s assume a production process where the acceptable deviation from some fixed measurement is 0.5 mm and where the actual deviation has a normal distribution with expected deviation equal to zero, but with a standard deviation of one:

    Using the average deviation to calculate the expected error rate will falsely lead us to believe it to be zero, while in fact it will in the long run be about 62 %.

    Let’s assume that we have a contract for drilling a tunnel, and that the cost will depend on the hardness of the rock to be drilled. The contract states that we will have to pay a minimum of $ 0.5M and a maximum of $ 2M, with the most likely cost being $ 1M. The contract and our imperfect knowledge of the geology make us assume the cost distribution to be triangular:

    Using the average ((The bin containing the average in the histogram.)) as an estimate for the expected cost will give a correct answer in only 14 out of 100 trials, with cost being lower in 45 and higher in 41.
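
    The three examples above are easy to verify by simulation. A minimal sketch in Python – the distributions are those assumed in the text, and the proportions will differ slightly from the 100-trial figures above because of sampling:

        import numpy as np

        rng = np.random.default_rng(0)
        N = 100_000

        # Uniform market share on [0, 1]: how often does it fall in [0.4, 0.5]?
        share = rng.uniform(0, 1, N)
        print(f"share in [0.4, 0.5]: {np.mean((share >= 0.4) & (share <= 0.5)):.2f}")

        # Standard-normal deviation with a +/- 0.5 mm tolerance: long-run error rate.
        deviation = rng.normal(0, 1, N)
        print(f"error rate: {np.mean(np.abs(deviation) > 0.5):.2f}")  # ~0.62, not 0

        # Triangular drilling cost (0.5, 1, 2 $M): the mean exceeds the most likely value.
        cost = rng.triangular(0.5, 1.0, 2.0, N)
        print(f"most likely cost 1.0, expected cost {cost.mean():.2f}")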

    Now, let’s assume that we are performing deep sea drilling for oil and that we have a single estimate for the cost to be $ 500M. However we expect the cost deviation to be distributed as in the figure below, with a typical small negative cost deviation and on average a small positive deviation:

    So, for all practical purposes this is considered a low economic risk operation. What has been overlooked are the tails of the cost deviation distribution, which turns out to be Cauchy distributed with long tails, including the possibility of catastrophic events:

    The event far out on the right tail might be considered a Black Swan (Taleb, 2007), but as we now know they happen from time to time.

    So, even more important than the fact that using a single estimate will prove you wrong most of the time: it will also obscure what you do not know – the risk of being wrong.

    Don’t worry about the average, worry about how large the variations are, how frequently they occur and why they exist. (Fung, 2010)

    Rather than “Give me a number for my report,” what every executive should be saying is “Give me a distribution for my simulation.”(Savage, 2002)

    References

    Gavin, W. T. & Pande, G. (2008). FOMC Consensus Forecasts, Federal Reserve Bank of St. Louis Review, May/June 2008, 90(3, Part 1), pp. 149-163.

    Fung, K. (2010). Numbers Rule Your World. New York: McGraw-Hill.

    Savage, S. L. (2002). The Flaw of Averages. Harvard Business Review, (November), 20-21.

    Savage, S. L. & Danziger, J. (2009). The Flaw of Averages. New York: Wiley.

    Taleb, N. (2007). The Black Swan. New York: Random House.

    Timmermann, A. (2006). An Evaluation of the World Economic Outlook Forecasts, IMF Working Paper WP/06/59, www.imf.org/external/pubs/ft/wp/2006/wp0659.pdf

    Endnotes

  • Working Capital and the Balance Sheet

    Working Capital and the Balance Sheet

    This entry is part 2 of 3 in the series Working Capital

     

    The conservation-of-value principle says that it doesn’t matter how you slice the financial pie with financial engineering, share repurchases, or acquisitions; only improving cash flows will create value. (Dobbs, Huyett & Koller, 2010).

    The above, taken from “The CEO’s guide to corporate finance” will be our starting point and Occam’s razor the tool to simplify the balance sheet using the concept of working- and operating capital.

    To get a better grasp of the firm’s real activities we will also separate non-operating assets from operating assets – since it is the latter that define the firm’s operations.

    The operating current assets are the sum of the minimum cash level, inventories and accounts receivable. The difference between total and operating current assets is assumed placed in excess marketable securities – and will not be included in the working capital.

    Many firms have cash levels above and well beyond what is really needed as working capital, tying up capital that could have had better uses, generating higher returns than mere short-term placements.

    The net working capital, found by deducting non-interest-bearing current liabilities from operating current assets, will be the actual amount of working capital needed to safely run the firm’s operations – no more and no less.

    By summing net property, plant and equipment and other operating fixed assets we find the total amount of fixed assets involved in the firm’s operations. This, together with net working capital, forms the firm’s operating assets – the assets that will generate the cash flow and return on equity that the owners are expecting.
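
    The regrouping is simple arithmetic. A minimal sketch in Python, with invented balance-sheet figures:

        # All figures are assumptions for illustration.
        total_current_assets = 500.0
        minimum_cash = 50.0
        inventories = 120.0
        receivables = 180.0
        non_interest_bearing_liabilities = 210.0  # payables, accrued taxes etc.
        net_ppe = 600.0                           # net property, plant and equipment
        other_operating_fixed_assets = 40.0

        operating_current_assets = minimum_cash + inventories + receivables
        excess_marketable_securities = total_current_assets - operating_current_assets
        net_working_capital = operating_current_assets - non_interest_bearing_liabilities
        operating_assets = net_ppe + other_operating_fixed_assets + net_working_capital

        print(f"excess marketable securities: {excess_marketable_securities:.0f}")
        print(f"net working capital: {net_working_capital:.0f}")
        print(f"operating assets: {operating_assets:.0f}")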

    The non-operating part – excess marketable securities and non-operating investments – should be kept as small as possible, since it will at best only give an average market return. The rest of the above calculations give us the firm’s total liabilities and equity, which we will use to set up the firm’s ordinary balance sheet:

    However, by introducing operating, non-operating and working capital we can get a clearer picture of the firm’s activities ((Used in yearly reports by Stora Enso, a large international pulp & paper company, listed on NASDAQ OMX in Stockholm and Helsinki.)):

    The balance sheet’s bottom line has been reduced by the smaller of operating current assets and non-interest-bearing debt, and the difference between them – the working capital – will be an asset or a liability depending on which of them has the larger value:

    The above calculations are an integral part of our balance simulation model, and the report that can be produced for planning, strategy and risk assessment from the simulation can be viewed here: report for the most likely outcome (PDF, 32 pp). This report can be produced for every run in the simulation, giving the opportunity to look at tail events that might arise, distorting expectations.

    Simplicity is the ultimate sophistication. — Leonardo da Vinci

    References

    Dobbs, R., Huyett, B., & Koller, T. (2010). The CEO’s guide to corporate finance. McKinsey Quarterly, 4. Retrieved from http://www.mckinseyquarterly.com/home.aspx

    Endnotes

  • Uncertainty modeling

    Uncertainty modeling

    This entry is part 2 of 3 in the series What We Do

    Prediction is very difficult, especially about the future.
    Niels Bohr. Danish physicist (1885 – 1962)

    Strategy @ Risk’s models provide the possibility to study risk and uncertainties related to operational activities (cost, prices, suppliers, markets, sales channels etc.), financial issues (interest rate risk, exchange rate risk, translation risk, taxes etc.), strategic issues (investments in new or existing activities, valuation and M&As etc.) – and for a wide range of budgeting purposes.

    All economic activities have an inherent volatility that is an integrated part of its operations. This means that whatever you do some uncertainty will always remain.

    The aim is to estimate the economic impact that such critical uncertainty may have on corporate earnings at risk. This will add a third dimension – probability – to all forecasts and give new insight: the ability to deal with uncertainties in an informed way – and thus benefits beyond ordinary spreadsheet exercises.

    The results from these analyses can be presented in the form of B/S and P&L looking at the coming one to five years (short term) or five to fifteen years (long term), showing the impacts on e.g. equity value, company value, operating income etc. The purpose is to:

    • Improve predictability in operating earnings and their expected volatility
    • Improve budgeting processes, predicting budget deviations and their probabilities
    • Evaluate alternative strategic investment options at risk
    • Identify and benchmark investment portfolios and their uncertainty
    • Identify and benchmark individual business units’ risk profiles
    • Evaluate equity values and enterprise values and their uncertainty in M&A processes, etc.

    Methods

    To be able to add uncertainty to financial models, we also have to add more complexity. This complexity is inevitable, but in our case, it is desirable and it will be well managed inside our models.

    People say they want models that are simple, but what they really want is models with the necessary features – that are easy to use. If something is complex but well designed, it will be easy to use – and this holds for our models.

    Most companies have some sort of model describing the company’s operations. They are mostly used for budgeting, but in some cases also for forecasting cash flow and other important performance measures. Almost all are deterministic models based on expected or average values of input data; sales, cost, interest and currency rates etc.

    We know however that forecasts based on average values are on average wrong. In addition, deterministic models miss the important uncertainty dimension that reveals both the different risks facing the company and the opportunities they bring forth.

    S@R has set out to create models that can give answers to both deterministic and stochastic questions, by linking dedicated EBITDA models to a holistic balance simulation taking into account all important factors describing the company. The basis is a real balance simulation model – not a simple cash flow forecast model.

    Both the deterministic and stochastic balance simulation can be set about in two different alternatives:

    1. by using an EBITDA model to describe the company’s operations, or
    2. by using coefficients of fabrication (e.g. kg flour per 1000 breads etc.) as direct input to the balance model – the ‘short cut’ method.

    The first approach implies setting up a dedicated EBITDA subroutine to the balance model. This will give detailed answers to a broad range of questions about markets, capacity-driven investments, operational performance and uncertainty, but entails a higher degree of effort from both the company and S@R. This is a tool for long-term planning and strategy development.

    The second (‘the short cut’) uses coefficients of fabrication and their variations, and is a low-effort (cost) alternative, usually using the internal accounts as basis. This will in many cases give a ‘good enough’ description of the company – its risks and opportunities. It can be based on existing investment and market plans. The data needed for the company’s economic environment (taxes, interest rates etc.) will be the same in both alternatives:

    The ‘short cut’ approach is especially suited for quick appraisals of M&A cases where time and data are limited and where one wishes to limit efforts in an initial stage. Later, the data and assumptions can be augmented to much more sophisticated analyses within the same ‘short cut’ framework. In this way the analysis can successively be built in the direction the previous studies suggested.

    This also makes it a good tool for short-term (3-5 years) analysis and even for budget assessment. Since it will use a limited number of variables – usually less than twenty – describing the operations, it is easy to maintain and operate. The variables describing financial strategy and the economic environment come in addition, but will be easy to obtain.

    Used in budgeting it will give the opportunity to evaluate budget targets, their probable deviations from the expected result, and the probable upside or downside given the budget target (upside/downside ratio).

    Done this way, analyses can be run for subsidiaries across countries, translating the P&L and Balance to any currency for benchmarking, investment appraisals, risk and opportunity assessments etc. The final uncertainty distributions can then be ‘aggregated’ to show the global risk for the parent company.

    An interesting feature is the model’s ability to start simulations with an empty opening balance. This can be used to assess divisions that do not have an independent balance, since the model will call for equity/debt etc. based on a target ratio, according to the simulated production and sales and the necessary investments. Questions about further investment in divisions or product lines can be studied this way.

    Since all runs (500 to 1000) in the simulation produce a complete P&L and Balance, the uncertainty curve (distribution) for any financial metric like ‘yearly result’, ‘free cash flow’, ‘economic profit’, ‘equity value’, ‘IRR’ or ‘translation gain/loss’ can be produced.

    In some cases we have used both approaches for the same client, using the last approach for smaller subsidiaries with production structures differing from the main company’s.

    The second approach can also be considered as an introduction and stepping-stone to a more holistic EBITDA model.

    Time and effort

    The workload for the client is usually limited to a small team (1 to 3 persons) acting as project leaders and principal contacts, assuring that all the necessary information describing value and risks in the client’s operations can be collected as basis for modeling and calculations. However, the type of data will have to be agreed upon depending on the scope of the analysis.

    Very often, key people from the controller group will be adequate for this work, and if they do not have the direct knowledge they usually know whom to ask. The work for this team, depending on the scope and choice of method (see above), can vary in effective time from a few days to a couple of weeks, but this can be stretched over three to four weeks up to the same number of months.

    For S@R, the time frame will depend on the availability of key personnel from the client and the availability of data. It can take from one to three weeks of normal work for the second alternative, up to three to six months for the first alternative with more complex models. The total time will also depend on the number of analyses that need to be run and the type of reports that have to be delivered.

    S@R_ValueSim

    Selecting strategy

    Models like this are excellent for selection and assessment of strategies. Since we can find the probability distribution for the equity value, changes in it brought about by different strategies will form a basis for selection or adjustment of the current strategy. Models including real option strategies are a natural extension of these simulation models:

    If there is a strategy whose curve lies to the right of and under the curves of all other feasible strategies, it will be the stochastically dominant one. If the curves cross, further calculations need to be done before a stochastically dominant or preferable strategy can be found.
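
    A minimal sketch of a first-order dominance check in Python – the equity-value draws are invented; in practice they would come from the simulation runs. A strategy dominates when its cumulative distribution lies at or below the other’s everywhere:

        import numpy as np

        def first_order_dominates(a, b, grid=200):
            # True if a's empirical CDF lies at or below b's on the whole grid.
            xs = np.linspace(min(a.min(), b.min()), max(a.max(), b.max()), grid)
            cdf = lambda s, x: np.searchsorted(np.sort(s), x, side="right") / len(s)
            return all(cdf(a, x) <= cdf(b, x) for x in xs)

        rng = np.random.default_rng(3)
        base = rng.normal(1000, 250, 1000)   # strategy B equity values
        shifted = base + 100                 # strategy A: same shape, moved right
        print(first_order_dominates(shifted, base))  # True: A dominates B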

    Types of problems we aim to address:

    The effects of uncertainties on the P&L and Balance, and the effects of the Board’s strategies (market, hedging etc.) on future P&L and Balance sheets, evaluating:

    • Market position and potential for growth
    • Effects of tax and capital cost
    • Strategies
    • Business units, country units or product lines –  capital allocation – compare risk, opportunity and expected profitability
    • Valuations, capital cost and debt requirements, individually and effect on company
    • The future cash-flow volatility of company and the individual BU’s
    • Investments, M&A actions, their individual value, necessary commitments and impact on company
    • Etc.

    The aim, regardless of approach, is to quantify not only the company’s single and aggregated risks, but also its potential – making the company capable of performing detailed planning and of executing earlier and more apt actions against uncertain factors.

    Used in budgeting, this will improve budget stability through higher insight into cost-side risks and income-side potentials. This is achieved by an active budget-forecast process; the control-adjustment cycle will teach the company to better target realistic budgets – with better stability and increased company value as a result.

    This is most clearly seen when effort is put into correctly evaluating the effects of strategies, projects and investments on the enterprise. The best way to do this is by comparing and choosing strategies through analyzing the individual strategies’ risks and potential – and selecting the alternative that is (stochastically) dominant given the company’s chosen risk profile.

    A severe depression like that of 1920-1921 is outside the range of probability. –The Harvard Economic Society, 16 November 1929

  • Big Issues Needs Big Tools

    Big Issues Needs Big Tools

    This entry is part 3 of 3 in the series What We Do

     

    You can always amend a big plan, but you can never expand a little one. I don’t believe in little plans. I believe in plans big enough to meet a situation which we can’t possibly foresee now. – Harry S. Truman, American statesman (33rd US president, 1945-53)

    We believe you know your business best and will in your capacity implement the necessary resources, competence, tools and methods for running a successful and efficient organization.

    Still, issues related to uncertainty – whether in finance, stakeholders, production, purchasing or sales – have in most cases grown with an increasingly complex business environment. Excellent systems for a range of processes – consolidation, customer relationship management, accounting – have kept up with this complexity, and so has your most important tool – people.

    But we believe you do not possess the best available method and tool for bringing people – competence and experience – together with economic/financial facts, assumptions and economic/financial tools.

    You know that your budgets, valuations, projections, estimates and scenario analyses are all made and presented with valuable information regarding uncertainty left out along the way. The cause may be that human experience related to risk is hard to capture, that analyzing, understanding and projecting macro or micro risks is hard, or that the tools were not designed to capture risk. It is a fact that most complex, big issues important to companies are decided on insufficient information, a portion of luck, gut feeling and beliefs in market turns/stability/cycles or other comforting assumptions shared by peers.

    Or you are restricted to giving guidelines, min/max orders and specifications, and to trusting third-party experts that one hopes are better capable of capturing risk and potential in a narrow area of expertise. Regardless of whether this risk spreading or differentiation works – you need the best assistance when setting your guidelines and road-map, for your internal as well as your external resources.

    Systems and methods ((A Skeptic’s Guide to Computer Models (PDF, 25 pp), by John D. Sterman. This paper is reprinted from Sterman, J. D. (1991). A Skeptic’s Guide to Computer Models. In Barney, G. O. et al., Managing a Nation: The Microcomputer Software Catalog. Boulder, CO: Westview Press, 209-229.)) are never better than human experience, knowledge and excellence. But if you want a method and tool that can capture the best of your existing decision-making process and bring it to a new level, you should look closer at a stochastic, complete P&L/Balance simulation model for those big issues and big decisions.

    If you are not familiar with stochastic simulations and probability distributions, take a look at a report for the most likely outcome (PDF, 32 pp) from the simulations – similar reports could have been made for the outcomes you would not have liked to see, giving a heads-up on the sources of downside risk, or for outcomes you would have loved to see – explaining the generators of upside possibilities.

    Endnotes