Balance sheet and P&L simulation – Strategy @ Risk

Category: Balance sheet and P&L simulation

  • The role of events in simulation modeling

    The role of events in simulation modeling

    This entry is part 2 of 2 in the series Handling Events

    “With a sample size large enough, any outrageous thing is likely to happen”

    The law of truly large numbers (Diaconis & Mosteller, 1989)

    Introduction

The need to assess the impact of events with binary[i] outcomes, like loan defaults, the occurrence of recessions or the passage of special legislation, or of events that can be treated as binary, like paradigm shifts in consumer habits, changes in competitor behavior or new innovations, arises often in economics and other areas of decision making.

To the latter we can add political risks, both macro and micro: conflicts, economic crises, capital controls, exchange controls, repudiation of contracts, expropriation, quality of bureaucracy, government project decision-making, regulatory framework conditions, changes in laws and regulations, changes in tax laws and regimes, etc.[ii] Political risk acts like a discontinuity and usually becomes more of a factor as the time horizon of a project gets longer.

In some cases, when looking at project feasibility, the availability of resources, the quality of the work force and the state of preparations can also be treated as binary variables.

Events with binary outcomes have only two states: either the event happens or it does not, the presence or absence of a given exposure. We may extend this to whether it may happen next year or not, or whether it can happen at some other point in the project's timeframe.

We have two types of events: external events, originating outside the project but with the potential to create effects inside it, and internal events, originating inside the project and having a direct impact on it. By the term project we will in the following mean a company, plant, operation, etc. The impact will eventually be of an economic nature, and it is this we want to put a value on.

    External events are normally grouped into natural events and man-made events. Examples of man-made external events are changes in laws and regulations, while extreme weather conditions etc. are natural external events.

    External events can occur as single events or as combinations of two or more external events. Potential combined events are two or more external events having a non-random probability of occurring simultaneously, e.g., quality of bureaucracy and government project decision-making.

    Identification of possible external events

    The identification of possible events should roughly follow the process sketched below[iii]:

    1. Screening for Potential Single External Events – Identify all natural and man-made external events threatening the project implementation (Independent Events).
    2. Screening for Potential Combined External Events – Combining single external events into various combinations that are both imaginable and which may possibly threaten the project implementation (Correlated Events).
    3. Relevance Screening – Screening out potential external events, either single or combined, that are not relevant to the project. By ‘not relevant’ we mean that they cannot occur or that their probability of occurrence is evidently ‘too low’.
    4. Impact Screening – Screening out potential external events, either single or combined, for which no possible project impact can be identified.
    5. Event Analysis – Acquiring and assessing information on the probability of occurrence, at each point in the future, for each relevant event.
    6. Probabilistic Screening – Accepting the risk contribution of an external event, or planning appropriate project modifications to reduce unacceptable contributions to project risk.

Project Impact Analysis: modelling and quantification

    It is useful to distinguish between two types of forecasts for binary outcomes: probability[iv] forecasts and point forecasts.  We will in the following only use probability forecasts since we also want to quantify forecast uncertainty, which is often ignored in making point forecasts. After all, the primary purpose of forecasting is to reduce uncertainty.

    We assume that none of the possible events is in the form of a catastrophe.  A mathematical catastrophe is a point in a model of an input-output system, where a vanishingly small change in an exogenous variate can produce a large change in the output. (Thom, 1975)

    Current practice in public projects

The usual approach, at least for many public projects[v], is to first forecast the total cost distribution from the cost model and then add, as a second cost layer outside the model, the effects of possible events. These events will be discoveries about: the quality of planning, availability of resources, the state of cooperation with other departments, difficulties in getting decisions, etc.

In addition, these costs are more often than not calculated as a probability distribution of lump sums and then added to the distribution of the estimated total costs. The consequences of this are that:

    1. the ‘second cost layer’ introduces new lump sum cost variables,
    2. the events are unrelated to the variates in the cost model,
    3. the mechanism of cost transferal from the events is rarely clearly stated, and
    4. for a project with a time frame of several years, where the net present value of project costs is the decisive variable, this amounts to adding a lump sum to the first year's costs.

Thus, using this procedure to assess project tolerability to external events can easily lead decision and policy makers astray.

We will therefore propose another approach, with analogies taken from time series analysis: intervention analysis. Intervention analysis is based on mixed autoregressive moving average (ARMA[vi]) models and was introduced by Box & Tiao (Box and Tiao, 1975). Intervention models link one or more input (or independent) variates to a response (or dependent) variate by a transfer function.

    Handling Project Interventions

In time series analysis we try to discern the effects of an intervention after the fact. In our context we are trying to establish what can happen if some event intervenes in our project. We will do this by using transfer functions. Transfer functions are models of how the effects of the event are translated into future values of the response variate y. This implies that we must:

    1. Forecast the probability pt that the event will happen at time t,
    2. Select the variates (response variables) in the model that will be affected,
    3. Establish a transfer function for each response variable, giving the expected effect (response) on that variate.

The event can trigger a response at time T[vii] in the form of a step[viii] (St) (e.g. a change in tax laws) or a pulse (Pt) (e.g. a change of supplier). We will denote this as:

St = 0 when t < T, and St = 1 when t ≥ T

Pt = 0 when t ≠ T, and Pt = 1 when t = T
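
As a small illustration of how such a binary event can enter a simulation, the sketch below (my own, with hypothetical occurrence probabilities) draws a Bernoulli outcome per period and turns the first occurrence into the step and pulse indicators defined above:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical per-period probabilities that the event occurs, over a 10-period horizon.
p = np.array([0.02, 0.03, 0.04, 0.05, 0.05, 0.06, 0.06, 0.07, 0.07, 0.08])

occurs = rng.random(p.size) < p                         # one Bernoulli draw per period
T = int(np.argmax(occurs)) if occurs.any() else None    # first period the event occurs, if ever

n = p.size
step = np.zeros(n)
pulse = np.zeros(n)
if T is not None:
    step[T:] = 1.0    # St = 1 for t >= T
    pulse[T] = 1.0    # Pt = 1 only for t = T

print("event period:", T)
print("step :", step)
print("pulse:", pulse)
```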

    For one exogenous variate x and one response variate y, the general form of an intervention model is:

yt = [w(B) / d(B)] xt-s + N(et)

where B is the backshift operator (Bs xt = xt-s, shifting the time series s steps backward) and N(et) is an appropriate noise model for y. The delay between a change in x and a response in y is s. The intervention model has both a numerator and a denominator polynomial.

    The numerator polynomial is the moving average polynomial (MA)[ix]. The numerator parameters are usually the most important, since they will determine the magnitude of the effect of x on y.

    The denominator polynomial is the autoregressive polynomial (AR)[x]. The denominator determines the shape of the response (growth or decay).
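
To make the roles of the two polynomials concrete, here is a minimal sketch (my own, not from Box & Tiao) of a first-order transfer function yt = [w0/(1 − d1·B)] xt-s: w0 sets the magnitude of the response and d1 its shape, giving a gradual approach to a new level for a step input and a geometric decay for a pulse input. The parameter values are arbitrary.

```python
import numpy as np

def response(x, w0=100.0, d1=0.6, s=1):
    """First-order transfer function: y_t = w0 * x_{t-s} + d1 * y_{t-1}."""
    y = np.zeros(len(x))
    for t in range(len(x)):
        lagged_x = x[t - s] if t - s >= 0 else 0.0
        y[t] = w0 * lagged_x + (d1 * y[t - 1] if t > 0 else 0.0)
    return y

n, T = 12, 3
step = np.array([1.0 if t >= T else 0.0 for t in range(n)])    # e.g. a change in tax laws
pulse = np.array([1.0 if t == T else 0.0 for t in range(n)])   # e.g. a change of supplier

print("step response :", np.round(response(step), 1))   # rises towards w0/(1-d1) = 250
print("pulse response:", np.round(response(pulse), 1))  # jumps to w0, then decays by d1 each period
```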

    Graphs of some common intervention models are shown in the panel (B) below taken from the original paper by Box & Tiao, p 72:

    Effect-response

As the figures above show, a large number of different types of responses can be modelled using relatively simple models. In many cases a step will not give an immediate response but a more dynamic one, and the response to a pulse may or may not decay all the way back. Most response models have a steady state solution that will be reached after a number of periods; model c) in the panel above, however, will continue to grow without bound. Model a) gives a permanent change, positive (a carbon tax) or negative (new, cheaper technology). Model b) gives a more gradual change, positive (implementation of new technology) or negative (the effect of crime-reducing activities). The response to a pulse can be positive or negative (loss of a supplier), with a decay that can continue for a short or a long period, all the way back or to a new permanent level.

    Summary

    By using analogies from intervention analysis a number of interesting and important issues can be analyzed:

    • If two events affect one response variable, will the combined effect be less or greater than the sum of the two?
    • Will one event affecting more than one response variable increase the effect dramatically?
    • Is there a risk of calculating the same cost twice?
    • If an event occurs at the end of a project, will it be prolonged? And what will the costs be?
    • Etc.

Questions like these can never be analyzed when using a ‘second layer lump sum’ approach. Even more important is the possibility to incorporate the responses to exogenous events inside the simulation model, thus having the responses at the correct point on the time line and thereby a correct net present value for costs, revenues and company or project value.

Because net present values are what this is all about, isn't it? After all, the result will be used for decision making.

    REFERENCES

Box, G.E.P. and Tiao, G.C., 1975. Intervention analysis with applications to economic and environmental problems. J. Amer. Statist. Assoc. 70(349), pp. 70-79.

    Diaconis, P. and Mosteller, F. , 1989. Methods of Studying Coincidences. J. Amer. Statist. Assoc. 84, 853-861.

    Knochenhauer, M & Louko, P., 2003. SKI Report 02:27 Guidance for External Events Analysis. Swedish Nuclear Inspectorate.

    Thom R., 1975. Structural stability and morphogenesis. Benjamin Addison Wesley, New York.

    ENDNOTES

[i] Events with binary outcomes have only two states: either the event happens or it does not, the presence or absence of a given exposure. The event can be described by a Bernoulli distribution. This is a discrete distribution having two possible outcomes, labelled n=0 and n=1, in which n=1 (“event occurs”) has probability p and n=0 (“does not occur”) has probability q = 1−p, where 0 < p < 1. It therefore has probability density function P(n) = 1−p for n=0 and P(n) = p for n=1, which can also be written P(n) = p^n (1−p)^(1−n).

    [ii] ‘’Change point’’ (“break point” or “turning point”) usually denotes the point in time where the change takes place and “regime switching” the occurrence of a different regime after the change point.

    [iii] A good example of this is Probabilistic Safety Assessments (PSA). PSA is an established technique to numerically quantify risk measures in nuclear power plants. It sets out to determine what undesired scenarios can occur, with which likelihood, and what the consequences could be (Knochenhauer & Louko, 2003).

    [iv] A probability is a number between 0 and 1 (inclusive). A value of zero means the event in question never happens, a value of one means it always happens, and a value of 0.5 means it will happen half of the time.

Another scale that is useful for measuring probabilities is the odds scale. If the probability of an event occurring is p, then the odds (W) of it occurring are p : (1−p), which is often written as W = p/(1−p). Hence if the probability of an event is 0.5 the odds are 1:1, whilst if the probability is 0.1 the odds are 1:9.

Since the odds can take any value from zero to infinity, log(p/(1−p)) ranges from −infinity to +infinity. Hence, we can model g(p) = log[p/(1−p)] rather than p. As g(p) goes from −infinity to +infinity, p goes from 0 to 1.
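
A small numerical check of the odds and log-odds scales (a sketch of my own):

```python
import math

def odds(p):        # W = p / (1 - p)
    return p / (1.0 - p)

def logit(p):       # g(p) = log(p / (1 - p)), maps (0, 1) onto (-inf, inf)
    return math.log(odds(p))

def inv_logit(g):   # back-transform: p = exp(g) / (1 + exp(g))
    return math.exp(g) / (1.0 + math.exp(g))

for p in (0.1, 0.5, 0.9):
    print(f"p={p:.1f}  odds={odds(p):.2f}  logit={logit(p):+.2f}  back={inv_logit(logit(p)):.1f}")
```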

    [v] https://www.strategy-at-risk.com/2013/10/07/distinguish-between-events-and-estimates/

    [vi] In the time series econometrics literature this is known as an autoregressive moving average (ARMA) process.

    [vii] Interventions extending over several time intervals can be represented by a series of pulses.

[viii] (1−B)·step = pulse, i.e. a pulse is a first-differenced step, and step = pulse/(1−B), i.e. a step is a cumulated pulse.

    Therefore, a step input for a stationary series produces an identical impulse response to a pulse input for an integrated I(1) series.

[ix] w(B) = w0 + w1B + w2B² + …

[x] d(B) = 1 + d1B + d2B² + …, where −1 < d < 1.

     

  • Simulation of balance sheet risk

    Simulation of balance sheet risk

    This entry is part 6 of 6 in the series Balance simulation


    As I wrote in the article about balance sheet risk, a company with covenants in its loan agreements may have to hedge balance sheet risk even though it is not optimal from a market risk perspective.

But how can the company know which covenant to hedge? Often a company will have more than one covenant, and hedging one of them may adversely impact the other. To answer the question it is necessary to calculate the effect of a hedging strategy, and the best way to do that is by using a simulation model. Such a model can give the answer by estimating the probability of breach of a covenant.

Which hedging strategy the company should choose demands knowledge about which covenant is the most at risk. How likely is it that the company will face a breach? As I described in the previous article:

    Which hedging strategy the company chooses depends on which covenant is most at risk.  There are inherent conflicts between the different hedging strategies, and therefore it is necessary to make a thorough assessment before implementing any such hedging strategy.

    In addition:

    If the company hedges gearing, the size of the equity will be more at risk [..], And in addition, drawing a larger proportion of debt in the home (or functional) currency may imply an increase in economic risk.  [..] Hence, if the company does not have to hedge gearing it should hedge its equity.

    To analyse the impact of different strategies and to answer the questions above I have included simulation of currency rates in the example from the previous article:

    simulation model balance sheet risk

The result of the strategy choice given a +/- 10% change in currency rates was shown in the previous article. But that model cannot answer how likely it is that the company will face a breach situation. How large changes in currency rates can the company withstand?

    To look at this issue I have used the following modeling of currency rates:

    • Rates at the last day of every quarter from 31/12/02 to 30/06/2013. The reason for choosing these dates is of course that they are the dates on which the balance sheet is measured. It doesn’t matter if the currency rates are unproblematic on March 1st if they are problematic on March 31st, because that is the date when the books are closed for Q1 and the balance sheet is measured.
    • I have analysed the rates using Excel @RISK, which can fit a probability curve to the historical rates. There are, of course, many methods for estimating currency rates and I will get back to that later. But this method has an advantage: the basis is rates that have actually occurred.

The closest fit to the data was a Laplace curve ((RiskLaplace(μ,σ) specifies a Laplace distribution with the entered μ location and σ scale parameters. The Laplace distribution is sometimes called a “double exponential distribution” because it resembles two exponential distributions placed back to back, positioned at the entered location parameter.)) for EUR and a uniform curve ((RiskUniform(minimum,maximum) specifies a uniform probability distribution with the entered minimum and maximum values. Every value across the range of the uniform distribution has an equal likelihood of occurrence.)) for USD against NOK.

It is always a good idea to ask yourself if the fitted result has a good story behind it. Is it logical? What we want is a good estimate for future currency rates. If the logic is hard to see, we should go back and analyze more. But in my opinion there is a good logic/story behind these estimates:

    • EUR against NOK is so-called mean reverting, meaning that it normally will revert back to a level of around 8 NOK +/- for 1 EUR. Hence, the curve is pointed and has long tails. We will most likely have to pay 8 NOK for 1 EUR, but the rate can move quite a bit away from the expected mean, both up and down.
    • USD is more unpredictable against NOK, and a uniform curve, with any level of USD/NOK being equally likely, sounds like a good estimate.

In addition to the probability curves for USD and EUR, an estimate of the correlation between them is needed. I used the same historical data to calculate the historical correlation; on the end-of-quarter rates it has been 0,39. A positive correlation means that the rates move the same way – if one goes up, so does the other. The reason is that it is the NOK that has moved against both currencies. That is also a good assessment, I believe; history has shown it to be the case.
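
Outside @RISK, correlated draws from these two fitted marginals can be produced with a Gaussian copula. The sketch below is my own; the Laplace and uniform parameters are placeholders, not the article's fitted values, and the 0,39 correlation is imposed only approximately:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical quarterly rate levels: Laplace for EUR/NOK, uniform for USD/NOK.
eur_dist = stats.laplace(loc=8.0, scale=0.35)    # placeholder location/scale
usd_dist = stats.uniform(loc=5.0, scale=3.5)     # placeholder: 5.0 to 8.5 NOK per USD

# Gaussian copula: correlated normals -> uniforms -> target marginals.
rho = 0.39
cov = [[1.0, rho], [rho, 1.0]]
z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
u = stats.norm.cdf(z)

eur = eur_dist.ppf(u[:, 0])
usd = usd_dist.ppf(u[:, 1])

print("simulated correlation:", round(np.corrcoef(eur, usd)[0, 1], 2))
```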

    Now we have all the information needed to simulate how much at risk our (simple) balance sheet is to adverse currency movements.  And based on the simulation, the answer is: Quite a bit.

    I have modeled the following covenants:

    • Gearing < 1,5
    • Equity > 3 000

    This is the result of the simulation (click on the image to zoom):

    Simulation results

Gearing is the covenant most at risk, as the tables/graphs show. Both in the original mix (all debt in NOK) and if the company hedges equity, there is a high likelihood of breaching the gearing covenant.

There is a 22% probability of breach in the first case (all debt in NOK) and a 23% probability in the second (equity hedge). This is a rather high probability, considering that the NOK may move quite a bit, quite quickly.

The equity covenant is less at risk and has more headroom. There is a 13% probability of breach with all debt in NOK, but 0% should the company choose either of the two hedging strategies. This is because currency loans will reduce risk, regardless of whether the debt fully hedges the assets or only partially.

Hence, based on this example it is easy to give advice to the company. The company should hedge gearing by drawing debt in a mix of currencies reflecting its assets. Reality is of course more complex than this example, but the mechanism will be the same. And the need for an accurate decision criterion – the likelihood of breach – becomes more important the more complex the business is.
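
To show how such breach likelihoods fall out of a simulation model, here is a bare-bones sketch. The balance sheet figures, currency split and rate distributions are hypothetical placeholders, not the article's numbers; the correlated draws from the earlier sketch could be substituted for the simple ones used here:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical balance sheet (MNOK at current rates) and currency split of assets.
assets_nok, assets_eur, assets_usd = 2_000.0, 3_000.0, 2_000.0
debt_total = 4_000.0                     # initial equity = 7 000 - 4 000 = 3 000

# Hypothetical relative rate changes at the measurement date (replace with fitted, correlated draws).
eur_chg = rng.laplace(0.0, 0.05, n)      # EUR/NOK: pointed with long tails
usd_chg = rng.uniform(-0.15, 0.15, n)    # USD/NOK: flat

def breach_prob(debt_share_eur, debt_share_usd):
    """P(gearing > 1.5) and P(equity < 3 000) for a given currency mix of the debt."""
    assets = assets_nok + assets_eur * (1 + eur_chg) + assets_usd * (1 + usd_chg)
    debt = debt_total * (1 - debt_share_eur - debt_share_usd) \
         + debt_total * debt_share_eur * (1 + eur_chg) \
         + debt_total * debt_share_usd * (1 + usd_chg)
    equity = assets - debt
    gearing = debt / equity
    return (gearing > 1.5).mean(), (equity < 3_000).mean()

for name, mix in [("all debt in NOK", (0.0, 0.0)),
                  ("hedge equity",    (0.6, 0.4)),    # debt mirrors the currency assets only
                  ("hedge gearing",   (3/7, 2/7))]:   # debt mirrors the full asset mix, incl. NOK
    g, e = breach_prob(*mix)
    print(f"{name:16s}  P(gearing breach)={g:.0%}  P(equity breach)={e:.0%}")
```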

One thing that complicates the picture is the impact different strategies have on the company’s debt. Debt levels may vary substantially, depending on the choice of strategy.

If the company has to refinance some of its debt, and at the same time there is a negative impact on the value of the debt (weaker home currency), the refinancing need will be substantially higher than would have been the case with local debt. This, too, is an answer you can get from the simulation model.

The answer to the questions “How likely is it that the company will breach its covenants, and what are the consequences of strategic choices for key figures, debt and equity?” is something only a good simulation model can give.

    Originally published in Norwegian.

  • Hedging the balance sheet

    Hedging the balance sheet

    This entry is part 5 of 6 in the series Balance simulation


A hedging strategy should be oriented towards hedging the company’s market value to build shareholder value. Normally, hedging of balance sheet items is not a good argument for hedging from the shareholders’ point of view, since a company’s balance sheet does not necessarily reflect its market value.

In some cases, however, it may be argued that hedging the balance sheet creates shareholder value, since a lack of hedging may lead to the company breaching covenants in loan agreements. The cost to the shareholders in that case is, as a minimum, increased cost in the form of higher margins on debt. Ultimately, it may mean that the company is technically bankrupt and that the share capital is lost, in which case the shareholders’ value is lost. Implicitly, therefore, this is a hedging strategy which is necessary from the shareholders’ point of view.

Theoretically it may also be claimed that companies should not hedge at all, since the shareholders can achieve the desired level of risk by diversifying their portfolios. But in the case of balance sheet risk this is not possible. Since the risk is in the books of the company, it is only in the company that the risk can be hedged and have the desired impact on the bankruptcy risk. This is therefore a special case compared to many other risks.

Covenants in loan agreements may warrant hedging to avoid a breach solely because of changes in currency rates. Such covenants may for instance be on gearing (debt/equity) or on tangible net worth. If the company has such covenants and not a clear margin against breaching them, it may be necessary to limit or indeed immunize the negative impact of currency movements.

To look at this issue I will consider a company which has assets in currency and all its debt in NOK, its home or functional currency. The initial balance sheet looks like this:

Initial balance sheet

    Which hedging strategy the company chooses depends on which covenant is most at risk.  There are inherent conflicts between the different hedging strategies, and therefore it is necessary to make a thorough assessment before implementing any such hedging strategy.

    • To immunize gearing from any impact of changes in currency rates the company needs to draw debt in currency in the same mix as the currency mix of assets, including assets in the home currency, NOK, like this:
Hedge gearing
    • To protect equity against changes in currency rates the company should draw all debt in foreign currency, corresponding to the mix of currency assets ((If the sum of assets is bigger than the sum of debt, the company may in addition use off-balance-sheet hedging to reach a full hedge. If the debt is bigger than the sum of foreign currency denominated assets, the company only draws currency debt until it matches the assets; the rest is drawn in NOK.)), like so:
Hedge equity

If the company hedges gearing, the size of the equity will be more at risk, since the company then hedges a smaller proportion of its assets in foreign currency. In addition, drawing a larger proportion of debt in the home (or functional) currency may imply an increase in economic risk. Normally a company with foreign assets also has revenue streams in foreign currency, while by drawing debt in the home currency it takes on local costs, thus increasing economic exposure. Hence, if the company does not have to hedge gearing, it should hedge its equity.
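
A small numerical sketch of the mechanism, with hypothetical figures rather than the article's balance sheet: when the currency mix of the debt matches the currency mix of all assets, an exchange rate change scales debt and equity by the same factor and gearing is unchanged, while matching the foreign currency debt to the foreign currency assets leaves equity unchanged instead:

```python
# Hypothetical balance sheet: 4 500 of assets in foreign currency, 5 500 in NOK, 5 000 of debt.
assets_fx, assets_nok = 4_500.0, 5_500.0
debt = 5_000.0

def key_figures(debt_fx_share, fx_change):
    """Gearing (debt/equity) and equity after a relative change in the FX rate."""
    assets = assets_fx * (1 + fx_change) + assets_nok
    debt_after = debt * debt_fx_share * (1 + fx_change) + debt * (1 - debt_fx_share)
    equity = assets - debt_after
    return debt_after / equity, equity

for label, share in [("no hedge (all debt in NOK)", 0.0),
                     ("hedge gearing (debt mix = total asset mix, 45% FX)", 0.45),
                     ("hedge equity (FX debt = FX assets, 90% FX)", 0.90)]:
    for chg in (-0.10, 0.0, 0.10):
        gearing, equity = key_figures(share, chg)
        print(f"{label:52s} FX {chg:+.0%}: gearing={gearing:.2f} equity={equity:,.0f}")
```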

    Choice of hedging strategy will have different results:

Impact on gearing of different hedging strategies
Impact on equity of different hedging strategies

As the graphs show, a gearing or equity hedge will have different impacts on key figures. However, no hedge at all (all debt in the home currency) will have the biggest impact on both gearing and equity, or tangible net worth:

Overview of impact on key ratios

If the impact on balance sheet values from movements in currency rates may result in a breach of covenants in loan agreements, the risk should be hedged in a way which limits the impact on the most vulnerable figure, be it gearing or equity.

    Originally written in Norwegian.

  • Working Capital Strategy Revisited

    Working Capital Strategy Revisited

    This entry is part 3 of 3 in the series Working Capital

    Introduction

To link the posts on working capital and inventory management, we will look at a company with a complicated market structure, having sales and production in a large number of countries and a wide variety of product lines. Added to this is a marked seasonality, with high sales in the year's first two quarters and much lower sales in the last two ((All data is from public records.)).

All this puts a strain on the organization's production and distribution systems and, of course, on working capital.

Looking at the development of net working capital ((Net working capital = Total current assets – Total current liabilities.)) relative to net sales, it seems that the company has in later years curbed the initial growth in net working capital:

Just by inspecting the graph, however, it is difficult to determine whether the company's working capital management is good or lacking in performance. We therefore need to look in more detail at the working capital elements and compare them with industry ‘averages’ ((By their Standard Industrial Classification (SIC).)).

The industry averages can be found in the annual “REL Consultancy/CFO Working Capital Survey”, which made its debut in 1997 in CFO Magazine. We can thus use the survey's findings to assess the company's working capital performance ((Katz, M.K. (2010). Working it out: The 2010 Working Capital Scorecard. CFO Magazine, June. Retrieved from http://www.cfo.com/article.cfm/14499542
    Also see: https://www.strategy-at-risk.com/2010/10/18/working-capital-strategy-2/)).

    The company’s working capital management

    Looking at the different elements of the company’s working capital, we find that:

I.    Days sales outstanding (DSO) is on average 70 days, compared with REL's reported industry median of 56 days.

II.    For days payables outstanding (DPO) the difference is small and in the right direction: 25 days against the industry median of 23 days.

III.    Days inventory outstanding (DIO) is on average 138 days, compared with the industry median of 39 days, and this is where the problem lies.

IV.    The company's days of working capital (DWC = DSO + DIO − DPO) ((Days of working capital (DWC) is essentially the same as the Cash Conversion Cycle (CCC). See the endnote for more.)) have, according to the above, on average been 183 days over the last five years, compared to REL's median DWC of 72 days for comparable companies.

    This company thus has more than 2.5 times ‘larger’ working capital than its industry average.
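
The arithmetic behind this comparison, using the figures above:

```python
def days_working_capital(dso, dio, dpo):
    """DWC = DSO + DIO - DPO (days of sales tied up in working capital)."""
    return dso + dio - dpo

company = days_working_capital(dso=70, dio=138, dpo=25)    # the company's five-year averages
industry = days_working_capital(dso=56, dio=39, dpo=23)    # REL survey medians

print(company, industry, round(company / industry, 1))     # 183 72 2.5
```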

    As levers of financial performance, none is more important than working capital. The viability of every business activity rests on daily changes in receivables, inventory, and payables.

The goal of the company is to minimize its ‘Days of Working Capital’ (DWC) or, equivalently, its ‘Cash Conversion Cycle’ (CCC), and thereby reduce the amount of outstanding working capital. This requires examining each component of DWC discussed above and taking actions to improve each element. To the extent this can be achieved without increasing costs or depressing sales, these actions should be carried out:

1.    A decrease in ‘Days sales outstanding’ (DSO) or in ‘Days inventory outstanding’ (DIO) will represent an improvement, and an increase will indicate deterioration,

2.    An increase in ‘Days payables outstanding’ (DPO) will represent an improvement and a decrease will indicate deterioration,

3.    Reducing ‘Days of Working Capital’ (DWC or CCC) will represent an improvement, whereas an increasing DWC (or CCC) will represent deterioration.

    Day’s sales- and payables outstanding

Many companies think in terms of “collecting as fast as possible, and paying as slowly as permissible.” This strategy, however, may not be the wisest.
    At the same time as the company is attempting to integrate with its customers – and realize the related benefits – so are its suppliers. A “pay slow” approach may not optimize either the accounts or the inventory, and it is likely to interfere with good supplier relationships.

    Supply-chain finance

One way around this might be ‘Supply Chain Finance’ (SCF), or reverse factoring ((“The reverse factoring method, still rare, is similar to the factoring insofar as it involves three actors: the ordering party, the supplier and the factor. Just as basic factoring, the aim of the process is to finance the supplier’s receivables by a financier (the factor), so the supplier can cash in the money for what he sold immediately (minus an interest the factor deducts to finance the advance of money).” http://en.wikipedia.org/wiki/Reverse_factoring)). Properly done, it can enable a company to leverage credit to increase the efficiency of its working capital and at the same time enhance its relationships with suppliers. The company can extend payment terms and the supplier receives advance payments discounted at rates considerably lower than its normal funding margins. The lender (factor), in turn, gets the benefit of a margin higher than the risk profile commands.

    This is thus a form of receivables financing using solutions that provide working capital to suppliers and/or buyers within any part of a supply chain and that is typically arranged on the credit risk of a large corporate within that supply chain.

    Day’s inventory outstanding (DIO)

    DIO is a financial and operational measure, which expresses the value of inventory in days of cost of goods sold. It represents how much inventory an organization has tied up across its supply chain or more simply – how long it takes to convert inventory into sales. This measure can be aggregated for all inventories or broken down into days of raw material, work in progress and finished goods. This measure should normally be produced monthly.

By using the industry's typical ‘days inventory outstanding’ (DIO) we can calculate the potential reduction in the company's inventory – if the company should succeed in being as good at inventory management as its peers.

    If the industry’s typical DIO value is applicable, then there should be a potential for a 60 % reduction in the company’s inventory.

Even if this overstates the true potential, it is obvious that a fairly large reduction is possible, since 98% of the 1000 companies in the REL report have a DIO of less than 138 days:

Adding to the company's concern should also be the fact that inventories seem to increase at a faster pace than net sales:

    Inventory Management

Successfully addressing the challenge of reducing inventory requires an understanding of why inventory is held and where it builds up in the system.
    Achieving this goal requires focusing inventory improvement efforts on four core areas:

    1. demand management – information integration with both suppliers and customers,
    2. inventory optimization – using statistical/finance tools to monitor and set inventory levels,
    3. transportation and logistics – lead time length and variability and
    4. supply chain planning and execution – coordinating planning throughout the chain from inbound to internal processing to outbound.

We believe that the best way of attacking this problem is to build a simulation model that can ‘mimic’ the sales – distribution – production chain in enough detail to study different strategies and the probabilities of stock-outs and possible stock-out costs, compared with the costs of carrying the different products (items).

The cost of never experiencing a stock-out can be excessively high – the global average of retail out-of-stocks is 8.3% ((Gruen, Thomas W. and Daniel Corsten (2008), A Comprehensive Guide to Retail Out-of-Stock Reduction in the Fast-Moving Consumer Goods Industry, Grocery Manufacturers of America, Washington, DC, ISBN: 978-3-905613-04-9)).

By basing the model on activity-based costing, it can estimate the cost and revenue elements of the product lines, thus identifying and/or eliminating those products and services that are unprofitable or ineffective. The scope is to release more working capital by lowering the value of inventories and streamlining the end-to-end value chain.

To do this we have to make improved forecasts of sales and a breakdown of risk and economic values, both geographically and for product groups, to find out where capital should be employed in the coming years (product – geography), both for M&A and for organic growth investments.

A model like the one we propose needs detailed monthly data, usually found in the internal accounts. This data will be used to statistically determine the relationships between the cost variables describing the different value chains. In addition, overhead from different company levels (geographical) will have to be distributed both on products and on the distribution chains.

    Endnote

    Days Sales Outstanding (DSO) = AR/(total revenue/365)

    Year-end trade receivables net of allowance for doubtful accounts, plus financial receivables, divided by one day of average revenue.

    Days Inventory Outstanding (DIO) = Inventory/(total revenue/365)

    Year-end inventory plus LIFO reserve divided by one day of average revenue.

    Days Payables Outstanding (DPO) = AP/(total revenue/365)

    Year-end trade payables divided by one day of average revenue.

    Days Working Capital (DWC): (AR + inventory – AP)/(total revenue/365)

    Where:
    AR = Average accounts receivable
    AP = Average accounts payable
    Inventory = Average inventory + Work in progress

    Year-end net working capital (trade receivables plus inventory, minus AP) divided by one day of average revenue. (DWC = DSO+DIO-DPO).

    For the comparable industry we find an average of: DWC=56+39-23=72 days

    Days of working capital (DWC) is essentially the same as the Cash Conversion Cycle (CCC) except that the CCC uses the Cost of Goods Sold (COGS) when calculating both the Days Inventory Outstanding (DIO) and the Days Payables Outstanding (DPO) whereas DWC uses sales (Total Revenue) for all calculations:

CCC = Days in period × [(Average inventory / COGS) + (Average receivables / Revenue) − (Average payables / (COGS + Change in inventory))]

    Where:
    COGS= Production Cost – Change in Inventory
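
A short sketch of the two measures side by side, following the definitions above; the balance sheet figures are made up for illustration:

```python
def dwc_days(receivables, inventory, payables, revenue, days=365):
    """Days of Working Capital: all components scaled by one day of revenue."""
    one_day_sales = revenue / days
    return (receivables + inventory - payables) / one_day_sales

def ccc_days(receivables, inventory, payables, revenue, cogs, delta_inventory, days=365):
    """Cash Conversion Cycle: inventory and payables scaled by COGS instead of revenue."""
    return days * (inventory / cogs
                   + receivables / revenue
                   - payables / (cogs + delta_inventory))

# Hypothetical averages for one year (same currency unit throughout).
ar, inv, ap = 1_900.0, 3_800.0, 700.0
revenue, production_cost, delta_inv = 10_000.0, 7_200.0, 200.0
cogs = production_cost - delta_inv    # as defined above

print("DWC:", round(dwc_days(ar, inv, ap, revenue), 1), "days")
print("CCC:", round(ccc_days(ar, inv, ap, revenue, cogs, delta_inv), 1), "days")
```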


  • Budgeting Revisited

    Budgeting Revisited

    This entry is part 2 of 2 in the series Budgeting

     

    Introduction

Budgeting is one area that is well suited for Monte Carlo simulation. Budgeting involves personal judgments about the future values of a large number of variables like sales, prices, wages, downtime, error rates, exchange rates, etc. – variables that describe the nature of the business.

Everyone who has been involved in a budgeting process knows that it is an exercise in uncertainty; however, it is seldom described in this way, and even more seldom is the uncertainty actually calculated as an integrated part of the budget.

    Good budgeting practices are structured to minimize errors and inconsistencies, drawing in all the necessary participants to contribute their business experience and the perspective of each department. Best practice in budgeting entails a mixture of top-down guidelines and standards, combined with bottom-up individual knowledge and experience.

    Excel, the de facto tool for budgeting, is a powerful personal productivity tool. Its current capabilities, however, are often inadequate to support the critical nature of budgeting and forecasting. There will come a point when a company’s reliance on spreadsheets for budgeting leads to severely ineffective decision-making, lost productivity and lost opportunities.

Spreadsheets can accommodate many tasks – but, over time, some of the models running in Excel may grow too big for the spreadsheet application. Programming in a spreadsheet model often requires embedded assumptions and complex macros, creating opportunities for formula errors and broken links between workbooks.

    It is common for spreadsheet budget models and their intricacies to be known and maintained by a single person who becomes a vulnerability point with no backup. And there are other maintenance and usage issues:

    A.    Spreadsheet budget models are difficult to distribute and even more difficult to collect and consolidate.
    B.    Data confidentiality is almost impossible to maintain in spreadsheets, which are not designed to hide or expose data based upon each user’s role.
    C.    Financial statements are usually not fully integrated leaving little basis for decision making.

    These are serious drawbacks for corporate governance and make the audit process more difficult.

These are a few of the many reasons why we use a dedicated simulation language for our models, one that specifically does not mix data and code.

    The budget model

    In practice budgeting can be performed on different levels:
    1.    Cash Flow
    2.    EBITDA
    3.    EBIT
    4.    Profit or
    5.    Company value.

The most efficient level is EBITDA, since taxes, depreciation and amortization are in the short term mostly given. This is also the level where consolidation of daughter companies is most easily achieved. An EBITDA model describing the firm's operations can in turn be used as a subroutine for more detailed and encompassing analysis through P&L and balance simulation.

The aim will then be to estimate the firm's equity value and its probability distribution. This can in turn be used for strategy selection etc.

    Forecasting

In today's fast-moving and highly uncertain markets, forecasting has become the single most important element of the budget process.

Forecasting, or predictive analytics, can best be described as statistical modeling enabling the prediction of future events or results, using present and past information and data.

    1. Forecasts must integrate both external and internal cost and value drivers of the business.
    2. Absolute forecast accuracy (i.e. small confidence intervals) is less important than the insight about how current decisions and likely future events will interact to form the result.
    3. Detail does not equal accuracy with respect to forecasts.
    4. The forecast is often less important than the assumptions and variables that underpin it – those are the things that should be traced to provide advance warning.
    5. Never rely on single-point or single-scenario forecasting.

All uncertainty about market sizes, market shares, costs and prices, interest rates, exchange rates, taxes etc. – and their correlations – will finally end up contributing to the uncertainty in the firm's budget forecasts.

    The EBITDA model

The EBITDA model has to be detailed enough to capture all important cost and value drivers, but simple enough to be easy to update with new data and assumptions.

Input to the model can come from different sources: any internal reporting system or spreadsheet. The easiest way to communicate with the model is by using Excel spreadsheet templates.

Such templates will be pre-defined in the sense that the information the model needs is in a pre-determined place in the workbook. This makes it easy if the budgets for daughter companies are reported (and consolidated) in a common system (e.g. SAP) and can be ‘dumped’ onto an Excel spreadsheet. If the budgets are communicated directly to the head office or the mother company, they can be read directly by the model.

    Standalone models and dedicated subroutines

We usually construct our EBITDA models so that they can be used both as standalone models and as subroutines for balance simulation. The model can then be used for short-term budgeting, for long-term EBITDA forecasting and simulation, and for short- and long-term balance forecasting and simulation. This means that the same model can be efficiently reused in different contexts.

    Rolling budgets and forecasts

The EBITDA model can be constructed to give rolling forecasts based on updated monthly or quarterly values, taking into consideration the seasonality of the operations. This will give new forecasts (a new budget) for the remainder of the year and/or the next twelve months. By forecasts we again mean the probability distributions for the budget variables.

Even if the variables have not changed, the fact that we move towards the end of the year will reduce the uncertainty of the end-year results and also of the forecast for the next twelve months.

    Uncertainty

    The most important part of budgeting with Monte Carlo simulation is assessment of the uncertainty in the budgeted (forecasted) cost and value drivers. This uncertainty is given as the most likely value (usually the budget figure) and the interval where it is assessed with a high degree of confidence (approx. 95%) to fall.

We will then use these lower and upper limits (5% and 95%) for sales, prices and other budget items, together with the budget values, as indicators of the shape of the probability distributions for the individual budget items. Together they describe the range and uncertainty in the EBITDA forecasts.

This gives us the opportunity to simulate (Monte Carlo) a number of possible outcomes – by a large number of runs of the model, usually 1000 – of net revenue, operating expenses and finally EBITDA. This in turn gives us their probability distributions.
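
A bare-bones sketch of this step is shown below. The budget items, their most likely values and their 5%/95% limits are invented for illustration, and a triangular distribution is used as a simple stand-in for whatever shape the reported limits imply (strictly, the limits are quantiles rather than endpoints, so this sketch understates the tails):

```python
import numpy as np

rng = np.random.default_rng(2024)
runs = 1000

def draw(low, mode, high, size=runs):
    """Triangular stand-in for a budget item given its lower limit, budget value and upper limit."""
    return rng.triangular(low, mode, high, size)

# Hypothetical budget items (MNOK): (5% limit, budget value, 95% limit).
sales_volume = draw(90, 100, 115)
price        = draw(9.0, 10.0, 10.5)
unit_cost    = draw(6.5, 7.0, 8.0)
fixed_costs  = draw(180, 200, 230)

net_revenue = sales_volume * price
opex        = sales_volume * unit_cost + fixed_costs
ebitda      = net_revenue - opex

budget_ebitda = 100 * 10.0 - (100 * 7.0 + 200)   # the deterministic budget figure = 100

print("expected EBITDA:", round(ebitda.mean(), 1))
print("P(EBITDA < budget):", round((ebitda < budget_ebitda).mean(), 2))
print("5% / 95% range:", np.round(np.percentile(ebitda, [5, 95]), 1))
```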

Most managers and their staff have, based on experience, a good grasp of the range within which the values of their variables will fall. It is not based on any precise computation but is a reasonable assessment by knowledgeable persons. Selecting the budget value, however, is more difficult. Should it be the “mean” or the “most likely value”, or should the manager just delegate the fixing of the values to the responsible departments?

Now we know that the budget values might be biased for a number of reasons – most simply by bonus schemes etc. – and that budgets based on average assumptions are wrong on average.

This is therefore where the individual manager's intent and culture will be manifested, and it is here that the greatest learning effect for both the managers and the mother company will be found, as under-budgeting and overconfidence will stand out as excessively large deviations from the model's calculated expected value (the probability-weighted average over the interval).

    Output

The output from the Monte Carlo simulation will be in the form of graphs that put all the runs in the simulation together to form the cumulative distribution, here for the operating expenses (red line):

In the figure we have computed the frequencies of the observed (simulated) values for operating expenses (blue frequency plot) – the x-axis gives the operating expenses and the left y-axis the frequency. By summing up from left to right we can compute the cumulative probability curve. The S-shaped curve (red) gives, for every point, the probability (on the right y-axis) of having operating expenses less than the corresponding point on the x-axis. The shape of this curve and its range on the x-axis give us the uncertainty in the forecasts.

    A steep curve indicates little uncertainty and a flat curve indicates greater uncertainty.  The curve is calculated from the uncertainties reported in the reporting package or templates.

Large uncertainties in the reported variables will contribute to the overall uncertainty in the EBITDA forecast and thus to a flatter curve, and vice versa. If the reported uncertainty in sales and prices has a marked downside and the costs a marked upside, the resulting EBITDA distribution might very well have a portion on the negative side of the x-axis – that is, with some probability the EBITDA might end up negative.

In the figure below the lines give the expected EBITDA and the budget value. The expected EBITDA can be found by drawing a horizontal line from the 0.5 (50%) point on the y-axis to the curve and a vertical line from this point on the curve down to the x-axis. This point gives us the expected EBITDA value – the point where there is a 50% probability of having a value of EBITDA below it and 100% − 50% = 50% of having it above.

The second set of lines gives the budget figure and the probability that the result will end up lower than budget. In this case there is almost a 100% probability that it will be much lower than management had expected.

This distribution's location on the EBITDA axis (x-axis) and its shape give a large amount of information about what we can expect of possible results and their probability.

The following figure, which gives the EBIT distributions for a number of subsidiaries, exemplifies this. One will most probably never earn money (grey), three are cash cows (blue, green and brown) and the last (red) can earn a lot of money:

    Budget revisions and follow up

Normally – if something extraordinary does not happen – we would expect both the budget and the actual EBITDA to fall somewhere in the region of the expected value. We do, however, have to expect some deviation both from budget and from expected value due to the nature of the industry. Keeping in mind the possibility of unanticipated events, or events “outside” the subsidiary's budget responsibilities but affecting the outcome, this implies that:

    • Having the actual result deviate from budget is not necessarily a sign of bad budgeting.
    • Having the result close to or on budget is not necessarily a sign of good budgeting.

    However:

    • Large deviations between budget and actual result need looking into – especially if the deviation from expected value also is large.
    • A large deviation between budget and expected value can imply either that the limits are set “wrong” or that the budget EBITDA does not reflect the downside risk or upside opportunity expressed by the limits.

Another way of looking at the distributions is by the probability of having the actual result below budget, that is, how far off the budget ended up. In the graph below, country #1's budget came out with a 72% probability of having the actual result below budget. It turned out that the actual figure would have been lower with only 36% probability. The length of the bars thus indicates the budget discrepancies.

For country #2 it is the other way around: the probability of having had a result lower than the final result is 88%, while the budgeted figure had a 63% probability of having been too low. In this case the market was seriously misjudged.

In the following we have measured the deviation of the actual result both from the budget values and from the expected values. In the figures the left axis gives the deviation from expected value and the bottom axis the deviation from budget value.

    1. If the deviation for a country falls in the upper right quadrant, the deviations are positive for both budget and expected value – and the country is overachieving.
    2. If the deviation falls in the lower left quadrant, the deviations are negative for both budget and expected value – and the country is underachieving.
    3. If the deviation falls in the upper left quadrant, the deviation is negative for budget and positive for expected value – and the country is overachieving but has had a too high budget.

With a left-skewed EBITDA distribution there should not be any observations in the lower right quadrant; that will only happen when the distribution is skewed to the right – and then there will not be any observations in the upper left quadrant:

As the managers get more experienced in assessing the uncertainty they face, we see that the budget figures come more in line with the expected values and that the intervals given are shorter and better oriented.

If the budget is in line with the expected value, given the described uncertainty, the upside potential ratio should be approximately one. A high value indicates a potential for higher EBITDA and vice versa. Using this measure we can numerically describe management's budgeting behavior:

    Rolling budgets

If the model is set up to give rolling forecasts of the budget EBITDA as new (in this case monthly) data arrive, we will get successive forecasts as in the figure below:

As data for a new month are received, the curve gets steeper, since the uncertainty is reduced. From the squares on the lines indicating expected value we see that the value moves slowly to the right, towards higher EBITDA values.

    We can of course also use this for long term forecasting as in the figure below:

As should now be evident, the EBITDA Monte Carlo model has multiple fields of use, and all of them increase management's possibilities for control and foresight, giving ample opportunity for prudent planning for the future.

     

     

  • M&A: When two plus two is five or three or …

    M&A: When two plus two is five or three or …

    When two plus two is five (Orwell, 1949)

    Introduction

Mergers & Acquisitions (M&A) are a way for companies to expand rapidly, much faster than organic growth – that is, growth coming from existing businesses – would have allowed. M&A has for decades been a trillion-dollar business, but empirical studies report that a significant proportion of deals must be considered failures.

The conventional wisdom is that the majority of deals fail to add shareholder value to the acquiring company. According to this research, only 30-50% of deals are considered to be successful (see Bruner, 2002).

If most deals fail, why do companies keep doing them? Is it because they think the odds won't apply to them, or are executives more concerned with extending their influence and growing the company (empire building) than with increasing shareholder value?

    Many writers argue that these are the main reasons driving the M&A activities, with the implication that executives are basically greedy (because their compensation is often tied to the size of the company) – or incompetent.

To be able to create shareholder value the M&A must give rise to some form of synergy. Synergy is the ability of the merged companies to generate higher shareholder value (wealth) than the standalone entities; that is, the whole will be greater than the sum of its parts.

For many of the observed M&A's, however, the opposite has been true – value has been destroyed; the whole has turned out to be less than the sum of its parts (dysergy).

    “When asked to name just one big merger that had lived up to expectations, Leon Cooperman, former co-chairman of Goldman Sachs’ Investment Policy Committee, answered: I’m sure there are success stories out there, but at this moment I draw a blank.” (Sirower, 1997)

The “apparent” M&A failures have also been attributed to both methodological and measurement problems, the argument being that evidence – such as cost savings or revenue enhancements brought about by the M&A – is difficult to obtain after the fact. This might also apply to some of the success stories.

What is surprising in most (all?) of the studies of M&A successes and failures is the lack of understanding of the stochastic nature of business activities. For any company it is impossible to estimate its equity value with certainty; the best we can do is to estimate a range of values and the probability that the true value will fall inside this range. The merger of two companies amplifies this, and the discussion of possible synergies or dysergies can only be understood in the context of randomness (stochasticity) ((See: the IFA.com – Probability Machine, Galton Board, Randomness and Fair Price Simulator, Quincunx at http://www.youtube.com/watch?v=AUSKTk9ENzg)).


    The M&A cases

Let's assume that we have two companies, A and B, that are proposed merged. We have the distribution of each company's equity value (shareholder value), and we can calculate the equity distribution for the merged company. Company A's value is estimated to be in the range of 0 to 150M with an expected value of 90M. Company B's value is estimated to be in the range of -40 to 200M with an expected value of 140M. (See the figure below.)

If we merge the two companies assuming no synergy or dysergy, we get the value (shareholder) distribution shown by the green curve in the figure. The merged company will have a value in the range of 65 to 321M, with an expected value of 230M. Since there is no synergy/dysergy, no value has been created or destroyed by the merger.

For company B no value would be added by the merger if A was bought at a price equal to or higher than A's expected value. If A was bought at a price less than its expected value, then there is a probability that the wealth of the shareholders of company B will increase, but even then it is not a certainty. Any increase in wealth for the shareholders of company B will be at the expense of the shareholders of company A, and vice versa.
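
A minimal sketch of the no-synergy case: the ranges and expected values are taken from the text, but the scaled-beta shapes used to match them are my own assumption, and the two companies are drawn independently here:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

def scaled_beta(low, high, mean, spread=4.0, size=n):
    """Beta distribution rescaled to [low, high] with the requested mean (assumed shape)."""
    m = (mean - low) / (high - low)    # standardized mean in (0, 1)
    a = m * spread
    b = (1 - m) * spread
    return low + (high - low) * rng.beta(a, b, size)

value_a = scaled_beta(low=0, high=150, mean=90)      # company A: 0 to 150M, expected 90M
value_b = scaled_beta(low=-40, high=200, mean=140)   # company B: -40 to 200M, expected 140M

merged_no_synergy = value_a + value_b                # no synergy or dysergy: a plain sum

print("E[A], E[B], E[merged]:",
      round(value_a.mean()), round(value_b.mean()), round(merged_no_synergy.mean()))

# Wealth transfer: if B pays 'price' (hypothetical) for A, B's shareholders gain
# only in the scenarios where A turns out to be worth more than that price.
price = 100
print("P(B gains | price 100):", round((value_a > price).mean(), 2))
```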

    Case 1

    If we assume that there is a “connection” between the companies, such that an increase in one of the company’s revenues also will increase the revenues in the other, we will have a synergy that can be exploited.

This situation is depicted in the figure below. The green curve gives the case with no synergy and the blue curve the case described above. The difference between them is the synergy created by the merger. The synergy at the dotted line is the synergy we can expect, but it might turn out to be higher if revenues are high, and even negative (dysergy) when revenues are low.

If we produce a frequency diagram of the sizes of the possible synergies, it will look like the diagram below. Keep in mind that the average synergy value is not the value we would expect to find, but the average over all possible synergy values.

    Case 2

If we assume that the “connection” between the companies is such that a reduction in one of the company's revenue streams will reduce total production costs, we again have a synergy that can be exploited.
    This situation is depicted in the figure below. The green curve gives the case with no synergy and the red curve the case described above. The difference between them is again the synergy created by the merger. The synergy at the dotted line is the synergy we can expect, but it might turn out to be higher if revenues are lower, and even negative (dysergy) when revenues are high.

In this case the merger acts as a hedge against revenue losses, at the cost of part of the upside created by the merger. This should not deter the participants from a merger, since there is only a 30% probability that this will happen.

The graph above again gives the frequency diagram for the sizes of the possible synergies. Keep in mind that the average synergy value is not the value we would expect to find, but the average over all possible synergy values.

    Conclusion

The elusiveness of synergies in many M&A cases can be explained by the natural randomness of business activities. The fact that a merger can give rise to large synergies does not guarantee that they will occur, only that there is a probability that they will occur. Spreadsheet exercises in valuation can lead to disaster if the stochastic nature of the companies involved is not taken into account. And basing the pricing of the M&A candidate on expected synergies is pure foolishness.

    References

    Bruner, Robert F. (2002), Does M&A Pay? A Survey of Evidence for the Decision-Maker. Journal of Applied Finance, Vol. 12, No. 1. Available at SSRN: http://ssrn.com/abstract=485884

    Orwell, George (1949). Nineteen Eighty-Four. A novel. London: Secker & Warburg.

Aristotle, Metaphysica (“The whole is more than the sum of its parts”).

    Sirower, M. (1997). The Synergy Trap: How Companies Lose the Acquisition Game. New York: The Free Press.