
Category: Balance sheet and P&L simulation

  • Working Capital Strategy


    This entry is part 1 of 3 in the series Working Capital

     

    Passion is inversely proportional to the amount of real information available. See Benford’s law of controversy.

    The annual “REL ((REL Consultancy. (2010). Wikipedia. Retrieved October 10, 2010, from http://en.wikipedia.org/wiki/REL_Consultancy)) /CFO Working Capital Survey” made its debut in 1997 in the CFO Magazine. The magazine identifies working capital management as one of the key issues facing financial executives in the 21st century (Filbeck, Krueger, & Preece, 2007).

    The 2010 Working Capital Scorecard (Katz, 2010) and its accompanying data ((http://www.cfo.com/media/201006/1006WCcompletev2.xls)) give us an opportunity to look at working capital management ((Data from 1,000 of the largest U.S. public companies)); that is, the effect of working capital management on the return on capital employed (ROCE):

    ROCE = EBIT / Capital employed, or

    ROCE = EBIT/(Operating fixed assets + net operating working capital)

    From the last formula we can see that – all else kept constant – a reduction in net operating working capital should imply an increased return on capital employed.
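
    To make the mechanism concrete, here is a minimal numeric sketch (with assumed, illustrative figures) of how a reduction in net operating working capital lifts ROCE when EBIT and operating fixed assets are held constant:

```python
# ROCE = EBIT / (operating fixed assets + net operating working capital).
# All figures below are assumed for illustration only.

def roce(ebit: float, fixed_assets: float, nowc: float) -> float:
    return ebit / (fixed_assets + nowc)

ebit, fixed_assets = 12.0, 80.0            # EUR million, assumed
for nowc in (30.0, 20.0, 10.0):            # shrinking net operating working capital
    print(f"NOWC {nowc:>4.0f}M -> ROCE {roce(ebit, fixed_assets, nowc):.1%}")
```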

    Gross and Net Operating Working Capital

    A firm’s gross working capital comprises its total current assets. One part consists of financial current assets held for reasons other than operations; the other part consists of receivables from operations and the inventory and cash necessary to run those operations. It is this last part that interests us.

    The firm’s operations will have been financed long term by equity from owners and by loans from lenders. Firms usually also have short-term financing from banks (short-term credit and overdraft facilities/credit lines) and almost always from suppliers through trade credit. The rest of the current liabilities – current tax and dividends – will not be considered part of operating current liabilities, since they comprise only non-recurring payments.

    Net working capital is defined as the difference between current assets and current liabilities (see figure below). It can be both positive and negative depending on the firm’s strategic position in the market.

    However, a positive net working capital is usually required to ensure that the firm is able to continue its operations and has sufficient funds to satisfy both maturing short-term debt and upcoming operational expenses. In the following we assume that any positive net working capital is held as cash and that all excess cash is held as marketable securities.

    By removing from both current assets and liabilities all items not directly related to and necessary for the operations, we arrive at net operating working capital as the difference between operating current assets and operating current liabilities:

    Net operating working capital = Operating current assets – Operating current liabilities

    Since the needed amount of working capital will differ between industries and depend on company size, it is easier to base comparisons on the cash conversion cycle.

    Working Capital Management

    Working capital management is the administration of current assets as well as current liabilities. It is the main part of a firm’s short-term financial planning since it involves the management of cash, inventory and accounts receivable. Therefore, working capital management will reflect the firm’s short-term financial performance.

    Current assets often account for more than half of a company’s total assets and hence represent a major investment, especially for small firms, as they cannot be avoided in the same way as investments in fixed assets can – by renting or leasing. A large inventory ties up capital, but it protects the company against lost sales or production stoppages due to stock-outs. A high level of current assets hence means less risk to the company but also lower earnings due to higher capital tie-up – the risk-return trade-off (Weston & Copeland, 1986).

    Since the needed amount of working capital will differ between industries and also depend on company size, it is easier to base comparisons of working capital management between companies and industries on their cash conversion cycle (CCC).

    The Cash Conversion Cycle

    The term “cash conversion cycle” (CCC) refers to the time span between a firm’s disbursing and collecting cash and will thus be ‘unrelated’ to the firm’s size, but be dependent on the firm’s type of business (see figure below).

    Companies that have high inventory turnover and do business on a cash basis usually have a low or negative CCC and hence need very little working capital.

    For companies that make investment products the situation is completely different. As these types of businesses sell expensive items on a long-term payment basis, they will tend to have a high CCC and must keep enough working capital on hand to get through any unforeseen difficulties.

    The CCC cannot be directly observed in the cash flows, because these are also influenced by investment and financing activities and must be derived from the firm’s balance sheet:

    + Inventory conversion period (DSI)
    + Receivables conversion period (DSO)
    –  Payable conversion period (DPO)
    = Cash Conversion Cycle (days)

    Where:

    DSI  = Days sales of inventory, DSO = Days sales outstanding,  DPO = Days payable outstanding, WIP = Work in progress, Period = Accounting period and COGS = Cost of goods sold ((COGS = Opening inventory + Purchase of goods – Closing inventory)) or:

    + Average inventory + WIP / [COGS / days in period]
    + Average Accounts Receivable / [Revenue / days in period]
    – Average Accounts Payable / [(Inventory increase + COGS) / days in period]
    = Cash Conversion Cycle (days)
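
    As a small illustration, the calculation above can be sketched in a few lines of code; the figures and the 365-day period are assumptions, not taken from the survey:

```python
# Cash conversion cycle (CCC) from balance-sheet averages, following the
# components listed above: DSI + DSO - DPO. All figures are assumed.

def ccc(avg_inventory_wip: float, avg_receivables: float, avg_payables: float,
        cogs: float, revenue: float, inventory_increase: float,
        days_in_period: int = 365) -> float:
    dsi = avg_inventory_wip / (cogs / days_in_period)                    # inventory conversion period
    dso = avg_receivables / (revenue / days_in_period)                   # receivables conversion period
    dpo = avg_payables / ((inventory_increase + cogs) / days_in_period)  # payables conversion period
    return dsi + dso - dpo

days = ccc(avg_inventory_wip=8.0, avg_receivables=12.0, avg_payables=6.0,
           cogs=60.0, revenue=100.0, inventory_increase=1.0)
print(f"CCC: {days:.1f} days")
```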

    The Observations

    Even if not all of the working capital is determined by the cash conversion cycle, there should be a tendency towards higher return on operating capital with lower CCC. However, the data from the annual survey (Katz, 2010) does not support this ((Data used with permission from REL/CFO. Twenty of the one thousand observations have been removed as outliers, to give a better picture of the relation)):

    The scatter graph shows no direct relation between return on operating capital and the cash conversion cycle. A closer inspection of the data for the survey’s different industries confirms this.

    Since the total amount of capital invested in the CCC is:

    Cap(CCC) = CCC × Sales × (1 + VAT) / days per period

    and is thus a function of sales. Company size will therefore certainly play a role when we only look at the yearly data. The survey, however, also gives the change from 2008 to 2009 for all the companies, so we are able to remove the size effect by looking at the change (%) in ROCE against the change (%) in CCC:

    The graph still shows no obvious relation between change (%) in CCC and change (%) in ROCE. Now, we know that the shorter this cycle, the fewer resources the company needs to lock up; reduced debtor levels (DSO), decreased inventory levels (DSI) and/or increased creditor levels (DPO) must have an effect on the ROCE – but will it be lost in the clutter of all the other effects of company operations on the ROCE?

    Cash Management

    Net operating working capital is the cash plus cash equivalents needed to pay for the day-to-day operation of the business. This includes demand deposits, money market accounts, currency holdings and highly liquid short-term investments such as marketable securities ((Marketable securities with a maturity of less than three months are referred to as ‘cash equivalents’ on the balance sheet, those with a longer maturity as ‘short-term investments’)); portfolios of highly liquid, near-cash assets which serve as a backup to the cash account.

    There are many reasons why holding cash is important: to act as a buffer when daily cash inflows do not match cash outflows (transaction motive), as a safety stock against forecast errors and unforeseen expenses (precautionary motive), or to be able to react immediately when opportunities arise (speculative motive). If the cash level is too low and unexpected outflows occur, the firm will have to either borrow funds or, in the case of an investment, forgo the opportunity.

    Such short-term borrowing of funds can be costly, as can a lost opportunity through the returns forgone on rejected investments. Holding cash, however, also induces opportunity costs due to loss of interest.

    Cash management therefore aims at optimizing cash availability and interest income on any idle funds. Cash budgeting – as a part of the firm’s short-term planning – constitutes the starting point for all cash management activities, as it represents the forecast of cash in- and outflows and therefore reflects the firm’s expected availability of and need for cash.

    Working Capital Strategy

    We will in the following look closer at working capital management using balance simulation ((In the Monte Carlo simulation we have used 200 runs, as that was sufficient to give a good enough approximation of the distributions)). The data is from a company with large fixed assets in infrastructure. The demand for its services is highly seasonal, as schematically depicted in the figure below:

    A company like this will need a flexible working capital strategy with a low level of working capital in the off-seasons and high levels in the high seasons. As the company wants to maximize its equity value it is looking for working capital strategies that can do just that.

    The company has been working on its cash conversion cycle, and has succeeded, with an average (across seasons) of only 11.1 days (standard deviation 0.2 days) for turning supplied goods and services into cash:

    All the same, a substantial amount of the company’s resources, on average €4.1M (standard deviation €1.8M), is invested in the cash conversion cycle:

    In addition the company needs a fair amount of cash to meet its other obligations. Its first strategy was to keep cash instead of using short term financing in the high seasons. In the off-seasons this strategy gives a large portfolio of marketable securities – giving a low return and thereby a low contribution to the ROCE.  This strategy can be described as being close to the red line in the seasonal graph above.

    When we now plot the two hundred observed (simulated) values of working capital and the corresponding ROE (from now on we use return on equity (ROE), since this is of more interest to the owners), we get a picture as below:

    This lax strategy shows little relation between the amount of working capital and the ROE – from just looking at the graph it would be easy to conclude that working capital management is a waste of time and effort.

    Now we turn to a stricter strategy: keeping a low level of cash through all seasons, using short-term financing in the high seasons and always keeping cash closely tied to expected sales. Again plotting the two hundred observed values, we get the graph below:

    From this graph we can clearly see that if we can reduce the working capital we will increase the ROE – even if we live in a stochastic environment. By removing some of the randomness in the amount of working capital by keeping it close to what is absolutely needed – we get a much clearer picture of the effect. This strategy is best described as being close to the green line in the seasonal graph.

    Since we use pseudo-random ((Pseudo random number generator (PRNG), also known as a deterministic random bit generator, is an algorithm for generating a sequence of numbers that approximates the properties of random numbers. The sequence is not truly random in that it is completely determined by a set of initial values, called the seed number)) simulation we have replicated the first simulation (blue line), for the stricter strategy (green line).
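
    The idea of replaying the same pseudo-random draws for both strategies is the technique usually called common random numbers. The toy sketch below is not the S@R balance model, only an illustration of how fixing the seed makes two strategies face the same simulated events; all figures and the simple cash rules are assumed:

```python
# Common random numbers: the same seed reproduces the same "events" (here, a
# crude sales path), so differences between strategies are not sampling noise.
import numpy as np

def simulate_working_capital(strategy: str, seed: int = 42, runs: int = 200) -> np.ndarray:
    rng = np.random.default_rng(seed)          # same seed -> same simulated events
    sales = rng.normal(100.0, 15.0, size=runs) # assumed seasonal sales draws
    receivables = 0.10 * sales                 # assumed operating tie-up
    if strategy == "lax":
        cash = np.full(runs, 20.0)             # large fixed cash buffer all year
    else:                                      # "strict": cash tied to expected sales
        cash = 0.05 * sales
    return cash + receivables                  # toy working capital measure

lax = simulate_working_capital("lax")
strict = simulate_working_capital("strict")
print(f"lax:    mean {lax.mean():5.1f}, sd {lax.std():4.1f}")
print(f"strict: mean {strict.mean():5.1f}, sd {strict.std():4.1f}")
```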

    This means that the same events happened under both strategies: changes in sales, prices, costs, interest and exchange rates, etc. The effects on the amount of working capital are shown in the graph below:

    The lax strategy (blue line) will have an average working capital of €4.8M with a standard deviation of €3.0M, while the strict strategy (green line) will have an average working capital of €1.4M with a standard deviation of €3.3M.

    Even if the stricter strategy seems to associate lower amounts of working capital with higher return on equity (see figure), and the amount of working capital is always lower than under the laxer strategy, we have not yet established that it is a better strategy.

    To do this we need to simulate the strategies over a number of years and compare the differences in equity value under the two strategies. Doing this we get the probability distribution for difference in equity value as shown below:

    The expected value of the strict strategy over the lax strategy is €3.4M with a standard deviation of €6.1M. The distribution is skewed to the right, so there is also a possible additional upside. From this we can conclude that the stricter strategy is stochastically dominant over the laxer strategy. However, there might be other strategies that prove to be even better.

    This brings us to the question: does an optimal working capital strategy exist? What we do know is that there will be strategies that are stochastically dominant, but proving one to be optimal might be difficult. Given the uncertainty in any firm’s future operations, you will probably first have to establish a set of strategies that can be applied depending on the set of events that the firm may experience.

    References

    Filbeck, G., Krueger, T., & Preece, D. (2007). CFO Magazine’s “Working Capital Survey”: Do selected firms work for shareholders? Quarterly Journal of Business and Economics, (March). Retrieved from http://www.allbusiness.com/company-activities-management/financial/5846250-1.html

    Katz, D. M. (2010). Working it out: The 2010 Working Capital Scorecard. CFO Magazine, (June). Retrieved from http://www.cfo.com/article.cfm/14499542

    Weston, J. & Copeland, T. (1986). Managerial finance, Eighth Edition, Hinsdale, The Dryden Press


  • Stochastic Balance Simulation


    This entry is part 1 of 6 in the series Balance simulation

    Introduction

    Most companies have some sort of model describing the company’s operations. They are mostly used for budgeting, but in some cases also for forecasting cash flow and other important performance measures. Almost all are deterministic models based on single-value forecasts: the expected or average value of the input data – sales, costs, interest and currency rates, etc. We know, however, that forecasts based on average values are on average wrong (Savage, 2002). In addition, deterministic models will miss the important dimension of uncertainty – the dimension that gives both the different risks facing the company and the opportunities they produce.

    In contrast, a stochastic model is calculated a large number of times with the values of the input variables drawn from their individual probability distributions. Each run will then give a probable realization of future cash flow, of the company’s equity value, etc. With thousands of runs we can plot the relative frequencies of the calculated values:

    and thus, we have succeeded in generating the probability distribution for the company’s equity value. In insurance this type of technique is often called Dynamic Financial Analysis (DFA) which actually is a fitting name.
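
    A minimal sketch of the mechanics may help: a deterministic valuation is wrapped in a loop, the input variables are drawn from their distributions, and the resulting values form the distribution. The perpetuity ‘model’ and all distributions below are stand-ins, not the S@R balance simulation:

```python
# Monte Carlo over a toy valuation model: draw inputs, value the company,
# collect the equity values into a relative-frequency distribution.
import numpy as np

def equity_value(sales: np.ndarray, margin: np.ndarray, wacc: np.ndarray,
                 growth: float = 0.02) -> np.ndarray:
    fcf = sales * margin                          # stand-in free cash flow
    return fcf * (1 + growth) / (wacc - growth)   # growing perpetuity, assumed

rng = np.random.default_rng(1)
runs = 10_000
values = equity_value(sales=rng.normal(100.0, 15.0, runs),
                      margin=rng.normal(0.12, 0.03, runs),
                      wacc=rng.uniform(0.07, 0.10, runs))

low, mid, high = np.percentile(values, [5, 50, 95])
print(f"median {mid:.0f}, 5%-95% range {low:.0f} to {high:.0f}")
```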

    The Balance Simulation Model

    The main tool in the S@R toolbox is the balance model. The starting point is the company’s balance sheet, which is treated as the simulation’s opening balance. In the case of a greenfield project – new factories, power plants, airports, etc. built from scratch – the opening balance is empty.

    The successive balances are then built from the Profit & Loss, by simulating the company’s operations through an EBITDA model mimicking the real-life operations. Investments can be driven by demand (capacity calculations) or by investment programs giving the necessary or planned production capacity. Throughout the simulation the model will raise debt (short and/or long term) or equity (domestic or foreign) according to the financial strategy set out by the company and the difference between cash outflow and inflow, adjusted for the minimum cash level.

    Since this is a dynamic model, it will raise equity when losses occur and/or the maximum debt/equity ratio has been exceeded. On the other hand, it will repay loans, pay dividends, repurchase shares or purchase marketable securities with excess cash (cash above the need of the operations) – all in line with the board’s shareholder strategy.

    The Ledger and Double-entry Bookkeeping

    The activity described in the EBITDA model – investments, purchase of raw materials, production, payment of wages, income from sales, payment of special taxes on investments, etc. – is registered as transactions in the ledger, following a standard chart of accounts with double-entry bookkeeping. In a similar fashion, all financial transactions – loan repayments, cash movements, taxes paid and deferred, agio and disagio, etc. – are posted in the ledger. Currently, approximately 400 accounts are in use.

    The Trial Balance and the Financial Statements

    The trial balance (Post-Closing) is compiled and checked for balance between total debts and total credits. The income statement is then prepared using revenue and expense accounts from the trial balance and the balance sheet is prepared from the asset and liability accounts by including net income with the other equity accounts – using the International Financial Reporting Standards (IFRS).

    The general purpose of producing the trial balance is to ensure that the entries in the ledger are mathematically correct. Keep in mind that every run in a simulation will produce a number of entries in the ledger and that they might differ not only in size but also in type, depending on the realized states of the company’s operations (see above). We therefore need to be sure that the final financial statements – for every run – are correctly produced, since they will be the basis for all further financial analysis of the company.
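
    The check itself is simple in principle; the sketch below (with a hypothetical, tiny chart of accounts – not the roughly 400 accounts actually used) shows the double-entry invariant that the trial balance verifies for every run:

```python
# Every transaction posts equal debits and credits; the trial balance then
# checks that total debits equal total credits before statements are prepared.
from collections import defaultdict

ledger = defaultdict(lambda: {"debit": 0.0, "credit": 0.0})

def post(debit_account: str, credit_account: str, amount: float) -> None:
    ledger[debit_account]["debit"] += amount
    ledger[credit_account]["credit"] += amount

post("Inventory", "Accounts payable", 500.0)   # purchase of raw materials
post("Accounts receivable", "Sales", 800.0)    # sale on credit
post("Cash", "Accounts receivable", 300.0)     # customer payment

total_debits = sum(a["debit"] for a in ledger.values())
total_credits = sum(a["credit"] for a in ledger.values())
assert abs(total_debits - total_credits) < 1e-9, "trial balance out of balance"
print(f"debits {total_debits:.2f} = credits {total_credits:.2f}")
```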

    There are of course other sources of errors in bookkeeping – compensating errors, errors of omission, errors of principle, etc. – but after many years of use, with millions of runs, we feel confident that the ledger and financial statements are produced correctly. The point is that serious problems need serious models.

    However, there are more benefits to be had from simulating the ledger and trial balance:

    1. It increases the model’s transparency; the trial balance can be printed out and audited. Together with the model’s extensive reporting and error/consistency control, it is no longer a ‘black box’ to the user.
    2. It makes it easy to plug in new EBITDA models for other types of industry, giving an automated check for consistency with the main balance simulation model.
    3. It is used to ensure correct solving of all implicit equations in the model. The most obvious is of course the interest and bank balance equation (interest depends on the bank balance and the bank balance depends on the interest), but others, like translation hedging and limits set by the company’s financial strategy, create large and complicated systems of simultaneous equations (see the sketch after this list).
    4. The trial balance changes from year to year are also used to ensure correct year-to-year balance transitions.
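
    As a small illustration of point 3, the interest/bank balance circularity can be resolved by fixed-point iteration. The sketch below is ours, not the model’s actual solver; the figures and the interest-on-average-balance convention are assumptions:

```python
# Interest depends on the (average) debt balance, and the closing balance
# depends on the interest charged; iterate until both are consistent.

def solve_interest(opening_debt: float, net_borrowing: float, rate: float,
                   tol: float = 1e-9):
    interest = 0.0
    for _ in range(100):
        closing_debt = opening_debt + net_borrowing + interest   # unpaid interest adds to debt
        new_interest = rate * (opening_debt + closing_debt) / 2  # interest on average balance
        if abs(new_interest - interest) < tol:
            return new_interest, opening_debt + net_borrowing + new_interest
        interest = new_interest
    return interest, closing_debt

interest, closing = solve_interest(opening_debt=1_000.0, net_borrowing=200.0, rate=0.06)
print(f"interest {interest:.2f}, closing debt {closing:.2f}")
```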

    Financial Analysis, Financial Measures and Valuation

    Given the framework described above financial analysis can be performed and the expected value, variability and probability distributions for the different types of ratios; profitability, liquidity, activity, debt and equity etc. can be calculated and given as graphs. All important measures are calculated at least twice from different starting points to ensure consistency and correct solving of implicit equations.

    The following table shows the reconciliation of Economic Profit, initially calculated as (ROIC – WACC) multiplied by invested capital:

    The motivation for doing all these consistency controls – in all nearly one hundred – lies in previous experience with cash flow/valuation models written in Excel. Their level of detail is more often than not so low that there is no way to establish whether they are right or wrong.
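
    A trivial example of such a reconciliation check, with assumed figures: Economic Profit computed from the ROIC spread must equal Economic Profit computed as NOPLAT less a capital charge, in every simulated year:

```python
# Economic Profit two ways: (ROIC - WACC) x invested capital, and
# NOPLAT - WACC x invested capital. The two must reconcile exactly.
invested_capital = 500.0   # assumed
noplat = 60.0              # assumed
wacc = 0.08                # assumed

roic = noplat / invested_capital
ep_from_spread = (roic - wacc) * invested_capital
ep_from_capital_charge = noplat - wacc * invested_capital

assert abs(ep_from_spread - ep_from_capital_charge) < 1e-9   # consistency control
print(f"Economic profit: {ep_from_spread:.1f}")
```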

    More interesting than ratios are the yearly distributions for EBITDA, EBIT, NOPLAT, profit (loss) for the period, free cash flow, economic profit, ROIC, Wacc, debt, equity and equity value, etc., giving a visual picture of the uncertainties and risks the company faces:

    Financial analysis is the conversion of financial data into useful information for decision making. Therefore, virtually any use of financial statements or other financial data for some purpose is financial analysis, and it is the primary focus of accounting and finance. Financial analysis can be internal (e.g., decision analysis by a company using internal data to understand or improve management and operating results) or external (e.g., comprehensive analysis for the purposes of commercial lending, mergers and acquisitions or investment activities). The key is how to analyze the available data to make correct decisions.

     

    Input

    As input the model needs parameter values and operational data. The parameter values fall into several groups:

    1. Parameters describing investors preferences; Market risk premium etc.
    2. Parameters describing the company’s financial strategy; Leverage, Long/Short-term Debt ratio, Expected Foreign/ Domestic Debt Ratio, Economic Depreciation, Maximum Dividend Pay-out Ratio, Translation Hedging Strategy etc.
    3. Parameters describing the economic regime under which it operates: Taxes, Depreciation Scheme etc.
    4. Opening Balance etc.

    Since the model has to produce stochastic forecasts of interest and exchange rates, it will need, for every currency involved (including lower and upper 5% probability limits):

    1. The Yield curves,
    2. Expected yearly inflation
    3. Depending on the forecast method(s) chosen for the exchange rates: the different currencies’ expected risk premiums or real exchange rates, etc.

    Since there is a large number of parameters, they are usually read from an Excel template, but the program will, if necessary, ask for missing parameter values or report inconsistent ones.

    The company’s operations are best described through an EBITDA model, even if prices, costs and production coefficients and their variability can be read from an Excel template. A dedicated EBITDA model will always give the opportunity for a more detailed, and in some cases more complex, description of the operations, including forecast and demand models, ‘exotic’ taxes, real options strategies, etc.

    Output

    S@R has set out to create models that can give answers to both deterministic and stochastic questions: the tables will answer most deterministic issues, while graphs must be used to answer the risk- and uncertainty-related questions:

    [TABLE=6]

    1. In all, 27 different reports with more than 70 pages describing operations and the economics of operations.
    2. In addition, the probability distributions for all input and output variables are produced.

    Use

    S@R links dedicated EBITDA models to holistic balance simulation, taking into account all important factors describing the company. The basis is a real balance simulation model – not a simple cash flow forecast model.

    Both the deterministic and stochastic balance simulation can be set up in two different ways:
    1. by using an EBITDA model to describe the company’s operations, or
    2. by using coefficients of fabrication (e.g. kg of flour per 1,000 loaves of bread) as direct input to the balance model.

    The first approach implies setting up a dedicated EBITDA subroutine to the balance model. This will give detailed answers to a broad range of questions about operational performance and uncertainty, but entails a higher degree of effort from both the company and S@R.

    The use of coefficients of fabrication and their variations is a low-effort (cost) alternative, using the internal accounting as a basis. This will in many cases give a ‘good enough’ description of the company – its risks and opportunities. The data needed for the company’s economic environment (taxes, interest rates, etc.) will be the same in both alternatives.

    In some cases we have used both approaches for the same client, using the last approach for smaller daughter companies with production structures differing from the main companies.
    The second approach can also be considered as an introduction and stepping stone to a more holistic EBITDA model.
    What problems do we solve?

    • The aim, regardless of approach, is to quantify not only the company’s single and aggregated risks, but also its potential, thus making the company capable of performing detailed planning and of executing earlier and more apt actions against risk factors.
    • This will improve the stability of budgets through higher insight into cost-side risks and income-side potentials. This is achieved by an active budget-forecast process; the control-adjustment cycle will teach the company to better target realistic budgets – with better stability and increased company value as a result.
    • Experience shows that the mere act of quantifying uncertainty throughout the company – and, through modeling, describing the interactions and their effects on profit – in itself over time reduces total risk and increases profitability.
    • This is most clearly seen when effort is put into correctly evaluating the effects of strategies, projects and investments on the enterprise. The best way to do this is by comparing and choosing strategies by analyzing the individual strategies’ risks and potential – and selecting the alternative that is (stochastically) dominant given the company’s chosen risk profile.
    • Our aim is therefore to transform enterprise risk management from only safeguarding enterprise value to contributing to the increase and maximization of the firm’s value within the firm’s feasible set of possibilities.

    Strategy@Risk takes advantage of a programming language developed and used for financial risk simulation. We have used this language for over 25 years, and have developed a series of simulation models for industry, banks and financial institutions.

    One of the language’s strengths is its ability to solve implicit equations in multiple dimensions. For the specific problems we seek to solve this is a necessity, and it provides the degrees of freedom needed to formulate our approach to those problems.

    The Strategy@Risk tools have highly advanced properties:

    • Using models written in dedicated financial simulation language (with code and data separated; see The risk of spreadsheet errors).
    • Solving implicit systems of equations giving unique WACC calculated for every period ensuring that “Free Cash Flow” always equals “Economic Profit” value.
    • Programs and models in “windows end-user” style.
    • Extended test for consistency in input, calculations and results.
    • Transparent reporting of assumptions and results.

    References

    Savage, Sam L. “The Flaw of Averages”, Harvard Business Review, November 2002, pp. 20-21

    Mukherjee, Mukherjee (2003). Financial Accounting. New York: Harper Perennial, ISBN 9780070581555.

  • The Case of Enterprise Risk Management


    This entry is part 2 of 4 in the series A short presentation of S@R

     

    The underlying premise of enterprise risk management is that every entity exists to provide value for its stakeholders. All entities face uncertainty and the challenge for management is to determine how much uncertainty to accept as it strives to grow stakeholder value. Uncertainty presents both risk and opportunity, with the potential to erode or enhance value. Enterprise risk management enables management to effectively deal with uncertainty and associated risk and opportunity, enhancing the capacity to build value. (COSO, 2004)

    The evils of a single point estimate

    Enterprise risk management is a process, effected by an entity’s board of directors, management and other personnel, applied in strategy setting and across the enterprise, designed to identify potential events that may affect the entity, and manage risk to be within its risk appetite, to provide reasonable assurance regarding the achievement of entity objectives. (COSO, 2004)

    Traditionally, when estimating costs, project value or equity value, or when budgeting, one number is generated – a single point estimate. There are many problems with this approach. In budget work this point is too often given as the best the management can expect, but in some cases budgets are set artificially low, generating bonuses for later performance beyond budget. The following graph depicts the first case.

    [Figure: Budget_actual_expected]

    Here we have simulated the probability distribution for next year’s EBITDA, based on the production and market structure and on management’s assumptions about the variability of all relevant input and output variables. The graph gives the budgeted value, the actual result and the expected value. Both budget and actual value are above the expected value, but the budgeted value was far too high, giving a realized EBITDA lower than budget with more than 80% probability. In this case the board will be misled with regard to the company’s ability to earn money, and all subsequent decisions based on the budgeted EBITDA can endanger the company.

    The organization’s ERM system should function to bring to the board’s attention the most significant risks affecting entity objectives and allow the board to understand and evaluate how these risks may be correlated, the manner in which they may affect the enterprise, and management’s mitigation or response strategies. (COSO, 2009)

    It would have been much more preferable for the board to be given both the budget value and the accompanying probability distribution, allowing it to make an independent judgment about the possible size of next year’s EBITDA. Only then will the board – from the shape of the distribution, its location and the point estimate of the budgeted EBITDA – be able to assess the risk and opportunity facing the company.

    Will point estimates cancel out errors?

    In the following we measure the deviation of the actual result from both the budget value and the expected value. The blue dots represent daughter companies located in different countries. For each company we have the deviation (in percent) of the budgeted EBITDA (bottom axis) and the expected value (left axis) from the actual EBITDA observed 1½ years later.

    If the deviation for a company falls in the upper right quadrant, the deviations are positive for both budget and expected value – and the company is overachieving.

    If the deviation falls in the lower left quadrant, the deviations are negative for both budget and expected value – and the company is underachieving.

    If the deviation falls in the upper left quadrant, the deviation is negative for the budget and positive for the expected value – the company is overachieving but has had too high a budget.

    With left-skewed EBITDA distributions there should not be any observations in the lower right quadrant; that will only happen when the distribution is skewed to the right – and then there will not be any observations in the upper left quadrant.

    The graph below shows that two companies have seriously underperformed and that the budget process did not catch the risk they were facing. The rest of the companies have done very well, though some have seriously underestimated the opportunities manifested by the actual result. From an economic point of view, the mother company would of course have preferred all companies (blue dots) to lie above the x-axis, but due to the stochastic nature of the EBITDA it has to accept that some always will fall below. Risk-wise, it would have preferred the companies to fall to the right of the y-axis, but due to budget uncertainties it has to accept that some always will fall to the left. However, large deviations both below the x-axis and to the left of the y-axis add to the company’s risk.

    [Figure: Budget_actual_expected#1]

    A situation like the one given in the graph below is much to be preferred from the board’s point of view.

    [Figure: Budget_actual_expected#2]

    The graphs above, taken from real life, show that budgeting errors will not cancel out, even across similar daughter companies. Consolidating the companies will give the mother company a left-skewed EBITDA distribution. They also show that you need to be prepared for deviations both positive and negative – you need a plan. So how do you get a plan? You make a simulation model! (See Pdf: Short-presentation-of-S@R#2)

    Simulation

    The Latin verb simulare means “to make like”, “to create an exact representation” or imitate. The purpose of a simulation model is to imitate the company and its environment, so that its functioning can be studied. The model can be a test bed for assumptions and decisions about the company. By creating a representation of the company, a modeler can perform experiments that are impossible or prohibitively expensive in the real world. (Sterman, 1991)

    There are many different simulation techniques, including stochastic modeling, system dynamics, discrete simulation, etc. Despite the differences among them, all simulation techniques share a common approach to modeling.

    Key issues in simulation include acquisition of valid source information about the company, selection of key characteristics and behaviors, the use of simplifying approximations and assumptions within the simulation, and fidelity and validity of the simulation outcomes.

    Optimization models are prescriptive, but simulation models are descriptive. A simulation model does not calculate what should be done to reach a particular goal, but clarifies what could happen in a given situation. The purpose of simulations may be foresight (predicting how systems might behave in the future under assumed conditions) or policy design (designing new decision-making strategies or organizational structures and evaluating their effects on the behavior of the system). In other words, simulation models are “what if” tools. Often such “what if” information is more important than knowledge of the optimal decision.

    However, even with simulation models it is possible to mismanage risk by (Stulz, 2009):

    • Over-reliance on historical data
    • Using too narrow risk metrics, such as value at risk – probably the single most important measure in financial services – which have underestimated risks
    • Overlooking knowable risks
    • Overlooking concealed risks
    • Failure to communicate effectively – failing to appreciate the complexity of the risks being managed.
    • Not managing risks in real time; you have to be able to monitor changing markets and respond appropriately – you need a plan

    Being fully aware of the possible pitfalls, we have methods and techniques that can overcome these issues, and since we estimate the full probability distributions we can deploy a number of risk metrics, not having to rely on simple measures like value at risk – which we actually never use.

    References

    COSO, (2004, September). Enterprise risk management — integrated framework. Retrieved from http://www.coso.org/documents/COSO_ERM_ExecutiveSummary.pdf

    COSO, (2009, October). Strengthening enterprise risk management for strategic advantage. Retrieved from http://www.coso.org/documents/COSO_09_board_position_final102309PRINTandWEBFINAL_000.pdf

    Sterman, J. D. (1991). A Skeptic’s Guide to Computer Models. In Barney, G. O. et al. (eds.),
    Managing a Nation: The Microcomputer Software Catalog. Boulder, CO: Westview Press, 209-229.

    Stulz, R.M. (2009, March). Six ways companies mismanage risk. Harvard Business Review (The Magazine), Retrieved from http://hbr.org/2009/03/six-ways-companies-mismanage-risk/ar/1


  • A short presentation of S@R


    This entry is part 1 of 4 in the series A short presentation of S@R

     

    My general view would be that you should not take your intuitions at face value; overconfidence is a powerful source of illusions. Daniel Kahneman (“Strategic decisions: when,” 2010)

    Most companies have some sort of model describing the company’s operations. They are mostly used for budgeting, but in some cases also for forecasting cash flow and other important performance measures. Almost all are deterministic models based on expected or average values of input data – sales, costs, interest and currency rates, etc. We know, however, that forecasts based on average values are on average wrong. In addition, deterministic models will miss the important uncertainty dimension that gives both the different risks facing the company and the opportunities they produce.

    S@R has set out to create models (See Pdf: Short presentation of S@R) that can give answers to both deterministic and stochastic questions, by linking dedicated EBITDA models to holistic balance simulation taking into account all important factors describing the company. The basis is a real balance simulation model – not a simple cash flow forecast model.

    [Figure: Generic simulation model]

    Both the deterministic and stochastic balance simulation can be set about in two different alternatives:

    1. by using an EBITDA model to describe the company’s operations, or
    2. by using coefficients of fabrication as direct input to the balance model.

    The first approach implies setting up a dedicated EBITDA subroutine to the balance model. This will give detailed answers to a broad range of questions about operational performance and uncertainty, but entails a higher degree of effort from both the company and S@R.

    The use of coefficients of fabrication and their variations is a low-effort (cost) alternative, using the internal accounting as a basis. This will in many cases give a ‘good enough’ description of the company – its risks and opportunities. The data needed for the company’s economic environment (taxes, interest rates, etc.) will be the same in both alternatives.

    [Figure: EBITDA model]

    In some cases we have used both approaches for the same client, using the last approach for smaller daughter companies with production structures differing from the main companies.
    The second approach can also be considered as an introduction and stepping stone to a more holistic EBITDA model.

    What problems do we solve?

    • The aim, regardless of approach, is to quantify not only the company’s single and aggregated risks, but also its potential, thus making the company capable of performing detailed planning and of executing earlier and more apt actions against risk factors.
    • This will improve the stability of budgets through higher insight into cost-side risks and income-side potentials. This is achieved by an active budget-forecast process; the control-adjustment cycle will teach the company to better target realistic budgets – with better stability and increased company value as a result.
    • Experience shows that the mere act of quantifying uncertainty throughout the company – and, through modelling, describing the interactions and their effects on profit – in itself over time reduces total risk and increases profitability.
    • This is most clearly seen when effort is put into correctly evaluating the effects of strategies, projects and investments on the enterprise. The best way to do this is by comparing and choosing strategies by analysing the individual strategies’ risks and potential – and selecting the alternative that is (stochastically) dominant given the company’s chosen risk-profile.
    • Our aim is therefore to transform enterprise risk management from only safeguarding enterprise value to contributing to the increase and maximization of the firm’s value within the firm’s feasible set of possibilities.

    References

    Strategic decisions: when can you trust your gut?. (2010). McKinsey Quarterly, (March)

  • The Value of Information


    This entry is part 4 of 4 in the series A short presentation of S@R

     

    Enterprise risk management (ERM) only has value to those who know that the future is uncertain

    Businesses have three key needs:

    First, they need to have a product or service that people will buy. They need revenues.

    Second, they need to have the ability to provide that product or service at a cost less than what their customers will pay. They need profits. Once they have revenues and profits, their business is a valuable asset.

    So third, they need to have a system to avoid losing that asset because of unforeseen adverse experience. They need risk management.

    The top CFO concern is the firm’s ability to forecast results and the first stepping-stone in the process of forecasting results is to forecast demand – and this is where ERM starts.

    The main risk any firm faces is the variability (uncertainty) of demand. Since all production activities – procurement of raw materials, sizing of the work force, investment in machinery, etc. – are based on expected demand, the task of forecasting future demand is crucial. It is of course difficult, and in most cases not possible, to forecast demand perfectly, but it is always possible to make forecasts that give better results than mere educated guesses.

    We will attempt in the following to show the value of making good forecasts by estimating the daily probability distribution for demand. We will do this using a very simple model, assuming that:

    1. daily demand is normally distributed with expected sales of 100 units and a standard deviation of 12 units,
    2. the product cannot be stocked, and
    3. it sells at $4 per unit, has a variable production cost of $2 per unit and a fixed production cost of $50.

    Now we need to forecast the daily sales. If we had perfect information about the demand, we would have a probability distribution for daily profit as given by the red histogram and line in the graphs below.

    • One form of forecast (average) is the educated guess using the average daily sales (blue histogram). As we can see from the graphs, this forecast method gives a large downside (when production is too high) and no upside (when production is too low).
    • A better method (limited information) would have been to forecast demand by its relation to some other observable variable. Let us assume that we have a forecast method that gives us a near-perfect forecast in 50% of the cases, and for the rest a forecast that is normally distributed with the same expected value as demand, but with a standard deviation of six units (green histogram).
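
    Under the stated assumptions, the comparison can be sketched as a small Monte Carlo experiment. Note that the ‘limited information’ forecast is interpreted here as a perfect forecast on half of the days and a draw from a normal distribution with mean 100 and standard deviation 6 on the rest; that reading, and the code itself, are ours:

```python
# Daily profit under three forecast rules: perfect information, produce-to-
# average, and the partial-information forecast described above.
# Demand ~ N(100, 12); price $4, variable cost $2, fixed cost $50; no stocking.
import numpy as np

rng = np.random.default_rng(7)
days = 100_000
demand = rng.normal(100.0, 12.0, days)

def profit(production: np.ndarray) -> np.ndarray:
    units_sold = np.minimum(demand, production)   # unsold units are lost
    return 4.0 * units_sold - 2.0 * production - 50.0

perfect = profit(demand)                                   # produce exactly what is demanded
average = profit(np.full(days, 100.0))                     # produce to average demand
forecast = np.where(rng.random(days) < 0.5, demand, rng.normal(100.0, 6.0, days))
limited = profit(forecast)                                 # partial-information forecast

for name, p in (("perfect", perfect), ("average", average), ("limited", limited)):
    print(f"{name:>8}: mean daily profit {p.mean():6.1f}")
```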

    [Figure: Profit histogram]

    With the knowledge we have from (selecting strategy) we clearly see that the last forecast strategy is stochastically dominant over the use of average demand as forecast.

    [Figure: Profit]

    So, what is the value to the company of more informed forecasts than the mere use of expected sales? The graph below gives the distribution of the differences in profit (in percent) between the two methods. Over time, the second method will on average give an 8% higher profit than just using the average demand as forecast.

    [Figure: Difference in profit]

    However, there is still another seven to eight percent of room for further improvement in the forecasting procedure.

    If the company could be reasonably sure that a better forecast model than using the average exists, it would be a good strategy to put money into improving its forecasts. In fact, it could use up to 8% of all future profit if it knew that a method as good as or better than our second method existed.

  • WACC, Uncertainty and Infrastructure Regulation


    This entry is part 2 of 2 in the series The Weighted Average Cost of Capital

     

    There is a growing consensus that the successful development of infrastructure – electricity, natural gas, telecommunications, water, and transportation – depends in no small part on the adoption of appropriate public policies and the effective implementation of these policies. Central to these policies is development of a regulatory apparatus that provides stability, protects consumers from the abuse of market power, guards consumers and operators against political opportunism, and provides incentives for service providers to operate efficiently and make the needed investments (Jamison & Berg, 2008, Overview).

    There are four primary approaches to regulating the overall price level – rate of return regulation (or cost of service), price cap regulation, revenue cap regulation, and benchmarking (or yardstick) regulation. Rate of return regulation adjusts overall price levels according to the operator’s accounting costs and cost of capital. In most cases, the regulator reviews the operator’s overall price level in response to a claim by the operator that the rate of return it is receiving is less than its cost of capital, or in response to a suspicion of the regulator or a claim by a consumer group that the actual rate of return is greater than the cost of capital (Jamison & Berg, 2008, Price Level Regulation).

    We will in the following look at cost of service models (cost-based pricing); however, some of the reasoning will also apply to the other approaches. A number of different models exist:

    •    Long Run Average Total Cost – LRATC
    •    Long Run Incremental Cost – LRIC
    •    Long Run Marginal cost – LRMC
    •    Forward Looking Long Run Average Incremental Costs – FL-LRAIC
    •    Long Run Average Interconnection Costs – LRAIC
    •    Total Element Long Run Incremental Cost – TELRIC
    •    Total Service Long Run Incremental Cost – TSLRIC
    •    Etc.

    Where:
    Long run: The period over which all factors of production, including capital, are variable.
    Long Run Incremental Costs: The incremental costs that would arise in the long run with a defined increment to demand.
    Marginal cost: The increase in the forward-looking cost of a firm caused by an increase in its output of one unit.
    Long Run Average Interconnection Costs: The term used by the European Commission to describe LRIC with the increment defined as the total service.

    We will not discuss the merits and use of the individual methods, only direct attention to the fact that an essential ingredient in all of them is their treatment of capital and the calculation of the cost of capital – Wacc.

    Calculating Wacc in a World without Uncertainty

    Calculating Wacc for the current year is a straightforward task: we know for certain the interest rates (risk-free rate and credit risk premium) and tax rates, the budget values for debt and equity, the market premium and the company’s beta, etc.

    There is however a small snag: should we use the book value of equity, or should we calculate the market value of equity and use this in the Wacc calculations? The latter approach is the recommended one (Copeland, Koller, & Murrin, 1994, pp. 248-250), but it implies a company valuation with calculation of Wacc for every year in the forecast period. The difference between the two approaches can be large – it is only when book value equals market value for every year in the future that they will give the same Wacc.

    In the example below the market value of equity is lower than the book value, hence the market value Wacc is lower than the book value Wacc. Since this company has a low and declining ROIC, the value of equity is decreasing and hence so is the Wacc.

    [Figure: Wacc and Wacc weights]

    Calculating Wacc for a specific company for a number of years into the future ((For some telecom cases, up to 50 years.)) is not a straightforward task. Wacc is no longer a single value, but a time series with values varying from year to year.

    Using the average value of Wacc can quickly lead you astray. Using an average in e.g. an LRIC model for telecommunications regulation, to determine the price paid by competitors for services provided by an operator with significant market power (the incumbent), will give a too low price in the first years and a too high price in the later years when the series is decreasing, and vice versa. So the use of an average value for Wacc can either add to the incumbent’s problems or give him a windfall income.

    The same applies to the use of book value equity vs. market value equity. If for the incumbent the market value of equity is lower than the book value, the price paid by the competitors when book value Wacc is used will be too high and the incumbent will have a windfall gain, and vice versa.

    Some advocate the use of a target capital structure (Copeland, Koller, & Murrin, 1994, p. 250) to avoid the computational difficulties (solving implicit equations) of using market value weights in the Wacc calculation. But in real life it can be very difficult to reach and maintain a fixed structure, and it does not solve the problem of the market value of equity deviating from the book value.
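
    The circularity behind market value weights can be illustrated with a deliberately simplified sketch: equity is valued as a perpetuity that depends on the Wacc, while the Wacc weights depend on that equity value, and a fixed-point iteration reconciles the two. All figures, and the one-period perpetuity itself, are assumptions for illustration – the real calculation has to be done year by year over the forecast period:

```python
# Market-value-weighted Wacc: the equity weight needs the market value of
# equity, which in turn needs the Wacc. Iterate until the values are consistent.

def market_value_wacc(fcf: float, debt: float, cost_of_debt: float,
                      cost_of_equity: float, tax_rate: float, tol: float = 1e-9):
    equity = debt                                   # arbitrary starting guess
    for _ in range(500):
        total = debt + equity
        wacc = (debt / total) * cost_of_debt * (1 - tax_rate) \
             + (equity / total) * cost_of_equity
        enterprise_value = fcf / wacc               # no-growth perpetuity, assumed
        new_equity = enterprise_value - debt
        if abs(new_equity - equity) < tol:
            return wacc, new_equity
        equity = new_equity
    return wacc, equity

wacc, equity = market_value_wacc(fcf=80.0, debt=400.0, cost_of_debt=0.05,
                                 cost_of_equity=0.10, tax_rate=0.28)
print(f"Wacc {wacc:.2%}, market value of equity {equity:.0f}")
```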

    Calculating Wacc in a World with Uncertainty

    The future values of most, if not all, variables will in the real world be highly uncertain – in the long run even the tax rates will vary.

    The ‘long run’ aspect of the methods therefore implies an ex-ante (before the fact) treatment of a number of variables – inflation, interest and tax rates, demand, investments, etc. – that have to be treated as stochastic variables. This is underlined by the fact that more and more central banks are presenting their forecasts of macroeconomic variables as density tables/charts (e.g. Federal Reserve Bank of Philadelphia, 2009) or as fan charts (Nakamura & Shinichiro, 2008), like the one below from the Swedish central bank (Sveriges Riksbank, 2009):

    [Figure: Riksbank fan chart, December 2009]

    Fan charts like this visualize the region of uncertainty, or the possible yearly event space, for central variables. These variables will also be important exogenous variables in any corporate valuation, as value or cost drivers. Add to this all the other variables that have to be taken into account to describe the corporate operations.

    Now, for every possible outcome of any of these variables we will have a different value of the company and of its equity, and hence a different Wacc. So we will not have one time series of Wacc, but a large number of different time series, all equally probable. Actually, the probability of having a single series forecasted correctly is approximately zero.

    Then there is the question of how far ahead it is feasible to forecast macro variables without having to use just the unconditional mean (Galbraith & Tkacz, 2007). In the charts above the ‘content horizon’ is set to approximately 30 months; in others the horizon can be 40 months or more (Adolfson, Andersson, Linde, Villani, & Vredin, 2007).

    As is evident from the charts the fan width is increasing as we lengthen the horizon. This is an effect from the forecast methods as the band of forecast uncertainty increases as we go farther and farther into the future.

    The future nominal values of GDP, costs, etc. will show even greater variation, since these values will depend on the growth rates’ paths up to that point in time.

    Monte Carlo Simulation

    A possible solution to the problems discussed above is to use Monte Carlo techniques to forecast the company’s equity value distribution – coupled with market value weights calculation to forecast the corresponding yearly Wacc distributions:

    Wacc-2012

    This is the approach we have implemented in our models – it will not give a single value for Wacc, but its distribution. If you need a single value, the mean or mode from the yearly distributions is better than the Wacc found by using average values of the exogenous variables – cf. Jensen’s inequality (Savage & Danziger, 2009).
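
    The Jensen’s inequality point can be illustrated numerically: because value is a convex function of the discount rate, the value computed at the average Wacc is not the same as the average of the values computed over the Wacc distribution. The distribution and the perpetuity below are assumed, purely for illustration:

```python
# f(E[X]) versus E[f(X)] for a convex valuation function f(wacc) = CF / wacc.
import numpy as np

rng = np.random.default_rng(3)
wacc_draws = np.clip(rng.normal(0.08, 0.015, 100_000), 0.03, None)  # uncertain Wacc, kept positive

def value(wacc):
    return 100.0 / wacc          # value of a 100-per-year perpetuity

value_at_mean_wacc = value(wacc_draws.mean())     # f(E[X])
mean_of_values = value(wacc_draws).mean()         # E[f(X)] >= f(E[X]) for convex f

print(f"value at mean Wacc:        {value_at_mean_wacc:.0f}")
print(f"mean of simulated values:  {mean_of_values:.0f}")
```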

    References

    Adolfson, A., Andersson, M.K., Linde, J., Villani, M., & Vredin, A. (2007). Modern forecasting models in action: improving macroeconomic analyses at central banks. International Journal of Central Banking, (December), 111-144.

    Copeland, T., Koller, T., & Murrin, J. (1994). Valuation. New York: Wiley.

    Copenhagen Economics. (2007, February 02). Cost of capital for broadcasting transmission. Retrieved from http://www.pts.se/upload/Documents/SE/WACCforBroadcasting.pdf

    Federal Reserve Bank of Philadelphia. (2009, November 16). Fourth quarter 2009 survey of professional forecasters. Retrieved from http://www.phil.frb.org/research-and-data/real-time-center/survey-of-professional-forecasters/2009/survq409.cfm

    Galbraith, J. W., & Tkacz, G. (2007). Forecast content and content horizons for some important macroeconomic time series. Canadian Journal of Economics, 40(3), 935-953. Available at SSRN: http://ssrn.com/abstract=1001798 or doi:10.1111/j.1365-2966.2007.00437.x

    Jamison, Mark A., & Berg, Sanford V. (2008, August 15). Annotated reading list for a body of knowledge on infrastructure regulation (Developed for the World Bank). Retrieved from http://www.regulationbodyofknowledge.org/

    Nakamura, K., & Shinichiro, N. (2008). The Uncertainty of the economic outlook and central banks’ communications. Bank of Japan Review, (June 2008), Retrieved from http://www.boj.or.jp/en/type/ronbun/rev/data/rev08e01.pdf

    Savage, S. L., & Danziger, J. (2009). The Flaw of Averages. New York: Wiley.

    Sveriges Riksbank. (2009). The economic outlook and inflation prospects. Monetary Policy Report, (October), p. 7. Retrieved from http://www.riksbank.com/upload/Dokument_riksbank/Kat_publicerat/Rapporter/2009/mpr_3_09oct.pdf