
Category: Other topics

  • The Uncertainty in Forecasting Airport Pax


    This entry is part 3 of 4 in the series Airports

     

When planning airport operations and investments, both air- and landside, or simply making next year's budget, you need to forecast the traffic you can expect. There are many ways of doing that, most of them ending up with a single figure for the monthly or weekly traffic. However, we know that the probability of that figure being correct is close to zero, so we end up with plans based on assumptions that most likely never will materialize.

This is why we use Monte Carlo simulation to get a grasp of the uncertainty in our forecast and of how this uncertainty develops as we go into the future. The following graph (from real life) shows how the passenger distribution changes as we go from 2010 (blue) to 2017 (red). The distribution moves outwards, showing an expected increase in Pax, and at the same time spreads out along the x-axis (Pax), giving a good picture of the increasing uncertainty we face.

Pax-2010_2017

This can also be seen from the yearly cumulative probability distributions given below. As we move into the future the distributions lean more and more to the right while still being “anchored” on the left at approximately the same place – showing increased uncertainty in the future Pax forecasts. At the same time, our confidence that the airport will reach at least 40M Pax within the next five years is strengthened.
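A minimal sketch (in Python) of how such a Pax simulation could be set up; the base traffic, growth-rate distribution and horizon below are hypothetical stand-ins, not the figures behind the graphs:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical assumptions (not the airport's actual figures):
# roughly 30M Pax in 2010 and a normally distributed yearly growth rate.
base_pax = 30.0                      # million Pax in 2010
growth_mu, growth_sd = 0.04, 0.03    # mean and std.dev. of yearly growth
years = list(range(2011, 2018))
n_sims = 10_000

# Each simulated path compounds an uncertain growth rate year by year, so the
# Pax distribution both shifts outwards and widens as we move towards 2017.
paths = np.empty((n_sims, len(years)))
pax = np.full(n_sims, base_pax)
for i, year in enumerate(years):
    pax = pax * (1 + rng.normal(growth_mu, growth_sd, n_sims))
    paths[:, i] = pax

low, high = np.percentile(paths[:, -1], [5, 95])
print(f"2017 Pax: mean {paths[:, -1].mean():.1f}M, 90% interval {low:.1f}M-{high:.1f}M")
```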

Pax_Distributions

If we look at the fan chart for the Pax forecasts below, the limits of the dark blue region give the lower (25%) and upper (75%) quartiles of the yearly Pax distributions, i.e. the region where we expect the actual Pax figures to fall with 50% probability.

    Pax_Uncertainty

The lower and upper limits give the 5% and 95% percentiles of the yearly Pax distributions, i.e. we can expect with 90% probability that the actual Pax figures will fall somewhere inside these three regions.
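Given a matrix of simulated yearly Pax paths (like the hypothetical `paths` array in the sketch above), the fan-chart bands are simply yearly percentiles:

```python
import numpy as np

def fan_bands(paths):
    """Yearly percentiles matching the regions in the fan chart: 25%/75%
    (quartiles) for the dark band and 5%/95% for the outer limits."""
    p5, q1, median, q3, p95 = np.percentile(paths, [5, 25, 50, 75, 95], axis=0)
    return {"p5": p5, "q1": q1, "median": median, "q3": q3, "p95": p95}

# Example with the simulated `paths` array from the sketch above:
# bands = fan_bands(paths)
# By construction the actual figure falls between q1 and q3 with 50%
# probability, and between p5 and p95 with 90% probability.
```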

As shown, the uncertainty about the future yearly Pax figures is quite high. With this as the backcloth for airport planning, it is evident that the stochastic nature of the Pax forecasts has to be taken into account when investment decisions (under uncertainty) are to be made. (ref) Since the airport value will rely heavily on these forecasts, it is also evident that this value will be stochastic, and that methods from decision making under uncertainty have to be used for possible M&R.

    Major Airport Operation Disruptions

Delays – the time lapse that occurs when a planned event does not happen at the planned time – are pretty common at most airports. Eurocontrol estimates them at approx. 13 minutes on departure for 45% of flights and approx. 12 minutes on arrival for 42% of flights (Guest, 2007). Nevertheless, the airport costs of such delays are small; they can even give an increase in revenue (Cook, Tanner, & Anderson, 2004).

We have lately in Europe experienced major disruptions of airport operations through the closing of airspace due to volcanic ash. Closed airspace has a direct effect on airport revenue, and the effect is larger the closer the closure is to the airport. Volcanic eruptions in some regions might be considered Black Swan events for an airport, but there is a large number of volcanoes that might cause closing of airspace for shorter or longer periods. The Smithsonian Global Volcanism Program lists more than 540 volcanoes with previously documented eruptions.

As there is little data for events like this, it is difficult to include the probable effects of closed airspace due to volcanic eruptions in the simulation. However, the data include the effects of the 9/11 terrorist attack, and the left tails of the yearly Pax distributions will be influenced by this.

    References

Guest, T. (2007, September). A matter of time: air traffic delay in Europe. EUROCONTROL Trends in Air Traffic, I, 2.

Cook, A., Tanner, G., & Anderson, S. (2004). Evaluating the true cost to airlines of one minute of airborne or ground delay: final report. University of Westminster. Retrieved from www.eurocontrol.int/prc/gallery/content/public/Docs/cost_of_delay.pdf

  • The Case of Enterprise Risk Management


    This entry is part 2 of 4 in the series A short presentation of S@R

     

    The underlying premise of enterprise risk management is that every entity exists to provide value for its stakeholders. All entities face uncertainty and the challenge for management is to determine how much uncertainty to accept as it strives to grow stakeholder value. Uncertainty presents both risk and opportunity, with the potential to erode or enhance value. Enterprise risk management enables management to effectively deal with uncertainty and associated risk and opportunity, enhancing the capacity to build value. (COSO, 2004)

    The evils of a single point estimate

    Enterprise risk management is a process, effected by an entity’s board of directors, management and other personnel, applied in strategy setting and across the enterprise, designed to identify potential events that may affect the entity, and manage risk to be within its risk appetite, to provide reasonable assurance regarding the achievement of entity objectives. (COSO, 2004)

Traditionally, when estimating costs, project value, equity value or budgets, one number is generated – a single point estimate. There are many problems with this approach. In budget work this point is too often given as the best management can expect, while in some cases budgets are set artificially low, generating bonuses for later performance beyond budget. The following graph depicts the first case.

    Budget_actual_expected

Here we have – based on the production and market structure and on management's assumptions about the variability of all relevant input and output variables – simulated the probability distribution for next year's EBITDA. The graph gives the budgeted value, the actual result and the expected value. Both budget and actual value are above the expected value, but the budgeted value was far too high, giving a more than 80% probability of a realized EBITDA lower than budget. In this case the board will be misled regarding the company's ability to earn money, and all subsequent decisions based on the budget EBITDA can endanger the company.
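The point can be made with a minimal sketch: simulate EBITDA from a few uncertain drivers (all figures below are hypothetical, not the client data behind the graph) and read off the probability of falling short of an optimistic budget:

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative figures only – hypothetical volumes, prices and costs.
n_sims = 10_000
volume = rng.normal(100, 10, n_sims)       # units sold
price = rng.normal(50, 5, n_sims)          # price per unit
unit_cost = rng.normal(35, 4, n_sims)      # variable cost per unit
fixed_cost = 800.0

ebitda = volume * (price - unit_cost) - fixed_cost
budget = np.percentile(ebitda, 80)         # a deliberately optimistic budget

print(f"Expected EBITDA               : {ebitda.mean():.0f}")
print(f"Budgeted EBITDA               : {budget:.0f}")
print(f"P(actual EBITDA below budget) : {(ebitda < budget).mean():.0%}")  # ~80%
```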

    The organization’s ERM system should function to bring to the board’s attention the most significant risks affecting entity objectives and allow the board to understand and evaluate how these risks may be correlated, the manner in which they may affect the enterprise, and management’s mitigation or response strategies. (COSO, 2009)

It would have been far preferable for the board to be given both the budget value and the accompanying probability distribution, allowing it to make an independent judgment about the possible size of next year's EBITDA. Only then will the board – from the shape of the distribution, its location and the point estimate of budget EBITDA – be able to assess the risk and opportunity facing the company.

    Will point estimates cancel out errors?

In the following we measure the deviation of the actual result both from the budget value and from the expected value. The blue dots represent daughter companies located in different countries. For each company we have the deviation (in percent) of the budgeted EBITDA (bottom axis) and of the expected value (left axis) from the actual EBITDA observed 1½ years later.

If the deviation for a company falls in the upper right quadrant, the deviations are positive for both budget and expected value – and the company is overachieving.

If the deviation falls in the lower left quadrant, the deviations are negative for both budget and expected value – and the company is underachieving.

If the deviation falls in the upper left quadrant, the deviation is negative for budget and positive for expected value – the company is overachieving but has had too high a budget.

With left-skewed EBITDA distributions there should not be any observations in the lower right quadrant; that will only happen when the distributions are skewed to the right – and then there will not be any observations in the upper left quadrant.
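For illustration, a small sketch of how each observation could be classified into the four quadrants; the company names and deviations used below are hypothetical:

```python
def quadrant(dev_from_budget: float, dev_from_expected: float) -> str:
    """Classify a company by the sign of the actual EBITDA's deviation (in %)
    from budget (x-axis) and from the expected value (y-axis)."""
    if dev_from_budget >= 0 and dev_from_expected >= 0:
        return "upper right: overachieving"
    if dev_from_budget < 0 and dev_from_expected < 0:
        return "lower left: underachieving"
    if dev_from_budget < 0:
        return "upper left: overachieving, but the budget was set too high"
    return "lower right: beat budget but fell short of expectation (rare with left-skewed EBITDA)"

# Hypothetical daughter companies (percent deviations):
for name, d_budget, d_expected in [("A", 12.0, 5.0), ("B", -8.0, -15.0), ("C", -3.0, 4.0)]:
    print(name, "->", quadrant(d_budget, d_expected))
```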

The graph below shows that two companies have seriously underperformed and that the budget process did not catch the risk they were facing. The rest of the companies have done very well; some, however, have seriously underestimated the opportunities manifested in the actual result. From an economic point of view, the mother company would of course have preferred all companies (blue dots) above the x-axis, but due to the stochastic nature of the EBITDA it has to accept that some always will fall below. Risk-wise, it would have preferred the companies to fall to the right of the y-axis, but due to budget uncertainties it has to accept that some always will fall to the left. However, large deviations both below the x-axis and to the left of the y-axis add to the company's risk.

    Budget_actual_expected#1

    A situation like the one given in the graph below is much to be preferred from the board’s point of view.

    Budget_actual_expected#2

The graphs above, taken from real life, show that budgeting errors will not cancel out even across similar daughter companies. Consolidating the companies will give the mother company a left-skewed EBITDA distribution. They also show that you need to be prepared for deviations, both positive and negative – you need a plan. So how do you get a plan? You make a simulation model! (See Pdf: Short-presentation-of-S@R#2)

    Simulation

The Latin verb simulare means “to make like”, “to create an exact representation” or to imitate. The purpose of a simulation model is to imitate the company and its environment, so that its functioning can be studied. The model can be a test bed for assumptions and decisions about the company. By creating a representation of the company, a modeler can perform experiments that are impossible or prohibitively expensive in the real world. (Sterman, 1991)

    There are many different simulation techniques, including stochastic modeling, system dynamics, discrete simulation, etc. Despite the differences among them, all simulation techniques share a common approach to modeling.

    Key issues in simulation include acquisition of valid source information about the company, selection of key characteristics and behaviors, the use of simplifying approximations and assumptions within the simulation, and fidelity and validity of the simulation outcomes.

Optimization models are prescriptive, whereas simulation models are descriptive. A simulation model does not calculate what should be done to reach a particular goal, but clarifies what could happen in a given situation. The purpose of simulations may be foresight (predicting how systems might behave in the future under assumed conditions) or policy design (designing new decision-making strategies or organizational structures and evaluating their effects on the behavior of the system). In other words, simulation models are “what if” tools. Often such “what if” information is more important than knowledge of the optimal decision.

    However, even with simulation models it is possible to mismanage risk by (Stulz, 2009):

• Over-reliance on historical data
• Using too narrow risk metrics – measures such as value at risk, probably the single most important measure in financial services, have underestimated risks
• Overlooking knowable risks
• Overlooking concealed risks
• Failure to communicate effectively – failing to appreciate the complexity of the risks being managed
• Not managing risks in real time – you have to be able to monitor changing markets and respond appropriately; you need a plan

Being fully aware of the possible pitfalls, we have methods and techniques that can overcome these issues, and since we estimate the full probability distributions we can deploy a number of risk metrics and do not have to rely on simple measures like value at risk – which we actually never use.

    References

    COSO, (2004, September). Enterprise risk management — integrated framework. Retrieved from http://www.coso.org/documents/COSO_ERM_ExecutiveSummary.pdf

    COSO, (2009, October). Strengthening enterprise risk management for strategic advantage. Retrieved from http://www.coso.org/documents/COSO_09_board_position_final102309PRINTandWEBFINAL_000.pdf

Sterman, J. D. (1991). A skeptic's guide to computer models. In Barney, G. O. et al. (eds.), Managing a Nation: The Microcomputer Software Catalog. Boulder, CO: Westview Press, 209-229.

    Stulz, R.M. (2009, March). Six ways companies mismanage risk. Harvard Business Review (The Magazine), Retrieved from http://hbr.org/2009/03/six-ways-companies-mismanage-risk/ar/1


  • A short presentation of S@R


    This entry is part 1 of 4 in the series A short presentation of S@R

     

    My general view would be that you should not take your intuitions at face value; overconfidence is a powerful source of illusions. Daniel Kahneman (“Strategic decisions: when,” 2010)

Most companies have some sort of model describing the company's operations. These are mostly used for budgeting, but in some cases also for forecasting cash flow and other important performance measures. Almost all are deterministic models based on expected or average values of input data: sales, costs, interest and currency rates etc. We know, however, that forecasts based on average values are on average wrong. In addition, deterministic models will miss the important uncertainty dimension that gives both the different risks facing the company and the opportunities they produce.
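A toy example of why averages mislead (the flaw of averages): with a non-linear profit function, profit at average demand is not average profit. All numbers below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy illustration: profit is a non-linear (capacity-constrained) function of
# demand, so evaluating it at average demand overstates the average profit.
capacity = 100
demand = rng.normal(100, 25, 100_000)        # uncertain demand

def profit(d):
    sold = np.minimum(d, capacity)           # cannot sell more than capacity
    return sold * 10 - 500                   # unit margin 10, fixed cost 500

print("Profit at average demand:", round(float(profit(demand.mean())), 1))
print("Average profit          :", round(float(profit(demand).mean()), 1))  # noticeably lower
```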

    S@R has set out to create models (See Pdf: Short presentation of S@R) that can give answers to both deterministic and stochastic questions, by linking dedicated EBITDA models to holistic balance simulation taking into account all important factors describing the company. The basis is a real balance simulation model – not a simple cash flow forecast model.

    Generic Simulation_model

Both the deterministic and stochastic balance simulation can be set up in two different ways:

1. by using an EBITDA model to describe the company's operations, or
2. by using coefficients of fabrications as direct input to the balance model.

The first approach implies setting up a dedicated EBITDA subroutine to the balance model. This will give detailed answers to a broad range of questions about operational performance and uncertainty, but entails a higher degree of effort from both the company and S@R.

The use of coefficients of fabrications and their variations is a low-effort (cost) alternative, using the internal accounting as basis. This will in many cases give a ‘good enough’ description of the company – its risks and opportunities. The data needed for the company's economic environment (taxes, interest rates etc.) will be the same in both alternatives.

    EBITDA_model

In some cases we have used both approaches for the same client, using the latter approach for smaller daughter companies with production structures differing from the main company's.
The second approach can also be considered an introduction and stepping stone to a more holistic EBITDA model.

    What problems do we solve?

• The aim, regardless of approach, is to quantify not only the company's single and aggregated risks, but also its potential, making the company capable of performing detailed planning and of executing earlier and more apt actions against risk factors.
• This will improve the stability of budgets through higher insight into cost-side risks and income-side potentials. This is achieved by an active budget-forecast process; the control-adjustment cycle will teach the company to target more realistic budgets – with better stability and increased company value as a result.
• Experience shows that the mere act of quantifying uncertainty throughout the company – and, through modelling, describing the interactions and their effects on profit – in itself over time reduces total risk and increases profitability.
• This is most clearly seen when effort is put into correctly evaluating the effects of strategies, projects and investments on the enterprise. The best way to do this is by comparing and choosing strategies through analysing the individual strategies' risks and potential – and selecting the alternative that is (stochastically) dominant given the company's chosen risk profile.
• Our aim is therefore to transform enterprise risk management from only safeguarding enterprise value to contributing to the increase and maximization of the firm's value within the firm's feasible set of possibilities.

    References

Strategic decisions: when can you trust your gut? (2010). McKinsey Quarterly, (March).

  • WACC, Uncertainty and Infrastructure Regulation


    This entry is part 2 of 2 in the series The Weighted Average Cost of Capital

     

There is a growing consensus that the successful development of infrastructure – electricity, natural gas, telecommunications, water, and transportation – depends in no small part on the adoption of appropriate public policies and the effective implementation of these policies. Central to these policies is the development of a regulatory apparatus that provides stability, protects consumers from the abuse of market power, guards consumers and operators against political opportunism, and provides incentives for service providers to operate efficiently and make the needed capital investments (Jamison & Berg, 2008, Overview).

    There are four primary approaches to regulating the overall price level – rate of return regulation (or cost of service), price cap regulation, revenue cap regulation, and benchmarking (or yardstick) regulation. Rate of return regulation adjusts overall price levels according to the operator’s accounting costs and cost of capital. In most cases, the regulator reviews the operator’s overall price level in response to a claim by the operator that the rate of return that it is receiving is less than its cost of capital, or in response to a suspicion of the regulator or claim by a consumer group that the actual rate of return is greater than the cost of capital (Jamison, & Berg, 2008, Price Level Regulation).

We will in the following look at cost of service models (cost-based pricing); however, some of the reasoning will also apply to the other approaches. A number of different models exist:

    •    Long Run Average Total Cost – LRATC
    •    Long Run Incremental Cost – LRIC
    •    Long Run Marginal cost – LRMC
    •    Forward Looking Long Run Average Incremental Costs – FL-LRAIC
    •    Long Run Average Interconnection Costs – LRAIC
    •    Total Element Long Run Incremental Cost – TELRIC
    •    Total Service Long Run Incremental Cost – TSLRIC
    •    Etc.

    Where:
    Long run: The period over which all factors of production, including capital, are variable.
    Long Run Incremental Costs: The incremental costs that would arise in the long run with a defined increment to demand.
    Marginal cost: The increase in the forward-looking cost of a firm caused by an increase in its output of one unit.
    Long Run Average Interconnection Costs: The term used by the European Commission to describe LRIC with the increment defined as the total service.

We will not discuss the merits and use of the individual methods, only direct attention to the fact that an essential ingredient in all of them is their treatment of capital and the calculation of the cost of capital – Wacc.

Calculating Wacc in a World without Uncertainty

Calculating Wacc for the current year is a straightforward task: we know for certain the interest rates (risk-free rate and credit risk premium) and tax rates, the budget values for debt and equity, the market premium and the company's beta, etc.

There is however a small snag: should we use the book value of equity, or should we calculate the market value of equity and use this in the Wacc calculation? The latter approach is the recommended one (Copeland, Koller, & Murrin, 1994, p. 248-250), but it implies a company valuation with calculation of Wacc for every year in the forecast period. The difference between the two approaches can be large – only when book value equals market value for every year in the future will they give the same Wacc.

In the example below the market value of equity is lower than the book value, hence the market value Wacc is lower than the book value Wacc. Since this company has a low and declining ROIC, the value of equity is decreasing and hence also the Wacc.

    Wacc-and-Wacc-weights

Calculating Wacc for a specific company for a number of years into the future (for some telecom cases, up to 50 years) is not a straightforward task. Wacc is no longer a single value, but a time series with values varying from year to year.

Using the average value of Wacc can quickly lead you astray. Using an average in e.g. an LRIC model for telecommunications regulation, to determine the price paid by competitors for services provided by an operator with significant market power (incumbent), will give too low a price in the first years and too high a price in the later years when the series is decreasing, and vice versa. So the use of an average value for Wacc can either add to the incumbent's problems or give him a windfall income.

The same applies to the use of book value equity vs. market value equity. If the market value of the incumbent's equity is lower than its book value, the price paid by the competitors when book value Wacc is used will be too high and the incumbent will have a windfall gain, and vice versa.

Some advocate the use of a target capital structure (Copeland, Koller, & Murrin, 1994, p. 250) to avoid the computational difficulties (solving implicit equations) of using market value weights in the Wacc calculation. But in real life it can be very difficult to reach and maintain a fixed structure, and it does not solve the problem of the market value of equity deviating from book value.
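To illustrate the implicit equation, here is a minimal sketch of solving the circularity by fixed-point iteration. The growing-perpetuity valuation and all figures are hypothetical, not the valuation model behind the example above:

```python
# Market-value weights require the equity value, which is itself found by
# discounting at Wacc. A simple fixed-point iteration resolves the circularity.

def wacc(equity, debt, cost_equity, cost_debt, tax_rate):
    total = equity + debt
    return (equity / total) * cost_equity + (debt / total) * cost_debt * (1 - tax_rate)

def equity_value(fcf, growth, w, debt):
    """Enterprise value as a growing perpetuity, less debt."""
    return fcf / (w - growth) - debt

# Hypothetical inputs
fcf, growth = 100.0, 0.02
debt, cost_equity, cost_debt, tax = 800.0, 0.10, 0.05, 0.28

equity = 1000.0                              # initial guess, e.g. book value
for _ in range(100):                         # fixed-point iteration
    w = wacc(equity, debt, cost_equity, cost_debt, tax)
    new_equity = equity_value(fcf, growth, w, debt)
    if abs(new_equity - equity) < 1e-6:
        break
    equity = new_equity

print(f"Market-value Wacc: {w:.4f}, market value of equity: {equity:.1f}")
```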

Calculating Wacc in a World with Uncertainty

The future values of most, if not all, variables will in the real world be highly uncertain – in the long run even the tax rates will vary.

The ‘long run’ aspect of the methods therefore implies an ex-ante (before the fact) treatment of a number of variables – inflation, interest and tax rates, demand, investments etc. – that have to be treated as stochastic variables.
This is underlined by the fact that more and more central banks are presenting their forecasts of macroeconomic variables as density tables/charts (e.g. Federal Reserve Bank of Philadelphia, 2009) or as fan charts (Nakamura & Shinichiro, 2008) like the one below from the Swedish central bank (Sveriges Riksbank, 2009):

    Riksbank_dec09

Fan charts like this visualise the region of uncertainty, or the possible yearly event space, for central variables. These variables will also be important exogenous variables in any corporate valuation, as value or cost drivers. Add to this all the other variables that have to be taken into account to describe the corporate operations.

Now, for every possible outcome of any of these variables we will have a different value of the company and its equity, and hence its Wacc. So we will not have one time series of Wacc, but a large number of different time series, all equally probable. Actually, the probability of having forecast any single series correctly is approximately zero.

Then there is the question of how far ahead it is feasible to forecast macro variables without having to fall back on the unconditional mean (Galbraith & Tkacz, 2007). In the charts above the ‘content horizon’ is set to approximately 30 months; in others the horizon can be 40 months or more (Adolfson, Andersson, Linde, Villani, & Vredin, 2007).

As is evident from the charts, the fan width increases as we lengthen the horizon. This is an effect of the forecast methods, as the band of forecast uncertainty widens the farther we go into the future.

The future nominal values of GDP, costs, etc. will show even greater variation, since these values will depend on the growth rate paths up to that point in time.

Monte Carlo Simulation

    A possible solution to the problems discussed above is to use Monte Carlo techniques to forecast the company’s equity value distribution – coupled with market value weights calculation to forecast the corresponding yearly Wacc distributions:

    Wacc-2012

This is the approach we have implemented in our models – it will not give a single value for Wacc but its distribution. If you need a single value, the mean or mode of the yearly distributions is better than the Wacc found by using average values of the exogenous variables – cf. Jensen's inequality (Savage & Danziger, 2009).
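A rough sketch of the idea: sample the exogenous variables, revalue equity for each draw and collect the implied Wacc. The crude perpetuity valuation and all distributions below are hypothetical, but the comparison at the end shows Jensen's inequality at work:

```python
import numpy as np

rng = np.random.default_rng(3)

# One year's Wacc distribution from sampled exogenous variables.
n_sims = 10_000
risk_free = rng.normal(0.03, 0.005, n_sims)
premium = rng.normal(0.05, 0.01, n_sims)
fcf = rng.normal(100.0, 25.0, n_sims)
beta, spread, tax, debt, growth = 1.1, 0.02, 0.28, 600.0, 0.02

cost_eq = risk_free + beta * premium
cost_dt = (risk_free + spread) * (1 - tax)

equity = np.maximum(fcf / (cost_eq - growth) - debt, 1.0)   # crude valuation per draw
wacc = (equity * cost_eq + debt * cost_dt) / (equity + debt)

# Jensen's inequality: Wacc evaluated at average inputs differs from the mean
# of the Wacc distribution.
eq_mean = max(fcf.mean() / (cost_eq.mean() - growth) - debt, 1.0)
wacc_at_means = (eq_mean * cost_eq.mean() + debt * cost_dt.mean()) / (eq_mean + debt)
print(f"Mean of the Wacc distribution: {wacc.mean():.4f}")
print(f"Wacc from average inputs     : {wacc_at_means:.4f}")
```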

    References

    Adolfson, A., Andersson, M.K., Linde, J., Villani, M., & Vredin, A. (2007). Modern forecasting models in action: improving macroeconomic analyses at central banks. International Journal of Central Banking, (December), 111-144.

    Copeland, T., Koller, T., & Murrin, J. (1994). Valuation. New York: Wiley.

Copenhagen Economics. (2007, February 02). Cost of capital for broadcasting transmission. Retrieved from http://www.pts.se/upload/Documents/SE/WACCforBroadcasting.pdf

Federal Reserve Bank of Philadelphia. (2009, November 16). Fourth quarter 2009 survey of professional forecasters. Retrieved from http://www.phil.frb.org/research-and-data/real-time-center/survey-of-professional-forecasters/2009/survq409.cfm

Galbraith, J. W., & Tkacz, G. (2007, August). Forecast content and content horizons for some important macroeconomic time series. Canadian Journal of Economics, 40(3), 935-953. Available at SSRN: http://ssrn.com/abstract=1001798 or doi:10.1111/j.1365-2966.2007.00437.x

    Jamison, Mark A., & Berg, Sanford V. (2008, August 15). Annotated reading list for a body of knowledge on infrastructure regulation (Developed for the World Bank). Retrieved from http://www.regulationbodyofknowledge.org/

    Nakamura, K., & Shinichiro, N. (2008). The Uncertainty of the economic outlook and central banks’ communications. Bank of Japan Review, (June 2008), Retrieved from http://www.boj.or.jp/en/type/ronbun/rev/data/rev08e01.pdf

Savage, S. L., & Danziger, J. (2009). The Flaw of Averages. New York: Wiley.

Sveriges Riksbank. (2009). The economic outlook and inflation prospects. Monetary Policy Report, (October), p. 7. Retrieved from http://www.riksbank.com/upload/Dokument_riksbank/Kat_publicerat/Rapporter/2009/mpr_3_09oct.pdf

  • Credit Risk


    This entry is part 4 of 4 in the series Risk of Bankruptcy

    Other Methods

A number of other statistical methods have also been used to predict future company failure and credit risk, see: (Atiya, 2001), (Chandra, Ravi, & Bose, 2009) and (Bastos, 2008). A recent study (Boguslauskas & Mileris, 2009) analyzed 30 scientific publications comprising 77 models:

    1. 63% used artificial neural networks (ANN)
    2. 53% used logistic regression (LR)
    3. 37% used discriminant analysis (DA)
    4. 23% used decision trees and (DT)
    5. 33% used various other methods

    The general accuracy of the different models was evaluated: the proportion of companies correctly classified (figure 3 in the article):

    Classification-error

The box and whisker plot above shows that logistic regression (87%) and artificial neural networks (87%) give almost the same accuracy, while decision trees (83%) and discriminant analysis (77%) seem to be less reliable methods.

However, from the boxes it is evident that decision trees as a method have a much larger variance in classification accuracy than the others, and that artificial neural networks have the lowest variance. For logistic regression and discriminant analysis the variance is approximately the same.

    Comparing methods based on different data sets can easily be misleading. Accurate parameter estimation relies heavily on available data and their usability for that particular method.
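For readers unfamiliar with how such "proportion correctly classified" figures are produced, here is a toy sketch on synthetic data using scikit-learn's logistic regression; it is not the data or the models from the cited study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)

# Synthetic stand-in for a failure data set; real studies use financial
# ratios (liquidity, leverage, profitability, etc.) and observed failures.
n = 1000
ratios = rng.normal(size=(n, 4))                       # four synthetic ratios
latent = 1.5 * ratios[:, 0] - 1.0 * ratios[:, 1] + rng.normal(0, 1, n)
failed = (latent < -0.5).astype(int)                   # 1 = company failed

model = LogisticRegression()
scores = cross_val_score(model, ratios, failed, cv=5, scoring="accuracy")
print(f"Mean classification accuracy: {scores.mean():.1%}")
```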

    References

Atiya, A. F. (2001). Bankruptcy prediction for credit risk using neural networks: a survey and new results. IEEE Transactions on Neural Networks, 12(4). Retrieved from http://ieee-cis.org/pubs/tnn/

    Bastos, Joao. (2008, April 01). Credit scoring with boosted decision trees. Retrieved from http://mpra.ub.uni-muenchen.de/8156/

Boguslauskas, V., & Mileris, R. (2009). Estimation of credit risk by artificial neural networks models. Economics of Engineering Decisions, 4(64). Retrieved from http://internet.ktu.lt/en/science/journals/econo/inzek064.html

    Chandra, D. K., Ravi, V., Bose, I. (2009). Failure prediction of dotcom companies using hybrid intelligent techniques. Expert Systems with Applications, (36), 4830–4837.

  • Concession Revenue Modelling and Forecasting


    This entry is part 2 of 4 in the series Airports

     

Concessions are an important source of revenue for all airports. An airport simulation model should therefore be able to give a good forecast of revenue from different types of concessions – given a small set of assumptions about future local price levels and income development for its international Pax. Since we already have a good forecast model for the expected number of international Pax (and its variation), we will attempt to forecast the airport's revenue per Pax from one type of concession, and use both forecasts to estimate the airport's revenue from that concession.

The theory behind it is simple: the concessionaires' sales are a function of product price and the customers' (Pax) income level. Some other airport-specific variables also enter the equation, but they will not be discussed here. As a proxy for change in Pax income we will use the individual countries' change in GDP. The price movement is represented by the corresponding movements of a price index.

We assume that changes in the trend of the airport's revenue are a function of the changes in the general income level, and that the seasonal variance is caused by the seasonal changes in the passenger mix (business/leisure travel).

It is of course impossible to forecast the exact level of revenue, but that is, as we shall see, where Monte Carlo simulation proves its worth.

The first step is a time series analysis of the observed revenue per Pax, decomposing the series into trend and seasonal factors:

    Concession-revenue

The time series fit turns out to be very good, explaining more than 90% of the series' variation. At this point, however, our only interest is the trend movement and its relation to changes in prices, income and a few other airport-specific variables. Here we will only look at income – the most important of the variables.
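A small sketch of step one, using a synthetic stand-in series (the real revenue data are not shown in the post) and the classical seasonal decomposition in statsmodels:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(11)

# Synthetic revenue-per-Pax series: a slowly accelerating trend, monthly
# seasonality and noise.
months = pd.date_range("2005-01-01", periods=60, freq="MS")
t = np.arange(60)
revenue_per_pax = pd.Series(
    10 + 0.05 * t + 0.001 * t**2
    + 1.5 * np.sin(2 * np.pi * t / 12)
    + rng.normal(0, 0.3, 60),
    index=months,
)

# Step one: split the series into trend and seasonal factors.
decomposition = seasonal_decompose(revenue_per_pax, model="additive", period=12)
print(decomposition.seasonal.head(12))   # the twelve monthly seasonal factors
```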

Step two is a time series analysis of income (a weighted average of GDP development in the countries supplying the majority of Pax), separating trend and seasonal factors. This trend is what we are looking for; we want to use it to explain the trend movements in the revenue.

Step three is a regression of the revenue trend on the income trend, as shown in the graph below. The revenue trend was estimated assuming a quadratic relation over time, and we can see that the fit is good. In fact, 98% of the variance in the revenue trend can be explained by the change in the income (+) trend:

    Concession-trend

    Now the model will be as follows – step four:

1. We will collect the central banks' GDP forecasts (baseline scenario) and use these to forecast the most likely change in the income trend
2. More and more central banks are now producing fan charts giving the possible event space (with probabilities) for their forecasts. We will use these to establish a probability distribution for our income proxy

    Below is given an example of a fan chart taken from the Bank of England’s inflation report November 2009. (Bank of England, 2009) ((The fan chart depicts the probability of various outcomes for GDP growth.  It has been conditioned on the assumption that the stock of purchased assets financed by the issuance of central bank reserves reaches £200 billion and remains there throughout the forecast period.  To the left of the first vertical dashed line, the distribution reflects the likelihood of revisions to the data over the past; to the right, it reflects uncertainty over the evolution of GDP growth in the future.  If economic circumstances identical to today’s were to prevail on 100 occasions, the MPC’s best collective judgement is that the mature estimate of GDP growth would lie within the darkest central band on only 10 of those occasions.  The fan chart is constructed so that outturns are also expected to lie within each pair of the lighter green areas on 10 occasions.  In any particular quarter of the forecast period, GDP is therefore expected to lie somewhere within the fan on 90 out of 100 occasions.  The bands widen as the time horizon is extended, indicating the increasing uncertainty about outcomes.  See the box on page 39 of the November 2007 Inflation Report for a fuller description of the fan chart and what it represents.  The second dashed line is drawn at the two-year point of the projection.))

    Bilde1

3. We will then use the relation between the historic revenue and income trends to forecast the revenue trend
4. Adding the seasonal variation, using the estimated seasonal factors, gives us a forecast of the periodic revenue.

    For our historic data the result is shown in the graph below:

    Concession-revenue-estimate

The calculated revenue series has a very high correlation with the observed revenue series (R = 0.95), explaining approximately 90% of the series' variation.

Step five: we can now forecast the revenue from the concession per Pax for the next periods (months, quarters or years), using Monte Carlo simulation:

1. From the income proxy distribution we draw a possible change in yearly income and calculate the new trend
2. Using the estimated relation between the historic revenue and income trends we forecast the most likely revenue trend and calculate the 95% confidence interval. We then use this to establish a probability distribution for the period's trend level and draw a value. This value is adjusted with the period's seasonal factor and becomes our forecasted value for the airport's revenue from the concession for this period (see the sketch below).
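A compact sketch of this simulation loop, with hypothetical parameters standing in for the estimated distributions and regression coefficients:

```python
import numpy as np

rng = np.random.default_rng(13)

# Hypothetical parameters: an income-proxy (GDP change) distribution read off
# a fan chart, an assumed linear fit of the revenue trend on the income trend
# (intercept a, slope b, residual s.e.), and the period's seasonal factor.
n_sims = 1000
gdp_mu, gdp_sd = 0.02, 0.015          # income proxy distribution (assumed)
a, b, resid_se = 2.0, 1.3, 0.4        # revenue trend = a + b * income trend (assumed)
seasonal_factor = 1.08                # this period's estimated seasonal factor
income_trend = 5.0                    # current level of the income trend

revenue = np.empty(n_sims)
for i in range(n_sims):
    # 1. draw a change in yearly income and update the income trend
    new_income_trend = income_trend * (1 + rng.normal(gdp_mu, gdp_sd))
    # 2. forecast the revenue trend from the estimated relation, draw a value
    #    around it, and adjust with the period's seasonal factor
    trend_level = rng.normal(a + b * new_income_trend, resid_se)
    revenue[i] = trend_level * seasonal_factor

low, high = np.percentile(revenue, [5, 95])
print(f"Concession revenue per Pax: mean {revenue.mean():.2f}, 90% interval {low:.2f}-{high:.2f}")
```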

Running through this a thousand times we get a distribution as given below:

Concession-revenue-distribu

In the airport EBITDA model this is only a small but important part of forecasting future airport revenue. As the model's data are updated (monthly), all the time series analyses and regressions are redone dynamically to capture changes in trends and seasonal factors.

The level of monthly revenue from the concession is obviously more complex than can be described with a small set of variables and assumptions. Our model has, with high probability, specification errors, and we may or may not have violated some of the statistical methods' assumptions (the model produces output to monitor this). But we feel that we are far better off than having put all our money on a single figure as a forecast. At least we know something about the forecast's uncertainty.

    References

    Bank of England. (2009, November). Inflation Report November 2009 . Retrieved from http://www.bankofengland.co.uk/publications/inflationreport/ir09nov5.ppt