
Series: Handling Events

  • Distinguish between events and estimates

This entry is part 1 of 2 in the series Handling Events

Large public sector investment projects in Norway have to go through an established methodology for quality assurance. There must be an external quality assurance process both of the selected concept (KS1) and of the project’s profitability and cost (KS2).

    KS1 and KS2

    Concept quality control (KS1) shall ensure the realization of socioeconomic advantages (the revenue side of a public project) by ensuring that the most appropriate concept for the project is selected. Quality assurance of cost and management support (KS2) shall ensure that the project can be completed in a satisfactory manner and with predictable costs.

[Figure: the KS1 and KS2 quality assurance process]

I have worked with KS2 analyses, focusing on the uncertainty analysis. The analysis must be done in a quantitative manner and be probability based. There is special focus on two probability levels: P50, the project’s expected value and the grant given to the project, and P85, the grant set by Parliament. The civil service entity executing the project is granted the expected value (P50) and must go to a superior level (usually the ministry) to draw on the uncertainty reserve (the difference between the cost levels P85 and P50).
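As an illustration of how these levels are read off a simulated cost distribution, here is a minimal sketch in Python. The three cost elements and their triangular low/most likely/high estimates are hypothetical, chosen only to show the mechanics:

```python
# A minimal sketch of reading P50 and P85 off a simulated total cost
# distribution. All cost figures (in MNOK) are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=1)
n = 100_000

# Three cost elements, each with low / most likely / high estimates:
ground_works = rng.triangular(80, 100, 140, size=n)
construction = rng.triangular(300, 350, 450, size=n)
equipment = rng.triangular(90, 120, 180, size=n)

total_cost = ground_works + construction + equipment

p50, p85 = np.percentile(total_cost, [50, 85])
print(f"P50 (grant to the project):    {p50:.0f} MNOK")
print(f"P85 (Parliament grant):        {p85:.0f} MNOK")
print(f"Uncertainty reserve (P85-P50): {p85 - p50:.0f} MNOK")
```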

    Lessons learnt from risk management in large public projects

Private companies can learn many lessons from this quality assurance methodology, not least the thorough and methodical way the analysis is done, the way uncertainty is analysed and how the uncertainty reserve is managed.

    The analogy to the decision-making levels in the private sector is that the CEO shall manage the project on P50, while he must go to the company’s board to use the uncertainty reserve (P85-P50).

    In the uncertainty analyses in KS2 a distinction is made between estimate uncertainty and event uncertainty. This is a useful distinction, as the two types of risks are by nature different.

    Estimate uncertainty

    Uncertainty in the assumptions behind the calculation of a project’s cost and revenue, such as

    • Prices and volumes of products and inputs
    • Market mix
    • Strategic positioning
    • Construction cost

These uncertainties can be modelled in great detail (but remember: you need to see the forest for the trees!) and are direct estimates of the project’s or company’s costs and revenues.

    Event Uncertainties

These events are not expected to occur and should therefore not be included in the calculation of direct costs or revenues. The variables will initially have an expected value of 0, but the events may have serious consequences if they do occur. Events can be modelled by estimating the probability of the event occurring and the consequence if it does. Examples of event uncertainties are:

    • Political risks in emerging markets
• Paradigm shifts in consumer habits
    • Innovations
    • Changes in competitor behavior
    • Changes in laws and regulations
    • Changes in tax regimes

    Why distinguish between estimates and events?

    The reason why there are advantages to separating estimates and events in risk modeling is that they are by nature different. An estimate of an expense or income is something we know will be part of a project’s results, with an expected value that is NOT equal to 0. It can be modeled as a probability curve with an expected outcome and a high and low value.

    An event, on the other hand, can occur or not, and has an expected value of 0. If the event is expected to occur, the impact of the event should be modeled as an expense or income. Whether the event occurs or not has a probability, and there will be an impact if the event occurs (0 if it doesn’t occur).

Such an event can be modelled as a discrete distribution (0 if it does not occur, 1 if it occurs), and there is only an impact on the result of the project or business IF it occurs. The consequence may be deterministic (we know what it means if it happens) or it could be a distribution with a high, low and expected value.
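A minimal sketch of this, assuming a 10% event probability and a triangular consequence distribution (both are illustrative figures, not taken from the text):

```python
# Event uncertainty as a Bernoulli variable times a consequence
# distribution. Probability and impact figures are assumptions.
import numpy as np

rng = np.random.default_rng(seed=7)
n = 100_000

# Estimate uncertainty: a cost we know will occur (expected value != 0).
construction_cost = rng.triangular(90, 100, 120, size=n)

# Event uncertainty: occurs with probability 0.10; impact only IF it occurs.
occurs = rng.random(n) < 0.10                    # Bernoulli(p = 0.10)
impact = rng.triangular(20, 30, 50, size=n)      # consequence, given occurrence
event_cost = occurs * impact                     # 0 in runs where it stays away

total = construction_cost + event_cost
print(f"Expected event cost: {event_cost.mean():.1f}")  # ~ p * E[impact]
print(f"Expected total cost: {total.mean():.1f}")
```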

    An example

    I have created an example using a private manufacturing company. They have an expected P&L which looks like this:

[Table: the company’s expected P&L]

The company has a high export share to the EU, while its costs (both variable and fixed) are Norwegian. The net margin is expected to fall to a level of 17% in 2018. The situation looks somewhat better when simulated: there is more upside than downside in the market.

[Figure: initial simulation of the result]

    But potential events that may affect the result are not yet modeled, and what impact can they have? Let’s look at two examples of potential events:

1. The introduction of a duty of 25% on the company’s products in the EU. The company will not be able to pass the cost on to its customers, so this will be a cost to the company.
2. There are only two suppliers of the raw material the company uses to produce its products, and demand for it is high. The company therefore runs the risk of not getting enough raw material (25% less) to produce as much as the market demands.

[Table: event probabilities by year]

As the table shows, the risk that the events occur increases with time. Looking at the consequences of the probability-weighted events in 2018, the impact on the expected result is:

[Figure: simulated result, 2018]

The consequence of these events is a larger downside risk (lower expected result) and higher variability (larger standard deviation). The probability of a result of 0 or lower is

• 14% in the base scenario
• 27% with the event “Duty in the EU”
• 36% with the event “Raw material shortage” in addition
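To see how such numbers arise, the sketch below layers two probability-weighted events onto a simulated base result and reads off the loss probabilities. The distribution and event parameters are stand-ins, not the example data above:

```python
# Layering events onto a simulated result and measuring P(result <= 0).
# All parameters are illustrative, not the article's example data.
import numpy as np

rng = np.random.default_rng(seed=42)
n = 100_000

base_result = rng.normal(loc=25, scale=23, size=n)  # simulated result, MNOK

# Each event: Bernoulli occurrence times a triangular cost consequence.
duty = (rng.random(n) < 0.30) * rng.triangular(15, 20, 30, size=n)
shortage = (rng.random(n) < 0.20) * rng.triangular(10, 15, 25, size=n)

scenarios = [("Base scenario", base_result),
             ("+ duty in the EU", base_result - duty),
             ("+ raw material shortage", base_result - duty - shortage)]

for label, result in scenarios:
    print(f"{label:25s} P(result <= 0) = {np.mean(result <= 0):.0%}")
```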

    The events have no upside, so this is a pure increase in company risk. A 36% probability of a result of 0 or lower may be dramatic. The knowledge of what potential events may mean to the company’s profitability will contribute to the company’s ability to take appropriate measures in time, for instance

    • Be less dependent on EU customers
    • Securing a long-term raw materials contract

    and so on.

Normally, this kind of analysis is done as a scenario. But a scenario analysis will not provide the answer to how likely the event is, nor to what the likely consequence is. Neither will it answer the question: how likely is it that the business will make a loss?

One of the main reasons for doing risk analysis is that it increases the ability to take action in time. Good risk management is all about being one step ahead, all the time. As a rule, the consequences of events that no one has thought of (and for which no plan B is in place) are greater than those of events that have been thought through. It is far better to have calculated the consequences, reflected on the probabilities and, if possible, put risk mitigation in place.

Knowing the likelihood that something can go horribly wrong is also an important tool for prioritizing properly and putting mitigation measures in place at the right point.

  • The role of events in simulation modeling


    This entry is part 2 of 2 in the series Handling Events

    “With a sample size large enough, any outrageous thing is likely to happen”

    The law of truly large numbers (Diaconis & Mosteller, 1989)

    Introduction

The need to assess the impact of events with binary[i] outcomes, like loan defaults, the occurrence of recessions or the passage of special legislation, or events that can be treated like binary events, like paradigm shifts in consumer habits, changes in competitor behavior or new innovations, arises often in economics and other areas of decision making.

To the latter we can add political risks, both macro and micro: conflicts, economic crises, capital controls, exchange controls, repudiation of contracts, expropriation, quality of bureaucracy, government project decision-making, regulatory framework conditions, changes in laws and regulations, changes in tax laws and regimes, etc.[ii] Political risks act like discontinuities and usually become more of a factor as the time horizon of a project gets longer.

In some cases, when looking at project feasibility, the availability of resources, the quality of the work force and the state of preparations can also be treated as binary variables.

Events with binary outcomes have only two states: either the event happens or it does not happen, the presence or absence of a given exposure. We may extend this to whether it may happen next year or not, or whether it can happen at some other point in the project’s timeframe.

We have two types of events: external events originating from outside with the potential to create effects inside the project, and events originating inside the project with a direct impact on it. By the term ‘project’ we will in the following mean a company, plant, operation, etc. The impact will eventually be of an economic nature, and it is this we want to put a value on.

    External events are normally grouped into natural events and man-made events. Examples of man-made external events are changes in laws and regulations, while extreme weather conditions etc. are natural external events.

    External events can occur as single events or as combinations of two or more external events. Potential combined events are two or more external events having a non-random probability of occurring simultaneously, e.g., quality of bureaucracy and government project decision-making.

    Identification of possible external events

    The identification of possible events should roughly follow the process sketched below[iii]:

1. Screening for Potential Single External Events – identify all natural and man-made external events threatening the project implementation (independent events).
2. Screening for Potential Combined External Events – combine single external events into various combinations that are both imaginable and may possibly threaten the project implementation (correlated events).
3. Relevance Screening – screen out potential external events, single or combined, that are not relevant to the project. By ‘not relevant’ we mean that they cannot occur or that their probability of occurrence is evidently ‘too low’.
4. Impact Screening – screen out potential external events, single or combined, for which no possible project impact can be identified.
5. Event Analysis – acquire and assess information on the probability of occurrence, at each point in the future, for each relevant event.
6. Probabilistic Screening – accept the risk contribution of an external event, or plan appropriate project modifications to reduce unacceptable contributions to project risk.

Project Impact Analysis: modelling and quantification

    It is useful to distinguish between two types of forecasts for binary outcomes: probability[iv] forecasts and point forecasts.  We will in the following only use probability forecasts since we also want to quantify forecast uncertainty, which is often ignored in making point forecasts. After all, the primary purpose of forecasting is to reduce uncertainty.

We assume that none of the possible events takes the form of a catastrophe. A mathematical catastrophe is a point in a model of an input-output system where a vanishingly small change in an exogenous variate can produce a large change in the output (Thom, 1975).

    Current practice in public projects

The usual approach, at least for many public projects[v], is to first forecast the total cost distribution from the cost model and then add, as a second cost layer outside the model, the effects of possible events. These events will be discoveries about the quality of planning, the availability of resources, the state of cooperation with other departments, difficulties in getting decisions, etc.

In addition, these costs are more often than not calculated as a probability distribution of lump sums and then added to the distribution for the estimated expected total costs. The consequence of this is that:

1. the ‘second cost layer’ introduces new lump sum cost variables,
2. the events are unrelated to the variates in the cost model,
3. the mechanism of cost transferal from the events is rarely clearly stated, and
4. for a project with a time frame of several years, where the net present value of project costs is the decisive variable, this amounts to adding a lump sum to the first year’s costs.

Thus, using this procedure to identify the project’s tolerability to external events can easily lead decision and policy makers astray.
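Point 4 is easy to see with a small sketch: the same expected event cost has a very different present value depending on whether it is added as a lump sum in year 0 or placed in the year the event is actually likely to occur. The 8% discount rate and the cost figure are assumptions for illustration:

```python
# Why a 'second layer' lump sum in year 0 misstates net present value
# when the event actually hits later. Rate and cost are assumed figures.

def npv(cashflows, rate=0.08):
    """Present value of yearly cashflows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

event_cost = 100.0  # expected cost of the event, MNOK

pv_lump_sum = npv([event_cost])                 # added to the first year
pv_correct = npv([0, 0, 0, 0, 0, event_cost])   # placed where it may occur (year 5)

print(f"PV, lump sum in year 0: {pv_lump_sum:.1f} MNOK")  # 100.0
print(f"PV, event in year 5:    {pv_correct:.1f} MNOK")   # ~68.1
```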

We will therefore propose another approach, with analogies taken from time series analysis: intervention analysis. This approach to intervention analysis is based on mixed autoregressive moving average (ARMA[vi]) models and was introduced by Box and Tiao (1975). Intervention models link one or more input (or independent) variates to a response (or dependent) variate by a transfer function.

    Handling Project Interventions

In time series analysis we try to discern the effects of an intervention after the fact. In our context, we are trying to establish what can happen if some event intervenes in our project. We will do this by using transfer functions. Transfer functions are models of how the effects of the event are translated into future values of y. This implies that we must:

1. forecast the probability $p_t$ that the event will happen at time t,
2. select the variates (response variables) in the model that will be affected, and
3. establish a transfer function for each response variable, giving the expected effect (response) on that variate.

The event can trigger a response at time T[vii] in the form of a step[viii] ($S_t$) (e.g. a change in tax laws) or a pulse ($P_t$) (e.g. a change of supplier). We will denote this as:

$S_t = 0$ for $t < T$, and $S_t = 1$ for $t \ge T$

$P_t = 0$ for $t \ne T$, and $P_t = 1$ for $t = T$
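In code the two indicator series are straightforward; a small sketch with the intervention assumed at T = 5 on a ten-period timeline:

```python
# Step and pulse indicator series for an intervention at T = 5 (assumed).
import numpy as np

T = 5
t = np.arange(10)

step = (t >= T).astype(int)    # S_t: 0 0 0 0 0 1 1 1 1 1
pulse = (t == T).astype(int)   # P_t: 0 0 0 0 0 1 0 0 0 0

# A pulse is a first-differenced step (cf. endnote viii):
assert np.array_equal(np.diff(step, prepend=0), pulse)
```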

    For one exogenous variate x and one response variate y, the general form of an intervention model is:

$y_t = \dfrac{\omega(B)}{\delta(B)}\, x_{t-s} + N(e_t)$

where $B^s$ is the backshift operator, shifting the time series s steps backward, and $N(e_t)$ is an appropriate noise model for y. The delay between a change in x and a response in y is s. The intervention model has both a numerator and a denominator polynomial.

The numerator polynomial $\omega(B)$ is the moving average (MA) polynomial[ix]. The numerator parameters are usually the most important, since they determine the magnitude of the effect of x on y.

The denominator polynomial $\delta(B)$ is the autoregressive (AR) polynomial[x]. The denominator determines the shape of the response (growth or decay).
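As a sketch of how this works, take the simplest first-order version, $y_t = \delta y_{t-1} + \omega_0 x_{t-s}$, with assumed parameter values. A step input then rises gradually towards the steady state $\omega_0/(1-\delta)$, while a pulse input jumps and decays back to zero:

```python
# First-order intervention model y_t = delta*y_{t-1} + omega0*x_{t-s}.
# Parameter values (omega0 = 1.0, delta = 0.6, s = 1) are assumptions.
import numpy as np

def response(x, omega0=1.0, delta=0.6, s=1):
    """Response of a first-order transfer function to an input series x."""
    y = np.zeros(len(x))
    for t in range(len(x)):
        prev = y[t - 1] if t > 0 else 0.0
        lagged_x = x[t - s] if t >= s else 0.0
        y[t] = delta * prev + omega0 * lagged_x
    return y

T, n = 5, 15
t = np.arange(n)
step, pulse = (t >= T).astype(float), (t == T).astype(float)

print(response(step).round(2))   # climbs towards omega0/(1-delta) = 2.5
print(response(pulse).round(2))  # jumps to 1.0, then decays towards zero
```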

Graphs of some common intervention models are shown in panel (B) below, taken from the original paper by Box and Tiao (p. 72):

[Figure: responses to step and pulse interventions, from Box and Tiao (1975), p. 72]

As the figures above show, a large number of different types of responses can be modelled using relatively simple models. In many cases a step will not give an immediate response, but a more dynamic one, and the response to a pulse may or may not decay all the way back. Most response models have a steady state solution that will be reached after a number of periods; model c) in the panel above, however, will continue to grow to infinity. Model a) gives a permanent change, positive (a carbon tax) or negative (new, cheaper technology). Model b) gives a more gradual change, positive (implementation of new technology) or negative (the effect of crime-reducing activities). The response to a pulse can be positive or negative (loss of a supplier), with a decay that can continue for a short or a long period, all the way back or to a new permanent level.

    Summary

    By using analogies from intervention analysis a number of interesting and important issues can be analyzed:

• If two events affect one response variable, will the combined effect be less or greater than the sum of both?
• Will one event affecting more than one response variable increase the effect dramatically?
• Is there a risk of calculating the same cost twice?
• If an event occurs at the end of a project, will the project be prolonged? And what will the costs be?
• Etc.

Questions like these can never be analyzed using a ‘second layer lump sum’ approach. Even more important is the possibility of incorporating the responses to exogenous events inside the simulation model, thus having the responses at the correct point on the time line and thereby a correct net present value for costs, revenues and company or project value.

Because net present value is what this is all about, isn’t it? After all, the result will be used for decision making!

    REFERENCES

Box, G.E.P. and Tiao, G.C., 1975. Intervention analysis with applications to economic and environmental problems. J. Amer. Statist. Assoc. 70(349), pp. 70–79.

Diaconis, P. and Mosteller, F., 1989. Methods of studying coincidences. J. Amer. Statist. Assoc. 84, pp. 853–861.

Knochenhauer, M. and Louko, P., 2003. Guidance for External Events Analysis. SKI Report 02:27. Swedish Nuclear Power Inspectorate.

Thom, R., 1975. Structural Stability and Morphogenesis. Benjamin Addison Wesley, New York.

    ENDNOTES

[i] Events with binary outcomes have only two states: either the event happens or it does not happen, the presence or absence of a given exposure. The event can be described by a Bernoulli distribution. This is a discrete distribution with two possible outcomes, labelled n=0 and n=1, in which n=1 (“event occurs”) has probability p and n=0 (“event does not occur”) has probability q = 1-p, where 0<p<1. It therefore has the probability density function $P(n) = 1-p$ for $n = 0$ and $P(n) = p$ for $n = 1$, which can also be written $P(n) = p^n (1-p)^{1-n}$.

    [ii] ‘’Change point’’ (“break point” or “turning point”) usually denotes the point in time where the change takes place and “regime switching” the occurrence of a different regime after the change point.

    [iii] A good example of this is Probabilistic Safety Assessments (PSA). PSA is an established technique to numerically quantify risk measures in nuclear power plants. It sets out to determine what undesired scenarios can occur, with which likelihood, and what the consequences could be (Knochenhauer & Louko, 2003).

    [iv] A probability is a number between 0 and 1 (inclusive). A value of zero means the event in question never happens, a value of one means it always happens, and a value of 0.5 means it will happen half of the time.

Another scale that is useful for measuring probabilities is the odds scale. If the probability of an event occurring is p, then the odds (W) of it occurring are p : 1-p, often written as $W = p/(1-p)$. Hence, if the probability of an event is 0.5 the odds are 1:1, whilst if the probability is 0.1 the odds are 1:9.

Since odds can take any value from zero to infinity, $\log(p/(1-p))$ ranges from $-\infty$ to $\infty$. Hence, we can model $g(p) = \log(p/(1-p))$ rather than p. As $g(p)$ goes from $-\infty$ to $\infty$, p goes from 0 to 1.

    [v] https://www.strategy-at-risk.com/2013/10/07/distinguish-between-events-and-estimates/

    [vi] In the time series econometrics literature this is known as an autoregressive moving average (ARMA) process.

    [vii] Interventions extending over several time intervals can be represented by a series of pulses.

[viii] $(1-B)\,\text{step} = \text{pulse}$: a pulse is a first-differenced step, and $\text{step} = \text{pulse}/(1-B)$: a step is a cumulated pulse.

Therefore, a step input for a stationary series produces a response identical to that of a pulse input for an integrated I(1) series.

[ix] $\omega(B) = \omega_0 + \omega_1 B + \omega_2 B^2 + \dots$

[x] $\delta(B) = 1 + \delta_1 B + \delta_2 B^2 + \dots$, where $-1 < \delta < 1$.