Decision making – Strategy @ Risk

Category: Decision making

  • Project Management under Uncertainty

    Project Management under Uncertainty

    You can’t manage what you can’t measure
    You can’t measure what you can’t define
    How do you define something that isn’t known?

    DeMarco, 1982

    1.     Introduction

By the term Project we usually understand a unique, one-time operation designed to accomplish a set of objectives in a limited time frame. This could be building a new production plant, designing a new product or developing new software for a specific purpose.

A project usually differs from normal operations by being a one-time operation, having a limited time horizon and budget, having unique specifications and by working across organizational boundaries. A project can be divided into four phases: project definition, planning, implementation and project phase-out.

    2.     Project Scheduling

The project planning phase, which we will touch upon in this paper, consists of breaking down the project into the tasks that must be accomplished for the project to be finished.

The objective of project scheduling is to determine the earliest start and finish of each task in the project. The aim is to be able to complete the project as early as possible and to calculate the likelihood that the project will be completed within a certain time frame.

    The dependencies[i] between the tasks determine their predecessor(s) and successor(s) and thus their sequence (order of execution) in the project[1]. The aim is to list all tasks (project activities), their sequence and duration[2] (estimated activity time length). The figure[ii] below shows a simple project network diagram, and we will in the following use this as an example[iii].

[Figure: Sample project network diagram]

This project thus consists of a linear flow of coordinated tasks where in fact time, cost and performance can vary randomly.

    A convenient way of organizing this information is by using a Gantt[iv] chart. This gives a graphic representation of the project’s tasks, the expected time it takes to complete them, and the sequence in which they must be done.

There will usually be more than one path (sequence of tasks) from the first to the last task in a project. The path that takes the longest time to complete is the project's critical path. The objective of all this is to identify this path and the time it takes to complete it.

    3.     Critical Path Analysis

    The Critical Path (CP)[v] is defined as the sequence of tasks that, if delayed – regardless of whether the other project tasks are completed on or before time – would delay the entire project.

The critical path is hence based on the forecasted duration of each task in the project. These durations are given as single point estimates[3], implying that the durations of the project's tasks contain no uncertainty (they are deterministic). This is obviously wrong and will often lead to unrealistic project estimates due to the inherent uncertainty in all project work.

Keep in mind that all plans are estimates and are only as good as the task estimates.

In fact, many different types of uncertainty can be expected in most projects:

1. Ordinary uncertainty, where time, cost and performance can vary randomly, but within predictable ranges. Variations in task durations will cause the project's critical path to shift, but this can be predicted and the variation in total project time can be calculated.
2. Foreseen uncertainty, where a few known factors (events) can affect the project, but in an unpredictable way[4]. These are projects where tasks and events occur probabilistically and contain logical relationships of a more complicated nature, e.g. where, from a specific event, some tasks are undertaken with certainty while others are undertaken probabilistically (Elmaghraby, 1964) and (Pritsker, 1966). The distribution for total project time can still be calculated, but will include variation from the chance events.
3. Unforeseen uncertainty, where one or more factors (events) cannot be predicted. This implies that decision points about the project's implementation have to be included at one or more points in the project's execution.

As a remedy for the critical path analysis's inability to handle even ordinary uncertainty, the Program Evaluation and Review Technique (PERT[vi]) was developed. PERT is a variation on critical path analysis that takes a slightly more skeptical view of the duration estimates made for each of the project tasks.

PERT uses a three-point estimate,[vii] based on forecasts of the shortest possible task duration, the most likely task duration and the worst-case task duration. The task's expected duration is then calculated as a weighted average of these three estimates.

This is assumed to help bias time estimates away from the unrealistically short time-scales that are often used.
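A minimal sketch of this weighting (the formula itself is given in endnote vii below); the task's three duration estimates are hypothetical:

```python
def pert_estimate(t_min, t_ml, t_max):
    """PERT weighting: expected duration and standard deviation from the
    optimistic (t_min), most likely (t_ml) and pessimistic (t_max) estimates."""
    expected = (t_min + 4 * t_ml + t_max) / 6.0
    std_dev = (t_max - t_min) / 6.0
    return expected, std_dev

# Hypothetical task: optimistic 8, most likely 10, pessimistic 16 weeks
e, sd = pert_estimate(8, 10, 16)
print(f"Expected duration: {e:.1f} weeks, SD: {sd:.2f} weeks")  # 10.7 weeks, 1.33 weeks
```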

    4.     CP, PERT and Monte Carlo Simulation

The two most important questions we want answered are:

• How long will it take to do the project?
• How likely is the project to succeed within the allotted time frame?

In this example the project's time frame is set to 67 weeks.

We will use the Critical Path method, PERT and Monte Carlo simulation to try to answer these questions, but first we need to make some assumptions about the variability of the estimated task durations. We will assume that the durations are triangularly distributed and that the actual durations can be both higher and lower than their most likely value.

The distributions will probably have a right tail, since underestimation is common when assessing time and cost (positively skewed), but sometimes people deliberately overestimate to avoid being held responsible for later project delays (negatively skewed). The assumptions about the tasks' durations are given in the table below:

[Table: Task duration assumptions]

The corresponding paths, critical path and project durations are given in the table below. The critical path method finds path #1 (tasks: A, B, C, D, E) as the critical path and thus puts the expected project duration at 65 weeks. The second question, however, cannot be answered by using this method. So, with regard to probable deviations from the expected project duration, the project manager is left without any information.

By using PERT, calculating expected durations and their standard deviations as described in endnote vii, we find the same critical path and roughly the same expected project duration (65.5 weeks), but since we can now calculate the estimate's standard deviation, we can also find the probability of the project being finished inside the project's time frame.

[Table: Path durations and critical path]

By assuming that the sum of task durations along the critical path is approximately normally distributed, we find the probability of having the project finished inside the time frame of 67 weeks to be 79%. Since this is a fairly high probability of project success, the manager can rest contentedly – or can she?
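The mechanics of that calculation can be sketched as follows; since the underlying table is not reproduced here, the task standard deviations are hypothetical, chosen so that the resulting path SD (about 1.9 weeks) reproduces the 79% quoted above:

```python
import math
from scipy.stats import norm

# Hypothetical PERT standard deviations (weeks) for the five critical-path tasks A..E
task_sd = [0.8, 1.0, 0.9, 0.7, 0.8]

mean_duration = 65.5                                # PERT expected project duration (weeks)
path_sd = math.sqrt(sum(s ** 2 for s in task_sd))   # tasks assumed independent along the path

# Probability of finishing inside the 67-week time frame under the normal assumption
p_on_time = norm.cdf(67.0, loc=mean_duration, scale=path_sd)
print(f"Path SD: {path_sd:.2f} weeks, P(on time): {p_on_time:.0%}")  # about 79%
```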

If we repeat the exercise, but now using Monte Carlo simulation, we find a different answer. We can no longer establish a critical path with certainty. The tasks' variability can in fact give three different critical paths. The most likely is path #1 as before, but there is a close to 30% probability that path #4 (tasks: A, B, C, G, E) will be the critical path. It is also possible, even if the probability is small (<5%), that path #3 (tasks: A, F, G, E) is the critical path (see figure below).

[Figure: Probability of each path being the critical path]

So, in this case we cannot use the critical path method; it will give wrong answers and misleading information to the project manager. More important is the fact that the method cannot use all the information we have about the project's tasks, that is to say their variability.

A better approach is to simulate project time to find the distribution for total project duration. This distribution will then include the duration of all critical paths that may arise during the project simulation, given by the red curve in the figure below:

[Figure: Cumulative distributions of path durations and total project duration]

This figure gives the cumulative probability distribution for the durations of the possible critical paths (paths #1, #3 and #4) as well as for the total project duration. Since path #1 consistently has long duration times, it is only in 'extreme' cases that path #4 is the critical path. Most striking is the large variation in path #3's duration and the fact that it can end up as the critical path in some of the simulation's runs.

The only way to find the distribution for total project duration is, for every run in the simulation, to find the critical path and calculate its duration.
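A sketch of that per-run calculation is shown below; the triangular (min, most likely, max) parameters are hypothetical stand-ins for the assumptions table above, and the three paths are those named in the text:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical (min, most likely, max) durations in weeks for tasks A..G;
# the real values are those given in the assumptions table above.
tasks = {
    "A": (9, 10, 13), "B": (13, 15, 20), "C": (14, 16, 21),
    "D": (11, 12, 16), "E": (10, 12, 15), "F": (25, 31, 45), "G": (10, 11, 15),
}
paths = {
    "path1": ["A", "B", "C", "D", "E"],
    "path3": ["A", "F", "G", "E"],
    "path4": ["A", "B", "C", "G", "E"],
}

n_runs, time_frame = 100_000, 67.0
durations = {t: rng.triangular(lo, ml, hi, n_runs) for t, (lo, ml, hi) in tasks.items()}

# For every run: the duration of each path, the critical (longest) path and the project duration
path_dur = np.column_stack([sum(durations[t] for t in p) for p in paths.values()])
project_dur = path_dur.max(axis=1)
critical = np.array(list(paths))[path_dur.argmax(axis=1)]

print("Expected project duration:", project_dur.mean().round(1), "weeks")
print("P(finish within time frame):", (project_dur <= time_frame).mean().round(2))
for name in paths:
    print(f"P({name} is critical):", (critical == name).mean().round(2))
```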

We now find the expected total project duration to be 67 weeks – one to two weeks more than CPM and PERT gave – but, more importantly, we find that the probability of finishing the project inside the time frame is only 50%.

By neglecting the probability that the critical path might change due to task variability, PERT underestimates project variance and thus the probability that the project will not finish inside the expected time frame.

    Monte Carlo models like this can be extended to include many types of uncertainty belonging to the classes of foreseen and unforeseen uncertainty. However, it will only be complete when all types of project costs and their variability are included.

    5.     Summary

Key findings in comparative studies show that using Monte Carlo simulation along with project planning techniques gives a better understanding of project uncertainty and its risk level, as well as providing the project team with the ability to grasp the various possible courses of the project within one simulation procedure.

    Notes

[1] This can be visualized in a Precedence Diagram, also known as a Project Network Diagram. In a Network Diagram, the start of an activity must be linked to the end of another activity.

    [2] An event or a milestone is a point in time having no duration. A Precedence Diagram will always have a Start and an End event.

    [3] As a “best guess” or “best estimate” of a fixed or random variable.

    [4] E.g. repetition of tasks.

    Endnotes

    [i] There are four types of dependencies in a Precedence Diagram:

    1. Finish-Start: A task cannot start before a previous task has ended.
    2. Start-Start: There is a defined relationship between the start of tasks.
    3. Finish-Finish: There is a defined relationship between the end dates of tasks.
    4. Start-Finish: There is a defined relationship between the start of one task and the end date of a successor task.

    [ii] Taken from the Wikipedia article: Critical path drag, http://en.wikipedia.org/wiki/Critical_path_drag

[iii] The diagram contains more information than we will use. It is mostly self-explanatory; however, Float (or Slack) is the activity delay that the project can tolerate before it comes in late, and Drag is how much a task on the critical path is delaying project completion (Devaux, 2012).

    [iv] The Gantt chart was developed by Henry Laurence Gantt in the 1910s.

    [v] The Critical Path Method (CPM) was developed in the late 1950s by Morgan R. Walker of DuPont and James E. Kelley, Jr. of Remington Rand.

[vi] The Program Evaluation and Review Technique (PERT) was developed by Booz Allen Hamilton and the U.S. Navy, at about the same time as the CPM. Key features of a PERT network are:

    1. Events must take place in a logical order.
    2. Activities represent the time and the work it takes to get from one event to another.
    3. No event can be considered reached until ALL activities leading to the event are completed.
    4. No activity may be begun until the event preceding it has been reached.

[vii] Assuming that a process with a double-triangular distribution underlies the actual task durations, the three estimated values (min, ml, max) can be used to calculate the expected value (E) and standard deviation (SD) as L-estimators, with: E = (min + 4 ml + max)/6 and SD = (max − min)/6.

    E is thus a weighted average, taking into account both the most optimistic and most pessimistic estimates of the durations provided. SD measures the variability or uncertainty in the estimated durations.

    References

    Devaux, Stephen A.,(2012). “The Drag Efficient: The Missing Quantification of Time on the Critical Path” Defense AT&L magazine of the Defense Acquisition University. Retrieved from http://www.dau.mil/pubscats/ATL%20Docs/Jan_Feb_2012/Devaux.pdf

DeMarco, T. (1982). Controlling Software Projects. Prentice-Hall, Englewood Cliffs, N.J.

Elmaghraby, S.E. (1964). An Algebra for the Analyses of Generalized Activity Networks. Management Science, 10(3).

    Pritsker, A. A. B. (1966). GERT: Graphical Evaluation and Review Technique (PDF). The RAND Corporation, RM-4973-NASA.

  • The role of events in simulation modeling

    The role of events in simulation modeling

    This entry is part 2 of 2 in the series Handling Events

    “With a sample size large enough, any outrageous thing is likely to happen”

    The law of truly large numbers (Diaconis & Mosteller, 1989)

    Introduction

The need to assess the impact of events with binary[i] outcomes – like loan defaults, the occurrence of recessions, the passage of special legislation, etc. – or events that can be treated as binary, like paradigm shifts in consumer habits, changes in competitor behavior or new innovations, arises often in economics and other areas of decision making.

To the latter we can add political risks, both macro and micro: conflicts, economic crises, capital controls, exchange controls, repudiation of contracts, expropriation, quality of bureaucracy, government project decision-making, regulatory framework conditions, changes in laws and regulations, changes in tax laws and regimes etc.[ii] Political risk acts like a discontinuity and usually becomes more of a factor as the time horizon of a project gets longer.

    In some cases when looking at project feasibility, availability of resources, quality of work force and preparations can also be treated as binary variables.

Events with binary outcomes have only two states: either the event happens or it does not happen – the presence or absence of a given exposure. We may extend this to whether it may happen next year or not, or whether it can happen at some other point in the project's time frame.

We have two types of events: external events, originating from outside with the potential to create effects inside the project, and events originating inside the project and having a direct impact on it. By the term project we will in the following mean a company, plant, operation etc. The impact will eventually be of an economic nature, and it is this we want to put a value on.

    External events are normally grouped into natural events and man-made events. Examples of man-made external events are changes in laws and regulations, while extreme weather conditions etc. are natural external events.

    External events can occur as single events or as combinations of two or more external events. Potential combined events are two or more external events having a non-random probability of occurring simultaneously, e.g., quality of bureaucracy and government project decision-making.

    Identification of possible external events

    The identification of possible events should roughly follow the process sketched below[iii]:

1. Screening for Potential Single External Events – Identify all natural and man-made external events threatening the project implementation (Independent Events).
2. Screening for Potential Combined External Events – Combine single external events into various combinations that are both imaginable and which may possibly threaten the project implementation (Correlated Events).
3. Relevance Screening – Screen out potential external events, either single or combined, that are not relevant to the project. By 'not relevant' we will understand that they cannot occur or that their probability of occurrence is evidently 'too low'.
4. Impact Screening – Screen out potential external events, either single or combined, for which no possible project impact can be identified.
5. Event Analysis – Acquire and assess information on the probability of occurrence, at each point in the future, for each relevant event.
6. Probabilistic Screening – Accept the risk contribution of an external event, or plan appropriate project modifications to reduce unacceptable contributions to project risk.

    Project Impact Analysis; modelling and quantification

    It is useful to distinguish between two types of forecasts for binary outcomes: probability[iv] forecasts and point forecasts.  We will in the following only use probability forecasts since we also want to quantify forecast uncertainty, which is often ignored in making point forecasts. After all, the primary purpose of forecasting is to reduce uncertainty.

    We assume that none of the possible events is in the form of a catastrophe.  A mathematical catastrophe is a point in a model of an input-output system, where a vanishingly small change in an exogenous variate can produce a large change in the output. (Thom, 1975)

    Current practice in public projects

The usual approach, at least for many public projects[v], is to first forecast the total cost distribution from the cost model and then add, as a second cost layer outside the model, the effects of possible events. These events will be discoveries about: the quality of planning, availability of resources, the state of cooperation with other departments, difficulties in getting decisions, etc.

In addition, these costs are more often than not calculated as a probability distribution of lump sums and then added to the distribution for the estimated expected total costs. The consequences of this are that:

1. the 'second cost layer' introduces new lump sum cost variables,
2. the events are unrelated to the variates in the cost model,
3. the mechanism of cost transferal from the events is rarely clearly stated, and
4. for a project with a time frame of several years, where the net present value of project costs is the decisive variable, this amounts to adding a lump sum to the first year's cost.

Thus, using this procedure to identify the project's tolerability to external events can easily lead decision and policy makers astray.

We will therefore propose another approach, with analogies taken from time series analysis – intervention analysis. This approach is based on the mixed autoregressive moving average (ARMA[vi]) models introduced by Box & Tiao (Box and Tiao, 1975). Intervention models link one or more input (or independent) variates to a response (or dependent) variate by a transfer function.

    Handling Project Interventions

In time series analysis we try to discern the effects of an intervention after the fact. In our context we are trying to establish what can happen if some event intervenes in our project. We will do this by using transfer functions – models of how the effects from the event are translated into future values of y. This implies that we have to:

1. Forecast the probability p_t that the event will happen at time t,
2. Select the variates (response variables) in the model that will be affected, and
3. Establish a transfer function for each response variable, giving the expected effect (response) on that variate.

The event can trigger a response at time T[vii] in the form of a step[viii] (S_t) (e.g. a change in tax laws) or a pulse (P_t) (e.g. a change of supplier). We will denote this as:

S_t = 0 when t < T, and S_t = 1 when t ≥ T

P_t = 0 when t ≠ T, and P_t = 1 when t = T

For one exogenous variate x and one response variate y, the general form of an intervention model is:

y_t = [w(B) / d(B)] x_{t−s} + N(e_t)

where B is the backshift operator (B^s shifts the time series s steps backward) and N(e_t) is an appropriate noise model for y. The delay between a change in x and a response in y is s. The intervention model has both a numerator and a denominator polynomial.

    The numerator polynomial is the moving average polynomial (MA)[ix]. The numerator parameters are usually the most important, since they will determine the magnitude of the effect of x on y.

    The denominator polynomial is the autoregressive polynomial (AR)[x]. The denominator determines the shape of the response (growth or decay).

Graphs of some common intervention models are shown in panel (B) below, taken from the original paper by Box & Tiao, p. 72:

[Figure: Effect–response patterns for common intervention models (Box & Tiao, 1975, p. 72)]

As the figures above show, a large number of different types of responses can be modelled using relatively simple models. In many cases a step will not give an immediate response but a more dynamic one, and a response to a pulse may or may not decay all the way back. Most response models have a steady state solution that will be reached after a number of periods; model c) in the panel above, however, will continue to grow to infinity. Model a) gives a permanent change, positive (carbon tax) or negative (new, cheaper technology). Model b) gives a more gradual change, positive (implementation of new technology) or negative (effect of crime-reducing activities). The response to a pulse can be positive or negative (loss of a supplier), with a decay that can continue for a short or a long period, all the way back or to a new permanent level.
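A sketch of a simple first-order intervention response of the kind shown in the panels above; the transfer-function parameters, the intervention time and the horizon are illustrative assumptions, and the same function handles both a step and a pulse input:

```python
import numpy as np

def intervention_response(x, w0=5.0, d1=0.6, s=1):
    """First-order transfer function y_t = w0 / (1 - d1*B) * x_{t-s}:
    the numerator (w0) sets the magnitude of the effect, the denominator (d1) its shape."""
    y = np.zeros(len(x))
    for t in range(len(x)):
        lagged_x = x[t - s] if t - s >= 0 else 0.0
        y[t] = (d1 * y[t - 1] if t > 0 else 0.0) + w0 * lagged_x
    return y

T, horizon = 10, 30
step = np.array([1.0 if t >= T else 0.0 for t in range(horizon)])   # S_t, e.g. a change in tax laws
pulse = np.array([1.0 if t == T else 0.0 for t in range(horizon)])  # P_t, e.g. a one-off loss of a supplier

print("Step response :", intervention_response(step).round(1))   # gradual rise towards w0/(1 - d1)
print("Pulse response:", intervention_response(pulse).round(1))  # jump at T+s, then geometric decay
```

In a full simulation model, the occurrence of the event itself would first be drawn from the forecast probability p_t, and a response like the one above would only be applied in the runs where the event actually occurs.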

    Summary

    By using analogies from intervention analysis a number of interesting and important issues can be analyzed:

• If two events affect one response variable, will the combined effect be less or greater than the sum of the two?
• Will one event affecting more than one response variable increase the effect dramatically?
• Is there a risk of calculating the same cost twice?
• If an event occurs at the end of a project, will the project be prolonged? And what will the costs be?
• Etc.

Questions like these can never be analyzed when using a 'second layer lump sum' approach. Even more important is the possibility of incorporating the responses to exogenous events inside the simulation model, thus having the responses at the correct point on the timeline and, by that, a correct net present value for costs, revenues and company or project value.

Because net present value is what this is all about, isn't it? After all, the result will be used for decision making!

    REFERENCES

    Box, G.E.P.  and Tiao, G.C., 1975.  Intervention analysis with application to economic and environmental problems.  J. Amer. Stat. Assoc. 70, 349:  pp70-79.

    Diaconis, P. and Mosteller, F. , 1989. Methods of Studying Coincidences. J. Amer. Statist. Assoc. 84, 853-861.

    Knochenhauer, M & Louko, P., 2003. SKI Report 02:27 Guidance for External Events Analysis. Swedish Nuclear Inspectorate.

    Thom R., 1975. Structural stability and morphogenesis. Benjamin Addison Wesley, New York.

    ENDNOTES

[i] Events with binary outcomes have only two states: either the event happens or it does not happen – the presence or absence of a given exposure. The event can be described by a Bernoulli distribution. This is a discrete distribution having two possible outcomes, labelled n=0 and n=1, in which n=1 ("event occurs") has probability p and n=0 ("does not occur") has probability q = 1 − p, where 0 < p < 1. It therefore has probability density function P(n) = 1 − p for n=0 and P(n) = p for n=1, which can also be written P(n) = p^n (1 − p)^(1−n).

    [ii] ‘’Change point’’ (“break point” or “turning point”) usually denotes the point in time where the change takes place and “regime switching” the occurrence of a different regime after the change point.

    [iii] A good example of this is Probabilistic Safety Assessments (PSA). PSA is an established technique to numerically quantify risk measures in nuclear power plants. It sets out to determine what undesired scenarios can occur, with which likelihood, and what the consequences could be (Knochenhauer & Louko, 2003).

    [iv] A probability is a number between 0 and 1 (inclusive). A value of zero means the event in question never happens, a value of one means it always happens, and a value of 0.5 means it will happen half of the time.

    Another scale that is useful for measuring probabilities is the odds scale. If the probability of an event occurring is p, then the odds (W) of it occurring are p: 1- p, which is often written as  W = p/ (1-p). Hence if the probability of an event is 0.5, the odds are 1:1, whilst if the probability is 0.1, the odds are 1:9.

Since odds can take any value from zero to infinity, log(p/(1 − p)) ranges from −infinity to +infinity. Hence, we can model g(p) = log[p/(1 − p)] rather than p. As g(p) goes from −infinity to +infinity, p goes from 0 to 1.
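A small illustration of the mapping between probability, odds and log-odds; the two probability values are the examples used above:

```python
import math

def odds_and_logit(p):
    """Odds W = p/(1-p) and log-odds g(p) = log(W), mapping (0, 1) onto the whole real line."""
    w = p / (1.0 - p)
    return w, math.log(w)

for p in (0.5, 0.1):
    w, logit = odds_and_logit(p)
    print(f"p = {p}: odds = {w:.3f}, log-odds = {logit:.2f}")  # 0.5 -> odds 1:1, 0.1 -> odds 1:9
```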

    [v] https://www.strategy-at-risk.com/2013/10/07/distinguish-between-events-and-estimates/

    [vi] In the time series econometrics literature this is known as an autoregressive moving average (ARMA) process.

    [vii] Interventions extending over several time intervals can be represented by a series of pulses.

[viii] (1 − B) S_t = P_t: a pulse is a first-differenced step, and S_t = P_t/(1 − B): a step is a cumulated pulse.

    Therefore, a step input for a stationary series produces an identical impulse response to a pulse input for an integrated I(1) series.

[ix] w(B) = w_0 + w_1 B + w_2 B^2 + …

[x] d(B) = 1 + d_1 B + d_2 B^2 + …, where −1 < d_i < 1.

     

  • Risk Appetite and the Virtues of the Board

    Risk Appetite and the Virtues of the Board

    This entry is part 1 of 1 in the series Risk Appetite and the Virtues of the Board

     

     

     

This article consists of two parts: Risk Appetite, and The Virtues of the Board (upcoming). This first part can be read as a standalone article; the second will be based on concepts developed in this part.

    Risk Appetite

    Multiple sources of risk are a fact of life. Only rarely will decisions concerning various risks be neatly separable. Intuitively, even when risks are statistically independent, bearing one risk should make an agent less willing to bear another. (Kimball, 1993)

Risk appetite – the board's willingness to bear risk – will depend both on the degree to which it dislikes uncertainty and on the level of that uncertainty. It is also likely to shift as the board responds to emerging market and macroeconomic uncertainty and events of financial distress.

The following graph of the "price of risk[1]" index developed at the Bank of England shows this (Gai & Vause, 2004)[2]. The estimated series fluctuates close to the average "price of risk" most of the time, but has sharp downward spikes in times of financial crises. Risk appetite is apparently highly affected by exogenous shocks:

[Figure: Estimated risk appetite (Bank of England)]

In adverse circumstances, it follows that the board and the investors will require a higher expected equity value of the firm to hold its shares – an enhanced risk premium – and that their appetite for increased risk will be low.

    Risk Management and Risk Appetite

    Despite widespread use in risk management[3] and corporate governance literature, the term ‘risk appetite’[i] lacks clarity in how it is defined and understood:

    • The degree of uncertainty that an investor is willing to accept in respect of negative changes to its business or assets. (Generic)
    • Risk appetite is the degree of risk, on a broad-based level, that a company or other entity is willing to accept in the pursuit of its goals. (COSO)
• Risk appetite is the amount of risk that an organisation is prepared to accept, tolerate, or be exposed to at any point in time. (The Orange Book, October 2004)

    The same applies to a number of other terms describing risk and the board’s attitudes to risk, as for the term “risk tolerance”:

    • The degree of uncertainty that an investor can handle in regard to a negative change in the value of his or her portfolio.
    • An investor’s ability to handle declines in the value of his/her portfolio.
    • Capacity to accept or absorb risk.
    • The willingness of an investor to tolerate risk in making investments, etc.

It thus comes as no surprise that risk appetite and other terms describing risk are not understood to a level of clarity that can provide a reference point for decision making[4]. Some take the position that risk appetite can never be reduced to a single figure or ratio, or to a single-sentence statement. However, to be able to move forward, we have to try to operationalize the term in such a way that it can be:

1. Used to weigh risk against reward, or to decide what level of risk is commensurate with a particular reward, and
2. Measured and used to set risk level(s) that, in the board's view, are appropriate for the firm.

    It thus defines the boundaries of the activities the board intends for the firm, both to the management and the rest of the organization, by setting limits to risk taking and defining what acceptable risk means. This can again be augmented by a formal ‘risk appetite statement’ defining the types and levels of risk the organization is prepared to accept in pursuit of increased value.

However, in view of the "price of risk" series above, such formal statements cannot be carved in stone – or they have to contain rules for how they are to be applied in adverse circumstances – since they have to be subject to change as the business and macroeconomic climate changes.

Deloitte's Global Risk Management Survey, 6th edition (Deloitte, 2009), found that sixty-three percent of the institutions had a formal, approved statement of their risk appetite (see the exhibit below). Roughly one quarter of the institutions said they relied on quantitatively defined statements, while about one third used both quantitative and qualitative approaches:

[Figure: Risk appetite statements (Deloitte, 2009)]

Using a formal 'risk appetite statement' is the best way for the board to communicate its visions and the level and nature of the risks the board will consider acceptable to the firm. This has to be quantitatively defined, be based on some view of the board's utility function, and use metrics that can fully capture all risks facing the company.

We will in the following use the firm's equity value as the metric, as this will capture all risks – those impacting the balance sheet, income statement, required capital, WACC etc.

We will assume that the board's utility function[5] has diminishing marginal utility for an increase in the company's equity value. From this it follows that the board's utility will decrease more with a loss of $1 than it will increase with a gain of $1. Thus the board is risk averse[ii].
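A minimal numerical check of this asymmetry, using logarithmic utility – one of the commonly used concave utility functions mentioned in endnote ii; the wealth (equity value) level is an arbitrary assumption:

```python
import math

def utility(wealth):
    """Logarithmic utility: concave, so marginal utility diminishes as wealth grows."""
    return math.log(wealth)

w = 100.0  # arbitrary equity value
gain_in_utility = utility(w + 1) - utility(w)
loss_in_utility = utility(w) - utility(w - 1)
print(f"Utility gained from +1: {gain_in_utility:.5f}")
print(f"Utility lost from  -1: {loss_in_utility:.5f}")  # larger than the gain: risk aversion
```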

    The upside-potential ratio

To do this we will use the upside-potential ratio[6] (UPR), developed as a measure of risk-adjusted returns (Sortino et al., 1999). The UPR measures the potential return on an asset relative to a preset return, per unit of downside risk. This ratio is a special case of the more general one-sided variability ratio Phi_b:

Phi_b^(p,q)(X) := E^(1/p)[((X − b)^+)^p] / E^(1/q)[((X − b)^−)^q],

where X is the total return, (X − b) is the excess return over the benchmark b[7], and the plus and minus signs denote the right-sided moment (upper partial moment) and the left-sided moment (lower partial moment) – of order p and q respectively.

The lower partial moment[8] is a measure of the "distance[9]" between risky situations and the corresponding benchmark when only unfavorable differences contribute to the "risk". The upper partial moment, on the other hand, measures the "distance" between favorable situations and the benchmark.

    The Phi ratio is thus the ratio of “distances” between favorable and unfavorable events – when properly weighted (Tibiletti & Farinelli, 2002).

    For a fixed benchmark b, the higher Phi the more ‘profitable’ is the risky asset. Phi can therefore be used to rank risky assets. For a given asset, Phi will be a decreasing function of the benchmark b.

The choice of values for p and q depends on the relevance given to the magnitude of the deviations from the benchmark b. The higher the value, the more emphasis is put on that tail. For p=q=1 we have the Omega index (Shadwick & Keating, 2002).

    The choice of p=1 and q=2, is assumed to fit a conservative investor while a value of p>>1 and q<<1 will be more in line with an aggressive investor (Caporin & Lisi, 2009).

    We will in the following use p=1 and q=2 for calculation of the upside-potential ratio (UPR) thus assuming that the board consists of conservative investors. For very aggressive boards other choices of p and q should be considered.

[Figure: LPM2 versus UPM1]

The UPR for the firm can thus be expressed as a ratio of partial moments; that is, as the ratio of the first-order upper partial moment (UPM1)[10] to the second-order lower partial moment (LPM2) (Nawrocki, 1999; Breitmeyer, Hakenes & Pfingsten, 2001), or the over-performance divided by the root-mean-square of the under-performance, both calculated at successive points on the probability distribution for the firm's equity value.
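A minimal sketch of this calculation for a set of benchmarks, written as the general Phi ratio with the conservative choice p=1, q=2 as default; the simulated equity-value sample is an illustrative assumption standing in for one of the strategies discussed below:

```python
import numpy as np

def phi_ratio(x, b, p=1, q=2):
    """One-sided variability ratio Phi_b^(p,q): the p-th root of the p-th upper partial moment
    divided by the q-th root of the q-th lower partial moment at benchmark b.
    With p=1 and q=2 this is the upside-potential ratio (UPR): the mean over-performance
    divided by the root-mean-square under-performance."""
    x = np.asarray(x, dtype=float)
    upm = np.mean(np.maximum(x - b, 0.0) ** p) ** (1.0 / p)
    lpm = np.mean(np.maximum(b - x, 0.0) ** q) ** (1.0 / q)
    return upm / lpm

# Illustrative sample standing in for a simulated equity-value distribution (one strategy)
rng = np.random.default_rng(7)
equity = rng.lognormal(mean=6.0, sigma=0.3, size=50_000)

for b in (300, 400, 500):  # successive benchmarks moving up the distribution
    print(f"benchmark {b}: UPR = {phi_ratio(equity, b):.2f}")  # decreases as b increases
```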

As we successively calculate the UPR starting at the left tail, the lower partial moment (LPM2) will increase and the upper partial moment (UPM1) decrease:

[Figure: UPM1 and LPM2 at successive benchmarks]

The upside-potential ratio will consequently decrease as we move from the lower left tail to the upper right tail – as shown in the figure below:

[Figure: Cumulative distribution and UPR]

The upside-potential ratio has many interesting uses; one is shown in the table below. The table gives the upside-potential ratio at budgeted value, that is, the expected return above budget value per unit of downside risk – given the uncertainty the management of the individual subsidiaries have expressed. Most of the countries have budget values above expected value, exposing downside risk. Only Turkey and Denmark have a ratio larger than one – all the others have larger downside risk than upside potential. The extremes are Poland and Bulgaria.

Country/Subsidiary    Upside Potential Ratio
Turkey                2.38
Denmark               1.58
Italy                 0.77
Serbia                0.58
Switzerland           0.23
Norway                0.22
UK                    0.17
Bulgaria              0.08

We will in the following use five different equity distributions, each representing a different strategy for the firm. The distributions (strategies) have approximately the same mean, but exhibit increasing variance as we move to successively darker curves. That is, an increase in the upside will also increase the possibility of a downside:

[Figure: Five equity value distributions (strategies)]

By calculating the UPR for successive points (benchmarks) on the different probability distributions for the firm's equity value (strategies), we can find the accompanying curves described by the UPRs in the UPR and LPM2/UPM1 space[11] (Cumova & Nawrocki, 2003):

[Figure: UPR curves for the five strategies]

The colors of the curves correspond to the equity value distributions shown above. We can see that the equity distribution with the longest upper and lower tails corresponds to the rightmost UPR curve, and that the equity distribution with the shortest tails corresponds to the leftmost (lowest upside-potential) curve.

In the graph below, in the LPM2/UPM1 space, the UPR curves are shown for each of the different equity value distributions (or strategies). Each gives the rate at which the firm will have to exchange downside risk for upside potential as we move along the curve, given the selected strategy. The circles on the curves represent points with the same value of the UPR as we move from one distribution to another:

[Figure: UPR curves in the LPM2/UPM1 space]

By connecting the points with equal values of the UPR we find the iso-UPR curves: the curves that give the same value for the UPR across the strategies in the LPM2/UPM1 space:

[Figure: Iso-UPR curves in the LPM2/UPM1 space]

We have limited the number of UPR values to eight, but could of course have selected a larger number, both inside and outside the limits we have set.

The board now has the option of selecting the strategy it finds most opportune, or the one that best fits its "disposition" to risk, by deciding the appropriate values of LPM2 and UPM1 or of the upside-potential ratio. This is what we will pursue further in the next part: "The Virtues of the Board".

    References

    Breitmeyer, C., Hakenes, H. and Pfingsten, A., (2001). The Properties of Downside Risk Measures. Available at SSRN: http://ssrn.com/abstract=812850 or http://dx.doi.org/10.2139/ssrn.812850.

    Caporin, M. & Lisi,F. (2009). Comparing and Selecting Performance Measures for Ranking Assets. Available at SSRN: http://ssrn.com/abstract=1393163 or http://dx.doi.org/10.2139/ssrn.1393163

    CRMPG III. (2008). The Report of the CRMPG III – Containing Systemic Risk: The Road to Reform. Counterparty Risk Management Policy Group. Available at: http://www.crmpolicygroup.org/index.html

    Cumova, D. & Nawrocki, D. (2003). Portfolio Optimization in an Upside Potential and Downside Risk Framework. Available at: http://www90.homepage.villanova.edu/michael.pagano/DN%20upm%20lpm%20measures.pdf

    Deloitte. (2009). Global Risk Management Survey: Risk management in the spotlight. Deloitte, Item #9067. Available at: http://www.deloitte.com/assets/Dcom-UnitedStates/Local%20Assets/Documents/us_fsi_GlobalRskMgmtSrvy_June09.pdf

    Ekern, S. (1980). Increasing N-th degree risk. Economics Letters, 6: 329-333.

    Gai, P.  & Vause, N. (2004), Risk appetite: concept and measurement. Financial Stability Review, Bank of England. Available at: http://www.bankofengland.co.uk/publications/Documents/fsr/2004/fsr17art12.pdf

    Illing, M., & Aaron, M. (2005). A brief survey of risk-appetite indexes. Bank of Canada, Financial System Review, 37-43.

    Kimball, M.S. (1993). Standard risk aversion.  Econometrica 61, 589-611.

    Menezes, C., Geiss, C., & Tressler, J. (1980). Increasing downside risk. American Economic Review 70: 921-932.

    Nawrocki, D. N. (1999), A Brief History of Downside Risk Measures, The Journal of Investing, Vol. 8, No. 3: pp. 9-

Sortino, F. A., van der Meer, R., & Plantinga, A. (1999). The upside potential ratio. The Journal of Performance Measurement, 4(1), 10-15.

    Shadwick, W. and Keating, C., (2002). A universal performance measure, J. Performance Measurement. pp. 59–84.

    Tibiletti, L. &  Farinelli, S.,(2002). Sharpe Thinking with Asymmetrical Preferences. Available at SSRN: http://ssrn.com/abstract=338380 or http://dx.doi.org/10.2139/ssrn.338380

    Unser, M., (2000), Lower partial moments as measures of perceived risk: An experimental study, Journal of Economic Psychology, Elsevier, vol. 21(3): 253-280.

Viole, F. & Nawrocki, D. N. (2010). The Utility of Wealth in an Upper and Lower Partial Moment Fabric. Forthcoming, Journal of Investing 2011. Available at SSRN: http://ssrn.com/abstract=1543603

    Notes

[1] In the graph, risk appetite is found as the inverse of the market's price of risk, estimated from two probability density functions over future returns – one risk-neutral distribution and one subjective distribution – on the S&P 500 index.

    [2] For a good overview of risk appetite indexes, see “A brief survey of risk-appetite indexes”. (Illing & Aaron, 2005)

[3] Risk management: all the processes involved in identifying, assessing and judging risks, assigning ownership, taking actions to mitigate or anticipate them, and monitoring and reviewing progress.

    [4] The Policy Group recommends that each institution ensure that the risk tolerance of the firm is established or approved by the highest levels of management and shared with the board. The Policy Group further recommends that each institution ensure that periodic exercises aimed at estimation of risk tolerance should be shared with the highest levels of management, the board of directors and the institution’s primary supervisor in line with Core Precept III. Recommendation IV-2b (CRMPG III, 2008).

    For an extensive list of Risk Tolerance articles, see: http://www.planipedia.org/index.php/Risk_Tolerance_(Research_Category)

    [5] See: http://en.wikipedia.org/wiki/Utility, http://en.wikipedia.org/wiki/Ordinal_utility and http://en.wikipedia.org/wiki/Expected_utility_theory.

    [6] The ratio was created by Brian M. Rom in 1986 as an element of Investment Technologies’ Post-Modern Portfolio theory portfolio optimization software.

    [7] ‘b’ is usually the target or required rate of return for the strategy under consideration, (‘b’ was originally known as the minimum acceptable return, or MAR). We will in the following calculate the UPR for successive benchmarks (points) covering the complete probability distribution for the firm’s equity value.

    [8] The Lower partial moments will uniquely determine the probability distribution.

    [9] The use of the term distance is not unwarranted; the Phi ratio is very similar to the ratio of two Minkowski distances of order p and q.

    [10] The upper partial-moment is equivalent to the full moment minus the lower partial-moment.

    [11] Since we don’t know the closed form for the equity distributions (strategies), the figure above have been calculated from a limited, but large number of partial moments.

    Endnotes

    [i] Even if they are not the same, the terms ‘‘risk appetite’’ and ‘‘risk aversion’’ are often used interchangeably. Note that the statement: “increasing risk appetite means declining risk aversion; decreasing risk appetite indicates increasing risk aversion” is not necessarily true.

    [ii] In the following we assume that the board is non-satiated and risk-averse, and have a non-decreasing and concave utility function – U(C) – with derivatives at least of degrees five and of alternating signs – i.e. having all odd derivatives positive and all even derivatives negative. This is satisfied by most utility functions commonly used in mathematical economics including all completely monotone utility functions, as the logarithmic, exponential and power utility functions.

More generally, a decision maker can be said to be nth-degree risk averse if sign(u^(n)) = (−1)^(n+1) (Ekern, 1980).

     

  • The Most Costly Excel Error Ever?

    The Most Costly Excel Error Ever?

    This entry is part 2 of 2 in the series Spreadsheet Errors

     

    Efficient computing tools are essential for statistical research, consulting, and teaching. Generic packages such as Excel are not sufficient even for the teaching of statistics, let alone for research and consulting (ASA, 2000).

    Introduction

Back early in 2009 we published a post on the risk of spreadsheet errors. The reference above is taken from that post, but it seems even more relevant today, as shown in the following.

    Growth in a Time of Debt

In 2010, economists Reinhart and Rogoff released a paper, "Growth in a Time of Debt." Their main results were:

1. Average growth rates for countries with public debt over 90% of GDP are roughly 4% lower than when debt is low (under 30% of GDP).
2. Median growth rates for countries with public debt over 90% of GDP are roughly 2.6% lower than when debt is low (under 30% of GDP).
3. Countries with debt-to-GDP ratios above 90 percent have a slightly negative average growth rate (-0.1%).

The paper has been widely cited by political figures around the world, arguing the case for reduced government spending and increased taxes, and ultimately against government efforts to boost the economy and create jobs – all based on the paper's conclusion that any short-term benefit in job creation and increased growth would come with a high long-term cost.

Then, in 2013, Herndon, Ash and Pollin (Herndon et al., 2013) replicated the Reinhart and Rogoff study and found that it had:

    1. Coding errors in the spreadsheet programming,
    2. Selective exclusion of available data, and
    3. Unconventional weighting of summary statistics.

    All this led to serious errors that inaccurately estimated the relationship between public debt and GDP growth among 20 advanced economies in the post-war period. Instead they found that when properly calculated:

That the average real GDP growth rate for countries carrying a public-debt-to-GDP ratio of over 90 percent is actually 2.2 percent, not −0.1 percent as published in Reinhart and Rogoff.

    That is, contrary to the Reinhart and Rogoff study – average GDP growth at public debt/GDP ratios over 90 percent is not dramatically different than when debt/GDP ratios are lower.

    Statistics and the use of Excel

Even if the coding error only accounted for a small part of the total error, "everyone" knows that Excel is error-prone in a way that a programming language or statistical package is not; it mixes data and code and makes you do things by hand that would be done automatically in the other settings.

    Excel is good for ad-hoc calculations where you’re not really sure what you’re looking for, or for a first quick look at the data, but once you really start analyzing a dataset, you’re better off using almost anything else.

Basing important decisions on Excel models or Excel analysis alone is very risky – unless the model has been thoroughly audited and great effort has been taken to ensure that the calculations are coherent and consistent.

One thing is certain: serious problems demand serious tools. Maybe it is time to reread the American Statistical Association's (ASA) endorsement of the "Guidelines for Programs and Departments in Undergraduate Mathematical Sciences".

    References

    Herndon, T., Ash, M. and Pollin, R. (April 15, 2013). Does High Public Debt Consistently Stifle Economic Growth? A Critique of Reinhart and Rogoff, PERI, University of Massachusetts, Amherst. http://www.peri.umass.edu/fileadmin/pdf/working_papers/working_papers_301-350/WP322.pdf

    American Statistical Association (ASA) (2000).  Endorsement of the Mathematical Association of America (MAA): “Guidelines for Programs and Departments in Undergraduate Mathematical Sciences” http://www07.homepage.villanova.edu/michael.posner/sigmaastated/ASAendorsement2.html

    Baker, D. (16 April 2013) How much unemployment did Reinhart and Rogoff’s arithmetic mistake cause? The Guardian. http://www.guardian.co.uk/commentisfree/2013/apr/16/unemployment-reinhart-rogoff-arithmetic-cause

    Reinhart, C.M. & Rogoff, K.S., (2010). Growth in a time of Debt, Working Paper 15639 National Bureau of Economic Research, Cambridge. http://www.nber.org/papers/w15639.pdf

  • Working Capital Strategy Revisited

    Working Capital Strategy Revisited

    This entry is part 3 of 3 in the series Working Capital

    Introduction

To link the posts on working capital and inventory management, we will look at a company with a complicated market structure, having sales and production in a large number of countries and with a wide variety of product lines. Added to this is a marked seasonality, with high sales in the year's first two quarters and much lower sales in the last two ((All data is from public records)).

All this puts a strain on the organization's production and distribution systems and, of course, on working capital.

Looking at the development of net working capital ((Net working capital = Total current assets – Total current liabilities)) relative to net sales, it seems as if the company in later years has curbed the initial net working capital growth:

Just by inspecting the graph, however, it is difficult to determine whether the company's working capital management is good or lacking in performance. We therefore need to look in more detail at the working capital elements and compare them with industry 'averages' ((By their Standard Industrial Classification (SIC) )).

    The industry averages can be found from the annual “REL Consultancy /CFO Working Capital Survey” that made its debut in 1997 in the CFO Magazine. We can thus use the survey’s findings to assess the company’s working capital performance ((Katz, M.K. (2010). Working it out: The 2010 Working Capital Scorecard. CFO Magazine, June, Retrieved from http://www.cfo.com/article.cfm/14499542
    Also see: https://www.strategy-at-risk.com/2010/10/18/working-capital-strategy-2/)).

    The company’s working capital management

    Looking at the different elements of the company’s working capital, we find that:

    I.    Day’s sales outstanding (DSO) is on average 70 days compared with REL’s reported industry median of 56 days.

II.    For day's payables outstanding (DPO) the difference is small and in the right direction: 25 days against the industry median of 23 days.

III.    Day's inventory outstanding (DIO) is on average 138 days compared with the industry median of 39 days, and this is where the problem lies.

IV.    The company's days of working capital (DWC = DSO+DIO-DPO) (( Days of working capital (DWC) is essentially the same as the Cash Conversion Cycle (CCC). See endnote for more.)) have, according to the above, on average been 183 days over the last five years, compared to REL's median DWC of 72 days for comparable companies.

    This company thus has more than 2.5 times ‘larger’ working capital than its industry average.

    As levers of financial performance, none is more important than working capital. The viability of every business activity rests on daily changes in receivables, inventory, and payables.

The goal of the company is to minimize its 'Days of Working Capital' (DWC) or, equivalently, the 'Cash Conversion Cycle' (CCC), and thereby reduce the amount of outstanding working capital. This requires examining each component of DWC discussed above and taking actions to improve each element. To the extent these actions can be achieved without increasing costs or depressing sales, they should be carried out:

    1.    A decrease in ‘Day’s sales outstanding’ (DSO) or in ‘Day’s inventory outstanding’ (DIO) will represent an improvement, and an increase will indicate deterioration,

2.    An increase in 'Day's payables outstanding' (DPO) will represent an improvement and a decrease will indicate deterioration,

    3.    Reducing ‘Days of Working Capital’ (DWC or CCC) will represent an improvement, whereas an increasing (DWC or CCC) will represent deterioration.

    Day’s sales- and payables outstanding

Many companies think in terms of "collecting as fast as possible, and paying as slowly as permissible." This strategy, however, may not be the wisest.
At the same time as the company is attempting to integrate with its customers – and realize the related benefits – so are its suppliers. A "pay slow" approach may not optimize either the accounts or inventory, and it is likely to interfere with good supplier relationships.

    Supply-chain finance

    One way around this might be ‘Supply Chain Finance ‘(SCF) or reverse factoring ((“The reverse factoring method, still rare, is similar to the factoring insofar as it involves three actors: the ordering party, the supplier and the factor. Just as basic factoring, the aim of the process is to finance the supplier’s receivables by a financier (the factor), so the supplier can cash in the money for what he sold immediately (minus an interest the factor deducts to finance the advance of money).” http://en.wikipedia.org/wiki/Reverse_factoring)). Properly done, it can enable a company to leverage credit to increase the efficiency of its working capital and at the same time enhance its relationships with suppliers. The company can extend payment terms and the supplier receives advance payments discounted at rates considerably lower than their normal funding margins. The lender (factor), in turn, gets the benefit of a margin higher than the risk profile commands.

    This is thus a form of receivables financing using solutions that provide working capital to suppliers and/or buyers within any part of a supply chain and that is typically arranged on the credit risk of a large corporate within that supply chain.

    Day’s inventory outstanding (DIO)

    DIO is a financial and operational measure, which expresses the value of inventory in days of cost of goods sold. It represents how much inventory an organization has tied up across its supply chain or more simply – how long it takes to convert inventory into sales. This measure can be aggregated for all inventories or broken down into days of raw material, work in progress and finished goods. This measure should normally be produced monthly.

    By using the industry typical ‘days inventory outstanding’ (DIO) we can calculate the potential reduction in the company’s inventory – if the company should succeed in being as good in inventory management as its peers.

    If the industry’s typical DIO value is applicable, then there should be a potential for a 60 % reduction in the company’s inventory.

    Even if this overstates the true potential it is obvious that a fairly large reduction is possible since 98% of the 1000 companies in the REL report have a value for DIO less than 138 days:

Adding to the company's concern should also be the fact that inventories seem to increase at a faster pace than net sales:

    Inventory Management

Successfully addressing the challenge of reducing inventory requires an understanding of why inventory is held and where it builds up in the system.
Achieving this goal requires focusing inventory improvement efforts on four core areas:

    1. demand management – information integration with both suppliers and customers,
    2. inventory optimization – using statistical/finance tools to monitor and set inventory levels,
    3. transportation and logistics – lead time length and variability and
    4. supply chain planning and execution – coordinating planning throughout the chain from inbound to internal processing to outbound.

We believe that the best way of attacking this problem is to produce a simulation model that can 'mimic' the sales – distribution – production chain in the necessary detail to study different strategies, the probabilities of stock-outs and the possible stock-out costs, compared with the costs of carrying the different products (items).

The cost of never experiencing a stock-out can be excessively high – the global average of retail out-of-stocks is 8.3% ((Gruen, Thomas W. and Daniel Corsten (2008), A Comprehensive Guide to Retail Out-of-Stock Reduction in the Fast-Moving Consumer Goods Industry, Grocery Manufacturers of America, Washington, DC, ISBN: 978-3-905613-04-9)).

By basing the model on activity-based costing, it can estimate the cost and revenue elements of the product lines, thus identifying and/or eliminating those products and services that are unprofitable or ineffective. The scope is to release more working capital by lowering the value of inventories and streamlining the end-to-end value chain.

To do this we have to make improved forecasts of sales and a breakdown of risk and economic value, both geographically and for product groups, to find out where capital should be employed in the coming years (product – geography), both for M&A and organic growth investments.

A model like the one we propose needs detailed monthly data, usually found in the internal accounts. This data will be used to statistically determine the relationships between the cost variables describing the different value chains. In addition, overhead from different company levels (geographical) will have to be distributed both on products and on the distribution chains.

    Endnote

    Days Sales Outstanding (DSO) = AR/(total revenue/365)

    Year-end trade receivables net of allowance for doubtful accounts, plus financial receivables, divided by one day of average revenue.

    Days Inventory Outstanding (DIO) = Inventory/(total revenue/365)

    Year-end inventory plus LIFO reserve divided by one day of average revenue.

    Days Payables Outstanding (DPO) = AP/(total revenue/365)

    Year-end trade payables divided by one day of average revenue.

    Days Working Capital (DWC): (AR + inventory – AP)/(total revenue/365)

    Where:
    AR = Average accounts receivable
    AP = Average accounts payable
    Inventory = Average inventory + Work in progress

    Year-end net working capital (trade receivables plus inventory, minus AP) divided by one day of average revenue. (DWC = DSO+DIO-DPO).

    For the comparable industry we find an average of: DWC=56+39-23=72 days

    Days of working capital (DWC) is essentially the same as the Cash Conversion Cycle (CCC) except that the CCC uses the Cost of Goods Sold (COGS) when calculating both the Days Inventory Outstanding (DIO) and the Days Payables Outstanding (DPO) whereas DWC uses sales (Total Revenue) for all calculations:

CCC = Days in period × {(Average inventory/COGS) + (Average receivables/Revenue) − (Average payables/[COGS + Change in inventory])}

    Where:
    COGS= Production Cost – Change in Inventory
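A small sketch tying these definitions together; the revenue, COGS and balance-sheet figures below are illustrative assumptions (chosen so that the resulting days roughly match those quoted for the company above), not actual company data:

```python
def working_capital_days(revenue, cogs, receivables, inventory, payables,
                         change_in_inventory=0.0, days=365):
    """DSO, DIO, DPO and DWC on a revenue basis (as defined above),
    plus the CCC on a COGS basis."""
    dso = receivables / (revenue / days)
    dio = inventory / (revenue / days)
    dpo = payables / (revenue / days)
    dwc = dso + dio - dpo
    ccc = days * (inventory / cogs
                  + receivables / revenue
                  - payables / (cogs + change_in_inventory))
    return dso, dio, dpo, dwc, ccc

# Illustrative figures (in millions): revenue 1000, COGS 600
dso, dio, dpo, dwc, ccc = working_capital_days(
    revenue=1000, cogs=600, receivables=192, inventory=378, payables=68)
print(f"DSO {dso:.0f}, DIO {dio:.0f}, DPO {dpo:.0f} -> DWC {dwc:.0f} days, CCC {ccc:.0f} days")
```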


  • Inventory management – Stochastic supply

    Inventory management – Stochastic supply

    This entry is part 4 of 4 in the series Predictive Analytics

     

    Introduction

We will now return to the newsvendor who was facing a one-time purchasing decision: where to set the inventory level to maximize expected profit – given his knowledge of the demand distribution. It turned out that even if we did not know the closed form (( In mathematics, an expression is said to be a closed-form expression if it can be expressed analytically in terms of a finite number of certain “well-known” functions.)) of the demand distribution, we could find the inventory level that maximized profit and how this affected the vendor's risk – assuming that his supply could with certainty be fixed at that level. But what if that is not the case? What if his supply is uncertain? Can we still optimize his inventory level?

We will look at two slightly different cases:

1. one where the supply is uniformly distributed, with actual delivery from 80% to 100% of the ordered volume, and
2. the other where the supply has a triangular distribution, with actual delivery from 80% to 105% of the ordered volume, but with the most likely delivery at 100%.

    The demand distribution is as shown below (as before):

    Maximizing profit – uniformly distributed supply

    The figure below indicates what happens as we change the inventory level – given fixed supply (blue line). We can see as we successively move to higher inventory levels (from left to right on the x-axis) that expected profit will increase to a point of maximum.

If we let the actual delivery follow the uniform distribution described above, and successively change the order point, expected profit will follow the red line in the graph below. We can see that the new order point is to the right, further out on the inventory axis. The vendor is forced to order more newspapers to 'outweigh' the supply uncertainty:

At the point of maximum profit the actual deliveries span from 2300 to 2900 units, with a mean close to the inventory level giving maximum profit in the fixed supply case:

    The realized profits are as shown in the frequency graph below:

Average profit has to some extent been reduced compared with the non-stochastic supply case, but more important is the increase in profit variability. Measured by the quartile variation, this variability has increased by almost 13%, and this is mainly caused by an increased negative skewness – the downside has been raised.
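A sketch of the kind of simulation behind these results; the demand distribution, price and cost are illustrative assumptions, while the supply is uniform between 80% and 100% of the ordered volume as described above (the triangular case is indicated in a comment):

```python
import numpy as np

rng = np.random.default_rng(11)
n_runs = 50_000
price, cost = 10.0, 6.0  # illustrative sales price and unit cost per newspaper

# Illustrative demand sample standing in for the (empirical) demand distribution used above
demand = rng.gamma(shape=20.0, scale=125.0, size=n_runs)

def expected_profit(order, supply_uncertain):
    """Expected profit at a given order point. Delivery is either fixed at the ordered volume
    or uniformly distributed between 80% and 100% of it; cost is assumed to be paid on the
    delivered quantity."""
    if supply_uncertain:
        delivered = rng.uniform(0.8, 1.0, n_runs) * order
        # Triangular supply case: delivered = rng.triangular(0.8, 1.0, 1.05, n_runs) * order
    else:
        delivered = np.full(n_runs, float(order))
    sold = np.minimum(demand, delivered)
    return (price * sold - cost * delivered).mean()

orders = np.arange(2000, 3501, 50)
best_fixed = max(orders, key=lambda q: expected_profit(q, False))
best_uncertain = max(orders, key=lambda q: expected_profit(q, True))
print("Order point with fixed supply    :", best_fixed)
print("Order point with uncertain supply:", best_uncertain)  # further out on the inventory axis
```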

    Maximizing profit – triangular distributed supply

Again we compare the expected profit with delivery following the triangular distribution described above (red line) with the expected profit created by known and fixed supply (blue line). We can see, as we successively move to higher inventory levels (from left to right on the x-axis), that expected profit will increase to a point of maximum. However, the order point for the stochastic supply case is further out on the inventory axis than for the non-stochastic case:

    The uncertain supply again forces the vendor to order more newspapers to ‘outweigh’ the supply uncertainty:

At the point of maximum profit the actual deliveries span from 2250 to 2900 units, with a mean again close to the inventory level giving maximum profit in the fixed supply case ((This is not necessarily true for other combinations of demand and supply distributions.)).

    The realized profits are as shown in the frequency graph below:

Average profit has been somewhat reduced compared with the non-stochastic supply case, but more important is the increase in profit variability. Measured by the quartile variation, this variability has increased by 10%, and this is again mainly caused by an increased negative skewness – again the downside has been raised.

The introduction of uncertain supply has shown that profit can still be maximized; however, the profit will be reduced by increased costs, both in lost sales and in excess inventory. Most importantly, profit variability will increase, raising the question of possible other strategies.

    Summary

We have shown through Monte Carlo simulation that the 'order point', when the actual delivered amount is uncertain, can be calculated without knowing the closed form of the demand distribution. We actually do not need the closed form of the distribution describing delivery either, only historic data on the supplier's performance (reliability).

Since we do not need the closed form of the demand or supply distributions, we are not limited to theoretical distributions, but can use historic data to describe the uncertainty as frequency distributions. Expanding the scope of the analysis to include supply disruptions, localization of inventory etc. is thus a natural extension of this method.

This opens up for the use of robust and efficient methods and techniques for solving problems in inventory management, unrestricted by the form of the demand distribution, and – best of all – the results, given as graphs, will be more easily communicated to all parties than pure mathematical descriptions of the solutions.
