
Category: Other topics

  • The risk of planes crashing due to volcanic ash

    The risk of planes crashing due to volcanic ash

    This entry is part 4 of 4 in the series Airports

    Eyjafjallajokull volcano

When the Icelandic volcano Eyjafjallajökull had a large eruption in 2010 it led to closed airspace all over Europe, with correspondingly large losses for the airlines.  In addition it caused significant problems for passengers who were stuck at various airports without getting home.  In Norway we got a new word: “ash stuck” ((Askefast)) became a part of the Norwegian vocabulary.

The reason the planes were grounded is that mineral particles in volcanic ash may damage the planes’ engines, which in turn may cause them to crash.  This happened in 1982, when a British Airways flight almost crashed due to volcanic particles in the engines. The risk of the same happening in 2010 was probably not large, but the consequences of a crash would have been severe.

Using simulation software and a simple model I will show how this risk can be calculated, and hence why the airspace over Europe was closed in 2010 even though the risk was not high.  I have not calculated any effects following the closure, since this is neither a big model nor an in-depth analysis.  It is merely meant as an example of how different issues can be modeled using Monte Carlo simulation.  The variable values are not factual but my own simple estimates.  The goal of this article is to show an example of modeling, not to give a precise estimate of the actual risk.

To model the risk of dangerous ash in the air, a few key questions have to be asked and answered to describe the issue in a quantitative way.

Variable 1. Is the ash dangerous?

We first have to model the risk of the ash being dangerous to plane engines.  I do that by using a so-called discrete probability.  It has the value 0 if the ash is not dangerous and 1 if it is.  Then probabilities are assigned to each alternative.  I set them to:

• 99% probability that the ash IS NOT dangerous
• 1% probability that the ash IS dangerous

Variable 2. How many planes are in the air?

Secondly, we have to estimate how many planes are in the air when the ash becomes a problem.  Around 30,000 planes are in the air over Europe daily.  We can assume that if planes start crashing or getting into big trouble, the rest will immediately be grounded.  Therefore I only use 2/24 of these planes in the calculation.

• 2,500 planes are in the air when the problem occurs

I use a normal distribution and set the standard deviation for planes in the air during a 2-hour period to 250 planes.  I have no view on whether the curve is skewed one way or the other.  It may well be, since the number of planes in the air probably varies with the day of the week, holiday seasons and so on, but I’ll leave that estimate to the air-traffic authorities.

Variable 3.  How many people are there in each plane?

Thirdly, I need an estimate of how many passengers and crew there are in each plane.  I disregard the fact that there are a lot of intercontinental flights over the Eyjafjallajökull volcano, likely with more passengers than the average plane over Europe, so the curve might be more skewed than what I assume:

    • Average number of passengers/crew: 70
    • Lowest number of passengers/crew: 60
    • Highest number of passengers/crew: 95

The reason I’m using a skewed curve here is that the airline business is constantly under pressure to fill up its planes.  In addition, the number of passengers will vary by weekday and so on.  I think it is reasonable to assume that there are likely more passengers per plane rather than fewer.

Variable 4. How many of the planes in the air will crash?

The last variable that needs to be modeled is how many planes will crash should the ash be dangerous.  I assume that maybe no planes actually crash, even if ash gets into their engines; this is the low end of the curve.  In addition I have assumed the following:

• Expected share of planes that crash: 0.01%
• Maximum share of planes that crash: 1.0%

    Now we have what we need to start calculating!

The formula I use is as follows:

If “Dangerous ash” = 0: Number of dead = 0

If “Dangerous ash” = 1: Number of dead = “Number of planes in the air” × “Share of planes crashing” × “Number of passengers/crew per plane”

If the ash is not dangerous, variable 1 equals 0, no planes crash and nobody dies.  If the ash is dangerous, the number of dead is the product of the number of planes in the air, the share of planes crashing and the number of passengers/crew per plane.
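As a rough sketch, the four variables and the formula above can be put into a small Monte Carlo simulation in Python with NumPy. All parameters are the illustrative estimates from the text; the exact form of the crash-share distribution is not specified in the article, so the scaled beta used here (matching the stated expected 0.01% and maximum 1%) is my assumption, and the simulated expected value depends on that choice.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 100_000  # number of Monte Carlo trials

# Variable 1: discrete probability that the ash is dangerous (1% yes, 99% no)
dangerous = rng.random(n) < 0.01

# Variable 2: planes in the air during a 2-hour window, normal(2500, 250)
planes = rng.normal(2500, 250, n)

# Variable 3: passengers/crew per plane, skewed triangular(60, 70, 95)
people = rng.triangular(60, 70, 95, n)

# Variable 4: share of airborne planes that crash.  A beta distribution
# scaled to [0, 1%] is an assumption chosen so that the mean is 0.01%
# and the maximum approaches 1%, as stated in the text
crash_share = rng.beta(0.1, 9.9, n) * 0.01

# The formula: zero dead if the ash is harmless, otherwise the product
# of planes in the air, share crashing and people per plane
deaths = np.where(dangerous, planes * crash_share * people, 0.0)

print(f"Expected number of dead: {deaths.mean():.2f}")
print(f"99.9th percentile: {np.percentile(deaths, 99.9):.0f}")
```

Rerunning the same script with 10% for the dangerous-ash probability and a crash-share distribution centered on 1% reproduces the sensitivity experiment described further down.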

    Running this model with a simulation tool gives the following result:

    Expected value - number of dead

As the graph shows, the expected value is low, 3 people, meaning that the probability of a major loss of planes is very low.  But the consequences may be devastatingly high.  In this model run there is a 1% probability that the ash is dangerous, and a 0.01% expected share of planes actually crashing.  However, the distribution has a long tail, and a bit out in the tail there is a probability that 1,000 people crash to their deaths. This is a so-called shortfall risk, or the risk of a black swan if you wish.  The probability is low, but the consequences are very big.

This is the reason for the cautionary steps taken by the air authorities.  Another reason is that the probabilities, both of the ash being dangerous and of planes crashing because of it, are unknown.  Thirdly, changes in variable values have a big impact.

If the probability of the ash being dangerous is 10% rather than 1%, and the probability of planes crashing is 1% rather than 0.01%, as many as 200 dead (or about 3 planes) are expected, while the extreme outcome is close to 6,400 dead.

    Expected value - number of dead higher probability of crash

This is a simplified example of the modeling that is likely to be behind the closing of the airspace.  I don’t know what probabilities were used, but I’m sure this is how they thought.

    How we assess risk depends on who we are.  Some of us have a high risk appetite, some have low.  I’m glad I’m not the one to make the decision on whether to close the airspace or not.  It is not an easy decision.

My model is of course very simple.  There are many factors to take into account, like wind direction and strength, intensity of the eruption and a number of other factors I don’t know about.  But as an illustration, both of the factors that need to be estimated in this case and as a generic modeling case, it is a good example.

    Originally published in Norwegian.

  • The Most Costly Excel Error Ever?

    The Most Costly Excel Error Ever?

    This entry is part 2 of 2 in the series Spreadsheet Errors

     

    Efficient computing tools are essential for statistical research, consulting, and teaching. Generic packages such as Excel are not sufficient even for the teaching of statistics, let alone for research and consulting (ASA, 2000).

    Introduction

Back in early 2009 we published a post on the risk of spreadsheet errors. The quotation above is taken from that post, but it seems even more relevant today, as shown in the following.

    Growth in a Time of Debt

In 2010, economists Reinhart and Rogoff released a paper, “Growth in a Time of Debt.” Their main results were:

1. Average growth rates for countries with public debt over 90% of GDP are roughly 4% lower than when debt is low (under 30% of GDP).
2. Median growth rates for countries with public debt over 90% of GDP are roughly 2.6% lower than when debt is low (under 30% of GDP).
3. Countries with debt-to-GDP ratios above 90 percent have a slightly negative average growth rate (-0.1%).

The paper has been widely cited by political figures around the world arguing the case for reduced government spending and increased taxes, and ultimately against government efforts to boost the economy and create jobs. All this was based on the paper’s conclusion that any short-term benefit in job creation and increased growth would come at a high long-term cost.

Then in 2013, Herndon, Ash and Pollin (Herndon et al., 2013) replicated the Reinhart and Rogoff study and found that it had:

    1. Coding errors in the spreadsheet programming,
    2. Selective exclusion of available data, and
    3. Unconventional weighting of summary statistics.

All this led to serious errors that inaccurately estimated the relationship between public debt and GDP growth among 20 advanced economies in the post-war period. When properly calculated, they found:

That the average real GDP growth rate for countries carrying a public-debt-to-GDP ratio of over 90 percent is actually 2.2 percent, not -0.1 percent as published in Reinhart and Rogoff.

That is, contrary to the Reinhart and Rogoff study, average GDP growth at public debt/GDP ratios over 90 percent is not dramatically different from when debt/GDP ratios are lower.

    Statistics and the use of Excel

Even if the coding error accounted for only a small part of the total error, “everyone” knows that Excel is error-prone in a way that a programming language or statistical package is not: it mixes data and code, and makes you do by hand things that would be done automatically in the other settings.

    Excel is good for ad-hoc calculations where you’re not really sure what you’re looking for, or for a first quick look at the data, but once you really start analyzing a dataset, you’re better off using almost anything else.

Basing important decisions on Excel models or Excel analysis alone is very risky, unless the model has been thoroughly audited and great effort has been taken to ensure that the calculations are coherent and consistent.

One thing is certain: serious problems demand serious tools. Maybe it is time to reread the American Statistical Association’s (ASA) endorsement of the “Guidelines for Programs and Departments in Undergraduate Mathematical Sciences”.

    References

    Herndon, T., Ash, M. and Pollin, R. (April 15, 2013). Does High Public Debt Consistently Stifle Economic Growth? A Critique of Reinhart and Rogoff, PERI, University of Massachusetts, Amherst. http://www.peri.umass.edu/fileadmin/pdf/working_papers/working_papers_301-350/WP322.pdf

    American Statistical Association (ASA) (2000).  Endorsement of the Mathematical Association of America (MAA): “Guidelines for Programs and Departments in Undergraduate Mathematical Sciences” http://www07.homepage.villanova.edu/michael.posner/sigmaastated/ASAendorsement2.html

    Baker, D. (16 April 2013) How much unemployment did Reinhart and Rogoff’s arithmetic mistake cause? The Guardian. http://www.guardian.co.uk/commentisfree/2013/apr/16/unemployment-reinhart-rogoff-arithmetic-cause

    Reinhart, C.M. & Rogoff, K.S., (2010). Growth in a time of Debt, Working Paper 15639 National Bureau of Economic Research, Cambridge. http://www.nber.org/papers/w15639.pdf

  • We’ve Got Mail !

    We’ve Got Mail !

    This entry is part 1 of 2 in the series Self-applause

    S@R

     

  • Working Capital Strategy Revisited

    Working Capital Strategy Revisited

    This entry is part 3 of 3 in the series Working Capital

    Introduction

To link the posts on working capital and inventory management, we will look at a company with a complicated market structure, with sales and production in a large number of countries and a wide variety of product lines. Added to this is a marked seasonality, with high sales in the first two quarters of the year and much lower sales in the last two ((All data is from public records)).

All this puts a strain on the organization’s production and distribution systems and, of course, on working capital.

Looking at the development of net working capital ((Net working capital = Total current assets – Total current liabilities)) relative to net sales, it seems that the company has in later years curbed the initial growth in net working capital:

Just by inspecting the graph, however, it is difficult to determine whether the company’s working capital management is good or lacking in performance. We therefore need to look in more detail at the working capital elements and compare them with industry ‘averages’ ((By their Standard Industrial Classification (SIC) )).

The industry averages can be found in the annual “REL Consultancy/CFO Working Capital Survey”, which made its debut in the CFO Magazine in 1997. We can thus use the survey’s findings to assess the company’s working capital performance ((Katz, M.K. (2010). Working it out: The 2010 Working Capital Scorecard. CFO Magazine, June, Retrieved from http://www.cfo.com/article.cfm/14499542
Also see: https://www.strategy-at-risk.com/2010/10/18/working-capital-strategy-2/)).

    The company’s working capital management

    Looking at the different elements of the company’s working capital, we find that:

I.    Days sales outstanding (DSO) is on average 70 days, compared with REL’s reported industry median of 56 days.

II.    For days payables outstanding (DPO) the difference is small and in the right direction: 25 days against the industry median of 23 days.

III.    Days inventory outstanding (DIO) is on average 138 days, compared with the industry median of 39 days, and this is where the problem lies.

IV.    The company’s days of working capital (DWC = DSO+DIO-DPO) (( Days of working capital (DWC) is essentially the same as the Cash Conversion Cycle (CCC). See endnote for more.)) have thus on average been 183 days over the last five years, compared to REL’s median DWC of 72 days for comparable companies.

    This company thus has more than 2.5 times ‘larger’ working capital than its industry average.
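The arithmetic behind points I–IV is a one-liner; the figures below are the company averages and REL industry medians quoted above.

```python
def days_working_capital(dso: float, dio: float, dpo: float) -> float:
    """Days of working capital: DWC = DSO + DIO - DPO, all measured in days."""
    return dso + dio - dpo

# Company five-year averages vs. REL industry medians, as quoted in the text
company = days_working_capital(dso=70, dio=138, dpo=25)
industry = days_working_capital(dso=56, dio=39, dpo=23)

print(company, industry, round(company / industry, 1))  # 183 72 2.5
```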

    As levers of financial performance, none is more important than working capital. The viability of every business activity rests on daily changes in receivables, inventory, and payables.

The goal of the company is to minimize its ‘Days of Working Capital’ (DWC) or, equivalently, the ‘Cash Conversion Cycle’ (CCC), and thereby reduce the amount of outstanding working capital. This requires examining each component of DWC discussed above and taking action to improve each element. To the extent this can be achieved without increasing costs or depressing sales, it should be carried out:

1.    A decrease in ‘Days sales outstanding’ (DSO) or in ‘Days inventory outstanding’ (DIO) will represent an improvement, and an increase will indicate deterioration,

2.    An increase in ‘Days payables outstanding’ (DPO) will represent an improvement, and a decrease will indicate deterioration,

3.    A reduction in ‘Days of Working Capital’ (DWC or CCC) will represent an improvement, whereas an increase will represent deterioration.

Days sales and payables outstanding

Many companies think in terms of “collecting as fast as possible, and paying as slowly as permissible.” This strategy, however, may not be the wisest.
At the same time as the company is attempting to integrate with its customers, and realize the related benefits, so are its suppliers. A “pay slow” approach may not optimize either the accounts or the inventory, and it is likely to interfere with good supplier relationships.

    Supply-chain finance

One way around this might be ‘Supply Chain Finance’ (SCF), or reverse factoring ((“The reverse factoring method, still rare, is similar to the factoring insofar as it involves three actors: the ordering party, the supplier and the factor. Just as basic factoring, the aim of the process is to finance the supplier’s receivables by a financier (the factor), so the supplier can cash in the money for what he sold immediately (minus an interest the factor deducts to finance the advance of money).” http://en.wikipedia.org/wiki/Reverse_factoring)). Properly done, it can enable a company to leverage credit to increase the efficiency of its working capital and at the same time enhance its relationships with suppliers. The company can extend payment terms while the supplier receives advance payments discounted at rates considerably lower than its normal funding margins. The lender (factor), in turn, gets the benefit of a margin higher than the risk profile commands.

    This is thus a form of receivables financing using solutions that provide working capital to suppliers and/or buyers within any part of a supply chain and that is typically arranged on the credit risk of a large corporate within that supply chain.

Days inventory outstanding (DIO)

    DIO is a financial and operational measure, which expresses the value of inventory in days of cost of goods sold. It represents how much inventory an organization has tied up across its supply chain or more simply – how long it takes to convert inventory into sales. This measure can be aggregated for all inventories or broken down into days of raw material, work in progress and finished goods. This measure should normally be produced monthly.

By using the industry-typical ‘days inventory outstanding’ (DIO) we can calculate the potential reduction in the company’s inventory, should the company succeed in being as good at inventory management as its peers.

If the industry’s typical DIO value is applicable, there should be a potential for a 60% reduction in the company’s inventory.

Even if this overstates the true potential, it is obvious that a fairly large reduction is possible, since 98% of the 1,000 companies in the REL report have a DIO value of less than 138 days:

Adding to the company’s concern should also be the fact that inventories seem to increase at a faster pace than net sales:

    Inventory Management

Successfully addressing the challenge of reducing inventory requires an understanding of why inventory is held and where it builds up in the system.
Achieving this goal requires focusing inventory improvement efforts on four core areas:

    1. demand management – information integration with both suppliers and customers,
    2. inventory optimization – using statistical/finance tools to monitor and set inventory levels,
    3. transportation and logistics – lead time length and variability and
    4. supply chain planning and execution – coordinating planning throughout the chain from inbound to internal processing to outbound.

We believe that the best way of attacking this problem is to produce a simulation model that can ‘mimic’ the sales – distribution – production chain in the necessary detail to study different strategies, the probabilities of stock-outs and the possible stock-out costs, compared with the costs of producing the different products (items).

The cost of never experiencing a stock-out can be excessively high; the global average of retail out-of-stocks is 8.3% ((Gruen, Thomas W. and Daniel Corsten (2008), A Comprehensive Guide to Retail Out-of-Stock Reduction in the Fast-Moving Consumer Goods Industry, Grocery Manufacturers of America, Washington, DC, ISBN: 978-3-905613-04-9)).

By basing the model on activity-based costing, it can estimate the cost and revenue elements of the product lines and thus identify and/or eliminate those products and services that are unprofitable or ineffective. The aim is to release more working capital by lowering the value of inventories and streamlining the end-to-end value chain.

To do this we have to make improved forecasts of sales and a breakdown of risk and economic value, both geographically and by product group, to find out where capital should be employed in the coming years (product – geography), both for M&A and for organic growth investments.

A model like the one we propose needs detailed monthly data, usually found in the internal accounts. This data will be used to statistically determine the relationships between the cost variables describing the different value chains. In addition, overhead from different company levels (geographical) will have to be distributed both on products and on the distribution chains.

    Endnote

    Days Sales Outstanding (DSO) = AR/(total revenue/365)

    Year-end trade receivables net of allowance for doubtful accounts, plus financial receivables, divided by one day of average revenue.

    Days Inventory Outstanding (DIO) = Inventory/(total revenue/365)

    Year-end inventory plus LIFO reserve divided by one day of average revenue.

    Days Payables Outstanding (DPO) = AP/(total revenue/365)

    Year-end trade payables divided by one day of average revenue.

    Days Working Capital (DWC): (AR + inventory – AP)/(total revenue/365)

    Where:
    AR = Average accounts receivable
    AP = Average accounts payable
    Inventory = Average inventory + Work in progress

    Year-end net working capital (trade receivables plus inventory, minus AP) divided by one day of average revenue. (DWC = DSO+DIO-DPO).

    For the comparable industry we find an average of: DWC=56+39-23=72 days

    Days of working capital (DWC) is essentially the same as the Cash Conversion Cycle (CCC) except that the CCC uses the Cost of Goods Sold (COGS) when calculating both the Days Inventory Outstanding (DIO) and the Days Payables Outstanding (DPO) whereas DWC uses sales (Total Revenue) for all calculations:

CCC = Days in period × [(Average inventory/COGS) + (Average receivables/Revenue) – (Average payables/(COGS + Change in inventory))]

Where:
COGS = Production cost – Change in inventory
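A minimal sketch of the two endnote measures, with the scaling difference made explicit. The input figures are hypothetical, with revenue set to 365 so that one day of revenue equals 1 and the DWC components read directly as days.

```python
def dwc(ar, inventory, ap, revenue, days=365):
    """Days of working capital: every component scaled by one day of revenue."""
    one_day_revenue = revenue / days
    return (ar + inventory - ap) / one_day_revenue

def ccc(ar, inventory, ap, revenue, cogs, change_in_inventory, days=365):
    """Cash conversion cycle: inventory and payables scaled by COGS instead,
    as in the endnote formula above."""
    return days * (inventory / cogs
                   + ar / revenue
                   - ap / (cogs + change_in_inventory))

# Hypothetical averages: AR 70, inventory 138, AP 25, plus an assumed COGS
print(dwc(ar=70, inventory=138, ap=25, revenue=365))  # 183.0
print(ccc(ar=70, inventory=138, ap=25, revenue=365,
          cogs=200, change_in_inventory=0))
```

The same balance-sheet figures give different day counts under the two measures, which is exactly the point of the endnote.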


  • Inventory management – Stochastic supply

    Inventory management – Stochastic supply

    This entry is part 4 of 4 in the series Predictive Analytics

     

    Introduction

We will now return to the newsvendor who was facing a one-time purchasing decision: where to set the inventory level to maximize expected profit, given his knowledge of the demand distribution.  It turned out that even if we did not know the closed form (( In mathematics, an expression is said to be a closed-form expression if it can be expressed analytically in terms of a finite number of certain “well-known” functions.)) of the demand distribution, we could find the inventory level that maximized profit and how this affected the vendor’s risk, assuming that his supply could with certainty be fixed to that level. But what if that is not the case? What if his supply is uncertain? Can we still optimize his inventory level?

We will look at two slightly different cases:

1.  one where the supply is uniformly distributed, with actual delivery from 80% to 100% of the ordered volume, and
2. one where the supply has a triangular distribution, with actual delivery from 80% to 105% of the ordered volume, but with the most likely delivery at 100%.

    The demand distribution is as shown below (as before):

    Maximizing profit – uniformly distributed supply

The figure below indicates what happens as we change the inventory level, given fixed supply (blue line). We can see, as we successively move to higher inventory levels (from left to right on the x-axis), that expected profit increases to a point of maximum.

If we let the actual delivery follow the uniform distribution described above and successively change the order point, expected profit will follow the red line in the graph below. We can see that the new order point is to the right, further out on the inventory (order point) axis. The vendor is forced to order more newspapers to ‘outweigh’ the supply uncertainty:

At the point of maximum profit the actual deliveries span from 2,300 to 2,900 units, with a mean close to the inventory level giving maximum profit in the fixed-supply case:

    The realized profits are as shown in the frequency graph below:

Average profit has to some extent been reduced compared with the non-stochastic supply case, but more important is the increase in profit variability.  Measured by the quartile variation, this variability has increased by almost 13%, mainly caused by an increased negative skewness: the downside has been raised.
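The order-point search for the uniform-supply case can be sketched as follows. The original post’s demand distribution is not reproduced here, so the right-skewed lognormal demand and the unit price and cost are stand-in assumptions; only the mechanics follow the text: expected profit is evaluated over a grid of order points, with actual delivery drawn uniformly between 80% and 100% of the order.

```python
import numpy as np

rng = np.random.default_rng(seed=7)
n = 50_000
price, cost = 10.0, 6.0  # hypothetical unit sales price and unit cost

# Stand-in demand distribution: a right-skewed lognormal (assumption)
demand = rng.lognormal(mean=np.log(2600), sigma=0.25, size=n)

def expected_profit(order):
    # Uniform supply case: actual delivery is 80%..100% of the ordered volume
    delivered = order * rng.uniform(0.80, 1.00, size=n)
    sold = np.minimum(delivered, demand)
    return float((price * sold - cost * delivered).mean())

# Scan candidate order points; with uncertain supply the optimum moves to
# the right of the fixed-supply order point, as described in the text
orders = np.arange(2000, 4001, 100)
profits = [expected_profit(q) for q in orders]
best = int(orders[int(np.argmax(profits))])
print("Order point maximizing expected profit:", best)
```

Swapping the uniform draw for `rng.triangular(0.80, 1.00, 1.05, size=n)` gives the second, triangular-supply case.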

    Maximizing profit – triangular distributed supply

Again we compare the expected profit with delivery following the triangular distribution described above (red line) with the expected profit under known and fixed supply (blue line).  We can see, as we successively move to higher inventory levels (from left to right on the x-axis), that expected profit increases to a point of maximum. However, the order point for the stochastic supply lies to the right, further out on the inventory axis, than for the non-stochastic case:

    The uncertain supply again forces the vendor to order more newspapers to ‘outweigh’ the supply uncertainty:

At the point of maximum profit the actual deliveries span from 2,250 to 2,900 units, with a mean again close to the inventory level giving maximum profit in the fixed-supply case ((This is not necessarily true for other combinations of demand and supply distributions.)).

    The realized profits are as shown in the frequency graph below:

Average profit has been somewhat reduced compared with the non-stochastic supply case, but more important is the increase in profit variability. Measured by the quartile variation, this variability has increased by 10%, again mainly caused by an increased negative skewness: again the downside has been raised.

The introduction of uncertain supply has shown that profit can still be maximized; however, profit will be reduced by increased costs, both from lost sales and from excess inventory. Most important, profit variability will increase, raising the question of possible other strategies.

    Summary

We have shown through Monte Carlo simulation that the ‘order point’, when the actual delivered amount is uncertain, can be calculated without knowing the closed form of the demand distribution. We do not need the closed form of the distribution describing delivery either, only historic data on the supplier’s performance (reliability).

Since we do not need the closed form of the demand or supply distribution, we are not limited to such distributions, but can use historic data to describe the uncertainty as frequency distributions. Expanding the scope of the analysis to include supply disruptions, localization of inventory etc. is thus a natural extension of this method.

This opens for the use of robust and efficient methods and techniques for solving problems in inventory management, unrestricted by the form of the demand distribution. Best of all, the results, given as graphs, will be more easily communicated to all parties than purely mathematical descriptions of the solutions.


  • Inventory management – Some effects of risk pooling

    Inventory management – Some effects of risk pooling

    This entry is part 3 of 4 in the series Predictive Analytics

    Introduction

The newsvendor described in the previous post has decided to branch out, placing newsboys at strategic corners in the neighborhood. He will first consider three locations, but has six in his sights.

The question to be pondered is how many newspapers he should order for these three locations, and the possible effects on profit and risk (Eppen, 1979) and (Chang & Lin, 1991).

He assumes that the demand distribution he experienced at the first location will also apply to the two others, and that all locations (points of sale) can be served from a centralized inventory. For the sake of simplicity he further assumes that all points of sale can be restocked instantly (i.e. with zero lead time) at zero cost, if necessary or advantageous by shipment from one of the other locations, and that demand at the different locations will be uncorrelated. The individual points of sale will initially have a working stock, but will have no need of safety stock.

In short, this is equivalent to having one inventory serve newspaper sales generated by three (or six) copies of the original demand distribution:

The aggregated demand distribution for the three locations is still positively skewed (0.32), but much less than the original (0.78), and has a lower coefficient of variation: 27% against 45% for the original ((The quartile variation has been reduced by 37%.)):

The demand variability has thus been substantially reduced by this risk pooling ((We distinguish between ten main types of risk pooling that may reduce total demand and/or lead time variability (uncertainty): capacity pooling, central ordering, component commonality, inventory pooling, order splitting, postponement, product pooling, product substitution, transshipments, and virtual pooling. (Oeser, 2011))), and the question now is how this will influence the vendor’s profit.
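The variance reduction from pooling can be checked by simulation. The demand distribution below is a stand-in (a lognormal chosen to have a coefficient of variation of about 45%, like the original); summing independent copies should cut the coefficient of variation by roughly the square root of the number of locations, in line with the 45% to 27% reduction reported above.

```python
import numpy as np

rng = np.random.default_rng(seed=3)
n = 200_000

# Stand-in for the original demand distribution: positively skewed with a
# coefficient of variation of about 45% (an assumption, not the real data)
demand = rng.lognormal(mean=np.log(2500), sigma=0.43, size=(n, 6))

def cv(x):
    """Coefficient of variation: standard deviation over mean."""
    return x.std() / x.mean()

cv1 = cv(demand[:, 0])               # a single location
cv3 = cv(demand[:, :3].sum(axis=1))  # three locations pooled
cv6 = cv(demand.sum(axis=1))         # six locations pooled

# For independent locations the pooled CV falls roughly as 1/sqrt(k):
# ~0.45 for one, ~0.26 for three, ~0.18 for six
print(f"CV single: {cv1:.2f}, pooled 3: {cv3:.2f}, pooled 6: {cv6:.2f}")
```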

    Profit and Inventory level with Risk Pooling

    As in the previous post we have calculated profit and loss as:

    Profit = sales less production costs of both sold and unsold items
    Loss = value of lost sales (stock-out) and the cost of having produced and stocked more than can be expected to be sold

The figure below indicates what will happen as we change the inventory level. We can see, as we successively move to higher levels (from left to right on the x-axis), that expected profit (blue line) increases to a point of maximum: ¤16,541 at a level of 7,149 units:

Compared to the point of maximum profit for a single warehouse (profit ¤4,963 at a level of 2,729 units, see the previous post), this risk pooling has increased the vendor’s profit by 11.1% while reducing his inventory by 12.7%. Centralization of the three inventories has thus been a successful operational hedge ((Risk pooling can be considered as a form of operational hedging. Operational hedging is risk mitigation using operational instruments.)) for our newsvendor, mitigating some, but not all, of the demand uncertainty.

Since this risk mitigation was a success, the newsvendor wants to calculate the possible benefits of serving six newsboys at different locations from the same inventory.

Under the same assumptions, it turns out that this gives an even better result: an increase in profit of almost 16% while at the same time reducing the inventory by 15%:

    The inventory ‘centralization’ has then both increased profit and reduced inventory level compared to a strategy with inventories held at each location.

    Centralizing inventory (inventory pooling) in a two-echelon supply chain may thus reduce costs and increase profits for the newsvendor carrying the inventory, but the individual newsboys may lose profits due to the pooling. On the other hand, the newsvendor will certainly lose profit if he allows the newsboys to decide the level of their own inventory and the centralized inventory.

One of the reasons behind this conflict of interests is that the newsvendor and the newsboys will each benefit one-sidedly from shifting the demand risk to the other party, even though overall performance may suffer as a result (Kemahloğlu-Ziya, 2004) and (Anupindi and Bassok, 1999).

    In real life, the actual risk pooling effect depends on the correlations between the locations’ demands. Positive correlation reduces the effect, while negative correlation increases it. If all demands were perfectly positively correlated the effect would be zero, while a correlation coefficient of minus one would maximize it.
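    The effect of correlation on the pooled demand’s variability is easy to verify by simulation. The bivariate normal demand below is used purely because it makes the correlation easy to control, and the parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma, n = 1000.0, 200.0, 100_000   # illustrative demand parameters (assumed)

pooled_sd = {}
for rho in (-0.9, 0.0, 0.9):
    # two locations with demand correlation rho
    cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
    demand = rng.multivariate_normal([mu, mu], cov, size=n)
    # theory: sd of the pooled demand is sigma * sqrt(2 + 2*rho)
    pooled_sd[rho] = demand.sum(axis=1).std()
    print(f"rho = {rho:+.1f}: sd of pooled demand ≈ {pooled_sd[rho]:.0f}")
```

    The simulated standard deviation of the pooled demand shrinks as the correlation falls, which is exactly why negatively correlated locations are the best candidates for pooling.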

    The third effect

    The third direct effect of risk pooling is reduced variability of expected profit. We can see this by plotting the profit variability, measured by its coefficient of variation (( The coefficient of variation is defined as the ratio of the standard deviation to the mean – also known as unitized risk.)) (CV), for the three sets of strategies discussed above: one single inventory (warehouse), three single inventories versus all three centralized, and six single inventories versus all six centralized.

    The graph below depicts the situation. The three curves show the CV for corporate profit given the three alternatives, and the vertical lines mark the point of maximum profit for each alternative.

    The angle of inclination of each curve shows the profit’s sensitivity to changes in the inventory level, while the curve’s location shows each strategy’s impact on the predictability of realized profit.

    A single warehouse strategy (blue) clearly gives much less ability to predict future profit than the ‘six centralized warehouses’ strategy (purple), while ‘three centralized warehouses’ (green) falls somewhere in between:
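    The CV comparison can be sketched in the same way. Again the demand distribution, price, cost and inventory levels are illustrative assumptions; what matters is the pattern of a much lower coefficient of variation for the pooled system:

```python
import numpy as np

rng = np.random.default_rng(1)
price, cost, q = 10.0, 5.0, 1100   # illustrative values (assumed)
n = 20_000

def profit(demand, q):
    return price * np.minimum(demand, q) - cost * q

def cv(x):
    # coefficient of variation: standard deviation over mean
    return x.std() / x.mean()

# i.i.d. lognormal demand at six locations (assumed parameters)
d = rng.lognormal(7.0, 0.5, size=(n, 6))

single = profit(d[:, 0], q)            # one warehouse serving one location
pooled = profit(d.sum(axis=1), 6 * q)  # six locations served from one inventory

print(f"single warehouse profit CV: {cv(single):.2f}")
print(f"six pooled locations CV:    {cv(pooled):.2f}")
```

    The pooled system’s profit distribution is both higher on average and relatively tighter, which is the graphical message of the CV curves above.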

    So, in addition to reduced costs and increased profits, centralization also gives a more predictable result and lower sensitivity to the inventory level, and hence greater leeway in the practical application of different inventory planning policies.

    Summary

    We have thus shown, through Monte Carlo simulation, that the benefits of pooling increase with the number of locations and that they can be calculated without knowing the closed form ((In mathematics, an expression is said to be a closed-form expression if it can be expressed analytically in terms of a finite number of certain “well-known” functions.)) of the demand distribution.

    Since we do not need the closed form of the demand distribution, we are not limited to low demand variability or forced to accept the possibility of negative demand (as with Normal distributions etc.). Expanding the scope of the analysis to include stochastic supply, supply disruptions, information sharing, localization of inventory etc. is a natural extension of this method ((We will return to some of these issues in later posts.)).

    This opens the way for robust and efficient methods and techniques for solving inventory management problems unrestricted by the form of the demand distribution. Best of all, results given as graphs are more easily communicated to all parties than purely mathematical descriptions of the solutions.

    References

    Anupindi, R. & Bassok, Y. (1999). Centralization of stocks: Retailers vs. manufacturer. Management Science, 45(2), 178-191. doi: 10.1287/mnsc.45.2.178, accessed 09/12/2012.

    Chang, Pao-Long & Lin, C.-T. (1991). Centralized Effect on Expected Costs in a Multi-Location Newsboy Problem. Journal of the Operational Research Society of Japan, 34(1), 87–92.

    Eppen, G. D. (1979). Effects of centralization on expected costs in a multi-location newsboy problem. Management Science, 25(5), 498–501.

    Kemahlioğlu-Ziya, E. (2004). Formal methods of value sharing in supply chains. PhD thesis, School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA, July 2004. http://smartech.gatech.edu/bitstream/1853/4965/1/kemahlioglu ziya_eda_200407_phd.pdf, accessed 09/12/2012.

    Oeser, G. (2011). Methods of Risk Pooling in Business Logistics and Their Application. Europa-Universität Viadrina Frankfurt (Oder). URL: http://opus.kobv.de/euv/volltexte/2011/45, accessed 09/12/2012.

    Endnotes