Wednesday, 17 December 2014

The Use of Simulation in Microeconomics


In 2009, I moved to the Defence Economics Team.  One of the first papers that I wrote in this period was an adaptation of a term paper for a Masters-level course in Microeconomics in the School of Public Policy at Carleton University.  The paper is provided below.


In this paper, classical approaches to microeconomics were compared with computer simulation. In particular, Frank’s chapter on Perfect Competition (Ref. 1) was compared with Sterman’s chapter on Commodity Cycles (Ref. 2).

Frank described the theory of perfect competition as satisfying four conditions:
    1. Firms sell a standardized product.
    2. Firms are price takers.
    3. Factors of production are perfectly mobile in the long run.
    4. Firms and consumers have perfect information.
He made a comparison with a physicist’s model of objects in motion on a frictionless surface to demonstrate that these assumptions are not "hopelessly restrictive" (Ref. 1, p. 352). Frank states that "in some markets, most notably for agricultural products, the four conditions come close to being satisfied" whereas "in other markets, such as garbage trucks or earth-moving equipment, at least some of the conditions are not even approximately satisfied" (Ref. 1, p. 353).

Using classical methods, Frank answered the question: "How does a firm choose its output level in the short run?" He reasoned that the profit-maximizing firm will choose "the level of output for which the difference between total revenue and total cost is largest" (Ref. 1, p. 353). Then, using a little differential calculus, he showed that profit is maximized when price equals marginal cost (Ref. 1, p. 356).
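
In symbols, the standard derivation runs as follows (a sketch in my own notation, with q as output, p as the market price and C(q) as total cost):

```latex
\pi(q) = p\,q - C(q), \qquad
\frac{d\pi}{dq} = p - C'(q) = 0
\quad\Longrightarrow\quad p = MC(q^{*})
```

with the usual second-order condition that marginal cost be rising at the optimal output q*.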

In the long run, firms will enter the market depending on the profits that can be achieved. If an economic profit can be earned, suppliers will enter the market and increase supply, shifting the supply curve to the right. When supply and demand come back into equilibrium, the price will be lower, and to maximize profits at this new price firms will adjust their capital stocks. Each individual firm will find that its output level, and therefore its profit, is reduced. Eventually, when enough suppliers have entered the market, economic profit will disappear, no new suppliers will enter, and the long-run equilibrium will be reached.

If the current suppliers in the market are sustaining economic losses, some will eventually leave the market in the long run. This shifts the supply curve to the left, causing prices to rise. The remaining suppliers will adjust their capital stock until they reach equilibrium, where average total cost equals the price and there is neither economic profit nor loss. In this account there is no possibility of overshoot or undershoot.
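
Before turning to Sterman, a minimal numerical sketch of this textbook entry/exit story may be helpful (my own illustration, not Frank’s; the linear demand curve, the efficient scale of 5 units, the minimum average total cost of 40 and the entry-rate constant are all assumed values):

```python
# Sketch of long-run adjustment under perfect competition: firms enter while
# economic profit is positive and exit while it is negative; with no delays,
# the industry settles at zero economic profit without overshoot.
a, b = 100.0, 1.0            # linear demand P = a - b*Q (assumed)
q_firm, min_atc = 5.0, 40.0  # each firm's efficient scale and minimum ATC (assumed)
n_firms = 5.0                # initial number of firms

for _ in range(50):
    Q = n_firms * q_firm                  # industry output
    price = a - b * Q                     # market price from the demand curve
    profit = (price - min_atc) * q_firm   # economic profit per firm
    n_firms += 0.02 * profit              # entry if profit > 0, exit if profit < 0

print(round(price, 1), round(n_firms, 1))  # price approaches minimum ATC (40) as n approaches 12
```

Because the adjustment responds only to current profit and involves no stocks or delays, the price glides monotonically to its long-run value; the contrast with the delay-driven model below is the point of this paper.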

Sterman questions the realism of this prediction of long-run equilibrium when he notes that "most commodities … experience cycles in prices, and production with characteristic periods, amplitudes, and phases" (Ref. 2, p. 791). He cites statistics from "copper, iron and mercury; forest products such as lumber, pulp and paper; agricultural products such as coffee, cocoa and cattle. … Hog prices and production fluctuate with roughly a 4-year period while the cattle cycle averages about 10-12 years … [copper data] show regular, large, documented cycles of about 8-10 years" (Ref. 2, p. 792).

Sterman states that "economists often argue the oscillations in commodity markets cannot long endure because they provide arbitrage opportunities. If there were a cycle, savvy investors could make extraordinary profits by timing their investments to buy at cycle troughs and sell at cycle peaks. As more people pursued such counter-cyclical strategies, their actions would cause the cycle to vanish. … While the logic of the argument sounds compelling, the persistence of cyclical movements in so many commodity markets over very long periods (more than a century for many markets) suggests learning and arbitrage aren’t quite that simple" (Ref. 2, p. 840).

Sterman goes on to build a simulation model to show how "commodity cycles [can] arise from the interaction of physical delays in production and capacity utilization with bounded rational decision making by individual producers" (Ref. 2, p. 841).

The two major differences between Sterman’s simulation approach and Frank’s classical method are:
    1. The assumption of ‘bounded rationality’ rather than ‘perfect information’; and
    2. The use of stocks as well as flows that create delays in physical and information transfer.
Let us look at Frank’s classical assumption of perfect information again. Frank states that "the assumption of perfect information is usually interpreted to mean that people can acquire most of the information that is most relevant to their choices without great difficulty" (Ref. 1, p. 352).

However, in Chapter 8, Frank refers to Simon’s findings that people "search in a haphazard way for potentially relevant facts and information, and usually quit once their understanding reaches a certain threshold. …When information is costly to gather, and cognitive processing ability is limited, it is not even rational to make fully informed decisions" (Ref. 1, p. 254).

The field of simulation called System Dynamics is intimately aligned with Simon’s hypothesis of ‘bounded rationality’ (Ref. 3). Simulation can be used to model the decision-making resources of the individual firm in terms of what it knows and when it knows it. The assumption of ‘bounded rationality’ rather than ‘perfect information’ allows simulation models to reproduce the cycles experienced in real markets, cycles that do not appear in Frank’s classical model.

The second major difference between Frank’s classical approach and Sterman’s simulation approach is that Frank considers only flows (good units or dollars per unit time; Ref. 1, p. 71) while Sterman’s model considers stocks and flows. Mass, a supporter of simulation for economic analysis, suggests that "supply would be measured by the available inventory of a commodity while the demand would be measured by a backlog of unfilled orders" (Ref. 4, p. 95). He states that "stock variables will frequently be out of equilibrium, thereby causing continuing change in rates of flow, even once flow equilibrium between production and consumption has been reached" (Ref. 4, p. 97).
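
In stock-and-flow terms (standard System Dynamics notation, not a quotation from Mass), the inventory stock integrates the difference between the production and consumption flows:

```latex
\text{Inventory}(t) \;=\; \text{Inventory}(0) \;+\; \int_{0}^{t}
\big[\,\text{Production}(s) - \text{Consumption}(s)\,\big]\, ds
```

The stock can therefore sit above or below its desired level even at an instant when the two flows happen to be equal, which is exactly the disequilibrium that Mass says persists after flow equilibrium has been reached.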

Mass describes the difference between an economic model of an idealized firm which centers around "production, consumption and prices" and a real firm in which "stocks of in-process goods and final output intervene between the processes of production and consumption. If production exceeds consumption, inventory will accumulate. Conversely, if production is less than consumption inventory will be drawn down" (Ref. 4, p. 98). Another link between production and consumption is the order backlog and its associated delivery delays. Mass states that "whereas price is regarded in economic theory as the fundamental market-clearing mechanism, both availability and price in fact serve jointly as market-equilibrium channels. … Upward price pressure may reflect low inventories (indicating inadequate supply) or high order backlogs (indicating excess demand)". Mass concludes that "more attention should be given in economic theory to the way in which stock variables such as inventories and backlogs trigger price and quantity adjustments" (Ref. 4, p. 99).

Sterman’s model is based on Meadows’ original work (Ref. 5). It addresses these two points (bounded rationality and stocks) in detail to mimic the cycles in commodity prices and production. His model involves five sectors:
    1. Production and Inventory;
    2. Production Capacity;
    3. Desired Capital;
    4. Demand; and
    5. The Price-Setting Process.
In the Production and Inventory sector, firms are assumed to manage production through capacity utilization; however, there is a delay between the time production starts and the time items are completed and placed in inventory. Each firm monitors the amount of inventory in stock to ensure it has sufficient coverage to handle customer orders. Firms also monitor short-run prices and variable costs to determine the optimal capacity utilization. Capacity utilization acts like the short-run supply function in the classical model when capital is fixed. Firms must be confident that prices will remain high and expected variable costs will remain low before they decide to invest in new capital.
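
A minimal sketch of how this sector can be represented is shown below (a simplification in my own notation, not Sterman’s exact equations; the smooth utilization rule, the four-month production delay and the fixed shipment rate are all assumptions):

```python
import math

def utilization(price, variable_cost, steepness=2.0):
    """Capacity utilization rises when price exceeds expected variable cost
    and falls when it does not (a smooth 0-to-1 response; assumed form)."""
    return 1.0 / (1.0 + math.exp(-steepness * (price / variable_cost - 1.0)))

def step(inventory, wip, capacity, price, variable_cost,
         production_delay=4.0, shipments=90.0, dt=1.0):
    """One time step: production starts now, finishes after a delay (the
    work-in-process stock), and finished goods accumulate in inventory
    until they are shipped to customers."""
    production_starts = capacity * utilization(price, variable_cost)
    completions = wip / production_delay            # first-order production delay
    wip += (production_starts - completions) * dt
    inventory += (completions - shipments) * dt
    return inventory, wip

print(step(inventory=300.0, wip=360.0, capacity=120.0, price=105.0, variable_cost=100.0))
```

The utilization response plays the role of the classical short-run supply curve, but the work-in-process stock means that output responds to price only after a lag.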

Capital is a stock that must be ordered, acquired and eventually discarded. However, there are many delays in the process. Once the decision is made to invest in new capacity, there is a delay before the new capital comes on line and starts producing. The output of this sector is the production capacity over time as the capital stock is acquired or discarded.

Management is usually reluctant to invest in or discard capital until it is convinced the need is real. Managers generally have a vision of the ‘desired capital’ they would like to have and will change that value only slowly, based on the expected profitability of the new capital, which in turn depends on expected long-run costs and long-run prices. Management cannot make decisions based on information it does not yet have; it must collect data, analyze it and modify its beliefs.
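
In System Dynamics models such as Sterman’s, this slow revision of beliefs is typically represented by exponential smoothing (adaptive expectations). A minimal sketch of the idea, with illustrative numbers:

```python
def smooth(expected, observed, adjustment_time, dt=1.0):
    """Adaptive expectations: close only a fraction of the gap between what
    is observed and what is currently believed in each period."""
    return expected + (observed - expected) * dt / adjustment_time

# a manager revising an expected long-run price of 100 toward an observed
# price of 150 with a 12-month adjustment time:
expected_price = 100.0
for month in range(12):
    expected_price = smooth(expected_price, 150.0, adjustment_time=12.0)
print(round(expected_price, 1))   # about 132: beliefs still lag the data after a year
```

It is this lag between what has happened and what managers believe has happened that lets capacity keep growing after prices have already turned down.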

In the demand sector, Sterman assumes a simple linear demand curve. However, there is some delay in the adjustment of demand to a change in price that can be input to the model.

The price-setting sector is perhaps the most sophisticated. The price is anchored and adjusted: it is anchored to the ‘Traders’ Expected Price’ that would clear the market and then adjusted by various pressures. In this sector, the two primary pressures on price are the effect of inventory coverage and the effect of cost. Recall that inventory coverage is the number of months of finished product firms hold in inventory to cover expected sales (expressed as the shipment rate).
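
A minimal sketch of the anchoring-and-adjustment idea (the multiplicative form and the two sensitivity exponents are my simplifying assumptions, not Sterman’s exact formulation):

```python
def indicated_price(expected_price, coverage, desired_coverage,
                    unit_cost, current_price,
                    coverage_sensitivity=0.5, cost_sensitivity=0.3):
    """Anchor on the traders' expected price, then adjust it by the pressure
    from inventory coverage and the pressure from costs (illustrative form)."""
    effect_of_coverage = (desired_coverage / coverage) ** coverage_sensitivity
    effect_of_cost = (unit_cost / current_price) ** cost_sensitivity
    return expected_price * effect_of_coverage * effect_of_cost

# low coverage (1.5 months on hand versus 3 desired) pushes the indicated price up:
print(round(indicated_price(100.0, coverage=1.5, desired_coverage=3.0,
                            unit_cost=80.0, current_price=100.0), 1))
```

When inventory coverage is below the desired level the price is marked up, and when costs rise relative to price the price is pushed up as well; the expected price itself is then slowly revised toward the realized price, closing the loop that generates the cycle.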

Now let’s look at the results obtained from this model, starting with price over time (Figure 1). Sterman’s model allows us to seed the simulation with random noise in demand. The results in Figure 1 show a pattern quite comparable to the commodity price series displayed in Sterman’s textbook (Ref. 2, pp. 793-795).

Figure 1: Price Fluctuations over Time

Industry demand is smoothed with a slightly delayed reaction, but one can see that higher prices lead to lower demand, as expected (Fig. 2).

Figure 2: Smoothed Industry Demand over Time

This relationship can be easily seen when we put these graphs together (Fig. 3).

Figure 3: Price and Industry Demand over Time

The simulation technology called System Dynamics has been used to model ‘bounded rationality’ and ‘delays in stocks and information’ and thereby to mimic the dynamic behaviour of commodity cycles that never completely achieve equilibrium. Such commodity cycles are persistent and well documented. Whereas classical models involve a "search for equilibrium" (Ref. 6, p. 4) and make "assumptions most often guided by tractability rather than realism" (Ref. 6, p. 3), computer simulations have no problem with tractability. As one author has said, "improvements in computer hardware and software now allow a richer kind of modelling that will significantly enhance social science methodology" (Ref. 6, p. 1). In particular, we have seen that simulation can model disequilibrium systems realistically with relative ease.

The question could be asked: why hasn’t simulation been adopted by mainstream microeconomics? There is probably a combination of reasons. First, microeconomics, like pure mathematics, was traditionally slow to adopt the computer in its analysis methods. Second, econometrics focused on the ‘detail complexity’ of predicting macro factors in the economy, whereas System Dynamics simulation focuses on the ‘dynamic complexity’ of a situation. Third, microeconomics aims at prediction; although simulation could mimic the behaviour of microeconomies, it was difficult to calibrate the models to achieve predictive accuracy, and early efforts to use simulation models for prediction were very controversial (Ref. 6).

However, the time is ripe for the reintroduction of computer simulation into the field of microeconomics. There is a new breed of economists who are fully familiar with computer technology. The hardware and software are readily available, so microeconomists who want to develop simulation models need not be computer programmers. Furthermore, the software is becoming standardized, so models can now be exchanged and validated. The future of microeconomics can be seen in the recent issue of the Journal of Economic Dynamics and Control on agent-based computational economics (Ref. 8).

REFERENCES
  1. Frank, Robert H.; Microeconomics and Behavior; 4th Edition; Irwin McGraw-Hill; Boston; 2000.
  2. Sterman, John D.; Business Dynamics: Systems Thinking and Modeling for a Complex World; Irwin McGraw-Hill; Boston; 2000.
  3. Morecroft, John D.W.; System Dynamics: Portraying Bounded Rationality; OMEGA, The International Journal of Management Science; Volume 11, No. 2; pp. 131-142; 1983.
  4. Mass, Nathaniel J.; Stock and Flow Variables and the Dynamics of Supply and Demand; in Elements of the System Dynamics Method, edited by Jorgen Randers; Pegasus Communications; Waltham, Mass.; 1980.
  5. Meadows, Dennis; Dynamics of Commodity Production Cycles; Wright-Allen Press; Boston; 1970.
  6. Johnson, Paul E.; Rational Actors Versus Adaptive Agents: Social Science Implications; paper delivered at the 1998 Annual Meeting of the American Political Science Association, Boston; September 1998.
  7. Pringle, Laurence P.; The Economic Growth Debate: Are There Limits to Growth?; Franklin Watts, Inc.; New York; 1978.
  8. Tesfatsion, Leigh; Introduction to the JEDC Special Issue on Agent-Based Computational Economics; forthcoming in the Journal of Economic Dynamics and Control.

Modeling Consensus Development



In 2005, I moved to the Technology Demonstration Program Operational Research Team in Defence Research and Development Canada (DRDC) Headquarters.   

Every year, teams of defence scientists in DRDC make proposals for Technology Demonstration Projects.  A committee of military officers from all of the military environments reviews the proposals, which are also briefed to them by the proposal teams.  Then each member of the committee ranks the proposals.  The rankings are then examined by the Technology Demonstration Project Operational Research Team using a piece of consensus analysis software called MARCUS, developed by the Center for Operational Research and Analysis.  The top-ranked proposals are funded by Defence Research and Development Canada to the tune of approximately $4M each.  Therefore, a great deal of money is at stake, and it is extremely important to select the most promising proposals.

There are usually ten members on the committee, who rank the projects according to five independent, weighted criteria.  I was concerned about the MARCUS methodology because it suffered from the problem of irrelevant alternatives: the ranking of the top proposals could change if a low-ranked proposal was added to or removed from consideration.  In other words, adding or removing a proposal that would never be chosen by the committee because of its poor quality could cause a proposal of fairly high quality not to be selected.

I developed an alternative method to MARCUS which was based on the Condorcet Method of ranking.  I called it Condorcet Elimination.
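
A sketch of one common Condorcet-style ranking procedure (repeatedly place the proposal that wins the most pairwise contests among those remaining, then remove it) gives the flavour of the approach; it may differ in detail from the Condorcet Elimination Method itself, and the ballots below are illustrative:

```python
from itertools import combinations

def pairwise_wins(ballots, candidates):
    """Count, for each remaining candidate, how many head-to-head contests it wins."""
    wins = {c: 0 for c in candidates}
    for a, b in combinations(candidates, 2):
        a_pref = sum(1 for ballot in ballots if ballot.index(a) < ballot.index(b))
        b_pref = len(ballots) - a_pref
        if a_pref > b_pref:
            wins[a] += 1
        elif b_pref > a_pref:
            wins[b] += 1
    return wins

def condorcet_style_ranking(ballots):
    """Repeatedly rank the candidate with the most pairwise victories among
    those remaining, then remove it (one possible elimination scheme)."""
    remaining = list(ballots[0])
    ranking = []
    while remaining:
        wins = pairwise_wins(ballots, remaining)
        best = max(remaining, key=lambda c: wins[c])
        ranking.append(best)
        remaining.remove(best)
    return ranking

# three committee members ranking four proposals A-D (illustrative ballots):
ballots = [["A", "B", "C", "D"], ["B", "A", "C", "D"], ["A", "C", "B", "D"]]
print(condorcet_style_ranking(ballots))   # ['A', 'B', 'C', 'D']
```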

Then I built a simulation to test the Condorcet Elimination Method.  I used Monte Carlo simulation to model the voting of the committee members.  First, I generated a set of simulated proposals with known quality levels using random numbers.  Then I modelled the error in each criterion’s ability to identify the proposal of highest quality.  Finally, I introduced random errors in the ability of each committee member to evaluate the quality of the proposals against the criteria.

All of the errors were assumed to be Normally distributed. However, I varied the standard deviation of the distribution to determine the robustness of the committee evaluation process.  I used the Condorcet Elimination Method to determine the final rankings of the committee and the level of consensus.
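
A minimal sketch of this kind of Monte Carlo set-up (the numbers of members and criteria match the description above, but the number of proposals, the weights, the quality scale and the error levels are illustrative assumptions, not the values used in the study):

```python
import random

random.seed(1)
n_proposals, n_members = 12, 10
criterion_weights = [0.3, 0.25, 0.2, 0.15, 0.1]   # five weighted criteria (weights assumed)
criterion_sigma, member_sigma = 0.5, 0.5           # error standard deviations (varied in the study)

# simulated proposals with a known true quality
true_quality = [random.uniform(0.0, 10.0) for _ in range(n_proposals)]

def member_ranking():
    """One member scores every proposal through noisy criteria, then ranks them."""
    scores = []
    for quality in true_quality:
        score = 0.0
        for weight in criterion_weights:
            criterion_value = quality + random.gauss(0.0, criterion_sigma)         # criterion error
            score += weight * (criterion_value + random.gauss(0.0, member_sigma))  # member error
        scores.append(score)
    return sorted(range(n_proposals), key=lambda i: scores[i], reverse=True)

rankings = [member_ranking() for _ in range(n_members)]
top_votes = [r[0] for r in rankings]
consensus_top = max(set(top_votes), key=top_votes.count)
print(consensus_top, true_quality.index(max(true_quality)))   # committee's top pick vs the true best
```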

In general, the results showed that the committee was able to determine the top few and bottom few proposals in terms of the true quality.  However, there was generally a lack of consensus in the rankings of the proposals in the mid-range of quality.  This is problematic because it is at this mid-level where the funding is cut off.  So the lack of consensus on the quality of the proposals might result in some lower quality proposals being funded and some higher quality proposals not making the cut.

I found that the more proposals there were, the less likely it was that the committee would reach a complete consensus.  The more committee members there were, the more likely the truly best proposals would be identified by the consensus of the committee.  However, if there was enough variance in the errors, the committee could get locked into a form of Group Think in which it formed a consensus that a proposal that was not actually of the highest quality was the best one.

My suggestion was to change the use of the ranking process.  I felt that if there was a consensus on the top proposals, they should be funded.  If there was a consensus on the bottom proposals, they should not be funded.  However, if the funding cut-off line fell in the region where there was not complete consensus, partial funding should be provided by DRDC and the remainder of the funding should be provided by the sponsor of the project.  This would reveal the sponsor’s “willingness to pay” using demand-revealing methods.

The Future of Maritime Modeling and Simulation



After I left the logistics team, I went to another operational research team that was in charge of evaluating the models and simulations that were being used in the Center for Operational Research and Analysis.  At that time, we were examining the models used by the Maritime Operational Research Team.

I became interested in Complexity Science and Agent Based Simulation during this period.  One of the recommendations of my evaluation of future directions for the Maritime Operational Research Team was to begin investigating Agent Based Models.

Technology Innovation Project of Crowd Control



In the early 2000s, an operational research colleague in Valcartier, Quebec, won funding for a Technology Innovation Project from Defence Research and Development Canada Headquarters.  Her proposal was to evaluate the usefulness of System Dynamics and Agent Based Modelling for the problem of crowd control using non-lethal weapons.  The Agent Based Modelling was conducted by a professor and graduate students at Laval University.  The System Dynamics modelling was conducted by a contracted employee working directly with my colleague.  She asked me to be an adviser on the System Dynamics portion of the three-year project.

We developed a complex System Dynamics model of the problem of crowd control using non-lethal weapons and published two papers at the System Dynamics Society conferences in 2007 and 2008.  We also published a paper on Design of Experiments at the 2008 International Data Farming Workshop.

In 2008, the contracted employee and my colleague published a Technical Report for Defence Research and Development Canada.

The Laval team published a couple of conference papers too.

The results of both the System Dynamics model and the Agent Based Model were inconclusive with regards to the pros and cons of non-lethal weapons for crowd control.

There was a little money left over as the project neared completion.  So my colleague hired my partner in Policy Dynamics to review the System Dynamics model developed by the contractor and then build a new model based on his recommendations, which I published as a Technical Note in the Center for Operational Research and Analysis.

Validation and Verification of Models and Simulations



In the early 2000s, I was a member of the Synthetic Environment Working Group, which was looking at the standardization of all the models and simulations in DND.  We met monthly at the office of the Synthetic Environment Coordinator Officer.  We discussed standards for modelling and simulation.

In particular, I was a member of the Validation and Verification Subgroup.  In this role, I studied the field of Exploratory Analysis, which was developed by the RAND Corporation and involves a process of extensive sensitivity analysis to find robust results from simulations.  I gave a presentation on the use of Exploratory Analysis to validate simulation models.
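
A minimal sketch of the kind of parameter sweep involved (the toy model and the robustness condition are illustrative assumptions, not a RAND or DND model):

```python
import itertools

def toy_model(attrition_rate, reinforcement_rate, horizon=50):
    """Illustrative stand-in for a simulation: returns the final force level."""
    force = 100.0
    for _ in range(horizon):
        force += reinforcement_rate - attrition_rate * force
    return force

# sweep wide ranges of the uncertain inputs and record where the outcome
# satisfies a chosen robustness condition (here: the force stays above 50)
robust_cases = [
    (attrition, reinforcement)
    for attrition, reinforcement in itertools.product([0.01, 0.02, 0.05, 0.1],
                                                      [1.0, 2.0, 5.0])
    if toy_model(attrition, reinforcement) > 50.0
]
print(robust_cases)
```

Rather than asking which single input estimate is correct, the analyst looks for conclusions that hold across the whole swept region.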

Saturday, 6 December 2014

Chief Modeller for Kosovo Air Campaign



In 1999, the Canadian Air Force joined the other NATO nations in an air campaign against the Serbian forces operating in Kosovo.  The CF-18s were deployed and conducted air-to-ground missions over Kosovo.

The Center for Operational Research and Analysis formed a study team around the War Game to help analyze the effectiveness of the air campaign.  There were 13 military officers and six defence scientists on the team.  The military officers collected battle damage data.  The defence scientists analyzed the data and I was in charge of modelling the air campaign.  The goal was to use a System Dynamics simulation to project the battle damage into the future and thereby determine when the point of diminishing returns was reached in the air campaign.  At that point, NATO would need to bring in ground forces.

I made three projections each day: an optimistic, an expected and a pessimistic projection.
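
A minimal sketch of the kind of diminishing-returns projection involved (the saturating-curve form and all of the numbers are illustrative assumptions, not the actual model or data):

```python
import math

def projected_damage(day, ceiling, rate):
    """Cumulative battle damage approaching a ceiling: diminishing returns."""
    return ceiling * (1.0 - math.exp(-rate * day))

# three projections from the same history: pessimistic, expected and optimistic
for label, ceiling, rate in [("pessimistic", 300.0, 0.02),
                             ("expected", 400.0, 0.03),
                             ("optimistic", 500.0, 0.04)]:
    print(label, round(projected_damage(60, ceiling, rate), 1))   # damage projected at day 60
```

The point where each curve flattens out marks the projected point of diminishing returns for that scenario.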

Every week, the team leader briefed the Chief of Defence Staff (a four star General) and Deputy Chief of Defence Staff (a three star General) on the projections.  The team leader told me that the Chief of Defence Staff would hold my graphs up to the light to see where the projection lines flattened out.