Sunday 18 July 2010

The Clarke-Groves Mechanism: How to Induce Honesty (notes by Simon Vicary)

In the module so far we have established conditions for the efficient provision of a public good. We have also found one mechanism, the Lindahl model, which in important respects replicates the fundamental theorems of welfare economics for an economy with a public good. However, this mechanism is almost certainly too costly to operate as a practical proposition, and is in any case vulnerable to individuals misrepresenting their preferences in an attempt to lower the taxes they pay whilst at the same time still enjoying the public good that is provided by the taxes paid by others. We have also found that majority voting may under some circumstances deliver an efficient quantity of a public good, but it cannot be relied upon as a general rule. All this work raises a deeper and rather more difficult question:

Does there exist a mechanism which will guarantee Pareto optimality in all circumstances, and which will induce people to be honest when asked to reveal their preferences?

The reason why the second part of the question is needed is that it is assumed that the government is going to provide the public good. In doing this it needs to know the MRSGx schedules for each individual (that is, individual preferences). Hence individuals have to give the government some information about their preferences. If this information is false, then clearly the provision of the public good will not be optimal. As the motivation for misrepresentation is to lower one’s tax bill this problem of getting individuals to reveal their preferences is part of what is rather loosely referred to as the free rider problem. Put this way a partial reformulation of the question could be:

Can we find a general solution to the free rider problem?

In stating the free rider problem way back in 1955 Samuelson thought the answer to this question was “no”. He did not analyse the issue in any depth, however, and economists rather left things at that for the next 15 years. Little thought was given to how one might try to solve the free rider problem until a number of articles, starting with Clarke (1971) and Groves (1973) working separately, seemed to suggest that it was in fact possible to solve it. This work is the focus for this lecture. We shall find that the solutions provided by Clarke and Groves are only partial. Indeed the conclusion of the literature seems to be that there is in fact no general way round the free rider problem. I would like to emphasise this point at the outset as the most accessible paper for you, and one you should read (Tideman and Tullock (1976)), seems to suggest the contrary.

The reasons for these pessimistic conclusions will be revealed in the fullness of time. It may, though, be useful to start by clarifying in our minds the exact nature of the free rider problem itself.


1. Interpretations of the free rider problem

McMillan (1979) outlines three strands of the free rider problem. We have encountered all of them already in one way or another.

No Government: Full information on Utility
This is the world of Nash equilibrium that we looked at early on. Technically, as a game theorist would tell you, the basic model assumes that all agents have full information about everybody’s preferences. (Even more technically any individual’s preferences are common knowledge among individuals.) The key point though is that individuals act in isolation, deciding on how much they should contribute or donate to a public good. As we saw, contributions are typically too low, and the provision of the public good is sub-optimal. Everyone would be better off if each person contributed a bit more to the public good, but it is in no one’s interest to do this unilaterally. For a summation public good, individuals find themselves in a prisoner’s dilemma.

Here people take, or attempt to take, a free ride on the contributions of others. Put more precisely, an increase in total contributions by all other individuals in the community results in any one given individual lowering their own contribution. The reason for sub-optimality is that people think only of themselves. That is they decide on their contributions on the basis of costs and benefits to themselves alone. They do not take into account the fact that their contributions also benefit other consumers of the public good.

Government: No Information on Preferences
In a world of complete information the problem of government provision would be trivial. Therefore, to capture the problem a benevolent government is likely to face, we assume the government does not know individuals’ preferences. It has in some way to rely on individuals telling the government what their preferences are. This could come about directly (through such things as opinion polls), or indirectly through observing the way people behave (for example, inferring the value people place on the environment by looking at the extra amount they are willing to pay for such things as organically produced vegetables, houses in traffic-free areas, etc.). Once preferences have been found, the Samuelson condition can (in principle!) be applied to deliver what might be an efficient quantity of the public good.

As we saw with the Lindahl model, however, it seems individuals do not in fact have an incentive to reveal honestly their true preferences. Again, free riding is to blame. In this case, it appears people will want to under-state their preference for the public good so as to lower their tax bill. The crux of the problem lies in what seems at first glance like a very good idea: that tax payments should be related in some way to the benefit one receives from public expenditure. This seems to provide the source of the gain from free riding in this context. Any attempt to get round this problem would therefore have to penalise individuals for deviating from their (unknown) true preferences. As any misrepresentation inflicts some cost on the rest of the community, it might seem reasonable that the charge people pay for misrepresentation should reflect the cost they impose on others.

As we shall see, this is the key idea Clarke and Groves exploited. But before we tackle this problem head on we need to fill in some background.

Large Numbers
McMillan also mentions the large-numbers problem of Olson as a third variant on the free rider problem (it gets worse as community size increases). This is slightly different from the other two, and will not concern us overmuch in what follows.


2. Second price auctions and how to induce honesty

The question of inducing honesty is not unique to the public goods problem. In fact there are a whole host of variants on this theme in economics, coming under the general heading of asymmetric information. After pioneering papers by Akerlof, Spence and Stiglitz (for which they received a Nobel Prize), this became a very active area of research in the 1980s, and indeed continues to be so. The classic and original paper in this area was by Vickrey in 1961, and it will be useful to start with a simplified version of a key argument in that paper. It concerns the seemingly unrelated area of auctions.

Suppose you have a single item to sell. Often for unique single items the sale is by auction. However, there are many different types of auction. The classic method, probably the one that first comes to your mind, is what is often called the “English Auction”. Here individuals make ascending bids for the item. This is done openly. As the highest price bid rises, potential buyers drop out, and the “winner” is the last person to stay in the auction/bidding. He or she gains the object and pays the last price they bid. A second possibility would be to require potential buyers to submit a sealed bid. In this case potential buyers submit a bid to the auctioneer. This is not observed by other bidders. The object is sold to the highest bidder, and they pay the price they bid. This is referred to as a first price sealed bid auction. There are quite a few other possibilities.

Now a moment’s thought should convince you that in these auctions people will not bid honestly. The best way to see this is to think about the first price sealed bid auction. Suppose each individual has a maximum price they are willing to pay for the object. Will they write this maximum price on their submission to the auctioneer? Obviously not, because the maximum price is such that one is indifferent between: (a) not having the object and (b) having the object but paying this maximum price. There is therefore no possible way in which a person who submits their maximum price can gain from participating in the auction. Lowering the bid must result in some non-zero probability of positive gain, and is therefore to be preferred. Suppose, however, that you wanted to find out what people’s maximum willingness to pay is. How would you design the auction to do this?
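To see the point with numbers, here is a small illustrative calculation (a Python sketch; the single rival and the assumption that the rival's bid is uniformly distributed on [0, 1] are mine, purely for illustration). A bidder with a maximum willingness to pay of 0.8 gains nothing by bidding 0.8, but earns a positive expected surplus by shading the bid.

```python
# First-price sealed-bid auction: bidding one's true maximum value is never optimal.
# Illustrative assumption: one rival whose bid is uniformly distributed on [0, 1].

def expected_surplus(value, bid):
    """Expected surplus = (value - bid) * probability of winning."""
    win_prob = min(max(bid, 0.0), 1.0)   # P(rival's uniform bid is below our bid)
    return (value - bid) * win_prob

value = 0.8
for bid in [0.8, 0.6, 0.4, 0.2]:
    print(f"bid {bid:.1f}: expected surplus {expected_surplus(value, bid):.3f}")
# Bidding the true value 0.8 yields zero surplus; shading yields a positive expected
# surplus, which here is largest at bid = value / 2 = 0.4.
```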

Vickrey showed that the following auction (now sometimes known as a Vickrey Auction) would do the trick. Technically it is a Second Price Sealed Bid Auction. This works in the same way as a first price sealed bid auction, except that the winner (the person who submits the highest bid) pays the second highest price bid. It is not too difficult to see why this elicits honesty. I won’t go into details. Can you gain by raising your bid? Not if you have the highest valuation, and you might lose if this is not the case. What if you lower your bid? If you are not the person with the highest valuation you cannot gain, and you can only lose if you are that person. In short by submitting anything other than your true valuation you can never gain, and in some circumstances you will lose. Faced with a second price sealed bid auction honesty really is the best policy.
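The dominance argument can also be checked mechanically. The following sketch (again illustrative: the grid of bids and the tie-breaking rule are assumptions of mine) compares the surplus from bidding one's true valuation with every alternative bid, for each possible highest rival bid, and confirms that no deviation ever does better.

```python
# Second-price (Vickrey) auction: truthful bidding is weakly dominant.
# Surplus from bidding the true value is compared with every deviation, for every
# possible highest rival bid on a grid.  Ties are resolved against our bidder,
# which only strengthens the conclusion.

def surplus(value, own_bid, rival_high):
    """Win iff own_bid > rival_high; the winner pays the second-highest bid."""
    return value - rival_high if own_bid > rival_high else 0.0

value = 50
for rival_high in range(0, 101):
    truthful = surplus(value, value, rival_high)
    for bid in range(0, 101):
        assert surplus(value, bid, rival_high) <= truthful
print("No deviation from the true valuation ever does better.")
```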

There are a number of observations to make about the second price auction which will be useful to bear in mind as we go through the Clarke-Groves mechanism.

(a) It might be thought that a second price auction is unsatisfactory from the point of view of the seller. This is misleading at best, and generally false. It turns out that all the auctions mentioned and many others produce the same expected revenue for the seller. This was shown by Vickrey, and his result has been generalised subsequently. The error in the statement lies in assuming that bidding is invariant with respect to the type of auction faced. This is obviously not true. The amount you might have to pay if you bid highest must have some impact on what you are prepared to bid.
(b) Things differ if we think of the seller as a bidder. The point is that the seller may have a reservation price for the article, and not be prepared to sell for less than this price. Suppose the rules of the auction allow the seller to set a reservation price. Will the seller set the reservation price honestly? The answer is in general no. By raising the reservation price in the range between the highest and second highest bids, the seller can increase revenue. It is possible therefore that he/she will want to put down a higher reservation price than his/her true valuation.
(c) Putting these two points together we see that we can get honesty from individuals in a group when the money raised goes outside the group. However, if the money stays within the group (as happens when we add the seller), general incentives to be honest do not seem to exist.
(d) How do we interpret the price paid by the winner? One way to look at it is as a compensating sum within the set of bidders. Suppose James has the highest bid and would win the auction if he participated. Suppose too that Mary is the second highest bidder and places a value of £20 on the object. If James participates he deprives Mary of the object. Put quaintly, he deprives the rest of the community of an article which it values at £20 (the object is a private good). So when James participates he pays a sum of money that would compensate the rest of the community (the set of bidders in this case) for his participation.
(e) However, following on from the last point, and in a way linking up with point (c), it is vital that although James pays a compensating sum, the compensation is not paid to anyone within the group. A moment’s thought again should convince you that as soon as compensation is actually paid the incentive to be honest collapses. Someone will have some sort of incentive to raise their bid in order to get hold of some of the compensating payments.

This may seem a rather contrived set of points to make, but together they provide a link to the question of how we can devise a mechanism that induces an honest revelation of preferences in a public good economy.


3. The Clarke-Groves Mechanism: Discrete Case

To see how the above principles can apply, let us take the simpler example of a discrete public good. Suppose we have a good that is either provided in one unit (G = 1) or not at all (G = 0). Examples might be a bridge, the restoration of a church tower, the saving of Private Eye etc. Suppose it costs £100 to deliver the good, and suppose there are three agents whose value placed on G is given in the table:

Table 1
Individual Valuations of the Public Good
Individual    Valuation    Assigned Tax    Net Benefit
A             40           35              5
B             70           35              35
C             20           30              -10

A Clarke-Groves scheme for this problem would work as follows:

(a) Each person is asked to submit their valuation of the public good (the valuations in the table are known only to the individuals themselves).
(b) The good is provided only if the sum of declared valuations (there is no way of knowing for sure whether submitted valuations are true or not) is at least equal to the cost of provision, £100.
(c) If the good is to be provided, there are two parts to each individual’s (individualised) payment: (i) an assigned share; (ii) the compensating sum.
(d) The first part of any tax payment is an assigned share. The table gives the assigned numbers for our example. These assigned taxes are imposed. Individuals have no control over them. A natural assumption would be to assign them equally across individuals, but this is not necessary and not what we do in the table. Willingness to pay net of assigned taxes appears in the 4th column of the table.
(e) The compensating sum for each individual follows the logic of points (d) and (e) made about the auction above. It is worked out by summing the willingness to pay of all other individuals in the community, and subtracting from this the total amount of assigned taxes they would pay if the good is provided. Given that the good is to be provided, we have to work out whether our individual’s participation in the community would make any difference to the outcome. If not, then no compensating sum is paid. If so, then the imposed taxes are increased by the compensating sum in question (the absolute value of the other individuals’ net benefit).
(f) Given point (c) made about the second price sealed bid auction it follows that the money raised by the compensating sum should be thrown away. Or at least it should go outside the community.

Now apply this scheme to the example in Table 1. Table 2 takes the story on.

Table 2
Working out the Compensating Sum
Individual    Sum of net benefits of other n-1    Decision without i    Compensation paid
A             25                                   Y                     0
B             -5                                   N                     5
C             40                                   Y                     0

Suppose that our three individuals declare their preferences honestly. As the total sum of benefits exceeds £100, or as total net benefits exceed zero, the good will be provided. Look now at individual A. Suppose A did not participate in the “vote” (you can imagine that A’s taxes are available to B and C). In this case the sum of net benefits over B and C comes to 25 (= 35 – 10). This is positive, so A’s participation makes no difference to the community decision, and hence she pays no compensating sum. By repeating this procedure we find that B’s participation alters the decision the community makes. In fact B’s participation imposes a cost of 5 (= – (5 – 10)) on the community. Hence B must pay £5 as a compensating sum. C, as can easily be checked, pays no compensating sum, which is just as well, as she is made worse off by the provision of the good.
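For those who like to see the arithmetic spelt out, the following sketch (a Python illustration using only the figures from Table 1 and the rules (a)–(f) above) reproduces Tables 1 and 2: it checks the provision decision, and works out each individual's pivotal status and compensating sum.

```python
# Clarke-Groves mechanism, discrete public good: reproduce Tables 1 and 2.
# Valuations and assigned taxes are the Table 1 figures; the cost of provision is 100.

valuations   = {"A": 40, "B": 70, "C": 20}
assigned_tax = {"A": 35, "B": 35, "C": 30}      # the assigned shares sum to the cost of 100

net_benefit = {i: valuations[i] - assigned_tax[i] for i in valuations}
provide = sum(net_benefit.values()) >= 0         # provide iff total net benefit is non-negative
print("Provide the good?", provide)

for i in valuations:
    others = sum(v for j, v in net_benefit.items() if j != i)
    pivotal = (others >= 0) != provide           # i changes the decision the others would make
    compensation = abs(others) if pivotal else 0
    print(f"{i}: net benefit of the other two = {others:3d}, pivotal = {pivotal}, "
          f"compensating sum = {compensation}")
```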

The final question about the mechanism is simply this. Does it indeed elicit an honest revelation of preferences? It can again be checked that by declaring a valuation different from the true one, an individual cannot gain, and might lose. Note first that the size of the compensating sum is beyond the individual’s control: it depends only on the declarations of the others. An individual can only alter their tax payments by altering whether the good is provided. So A, for example, could lower her declared valuation. If she lowers it far enough to stop provision she pays no tax, but she also loses the benefit of the public good. As with the good provided she enjoys a net benefit of £5, she can only lose by doing this. Raising her declared valuation does not alter the tax paid, and therefore will make no difference to her welfare.

Given that the good is provided individual B pays a compensating sum of £5. His net gain from having the public good is 30 = 70 – 35 – 5. Lowering the valuation either has no effect on provision (with a constant compensating sum) or causes the good not to be provided, in which case he loses £30. Raising his declared valuation makes no difference to anything.

Finally consider C. C loses from the provision of the public good. As C’s valuation makes no difference to the decision, lowering the declared valuation makes no difference to the outcome, and no difference to her tax bill. Neither does raising her declared valuation.

It appears then that there is no way for any agent to do better than to make a truthful declaration to the authorities. Doing otherwise either has no impact on utility or it makes the individual worse off than he/she would otherwise be. Is this a quirk of the example, or does this property hold generally? It is of course a general property of this mechanism.

To prove that this mechanism really does induce honesty, note two relevant factors:

(a) The public good may or may not be provided. The condition for provision is Σj vj ≥ A, or equivalently Σj (vj – cj) ≥ 0, where A is the cost of provision, vj is individual j’s declared valuation, and cj is individual j’s assigned tax should the good be provided (the assigned taxes sum to A).
(b) To find out whether an individual i is pivotal, the inequality in (a) must be compared with the corresponding inequality taken over everyone else, Σj≠i (vj – cj) ≥ 0. If the signs of the two inequalities are the same then the individual is not pivotal, and pays no compensation tax. If they differ then the individual is pivotal in the sense that he/she changes the outcome as opposed to what the rest of the community would decide.

There are four cases to examine, depending on whether the individual is pivotal or not and also whether the good is provided or not.

Case 1: G = 1, Pivotal Individual
First take the case of the good being provided, and the individual being pivotal. This is the case of individual B in the example. The individual in this case must make a compensating payment Ti. By construction:

Ti = –Σj≠i (vj – cj) > 0

(ci is the assigned tax, and the assigned taxes sum to the cost A.) In this case we have:

Σj (vj – cj) ≥ 0    with    Σj≠i (vj – cj) < 0

Hence it must follow (substitute the definition of Ti into the first inequality) that:

vi – ci – Ti ≥ 0

The left hand side is the return if the individual is honest. The right hand side represents the return when the declared valuation is so low as to mean the public good is not provided (and our individual ceases to be pivotal). This being so, it would never be in the individual’s interest to lower her declared valuation below the true value. To do so incurs the danger of losing the benefit vi – ci – Ti ≥ 0. As tax payments are fixed either exogenously (ci) or by the declarations of other agents (Ti), there is no prospect of gain. Naturally there is no gain to be had by increasing one’s declaration. We conclude that if you are pivotal, and the good is provided, you can only lose by doing anything other than declaring the truth.

Case 2: G = 1, Non-pivotal Individual
Now suppose the individual is not pivotal, but that the good is provided (this is the case of A and C in the example). In this case no compensating taxes are paid. Since the good would be provided with or without i, the following inequalities hold:

Σj (vj – cj) ≥ 0    with    Σj≠i (vj – cj) ≥ 0

Raising declared valuation makes no difference to the outcome (the good is still provided) and no difference to taxes paid (the individual is still non-pivotal). Thus our individual’s utility is:

vi – ci

It might be possible for i to lower declared preferences so that the good is not produced. This could apply to C in the example, who loses as a result of the good being produced. However, were i to succeed in this (actually it’s not possible for C in the example, but she doesn’t know this) then he/she becomes pivotal and has to make a compensating payment:

Ti = Σj≠i (vj – cj) ≥ 0

The only thing that happens now to i is that the compensating payment is made. The good is not provided, and no assigned taxes are paid. To find out whether it is worth lowering declared preferences, therefore, i must compare:

vi – ci    with    –Ti = –Σj≠i (vj – cj)

But

Σj (vj – cj) = (vi – ci) + Σj≠i (vj – cj) ≥ 0

Hence

vi – ci ≥ –Σj≠i (vj – cj)

The left hand side of the inequality is the individual’s utility when he/she is honest and the good is provided. The right hand side represents the return when preferences for the public good are under-stated, and it is not provided. Hence it follows that if our individual succeeds in causing the public good not to be produced then he/she can be no better off, and will in general be worse off. Our conclusion now is that if the good is going to be delivered then there is no way anyone can gain by misrepresenting preferences, and there is always the possibility of making oneself worse off.

Case 3: G = 0, Pivotal Individual
The method used to show that honesty is the best policy is the same for when the good is not going to be produced. If the individual is pivotal she “stops” the good being produced when the rest of the community would want this to happen. The analysis uses the following two inequalities:

Σj (vj – cj) < 0    with    Σj≠i (vj – cj) ≥ 0

Lowering declared preference for the public good makes no difference to the outcome, but raising declared preference might cause the good to be produced. If so, then no compensating taxes are paid.

Compare –Σj≠i (vj – cj), the utility with honesty (the pivotal individual pays a compensating sum equal to the net benefit the others forgo), with vi – ci, the utility if overstatement succeeds in getting the good produced. By subtraction you should be able to convince yourself that:

vi – ci < –Σj≠i (vj – cj)

Our individual can only lose by overstating preferences for the public good.

Case 4: G = 0, Non-pivotal Individual
As with the last case lowering declared preferences makes no difference to the outcome, but raising preference could cause the good to be produced. In this case we use:

Σj (vj – cj) < 0    with    Σj≠i (vj – cj) < 0

However, if the good is produced our individual becomes pivotal and pays a tax equal to:

Ti = –Σj≠i (vj – cj) > 0

The comparison is between utility when honest (0) and utility when overstatement succeeds in getting the good delivered. That is, between:

0    and    vi – ci – Ti = Σj (vj – cj) < 0

Hence yet again honesty is the best policy. Our final conclusion is that no matter what the circumstances, individual dishonesty at best makes no difference; otherwise, it results in our individual being made worse off. In this way the Clarke-Groves mechanism ensures an honest revelation of individual preferences and so appears to offer a solution to the free rider problem.
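Before moving on, the four cases can be verified numerically. The sketch below is illustrative only: it uses the Table 1 numbers and an arbitrary grid of alternative declarations, computes each individual's payoff under the mechanism while holding the others' declarations fixed, and confirms that an honest declaration is never beaten.

```python
# Truth-telling as a dominant strategy in the discrete Clarke-Groves mechanism.
# True valuations and assigned taxes are the Table 1 figures; the grid of possible
# misreports (0, 5, 10, ..., 200) is an arbitrary illustrative choice.

true_value   = {"A": 40, "B": 70, "C": 20}
assigned_tax = {"A": 35, "B": 35, "C": 30}

def payoff(i, declared):
    """i's payoff when declarations are `declared`; i's true valuation fixes the benefit."""
    net = {j: declared[j] - assigned_tax[j] for j in declared}
    provide = sum(net.values()) >= 0
    others = sum(v for j, v in net.items() if j != i)
    pivotal = (others >= 0) != provide
    clarke_tax = abs(others) if pivotal else 0          # paid whenever i is pivotal
    benefit = true_value[i] - assigned_tax[i] if provide else 0
    return benefit - clarke_tax

for i in true_value:
    honest = payoff(i, dict(true_value))
    for misreport in range(0, 201, 5):
        declared = dict(true_value)
        declared[i] = misreport
        assert payoff(i, declared) <= honest
    print(f"{i}: honest payoff {honest}; no misreport in the grid does better.")
```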


4. The Clarke-Groves Mechanism: Continuous Case

Before examining some of the limitations of the Clarke-Groves scheme, we shall just note how it gets extended to a continuous public good. Actually relatively little changes, and there is no point in going into details. See Cornes and Sandler if you are interested. It is simply a matter of applying the principles already learnt.

Consider Figure 1. It is assumed that each individual has a quasi-linear utility function Ui = xi + vi(G), and, to keep things simple we assume a constant MRTGx = 1. The procedure works as follows:

(a) The government asks each individual to state their preference for the public good. In this context this means to write out their MRS schedule.
(b) Having gained each MRS schedule the government applies the Samuelson criterion by choosing the output at which the sum of the MRS equals the MRT.
(c) In determining each individual’s tax bill there are again two components: a pre-determined tax share over which the individual has no control, and a compensating sum, determined by the impact our individual’s statement of preferences has on the outcome.
(d) Once the level of G is determined individuals pay the tax. The pre-determined tax revenue is spent on providing the public good, and the revenue from the compensating payments is thrown away.

Let us see how this works out on the diagram, where ti represents our individual’s pre-determined tax share. Suppose people are honest.

(a) By equating the sum of declared MRS’s to the MRT the output of G is determined at 0Q.
(b) Now imagine that i does not participate in the process. The rest of the community would collectively, under this procedure, decide on an output 0A, found by equating the sum of their declared MRS schedules to their collective marginal cost, (1 – ti)MRTGx = 1 – ti. This step corresponds to finding out whether the rest of the community would want a discrete public good given their (stated) preferences and costs.
(c) On the diagram our individual is pivotal in the sense that her participation raises provision from 0A to 0Q. As this is so, she must make a compensating payment.
(d) This compensating payment must equal, as before, the welfare loss suffered by the other n – 1 individuals in the community as a result of i’s participation.
(e) This aggregate loss is measured using standard welfare analysis: the area between the marginal cost to the other individuals, 1 – ti, and their aggregate marginal benefit schedule (the sum of their declared MRS), area JKL.
(f) The line SS (synthetic supply) is drawn such that area WXY is the same as JKL. It is a mirror image of the aggregate marginal benefit schedule for the other n – 1 agents.
(g) Thus the area JKL = WXY gives the compensating payment i must make in this case.
(h) The output chosen will always be such that the Samuelson condition is met.

This scheme produces honesty in just the same way as we saw earlier. To see this, consult Figure 2 where we reproduce the relevant part of Figure 1. Suppose our individual decides to over-state his preference for the public good. The provision rises from OQ to OQ1. The benefit of this rise to our individual is QYNQ1 whilst the cost is QYRQ1. Our individual loses by the area YRN. This area will always be positive for any overstatement of the MRS schedule. A similar conclusion can be reached by assuming our individual under-states preferences for the public good. This is left as an exercise.
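A short numerical sketch may help to fix ideas for the continuous case. The linear declared MRS schedules, the tax shares and all the numbers below are illustrative assumptions; the sketch computes the Samuelson output, the output the rest of the community would choose without individual i, and the area-JKL compensating payment by numerical integration.

```python
# Clarke-Groves mechanism, continuous public good with quasi-linear utility.
# Declared MRS schedules are assumed linear, MRS_i(G) = a_i - b_i*G, the MRT is 1,
# and t_i is i's pre-determined tax share.  All parameter values are illustrative.

a = [0.5, 0.7, 0.4]          # intercepts of the declared MRS schedules
b = [0.01, 0.01, 0.01]       # slopes of the declared MRS schedules
t = [1/3, 1/3, 1/3]          # pre-determined tax shares (they sum to 1)

def provision(indices, marginal_cost):
    """Output at which the summed declared MRS over `indices` equals `marginal_cost`."""
    return (sum(a[j] for j in indices) - marginal_cost) / sum(b[j] for j in indices)

n = len(a)
G_all = provision(range(n), 1.0)                 # Samuelson output with everyone included
print(f"Provision with all individuals: {G_all:.2f}")

for i in range(n):
    others = [j for j in range(n) if j != i]
    G_without = provision(others, 1.0 - t[i])    # what the rest would choose on their own
    # Compensating payment = area JKL: the others' loss from moving G_without -> G_all.
    steps = 10_000
    width = (G_all - G_without) / steps
    loss = 0.0
    for k in range(steps):
        G = G_without + (k + 0.5) * width
        mb_others = sum(a[j] - b[j] * G for j in others)
        loss += ((1.0 - t[i]) - mb_others) * width
    print(f"Individual {i}: provision without them {G_without:.2f}, "
          f"compensating payment {loss:.3f}")
```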

The conclusion is that under the assumed conditions the Clarke-Groves mechanism induces individuals to report honestly their preferences for the public good.

Before dealing with some of the weaknesses of the mechanism, one point is worth making. In running through the analysis we found that it was always in the interest of our selected individual i to tell the truth. This analysis of course applies to all other individuals in the economy. However, we did not enquire too deeply into what these other individuals were doing. Were they telling the truth about their preferences, or were they disguising their preferences in some way? Actually, it doesn’t matter. While we implicitly assumed other people were revealing their preferences, the conclusion goes through even if they are being dishonest. The reasoning we followed really dealt simply with what these other people had declared. If we were to assume a false set of declarations by the other people our reasoning would proceed exactly as before, and we would find it still in our individual i’s interest to reveal preferences truthfully. We conclude:

Regardless of what other people have done it is in the interest of each individual separately to reveal honestly their preferences for the public good.

In the language of game theory we would say: Truth telling is a dominant strategy in the Clarke-Groves mechanism.

Another pretentious way to say this is that the Clarke-Groves mechanism is Strongly Individual Incentive Compatible.

A natural question to ask is this: Do there exist other mechanisms with this property? It can be proved that the answer to this question is “no” (Green and Laffont, 1979). If we are interested in mechanisms that induce individuals to reveal their preferences for the public good under all circumstances, then we can restrict ourselves to Clarke-Groves mechanisms. It is useful to bear this in mind in what follows.

If the mechanism is so wonderful, why is it not used more often? Let us find out why.


5. The Gibbard-Satterthwaite Theorem and its Implications

Our approach will be a little indirect, and requires us first to go back to the Arrow Theorem. Around about the time the Clarke-Groves mechanism was being developed (theoretically!) a remarkable theorem in the field of social choice was proved by Gibbard and Satterthwaite. It can be described as follows.

Suppose we have a community of n individuals, and suppose our community must choose between a number of alternatives S = {x1, x2, ……, xm}, where m ≥ 3. This problem is what Arrow originally had in mind. Suppose as with Arrow that the community has a mechanism for deciding which social state to choose given the declared preferences of the individuals. We can summarise this in the form of a function from the declared preferences to the set of social states:

g: P → S,    x = g(P1, P2, ……, Pn)

where g is our function and Pi is the preference declared by individual i. g could be described as a social choice function. Voting, Lindahl and all the other procedures we have discussed so far would be classed as social choice functions.

We say a social choice function is manipulable at P = {P1, P2, ……,Pn} if there exists a false declaration of preference Pi* for some individual i such that:

g(P1, P2, …, Pi*,…,Pn) ≻i g(P1, P2, ……,Pn)

That is, individual i can make himself better off by making a false declaration of preferences. The work in our lecture up to now seems to suggest that the Clarke-Groves mechanism is non-manipulable, in that it is always in an individual’s interest to reveal preferences truthfully.

We would want any social choice function to have some desirable properties. Reflecting Arrow in some way, two can be stated:

Non-imposition
∀ x ∈ S, ∃ P such that x = g(P)

That is, take any social state. There must be some configuration of preferences that results in this social state being chosen. If you like, the constitution does not debar some alternatives. This, effectively, is Arrow’s non-imposition (citizens’ sovereignty) condition.

Non-Dictatorship
∄ i such that x = g(P) ⇔ x ≻i y ∀ y ∈ S, y ≠ x, and for all preferences that could be stated by all individuals.

This is Arrow’s non-dictatorship axiom. There is no individual whose preference determines social choice regardless of the preferences of other people.

The Gibbard-Satterthwaite Theorem
If the number of social states is greater than or equal to three, it is impossible to construct a non-manipulable social choice function that also has the properties of non-dictatorship and non-imposition.

Put another way, if a vote is to be taken over more than two alternatives, then no matter how the voting system is designed it will be possible in some circumstances for somebody to gain by misrepresenting their preferences. You cannot eliminate strategic voting for any “sensible” voting system.

As well as its interest for actual voting systems, the Gibbard-Satterthwaite Theorem poses a problem. The Clarke-Groves mechanism involves many alternatives (this is even true of the binary choice problem we examined in detail given that one aspect of the problem was the tax people had to incur), it is clearly non-dictatorial, and has the non-imposition property. Yet we found that it is non-manipulable in the sense defined. There is clearly something funny going on here. Our next section finds out what this is.


6. Limitations of Clarke-Groves Mechanisms

How do we reconcile the Clarke-Groves mechanism with the Gibbard-Satterthwaite Theorem? The answer lies in a half-hidden assumption that was slipped into the Clarke-Groves model.

No Income Effects
In explaining the Clarke-Groves mechanism for a continuous public good, as we did with Figure 1, we assumed utility to be quasi-linear. This ensures that the MRS functions never move around. It turns out that this assumption is quite vital to get the model to work. It is possible to extend the result so that truth telling is a dominant strategy with a wider class of utility functions. However, this cannot be done for general utility functions. (This is proved in Green and Laffont p 81). The reconciliation is therefore that the Clarke-Groves mechanism only “works” under a restrictive set of conditions. If you want to relate this point to earlier work, the assumption of quasi-linear utility violates the unrestricted domain assumption.

As an indication of the sort of theoretical trade-offs involved, a parallel mechanism was devised in a heavily technical paper by Groves and Ledyard in 1977. This did work for general utility functions, but truth telling was only a Nash equilibrium strategy, not a dominant strategy. (How this would work in practice when people have to state preferences without knowing what other people’s preferences are is unclear.) It seems therefore that some of the sharp properties of the Clarke-Groves model have to be sacrificed if it is to work for all utility functions. This general point is exactly what we would predict given the Gibbard-Satterthwaite Theorem.

There are, however, other limitations.

Clarke-Groves Mechanisms do not work for all public goods
Examples would be where income distribution is a public good (either as a matter of social policy or as a question of deciding on the finance of a given project). Here there are individualised benefits from the public good, and individuals have an incentive to misrepresent preferences in order to gain them.

The Budget Constraint Problem
Even if we were to put these problems (as well as the cost of operating the system) aside, there is still a serious problem with the Clarke-Groves mechanism. It is simple to state and easy to see. The mechanism does not in general produce a Pareto optimum. The reason for this lies in the compensating sums that in general we must expect to be paid. The revenue gained in this way must be thrown away, otherwise individuals will have an incentive to misrepresent their preferences in order to gain this revenue.

By revenue, as this is microeconomics, we mean real goods and services. But if real goods and services are being disposed of then obviously the economy has not got to a true Pareto optimum. In a general sense, the explanation for what is happening here is reasonably straightforward. Valuable information is being hidden from the planners. In order to extract it some cost has to be incurred, and one way to think of this “budget surplus” is as a cost of extracting the information. This failure to reach full Pareto optimality is in fact a standard property of models with information asymmetry.

As an aside, if we look at Figure 1 we can see that there is one happy case where full Pareto optimality would be achieved. This is where the imposed individualised price of the public good implied by the tax-sharing rule (ti) corresponds with the Lindahl price. In this case, there is no budget surplus problem.

Vulnerability to Coalitions
Just note. The Clarke-Groves mechanism is invulnerable to individuals misrepresenting their preferences. It is not invulnerable to groups of individuals misrepresenting their preferences.

To conclude…

Recall that Clarke-Groves mechanisms are the only ones that guarantee that truth telling is always in the interest of the individual agent. If there is a “solution” to the free rider problem it is here. However, we have found that they “work” only for a restricted set of utility functions, they do not produce Pareto optimality and they are vulnerable to manipulation by coalitions of individuals. If these problems are serious then the conclusion we have to come to is that except in special cases the free rider problem of revelation of preference is insoluble.

The Lindahl Model (notes by Simon Vicary)

1. The Story so Far

Our last lecture found out that voluntary action (the market mechanism) was likely to bring about a sub-optimal provision of the public good, as a result of individuals trying to take a free ride on the contributions of others. One obvious solution to this inefficiency is to recommend some form of government intervention. Indeed many of the services provided by governments have strong public good characteristics, and these are also important in areas in which the government intervenes rather than provides (e.g. environmental regulation). Defence, as we have seen, in many respects gets close to a pure public good, and the same can be said about “law and order”. The issue is rather more blurred with areas like education and health, but even so there are certainly some public good elements to what the government provides.

However, government provision of a public good raises a whole series of new questions. In fact it is a massive area of enquiry in its own right, in some respects incorporating political science. In this module we cannot hope to do anything other than provide some basic ideas. The traditional starting point for economists in thinking about the public sector and in particular public expenditure is the Lindahl Model. In some ways this is a rather elusive concept, as it can be looked at in a variety of different ways:

• An alternative way of representing the Samuelson analysis
• A positive theory about how government expenditure decisions are arrived at
• A normative theory about how government expenditure should be arrived at
• An analytical device for judging alternative mechanisms by which government expenditure decisions are arrived at (somewhat similar to perfect competition in a world of private goods)


2. The Benefit Approach to Taxation

Modern public economics grew out of the old sub-discipline of public finance. A key issue in this older tradition concerned the basis of what could be called a fair tax. There were two approaches:

• The Benefit Approach. Here a fair tax was based on the benefit an individual received from government expenditure. People who benefited more from public expenditure would, according to this school, have to incur a higher tax bill. This line of approach was particularly popular with continental economists, especially Italians and Scandinavians, although British thinkers adopted it in the 18th century stemming from a social contract view of the state.
• The Ability to Pay Approach. Here a just tax was based on an individual’s ability to pay. Those with a higher ability to pay should pay more taxes than those with a lower ability. This school of thought was more popular in Anglo-Saxon countries, especially in the later 19th century, possibly because of the stronger influence of utilitarian thinking.

Modern welfare economics has rather dispensed with debates of this kind, but both lines of thought have left their mark. The key idea of the benefit approach was that taxes should represent a payment for services delivered. The relationship of the government to the citizen was therefore one not in principle different from that between individuals in the community involved in mutually beneficial exchange. The state as such is only justified to the extent to which it provides benefits to individual citizens. The mainline value judgements adopted by modern welfare economics are Paretian, and the question of ultimate “justice” is not pursued any further than this. Taxes in the modern view are desirable if they lead to Pareto efficiency (subject to any distributive objectives about which Paretianism itself remains silent).

The benefit idea was revived by the Swedish economist Knut Wicksell in 1896, and in the hands of his pupil Lindahl, together with later developments, it led to a re-run of the Fundamental Theorems of welfare economics when public goods are present. Wicksell started from the following presumptions:

• People should not be coerced into paying for goods or services they do not want
• Public goods are more efficiently provided by the group as a whole, so individuals need to communicate their preferences to others
• To ensure that no coercion takes place, the appropriate principle a government must apply in determining its expenditure-tax decisions is unanimity.

Although Wicksell wrote before the Paretian approach got established as the mainline form of welfare economics, his thinking fits very neatly into this approach. In a way, it represents an extreme form of Paretianism. When there are collective decisions to be made, how do we guarantee that no one is going to be made worse off? Unanimity (or giving each person a veto) would seem to be the only way. Wicksell realised, of course, that unanimity would be difficult in practice, but it was a principle to aim for, and in making decisions there should be “approximate unanimity”. The difficulties can be seen if we take another two person economy, and suppose individuals are bargaining over tax bills and provision of the public good. On Figure 1 we illustrate.

The area ABCD represents the area of Pareto improvement over the initial position without government. To see this, note that the horizontal axis represents the provision of the public good, and the vertical axis individual A’s share of the total cost of whatever provision is decided upon. The indifference curve going through the origin represents the utility A receives in a state of anarchy (no government: no taxes, no government expenditure). With a veto over any decision A can be sure of never getting a utility level below this. The indifference curve going through the point (0, 1) fulfils a similar function for individual B.

The “model” of public expenditure determination goes as follows. The government announces a tax-expenditure package, say, point F. If both agents are better off than at their initial position then they vote to accept this package, and point F is the next starting point. If the proposed package lies outside ABCD then at least one person will veto it, and it will be rejected. The government then produces another package. Starting from F, another area of Pareto improvement can be traced out by the indifference curves going through F, and the government formulates another package in the hope of finding another Pareto improvement. Once such a package has been found to which no one objects (vetoes), this becomes the new starting point, and the process starts again. Equilibrium is finally reached when a package has been found for which any counter-proposal always meets an objector. Such a point will be found somewhere on the set of Pareto optima, the dashed line (locus of tangencies) on Figure 1. It is not too difficult to show that tangency of the indifference curves on Figure 1 is equivalent to the fulfilment of the Samuelson condition for Pareto optimality.
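As a toy illustration of this veto process (the quasi-linear utilities, the unit cost of the public good and the random proposals below are my own assumptions, not Wicksell's), the following sketch lets a “government” propose packages at random and adopt only those that win unanimous approval; the adopted packages drift towards the Pareto set, where the Samuelson condition holds.

```python
# Wicksell's unanimity process: packages (G, t) are proposed at random and adopted
# only if neither individual vetoes them, i.e. both are at least as well off as before.
# Utilities are quasi-linear and purely illustrative; the cost of G units is G.
import math
import random

def utility_A(G, t):
    return 2.0 * math.log(1 + G) - t * G         # A pays share t of the cost

def utility_B(G, t):
    return 1.5 * math.log(1 + G) - (1 - t) * G   # B pays the remaining share

random.seed(0)
G, t = 0.0, 0.5                                   # start with no public good
for _ in range(20_000):
    G_new, t_new = random.uniform(0, 5), random.uniform(0, 1)
    if (utility_A(G_new, t_new) >= utility_A(G, t) and
            utility_B(G_new, t_new) >= utility_B(G, t)):
        G, t = G_new, t_new                       # unanimous approval: adopt the package

print(f"Final package: G = {G:.2f}, A's cost share = {t:.2f}")
print(f"Sum of MRS at the final package = {(2.0 + 1.5) / (1 + G):.2f}")
print("(The Pareto-optimal output solves (2.0 + 1.5)/(1 + G) = 1, i.e. G = 2.5.)")
```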

This, in simple terms, is how the Wicksell model of ideal public expenditure determination goes. No one would argue for this to be taken literally, but in a curious way it is quite instructive:

• Taken literally it is a process that guarantees Pareto optimality. However, its very artificiality may suggest that there are serious difficulties in the way of achieving this in practice.
• If each iteration is costly, as we would expect, then this would be a very expensive way of determining public expenditure
• Although stylised, the model picks up an important aspect of politics in modern democratic states. The determination of public expenditure is a major issue, as is the related determination of the tax burden. The process of determining these variables is accompanied by a lot of bargaining, both within and between government departments, and also, significantly, by lobbying by private interests
• Although Wicksell’s model is in a way within the benefit tradition, in one respect he is quite modern. He was careful to point out that his scheme was valid only once the question of the optimum distribution of income had been settled. Hence he separates out allocation and distributional questions in very much the same way as the fundamental theorems aim to do
• There is an obvious problem with the model even on its own terms. Why should people be honest in responding to successive government packages? Why pretend not to want the public good in an attempt to lower one’s own tax bill? This is a problem to which we shall return
• The final package (t, G) is indeterminate. We could end up anywhere on the line BD. There is no obvious way within the limits of the model to say where this might be. Incidentally, this point shows too that Wicksell’s point about income distribution has not been fully settled: the actual way this process proceeds could have quite significant implications for the distribution of welfare between individuals.

The Buchanan-Tullock Optimal Constitution

Wicksell’s idea proved quite influential, and, as is the way with these things, it developed in a number of differing ways. One such was the “optimal constitution” approach developed by Buchanan and Tullock in their 1962 book The Calculus of Consent. A central part of this book was an optimal voting scheme justified by unanimity. The idea was that while it would be too costly to rely on unanimity for every single tax-expenditure decision made by governments, the principle of unanimity could be preserved if everyone gave their consent to the process by which such decisions were made.

It is worth digressing a little to see this development of Wicksell’s idea. Buchanan and Tullock’s approach can be characterised in two ways:

• Like Wicksell, they took Paretian value judgements seriously
• They consciously adopted a view of the state that revived the social contract tradition that originated in England in the 17th century.

The practical counterpart of their theory was the formation of the constitution of the United States in 1789.

Starting from Wicksell’s thinking they took it that Paretian value judgements should be treated seriously. No one should be coerced into submitting to laws or, for our purposes here, having to pay taxes without their consent. If we start from this point then in principle each person should have a veto over any law proposed, or any proposal to raise taxes, as with Wicksell. However, the cost of doing this is likely to be prohibitive, and probably going to satisfy no one. One country which did use the veto in its parliament (sejm) was Poland in the 17th and 18th centuries. This was hardly a good example of unanimity in action. The veto (liberum veto) was abolished in 1791 just four years before the country disappeared in a (final) partition between Austria, Prussia and Russia.

See Wikipedia http://en.wikipedia.org/wiki/Sejm for some details.

Wishing for unanimity as a principle to underlie collective decisions, but recognising that it was impractical to have unanimity in every decision that has to be made, Buchanan and Tullock shifted the principle back one stage. People give their consent to each collective decision not if they approve of the decision itself, but if they have given their consent to the manner in which it was made. That is, everybody gives their consent to the “constitution”. This, as we have already seen, is to be thought of as the procedure by which the actual decisions (here most relevantly on taxation) are made. The picture they have is therefore quite parallel to the formation of the constitution of the USA, and, as a more recent possible example, to the establishment of democracy in Eastern Europe.

This idea, though, raises the question of what sort of constitution would emerge from unanimous consent. Not much is said about the legal details in their book, but the principle by which decisions should be made is set out in the central chapter and illustrated on Figure 2. Here we have two cost functions. The decision-making cost function represents the cost of making a decision as determined by the percentage of individuals whose consent would be required. Quite possibly this becomes infinite as the required share approaches unanimity. The second function, the so-called “external cost”, represents the possible loss of benefits from not being able to achieve Pareto optimality, or perhaps the expected cost of having a decision imposed that makes the individual worse off. However we look at it, unanimity guarantees that no one will be made worse off or exploited by others, and it is reasonable to assume that this function is downward sloping. Total cost is simply the sum of these two. So our individual would want a constitution that required the degree of consent given on the horizontal axis at the minimum of total cost: the “optimal constitution”.
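To see how the “optimal constitution” calculation might look with numbers attached, here is a small sketch. The functional forms for the two cost schedules are purely illustrative assumptions (Buchanan and Tullock do not specify them); the only point is that total cost is minimised at an interior required share of consent.

```python
# Buchanan-Tullock "optimal constitution": choose the required share of consent k
# to minimise the sum of decision-making costs and external costs.
# Both cost schedules below are illustrative assumptions.

def decision_cost(k):
    # Rises with the required share and explodes as unanimity is approached.
    return 0.05 / (1.001 - k)

def external_cost(k):
    # Falls as more consent is required; zero at unanimity.
    return 2.0 * (1 - k) ** 2

best_k = min((k / 1000 for k in range(0, 1001)),
             key=lambda k: decision_cost(k) + external_cost(k))
print(f"Cost-minimising required share of consent: {best_k:.2f}")
print(f"Decision cost {decision_cost(best_k):.3f}, external cost {external_cost(best_k):.3f}")
```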

Various points can be made about this piece of analysis:

• If we think in terms of voting rules, the “optimal constitution” point could well vary as between the degree of consent required for different types of measure. It is notable, for example, that often a higher percentage of a vote is required for constitutional amendments.
• The schedules in Buchanan are not well defined. It is not clear what they mean in other than a very general sense.
• There is no reason why “majority rule” should have any special status as a voting scheme under this approach. This is argued by Buchanan and Tullock themselves, although they may overplay their hand because……..
• Voting rules that require more than 50% consent are not reversible. That is, they cannot be overturned by another coalition of voters. If only 40% were required to approve a measure, then a different 40% could also vote to have it reversed. It is possible therefore that, even though the schedules are ill-defined, they have not been drawn accurately: there may be a jump in the decision-making cost function at 50%.
• In one sense Buchanan and Tullock do not solve the problem they pose. If our diagram represents what one individual thinks, there is no reason why the “optimal constitution” will not differ for another individual. It is not clear what happens at this point.
• One can regard the Buchanan-Tullock optimal constitution as an attempt to escape the dilemma of the Arrow Theorem. Unanimity might be easier to achieve for a constitution than for actual day-to-day decisions. However, there is no guarantee of this within the framework B-T develop.
• Buchanan and Tullock compound this last problem by asserting that at the constitutional stage people have already established a set of property rights. They negotiate about the constitution on this basis. This makes the possibility of unanimity less likely. To take the US example, an unpleasant one, it is reasonable to suppose that slaves in the 18th century USA would, had they been asked, have expressed rather different views about the constitution from the slave owners who were instrumental in writing the constitution.
• One can interpret Rawls’ 1971 A Theory of Justice as a way of clearing up this obscurity in B-T. His view was that a constitution should be based on what individuals would agree to behind a veil of ignorance. That is, people would know how society would be run, what laws and rules it would have, but would not know their identity within that society. They would have an equal chance of being any “named” individual. So they could be a slave owner, if it were a case of joining 18th century USA, but it would be much more likely that they would be a slave. Who would consent to slavery in this case? Behind the veil of ignorance, it becomes more plausible that unanimity could be achieved. We could therefore interpret Rawls as trying to find a way round the Arrow problem.

Another approach which leads into modern general equilibrium theory was initiated by Wicksell’s fellow Swede Erik Lindahl. To this we now turn.


3. The Lindahl Model: Two Person Economy

Lindahl’s work, published in 1919, refined that of Wicksell. We can get an idea of the key points by looking again at the Wicksell diagram. Consider Figure 3. The indifference map is reproduced as in Figure 1, but now individuals are asked a different question. Instead of “Do you approve this total tax-expenditure package?” they are asked, “If you had to pay x% of the tax bill, what quantity of the public good would you want to see provided?” Although the fundamental features of the Wicksell approach are preserved, there are one or two significant differences in the way we can look at the model. The key difference is that after setting personal tax rates the government/planners look at the responses individuals make. If people are unanimous then the process stops and the tax rates and provision of the public good are determined accordingly. If people are not unanimous, then the government adjusts tax rates and puts the question again to the electorate. A natural procedure would be to raise the tax rates for those who want a high provision of the public good, and lower the tax rate for those who want low provision. The process continues until people are unanimous.

To see how this might work out, first imagine we alter the share of tax that individual A must pay, and trace out her responses. This we do on Figure 3. Clearly, as the tax share falls, the price of the public good to our individual in some sense also falls. Hence by tracing out the locus of tangencies as we do, we must derive some sort of a demand curve for the public good. We can repeat this procedure for individual B. As lowering tA means tB = 1 – tA must rise, the demand curve we draw for B will be upward sloping when plotted against tA. The result is shown on Figure 4. Naturally something significant must be happening at the point at which the two demand curves intersect. This point indeed represents the (unique) Lindahl equilibrium on the diagram, where people are unanimous about how much of the public good should be provided. Various comments follow…

• The significance of the intersection is that again as with Wicksell individuals are unanimous about the tax expenditure package. However, in this case the unanimity takes a slightly different form. Given the tax shares no one wants to alter the quantity of the public good. If you could imagine this in a political system, this could be interpreted as meaning that no one would want to lobby to alter the level of government expenditure.
• This last point can be illustrated by drawing in the indifference curves for point L, the Lindahl equilibrium on Figure 4. Given the way the demand curves were constructed, not only will we have tangency but each indifference curve will be horizontal at this point. Given the tax share, each person consumes exactly the quantity of the public good they would want to consume. With Wicksell it is possible to have an equilibrium in which one person would want more G given their tax share, and their partner less (draw a diagram to confirm this point).
• Suppose, to keep life simple, we have a constant cost economy, so the public good price, pG, is fixed. If the tax share for individual A is tA then the expression tApG represents the effective price of the public good for individual A. A similar point applies to B. (The assumption of constant cost is not essential to this argument. These personalised prices can simply be defined in the same way for a variable cost economy. In this case it might perhaps be better to refer to t as the cost share rather than the share of the tax bill.)
• The “personalised prices” are referred to as Lindahl Prices. Usually the term is used to refer to the equilibrium prices, but this semantic issue is not important. The key point is that with (equilibrium) Lindahl prices individuals get exactly the quantity of the public good that they would choose if they had to pay their Lindahl price. In practice, of course, people cannot choose the quantity of a public good in the same way as they do the quantity of a private good. However, the Lindahl model produces an outcome in which things are no different from what they would be if individuals had this choice.
• This last point means that there is a sense in which the Lindahl model converts a public good economy to one which is analytically equivalent to a private good economy.
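The adjustment process described above is easy to simulate for a simple quasi-linear example. The utility parameters and the adjustment rule below are illustrative assumptions; the sketch shows the cost shares converging to the Lindahl shares, and verifies that the Samuelson condition holds at the agreed provision.

```python
# Lindahl adjustment process for two individuals with quasi-linear utility
# U_i = x_i + a_i * ln(1 + G) and a constant-cost public good with p_G = 1.
# Facing a cost share s, an individual demands the G at which a_i / (1 + G) = s.
# Parameter values and the adjustment rule are illustrative assumptions.
import math

a_A, a_B = 2.0, 1.5

def demand(a, share):
    """Quantity of G the individual would choose at personalised price `share`."""
    return max(a / share - 1.0, 0.0)

t = 0.5                                   # A's initial cost share; B pays 1 - t
for _ in range(200):
    G_A = demand(a_A, t)
    G_B = demand(a_B, 1.0 - t)
    if abs(G_A - G_B) < 1e-6:
        break
    t += 0.01 * (G_A - G_B) / (1.0 + abs(G_A - G_B))   # raise the keener individual's share

print(f"Lindahl cost shares: t_A = {t:.3f}, t_B = {1 - t:.3f}")
print(f"Agreed provision: G = {G_A:.2f}")
print(f"Sum of MRS = {a_A / (1 + G_A) + a_B / (1 + G_A):.3f} (equals the MRT of 1 at the Lindahl equilibrium)")
```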

This being the case, a natural question to ask is whether the Lindahl model has the equivalent of the two Fundamental Theorems of Welfare Economics. That is, is it the case that all Lindahl equilibria are Pareto optimal? And is it also true that any specified Pareto optimum can be realised as a Lindahl equilibrium? Not only do the last couple of points suggest the questions, they also suggest a way of answering them. For if a notional equivalence between a Lindahl economy and a private goods economy can be established, then the proofs of the original fundamental theorems can be used to show that the Lindahl model is the public goods analogue of the competitive mechanism for a private goods economy.


4. The Fundamental Theorems of Welfare Economics Revisited

It is easy to show that at any Lindahl equilibrium the Samuelson condition must hold:

Given the replication property referred to in the last section, at any Lindahl equilibrium, and given any individual i, with a private good x and a public good G, the following condition must hold:

MRSGxi = ti·pG/px

where ti is i's tax (cost) share and px is the price of the private good. pG is the actual price paid for delivery of the public good. We assume that the public good is produced by competitive firms. Hence:

pG/px = MCG/MCx = MRTGx

Now sum this equation over all n individuals (remembering that the tax shares ti sum to one):

Σi MRSGxi = (Σi ti)·pG/px = pG/px = MRTGx
This is the Samuelson condition.

Technically, however, this does not provide us with a proof of the first Fundamental Theorem. The Samuelson condition is a necessary, not a sufficient, condition for a Pareto optimum. However, this argument suggests, correctly, that in all standard cases the analogue of the first theorem holds. In fact under pretty much the same conditions as with a private good economy we have:

Theorem 1
Any Lindahl Equilibrium is Pareto Optimal.

The second theorem is a bit more difficult, but in its technical details the proof goes through in pretty much the same way as the original theorem. The key technical assumption is again convexity, which is needed to ensure the existence of a set of (Lindahl) prices. Once these have been found, then utility and profit maximisation do the rest of the work. We can show the second theorem diagrammatically using the diagram developed by Cornes and Sandler, Figure 5.

The dotted line in this diagram is a 45° line, and the line AC is the set of Pareto Optima. Our problem is this. Can we specify any point on AC, and then (at least in principle) achieve this point as a Lindahl equilibrium?

Assuming constant cost, a Lindahl set of prices for this economy would be shown as a straight line going through the origin. (Individuals pay a fixed share of whatever it is that is delivered). Suppose we are interested in reaching point A. As the diagram shows, the tangent going through point A does not go through the origin. Hence if we are to reach point A some prior redistribution must occur. To see how this happens on the diagram, recall the Warr Neutrality proposition. If we re-distribute from individual 2 to individual 1 there will be no change in final utility and all that happens will be that individual 2 cuts donations to the public good by the exact amount of the transfer, whereas individual 1 raises donations by the same amount. What does all this mean for Figure 5?

When there is a re-distribution the indifference curves must shift. Suppose say $10 is transferred from individual 2 to individual 1, and suppose 2 cuts donations to the public good by $10. In this case individual 1 would be as well off as before if she raised her contribution to the public good by $10. Now suppose we start at one particular point on 1’s indifference curve. After the transfer the equivalent point on 1’s new indifference map must be on the 45° line below and to the right of the old point (recall that individual 2 is lowering donations by $10, and 1 is raising them by the same amount). Our conclusion is:

When a transfer from 2 to 1 occurs on the diagram, 1’s indifference map shifts down to the right along a 45° line.

A similar argument shows that 2’s indifference curve shifts down along the same 45° line, and by the same amount.
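Before returning to the diagram, a quick numerical check of the neutrality argument may help. This is a minimal sketch with assumed Cobb-Douglas preferences ui = xi^(1-α)·G^α and illustrative incomes; it is not from the original notes, and it relies on both individuals remaining contributors after the transfer.

```python
import numpy as np

# Warr neutrality sketch (illustrative assumptions): with u_i = x_i**(1-alpha) * G**alpha
# the interior best response is g_i = alpha*m_i - (1-alpha)*g_j, so an interior
# Nash equilibrium solves a 2x2 linear system in the contributions (g_1, g_2).

alpha = 0.5

def nash_contributions(m1, m2):
    A = np.array([[1.0, 1.0 - alpha],
                  [1.0 - alpha, 1.0]])
    b = alpha * np.array([m1, m2])
    g = np.linalg.solve(A, b)
    assert (g > 0).all(), "transfer too large: someone stops contributing"
    return g

for incomes in [(50.0, 50.0), (60.0, 40.0)]:      # transfer $10 from 2 to 1
    g = nash_contributions(*incomes)
    x = np.array(incomes) - g                      # private consumption
    print(incomes, "g =", g.round(2), "G =", round(g.sum(), 2), "x =", x.round(2))

# Total provision G and each person's private consumption are unchanged by the
# transfer; only the individual contributions move, dollar for dollar.
```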

Armed with this conclusion, return to Figure 5. Suppose now we redistribute from individual 2 to individual 1. The indifference maps for the two individuals, and point A, move down along a 45° line. As they do so the tangency going through point A follows. When point A reaches point B, the tangency as drawn goes through the origin. Hence if the re-distribution succeeds in moving point A to point B (or more precisely, if the re-distribution ensures that the tangency goes through the origin), then the specified Pareto optimum can be achieved as a Lindahl equilibrium. Since this will always be possible no matter which point we choose on the line of Pareto optima AC, the second welfare theorem is proved for this sort of economy. To summarise:

Theorem 2
Any specified Pareto optimum can be achieved as a Lindahl equilibrium.

Paradise regained, it seems. However, things can’t be as easy as this, and although the Lindahl mechanism does indeed have a technical equivalence to the competitive mechanism, there are, even abstracting from the costs of operating the system, a number of serious problems it faces in being implemented. We consider these in the next section.


5. The Incentive Problem

There are two points to make. One is rather technical, and just worth noting. The other is easier to understand and highlights a problem we shall take up later on in the module.

The (Non-shrinking) Core

In examining competitive equilibrium, mathematical economists developed the idea of the core. The concept itself is not too difficult to grasp. Suppose any set of agents was able to make any set of binding contracts they liked amongst themselves. What sort of allocation would result? The idea of the core results from the following observation. Suppose some allocation, call it A, is about to be realised as a result of a set of binding contracts individuals are about to make with one another. Suppose, however, that there exists a sub-set of individuals who can make a set of contracts amongst themselves that ensures they will all be better off than in A regardless of what the rest of the community does. In this case we would expect this subset to sign the relevant set of contracts. Allocation A would then not be realised. In thinking about what set of contracts people might agree upon, we should therefore require that, for any set of contracts or allocation B to be a candidate for the final outcome for the economy, no such coalition exists. That is, there should be no group of individuals who, by making a suitable set of contracts amongst themselves, can make themselves better off than they would be at B.

The set of allocations/contracts with this property is called the core of an economy. The interest of the core lies in its absence of any institutional detail, apart from the enforcement of contracts. If competitive allocations all lie inside the core we would expect them to have some sort of stability property: no group of individuals will be able to make alternative arrangements of mutual benefit to themselves. On the other hand, if allocations other than competitive ones lie in the core, then outcomes other than the competitive ones are conceivable. So, given that people can make any set of contracts they like, how do allocations in the core compare to the competitive allocations?

Using a box diagram it is easy to show that not all core allocations are competitive (starting from an initial endowment of goods). However, in 1963 Debreu and Scarf proved a result that had been suspected for some time. Take a private goods economy. As the number of individuals in the economy increases to infinity, the set of core allocations shrinks to the set of competitive allocations.

Given that the Lindahl mechanism has an analogous role to the competitive mechanism for a public goods economy, it is natural to ask whether the same results hold. The answer is simple:

• Lindahl Equilibria allocations are in the core
• In general the core contains allocations other than Lindahl allocations
• The core does not shrink to the set of Lindahl allocations as community size expands

The analogy therefore does not go through completely, because of the last proposition. Intuitively, this can be understood in the following way. To eliminate an allocation from the core, there must exist a group of individuals who can make themselves better off regardless of what others in the community might do. In a private goods economy, the rest of the community can attempt to force some allocation on the “rogue” subgroup by refusing to trade with them, but this is the limit of what they can do. With a public good economy, the rest of the community has an extra weapon: they can refuse to supply any public goods (remember the rogue group would otherwise still benefit from any provision by the rest of the community). The extra sanctions available mean that other allocations are possible (think for example of a Wicksell equilibrium that is not a Lindahl equilibrium).

This subsection is more in the nature of a footnote. We now turn to a more important point (for our purposes).

The Incentive Problem

Recall the way the Lindahl process works. People are asked to say how much of a public good they might want given the tax price (or Lindahl price) they face. To achieve the Pareto optimal outcome we must suppose that individuals are honest about how much G they want at each stage of the process. In this way the individual demands for the public good can be traced out accurately. But how plausible is it that individuals will reply honestly? The answer is: not very. Figure 6 illustrates.

Here we reproduce Figure 4 with the demand functions for individuals A and B. The Lindahl equilibrium is marked L, and individual A’s indifference curve at L is also drawn in. Remember that the direction of preference for A on the diagram is downwards: other things being equal he wants to pay lower taxes. We can suppose individual B is being honest, but this is not needed; all we need is that B’s apparent demand curve is upward sloping, as drawn. Will A be honest? Suppose A responds to the government’s questions dishonestly, revealing the dotted line marked DA’ as his demand curve. If so the Lindahl process will end up at point Q. Given B’s declared demand curve, A has maximised utility, reaching ICA’. For A, utility rises as a result of his dishonesty. It is also worth bearing in mind that A is being dishonest about something only he knows, so there is no danger of being caught out!

The conclusion is that there is no reason, apart from self-imposed morality, for individuals to behave honestly. This being so, we would expect people to understate their preference for the public good (compare the false and true demand curves for A on Figure 6), and we would expect that if anything like a Lindahl process were ever used it would lead to under-provision of the public good.
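To see the incentive at work numerically, here is a minimal sketch using the same illustrative quasi-linear economy as in the earlier example (uA = aA·ln(G) + xA, unit cost of G); the parameter values are assumptions, not anything in the original notes. If A reports âA rather than the true aA, a mechanism run on the reports delivers G = âA + aB and charges A the tax bill tA·G = âA, so A's true payoff is aA·ln(âA + aB) + mA − âA.

```python
import numpy as np

# Incentive to misreport in a Lindahl-style process (illustrative assumptions).
a_A, a_B, m_A = 3.0, 1.0, 10.0          # true preference parameters and A's income

def true_utility_of_A(report):
    G = report + a_B                     # quantity the mechanism provides
    tax_A = report / (report + a_B) * G  # A's tax bill = tax share times G = report
    return a_A * np.log(G) + m_A - tax_A

reports = np.linspace(0.5, 3.5, 301)
payoffs = [true_utility_of_A(r) for r in reports]
best = reports[int(np.argmax(payoffs))]
print("true a_A =", a_A, "  utility-maximising report ≈", round(best, 2))
# The best report is about a_A - a_B = 2.0, below the true value of 3.0:
# A gains by understating his demand, just as Figure 6 suggests.
```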


6. Conclusion

The Lindahl mechanism can be looked at in a number of ways. In these notes we have focused on its role as the public good analogue of the competitive mechanism for private goods. As such, it provides a notional ideal against which actual allocation mechanisms can be measured. Although rather abstract, there are a number of issues in taxation and political economy that it highlights.

Public Attitudes to Taxation
One key problem in determining the optimal level of government expenditure is the views the public or, from the politicians’ point of view, the electorate has on public goods. If say the question is about an expansion of health or education expenditure, what does the public want? In fact a common feature of opinion surveys is a form of schizophrenia on taxation. People value health, education etc., but are somewhat cagey about the question of taxation to pay for extra expenditure. A common response is: “Yes, I think extra expenditure on health is a good idea”. However, when asked about how this should be paid for, it is not uncommon to get the reply that “the rich” (i.e. someone else) should pay for it. This is unhelpful in working out what the optimum provision should be. Even with benign politicians, if we think the political process does in some remote way resemble the Lindahl model then there are going to be inefficiencies. The cause in this case is not dishonest politicians, but a dishonest electorate.


Note one key feature of the Lindahl model: government expenditure is linked directly to taxation. The only way you would get extra health-care expenditure in a Lindahl world is through being willing to pay for it, just as in an “ordinary” market.

Hypothecated Taxation
One proposal to overcome this problem is to hypothecate taxes. That is, to assign certain tax revenues to certain public expenditures. This is not common in the UK. Possible examples are the revenue from the road fund (car licence) which is supposed to go on road maintenance etc., and national insurance payments which finance the state pension and other social insurance benefits (job seekers’ benefit). In neither case is the link taken very seriously. More significantly, in the USA local elections are held to determine whether say a rise in sales tax should be enacted so as to finance extra expenditure on schools.

Whilst hypothecation is some way from a true Lindahl system, it is a step in that direction, and shares the key feature that tax and expenditure decisions are linked.

Implicit Lindahl Prices
Here we have not explored the political economy aspects of the Lindahl model in any detail. They are in any case a little diffuse. However, one insight from our analysis is worth mentioning. A key feature of most political systems is lobbying by various interest groups. This can take various forms, but often the lobbying is for some form of public provision of a good or service (which often does have some public characteristic). What determines who lobbies, and what sort of cause would they lobby for?

The Lindahl model gives us a clue as to where to start. If the government increases expenditure on say health, it will raise taxes to do so, or in Gordon Brown’s case national insurance contributions (usually taken to be a tax in practice). The point is that implicitly there is a price each person pays for the extra health expenditure. How does the net marginal benefit compare across individuals? Are there consistent differences between individuals we might expect, based perhaps on income levels?

There are many issues here. The Lindahl model does provide a basis for understanding the process of lobbying in democratic systems. Even when the system is not “democratic”, there are likely to be some albeit implicit interest groups. It would be foolish to suppose, though, that a Lindahl mechanism would end lobbying. It is true that given Lindahl prices no one would want to lobby for an extra amount of a public good. However, you do not need to pay more than casual attention to political debates to realise that much lobbying concerns taxation. A clear case of this was the fuel tax protests a few years ago, not to mention the debate over Ken Livingstone’s imposition of a road charge in London. Note, however, how the Lindahl model highlights the dishonest nature of much of this lobbying. “I want to pay lower taxes, so you (non-motorist or whoever) must pay more.” The second clause of the sentence is usually left out.

The point here is that much lobbying is for income distribution purposes. Under a Lindahl mechanism this problem emerges, as we have seen, in the incentive individuals have to understate their preference for a public good. Hillman rightly places the Lindahl model at the centre of the economist’s attempt to understand the public sector, and refers to it as providing a “consensus” solution. It is unclear, though, how far we can take this idea if people are willing to use the “democratic” process to alter income distribution in their favour.

Basic Welfare Economics (notes by Simon Vicary)

1. Preliminaries: the Role of Value Judgements

• The need for evaluative criteria.

• Positive vs Normative statements

• Positive Statements describe or attempt to describe how the world is

• Normative statements are statements of value. They can be aesthetic (beautiful) or ethical (ought)

• Hume's Law: “One cannot derive an ‘ought’ from an ‘is’”

The first section of Book Three of Hume’s Treatise of Human Nature (1740) is entitled “Moral distinctions not derived from reason”. This section concludes with the following:

“In every system of morality which I have hitherto met with, I have always remark'd, that the author proceeds for some time in the ordinary way of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when of a sudden I am surpriz'd to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is, however, of the last consequence. For as this ought or ought not, expresses some new relation or affirmation, 'tis necessary that it should be observ'd and explain'd; and at the same time that a reason should be given, for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it. But as authors do not commonly use this precaution, I shall presume to recommend it to the readers; and am persuaded that this small attention would subvert all the vulgar systems of morality, and let us see, that the distinction of vice and virtue is not founded merely on the relations of objects, nor is perceiv'd by reason.”

• Value judgements are what we impose;

• No "objective" justification for welfare/policy statements;

• Paretian approach “weak value judgements”


2. Paretian Welfare Economics

(i) Individualism: Social welfare is based solely on the welfare of the individuals in society
(ii) Non-Paternalism: the individual is the best judge of his or her welfare
(iii) Pareto Improvement: if in moving from state A to B one individual is better off and no one is worse off then state B should be chosen

Note that utilitarianism, in the form that recommends the maximisation of the sum of individual utilities (total community happiness) conforms to these axioms. In this sense it is a special case of Paretianism.

• These value judgements are not trivial.
• Maybe worth seeing where they lead us…. The philosopher Rawls describes this process of finding out where value judgements lead as a process of trying to achieve “reflective equilibrium”.


3. Exchange and the Edgeworth-Bowley Box Diagram

• The 2x2x2 economy: not as restrictive as it might appear
• The Edgeworth-Bowley Box Diagram
• Exchange in the Edgeworth-Bowley Box Diagram
• The efficient distribution of goods
• MRSxya = MRSxyb
• The Set of Pareto Optima
• This is shown in Figure 1 (lecture summary)


4. Walras’ Law and Competitive Equilibrium

(Skip this section)

Consider an exchange economy. There are m commodities and n agents. Each agent i has an endowment of the commodities, ωi = (ωi1, ωi2, ……., ωim), and solves the following utility maximisation problem:

max ui(xi1, xi2, ……., xim)

subject to:

Σj pj·xij ≤ Σj pj·ωij
As you may recall from earlier work, the solution to this problem takes the form of the demand functions for the m commodities:

xij = xij(p1, p2, ……., pm)

for j = 1, 2, ……., m and i = 1, 2, ……., n

where p = (p1, p2, ……., pm) is the vector of commodity prices.
Note that these demand functions are the gross demands for each commodity. The net demand or excess demand for j by individual i is given by:

zij = xij(p1, ……., pm) − ωij

The aggregate net demand/excess demand for j is given by:

zj(p1, ……., pm) = Σi zij
This plays an important role in what follows although we will keep the argument fairly informal. In a single market we have equilibrium whenever demand is equal to supply. For a general equilibrium (the whole economy) we simply extend this idea and require that demand equals supply in each and every market. Put another way, excess demand should be zero for all markets. Put formally:

∀ j, zj(p1, p2, …….., pm) = 0

(This idea can be extended to allow for zero prices and excess supply, but we will ignore this technical issue here). We can talk therefore of a competitive equilibrium. This has a rather precise definition that we will need later. In essence, though, it is simple enough. For an allocation to be a competitive equilibrium we require the following properties:

• Individual firms and consumers face a given set of prices
• Given their endowment of goods and services all consumers have maximised their utility subject to their budget constraint (chosen the best of the available consumption bundles)
• Given the prices faced, all firms have maximised their profits. That is, the technology to which they have access does not enable them to increase their profit beyond what they have achieved
• Demand equals supply for all goods and services. This is the last condition.

In a pure exchange economy everything relevant for finding and analysing an equilibrium is contained within the excess demand functions. With production, matters are a little more complicated. When we get more formal we will keep to an exchange economy, but the key results and ideas we examine carry through to a production economy. One such is a key feature of any general equilibrium system, and one that is widely applied in economic theory:

Walras’ Law
The value of total aggregate excess demand across all commodities is identically equal to zero:

Σj pj·zj(p1, ……., pm) ≡ 0

To prove this note that from the budget constraint for individual i we must have:

Σj pj·xij = Σj pj·ωij
from which it is easy to see that:

Σj pj·(xij − ωij) = Σj pj·zij = 0

Now sum this equation over all n individuals:

Σi Σj pj·zij = 0

Now change the order of summation so that we first sum over individuals rather than commodities:

Σj Σi pj·zij = Σj pj·(Σi zij) = 0

Now consider the term:

pj·Σi zij = pj·zj

This is the value of total excess demand for commodity j. Our last equation therefore tells us that the sum of these values across all commodities is equal to 0. This is Walras’ Law. We can write this as:

Σj pj·zj(p1, ……., pm) ≡ 0
There are two important consequences of Walras’ Law.

• First suppose that we have achieved equilibrium in the first m – 1 markets: zj = 0 for j = 1, 2, ……., m – 1. In this case it must also be true that we have equilibrium in the mth market as well (as long as pm > 0).
• Recall that demand functions are homogeneous of degree zero in prices and income (here only prices really matter). Suppose therefore that we have market equilibrium and divide each price by p1, the equilibrium price for commodity one. In this case it must follow that if:

zj(p1, p2, ……., pm) = 0 for all j

then

zj(1, p2/p1, ……., pm/p1) = 0 for all j
In other words if we are trying to determine equilibrium prices in any general equilibrium system then we cannot uniquely determine the absolute value of prices. We can only hope to determine relative prices uniquely. In a way this helps us in our calculations, because what we can do is to take one good as the numeraire, by setting its price equal to unity (one), and solving for all prices relative to this price. Essentially this is what we see in the final equation.
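A small numerical check may make Walras’ Law more tangible. The sketch below is my own illustration (the Cobb-Douglas budget shares and endowments are assumed, not taken from the notes): it computes aggregate excess demands at an arbitrary, non-equilibrium price vector and confirms that their total value is zero.

```python
import numpy as np

# Walras' Law check (illustrative assumptions): two agents, three goods,
# Cobb-Douglas demands x_ij = alpha_ij * (value of i's endowment) / p_j.

alpha = np.array([[0.2, 0.5, 0.3],      # agent 1's budget shares (sum to 1)
                  [0.6, 0.1, 0.3]])     # agent 2's budget shares (sum to 1)
omega = np.array([[1.0, 2.0, 3.0],      # endowments, agents x goods
                  [4.0, 1.0, 0.5]])

p = np.array([1.0, 2.5, 0.7])           # an arbitrary positive price vector

income = omega @ p                       # value of each agent's endowment
demand = alpha * income[:, None] / p     # gross demands x_ij
z = (demand - omega).sum(axis=0)         # aggregate excess demand z_j

print("excess demands z:", z.round(3))          # individually non-zero...
print("value p.z:", round(float(p @ z), 10))    # ...but p.z is (numerically) zero
```

Because each agent spends exactly the value of their endowment, the value of each individual’s excess demand is zero, and so therefore is the aggregate, whatever the prices happen to be.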


5. Production and Full Pareto Optimality Conditions

We take a simple economy in which production takes place. This is enough to illustrate the key points we want to make. We assume a 2 by 2 by 2 economy: two individuals; two goods; and two factors of production. We also assume that the supply of factors of production to the economy as a whole is fixed.

To start with we need to make a simple point about production. We do this using an adaptation of the standard box diagram we have seen before. Look at Figure 2. Here we have two industries, x and y, the outputs for which are produced using two factors of production L and K. Clearly the economy could not be in a Pareto optimum if it were possible to increase the output of one industry whilst at the same time maintaining the output of the other industry. How can we see such points on our diagram?

A moment’s reflection should convince you that the reasoning we apply to this diagram is exactly the same as that we applied to our original box diagram. Individuals are replaced by industries, indifference curves by isoquants, and commodities by factors of production. The condition for Pareto optimality on this diagram is graphically the same as before. If isoquants are tangential to one another then it is impossible to increase the output of x without at the same time lowering the output of y. Hence tangency of isoquants is a condition for Pareto optimality.

The slope of an isoquant has a number of names. Here we call it the marginal rate of technical substitution of factor L for factor K. It measures the amount of factor K we need to replace a given loss of L input if output is to remain the same. In symbols:

MRTSLK = −ΔK/ΔL (holding output constant) = MPL/MPK
MPi stands for the marginal product of factor i in the industry, and justification of the last equality is exactly analogous to the justification of the statement that the MRS is equal to the ratio of marginal utilities. Let us suppose this equality is met. It follows that we can trace out a function relating y to x, such that given any value of x the output of y is maximised. This function if drawn in (x, y) space is called the production possibility curve. Bear in mind one important point when looking at a production possibility curve: it is drawn under the assumption that this condition is met. Put another way, it assumes that the economy is efficient in production (Here this means no more than that it is impossible to increase the output of any good without at the same time lowering the output of some other good.)
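As an aside, the sketch below traces out such a production possibility curve numerically for assumed Cobb-Douglas technologies (the functional forms, exponents and factor supplies are illustrative, not from the notes). For each allocation of labour to industry x it finds the allocation of capital that equalises the two MRTS values, and reports the resulting efficient output pair.

```python
import numpy as np

# Tracing the efficient (MRTS-equalising) allocations for
#   x = Lx**a * Kx**(1-a),   y = Ly**b * Ky**(1-b)
# with fixed factor supplies. For Cobb-Douglas, MRTS_LK = (labour share / capital share)*(K/L),
# so the condition MRTS_x = MRTS_y is linear in Kx once Lx is fixed.

a, b = 0.5, 0.3                 # assumed labour exponents in industries x and y
L_bar, K_bar = 100.0, 50.0      # assumed total factor supplies

for Lx in np.linspace(5.0, 95.0, 7):
    Ly = L_bar - Lx
    c = (a / (1 - a)) / Lx              # MRTS_x = c * Kx
    d = (b / (1 - b)) / Ly              # MRTS_y = d * (K_bar - Kx)
    Kx = d * K_bar / (c + d)            # solves c*Kx = d*(K_bar - Kx)
    Ky = K_bar - Kx
    x_out = Lx**a * Kx**(1 - a)
    y_out = Ly**b * Ky**(1 - b)
    print(f"Lx={Lx:5.1f}  Kx={Kx:6.2f}  x={x_out:6.2f}  y={y_out:6.2f}")

# As labour (and, efficiently, capital) moves into x, output of x rises and output
# of y falls: the points traced out lie on the production possibility curve.
```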

We now turn to a consideration of the full conditions for Pareto efficiency in our economy. Figure 3 shows a production possibility curve, AB. Total community output is at point 0B, thus we assume the economy is productively efficient. An Edgeworth-Bowley Box diagram is inscribed below and to the left of this point, showing how the total amount of x and y produced can be divided up between the two agents A and B. At point P we have equality of the MRS for each of our two individuals. Is this enough to ensure overall Pareto optimality?

Unfortunately the answer is no. As things stand at the moment we have

• Efficiency in exchange
• Efficiency in production

However, consider now the slope of the production possibility curve, the Marginal Rate of Transformation (MRTxy = −Δy/Δx along the production possibility curve).

Suppose the MRT = 2 and the (common) MRS equals 1. A moment’s thought (albeit a slightly tricky one, as we have to think in terms of ratios) should convince you that the economy can gain a Pareto improvement by shifting resources from the x industry to the y industry. Suppose we lower output of x by one unit. In this case the MRT tells us that we will enjoy an increase of 2 in the output of y, assuming we keep production efficiency. If we lower the output of x by one unit, then one of our individuals will lose a unit of x in their consumption. The MRS condition tells us that in order to compensate our individual for this loss we must give him/her one unit of y. However, we actually have two extra units of y to distribute to the individuals. Thus we can compensate our individual for the loss of x and then give the remaining unit of y to one or other of our individuals to make them better off. Thus somebody can gain, and no one loses. Our original position could not have been Pareto optimal.

This argument can be repeated for any situation in which MRTxy ≠ MRSxy. Hence to the two conditions for Pareto optimality we already have we must add:

MRTxy = MRSxy

If we allow for variable factor supply we would have to add more conditions (Varian discusses this point, and indicates what these conditions involve), but for now these conditions should do:

• Efficiency in exchange
• Efficiency in production
• MRTxy = MRSxy Overall output efficiency condition

So far our discussion has been purely technical. We have said little about how these conditions can be achieved. We now turn to this point.


6. Pareto Optimality and Competitive Equilibrium

The Pareto Optimality conditions are simply conditions for a Pareto optimum, nothing more. They give no institutional detail and as such say nothing about how a Pareto optimum might be achieved. However, if we recall our basic microeconomic theory, there are some hints as to how this could be done.

Consider first the condition:

MRSxyA = MRSxyB

Suppose x and y are purchased in a market, and suppose that both individuals pay the same prices for the goods. We know from utility maximisation that at the consumer optimum:

MRSxy = px/py
Thus if both A and B face the same prices utility maximisation ensures that this particular condition for Pareto optimality is met.

Similar observations apply to the second condition:

MRTSLKx = MRTSLKy

Simply assume producers have to buy inputs in a competitive factor market. Suppose the price of factor L is w (the wage rate), and of factor K, r (the rental on capital). Cost minimising firms will choose a technique of production such that:

MRTSLK = w/r
Thus if firms all face the same factor prices, cost minimisation ensures that marginal rates of technical substitution are equalised across sectors of the economy. Hence the second condition for Pareto optimality is met. The economy must be producing somewhere on its production possibility curve.

Overall output efficiency (MRTxy = MRSxy) is less easy to explain, although the underlying principle is the same. The tricky part of the argument is to sort out the value of the MRTxy. If you are interested here it is.

Suppose we decide we want to lower the output of x and raise the output of y, and that we will do this by transferring an amount ΔL of factor L from the x industry to the y industry.

The loss in x and the gain in y will be given by:

Δx = −MPLx·ΔL  and  Δy = MPLy·ΔL

It follows from this that:

Δy/Δx = −MPLy/MPLx
The minus sign reminds us that x and y must vary inversely.

By applying the same argument to factor K we find:

Δy/Δx = −MPKy/MPKx

The question arises as to whether the ratio of the output changes will be the same for the two factors. The answer to this question is yes, if the second condition for Pareto optimality has been met. Recall that this tells us that:

MRTSLKx = MRTSLKy

Now, by using an argument similar to the one by which we get the slope of an indifference curve, it turns out that:

MRTSLKx = MPLx/MPKx

and

MRTSLKy = MPLy/MPKy
Given that the MRTS is the same in the x and y sectors, the ratio of the change in y to the change in x will be the same regardless of whether it is L or K that is being transferred. Returning now to the equations for Δy/Δx, note that this term is nothing other than the (discrete approximation to the) MRTxy. Hence we can write:

MRTxy = −Δy/Δx = MPLy/MPLx = MPKy/MPKx
Now recall from price theory that:

MCx = w/MPLx

and

MCy = w/MPLy

It follows that:

MRTxy = MPLy/MPLx = (w/MCy)/(w/MCx) = MCx/MCy
Our conclusion is that the marginal rate of transformation is equal to the ratio of marginal costs. Armed with this conclusion we can now show how prices might guide the economy to a Pareto optimum.
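Before doing so, a quick arithmetic check of the marginal-cost conclusion, with made-up numbers, may help.

```python
# Illustrative numbers only: w = 8, marginal product of labour equal to 2 in the
# x industry and 4 in the y industry. Then MC_x = w/MP_L(x) = 4 and
# MC_y = w/MP_L(y) = 2, so MC_x/MC_y = 2, which matches MRT_xy = MP_L(y)/MP_L(x) = 2.

w = 8.0
MPL_x, MPL_y = 2.0, 4.0
MC_x, MC_y = w / MPL_x, w / MPL_y
print("MC_x/MC_y =", MC_x / MC_y, "   MRT_xy =", MPL_y / MPL_x)
```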

Suppose that the prices consumers pay are the same as those received by firms (suppliers). We know that under perfect competition profit maximising firms equate price to marginal cost. Hence as a result of profit maximisation by firms we can write:

px/py = MCx/MCy = MRTxy
We also know that utility maximising consumers will choose a consumption bundle such that:

MRSxy = px/py
Our final conclusion is that if we have perfect competition with profit maximising firms and utility maximising consumers then our third condition for Pareto optimality will be met. It appears therefore that there is an intimate relationship between prices in perfect competition and Pareto optimality. In the next section of the lecture we explore more precisely the exact nature of this relationship.


7. The Fundamental Theorems of Welfare Economics

The argument of the last section was suggestive, and points the way to two of the most important results in economic theory: the Fundamental Theorems of Welfare Economics.

Theorem 1
Any competitive equilibrium is also a Pareto Optimum.

Theorem 2
Any given Pareto optimum can be realised as a Competitive Equilibrium.

Varian (p522-523) gives a proof of Theorem 1 for a simple exchange economy. The proof for more complicated economies is similar, and carries through to economies with production. For an exchange economy the argument goes as follows:

• Start with a competitive equilibrium, in which all agents choose an optimal point in their budget set
• Suppose the allocation is not Pareto optimal. This means that there is a Pareto improvement available to the economy with its existing endowment of goods and services
• As some agents are strictly better off in the Pareto improvement, their consumption bundles in that state cannot have been affordable at the prices realised in the competitive equilibrium: those bundles must cost more than the value of those agents’ consumption in the equilibrium
• Some individuals may be indifferent as between their competitive consumption and their consumption in the Pareto improvement. Under monotonicity of preferences this means that the value (at the equilibrium prices) of their consumption in the Pareto improvement must be no lower than it is in the competitive equilibrium
• Consequently at the prices in the competitive equilibrium the total value of consumption in the Pareto improvement must be greater than in the competitive equilibrium
• But the total value of consumption in both the competitive equilibrium and the Pareto improvement must equal the value of the economy’s endowment of goods and services
• The last two points contradict one another. This contradiction is a consequence of assuming that we had a Pareto improvement over a competitive equilibrium. Hence this assumption is false. Any competitive equilibrium is also a Pareto optimum, and the Theorem is proved Q.E.D.


There are a couple of points to note about this theorem.

o The first can be verified by checking through the argument again. The number of assumptions employed is very limited. In fact there is only one special one (over and above standard assumptions like utility maximisation): monotonicity. This is often weakened to a “better point” assumption, but otherwise we do not even need to make special assumptions like convexity of preferences. Partly this is because our starting point of competitive equilibrium already places some structure on the economy we are dealing with, but nevertheless technically what is needed to prove the theorem remains very limited.
o Secondly, although the Theorem tells us that a competitive equilibrium is Pareto optimal, it does not tell us anything about the distribution of income. Indeed, a Pareto optimum is quite possible if one individual has virtually all the wealth in an economy, and the rest are left with subsistence wages. The actual competitive equilibrium we end up with may not be attractive at all. For this reason the First Fundamental Theorem of Welfare Economics needs to be supplemented with its companion, the Second Theorem.

This theorem is a good deal more difficult in its details, although the principle underlying the proof is simple enough to grasp. The problem lies in the fact that we start from a Pareto optimum. This has as such no institutional detail. In particular, there are no prices to work with. The strategy of the proof is as follows:

• First establish a set of prices. Under a standard set of assumptions, these prices correspond to the prices discussed in section 6 of the lecture
• Show that at these prices firms maximise profits by choosing the input output combinations in the Pareto optimum
• Show that by some suitable choice of consumer endowments consumers will choose the consumption bundles given in the Pareto optimum
• Equality of demand and supply is usually established within the Pareto optimum itself in that total consumption has to equal total endowment (exchange economy), or that total consumption must be feasible given the endowments and technology of the economy. Hence once the last two points have been established, the equality of supply and demand in each market is automatically established

Many more assumptions are needed to prove this second theorem, but the key technical assumption turns out to be convexity both in preferences and in technology (if we are dealing with a production economy). The role of convexity is to establish prices from which the rest of the proof follows. Varian shows how the theorem works in a box diagram, and an analogous argument is illustrated in Figure 4. Point P is the Pareto optimum on the production possibility curve AB. The set of (total) consumption points that can achieve a Pareto improvement on P is shown by the shaded area above P. If individual preferences are convex, then this set is also convex, as illustrated. Convexity in production (either diminishing returns to scale as Varian discusses, or constant returns with differing factor intensities as assumed by trade theorists) results in a production possibility curve as illustrated, with the set of feasible outputs (consumption levels) being convex. If in these circumstances point P is a Pareto optimum then the two sets do not overlap, and in this case we can draw a straight line separating them (MN). This idea carries over to any number of goods/factors (technically it is a theorem on separating hyper-planes). The slope of the line MN on the diagram gives us a set of relative prices. Using these prices we need to show that point P arises from profit maximisation by firms and, with suitable endowments, utility maximisation by individuals. Doing this completes the proof of the Theorem.

The consequences of this theorem are far reaching as far as the competitive mechanism is concerned. We can in fact specify any Pareto optimum we please, incorporating any notion of fairness or justice in income distribution we like. Regardless of what we want, the competitive mechanism will be able, given suitable initial endowments of goods and factors, to deliver the Pareto optimum we want.


8. Underlying Assumptions

I have mentioned the technical assumptions (monotonicity, convexity) needed to deliver the Fundamental Theorems. Before discussing their significance, it might be useful to list the implicit assumptions they employ:

• Perfect Competition: actually this isn’t really an implicit assumption, as it was assumed in both theorems that all agents were price takers. However, it needs to be stressed that competitive markets are required for the fundamental theorems to be valid.
• Egoism: no individual’s utility is influenced by the utility of other agents (people are neither benevolent nor malevolent).
• No externalities: individual utility is affected neither by the consumption chosen by any other individual, nor by the input-output combination chosen by any firm. Similarly no firm’s output is affected by consumption choices per se of individuals or input-output combinations chosen by any other firm. In short, individual utility is affected only by individual consumption, and firm output only by firm input.
• No problems with uncertainty: It is simplest to begin with to assume no uncertainty. The theorems can actually deal with uncertainty in some contexts, but this raises complicated issues best put aside for the moment (that is, in this module).


9. Comments on the Fundamental Theorems

The two fundamental theorems of welfare economics taken together establish one of the most important and profound results in economic theory. With any such result, though, it is important to appreciate what the theorems tell us and what they do not tell us.

A Bit of History
The idea of the Fundamental Theorems is an old one, going back at least to Adam Smith and his “invisible hand”:

“Every individual...generally, indeed, neither intends to promote the public interest, nor knows how much he is promoting it. By preferring the support of domestic to that of foreign industry he intends only his own security; and by directing that industry in such a manner as its produce may be of the greatest value, he intends only his own gain, and he is in this, as in many other cases, led by an invisible hand to promote an end which was no part of his intention.”
The Wealth of Nations, Book IV Chapter II

The key idea here is that by pursuing one’s own interest (“own gain”) an individual achieves a desirable social end (the “public interest”) which was “no part of his intention”. This is what happens in the Fundamental Theorems. Each individual maximises his or her own utility, and each firm maximises profits. No one is concerned with the public good, but under the assumptions the outcome is a desirable one. The quotation, however, shows that Smith seems to have had in mind something slightly different, and perhaps a little more restricted, than what we now understand by the term “invisible hand”.

The general idea though is an old one, but if we look at Smith’s quotation, and consider more generally the understanding people took from his arguments in the early 19th century (and bear in mind that The Wealth of Nations was an extremely influential book), there are some loose ends. What exactly do we mean by the “public interest”, and what exactly do we mean by “his own gain”? Kenneth Arrow in 1951 was the first person to formulate and prove the theorems in their modern form, and his great contribution was to make precise (at least in one sense) the terms Smith used. This was an important step in that by making the ideas precise it is easier to see both why the idea works, and the circumstances under which it does work. It also, of course, makes it easier to see when the invisible hand might fail to achieve a fully desirable outcome.

Allocation and Distribution
One aspect of the theorems needs stressing. Suppose we adopt the Paretian approach to welfare economics. As a result of the fundamental theorems we can see that questions involving economic analysis can in principle be split into allocation and distribution issues. Hence if we suppose the distribution of income to be basically “fair”, then the competitive mechanism ensures that a Pareto optimum is achieved. The general idea then would be to make the economy more like the “competitive mechanism” of the theorems, and not to worry about the distribution of income. Alternatively, suppose we suspect that the economy does not work well, and we decide to improve matters by adopting a more competitive mechanism. It is not a sensible argument against such reform that the distribution of income would be “unfair”. By appropriate policies it is in principle possible to correct for unfairness in the distribution of income through some appropriate form of redistribution.

It is not then a good argument against the competitive mechanism that there is undue inequality. Adopting the second theorem we can see that any distribution of income is in principle compatible with this mechanism, should we wish to re-distribute appropriately. We can, then, discuss distributional questions without concerning ourselves with allocation questions.

As you may find if you persevere with public economics the following statement is something of a simplification. However, as a first approximation it is not bad, and is an important consequence of the Paretian approach to welfare economics:

Questions of how resources get allocated as between different industries (or how individual consumption can vary) can be separated from the questions of how resources are distributed amongst the individuals in the economy.

It is worth pointing out that this distinction lay behind the classic division of the functions of a modern state made by Musgrave in his classic textbook The Theory of Public Finance, 1959. These divisions were:

• The allocative branch
• The distributive branch
• The stabilisation branch

These do not of course correspond exactly to the precise divisions of the civil service. Most state activities are a mixture of these three. Nevertheless it is easy to see that some arms of the state are more closely involved with one function.

The Bank of England is clearly concerned predominantly with stabilisation, and indeed the changes in its functions enacted by Gordon Brown when he became Chancellor in 1997 made this clearer: as you know, the Bank was given the task of targeting inflation by means of adjusting interest rates (stabilisation), but its financial regulatory functions (allocation) were transferred to (what is now) the Financial Services Authority.
The Social Security Budget is obviously concerned predominantly with distribution, as its main activities involve the transfer of money to (mostly) the old, but also to the unemployed and disabled.
Defence expenditure is chiefly allocative in nature, however.

The division is not watertight, even in these clear-cut cases. Still less is this so for education and health expenditure. Nevertheless, in helping us to think clearly about public policy the Pareto-Musgrave distinction is a good one to use in the first instance.


What is the Competitive Mechanism?
The competitive mechanism of the fundamental theorems corresponds effectively to the textbook model of perfect competition that you met early on in your study of economics. The theorems themselves confirm what has been hinted at all along: namely that in some sense perfect competition is a “good thing”. However, this observation should remind us to be careful in equating the “competitive mechanism” of the theorem with the market-type mechanisms we observe around us in the so-called real world. There are a number of reasons to suspect that actual market systems may not be quite as efficient as is in principle possible. We examine this question briefly in the next section, but first let us take another dip into the history of economic thought.

One topic much debated from the 1920s to the 1940s was the question of socialism. For our purposes it is interesting to note the ideas put forward by two economists, Lange and Lerner, who claimed that an appropriate form of socialism (their form) was a better approximation to the textbook ideal of perfect competition than actual market mechanisms. They recommended not Stalinist central planning, but a form of market socialism in which managers ran socialised firms. Managers were supposed to do all the usual things managers do, such as make output and employment decisions. However, they were not allowed to set prices, this being done by the central “planner”, who would adjust the prices of all products according to the state of the market. In this way the socialised firms would act as price takers, just like firms in perfect competition, and the economy would, in principle, conform to that postulated by the fundamental theorems.

This is not the place to pursue this idea in depth. Suffice it to say that the practical application of our theorems could take a number of forms. The theorems do not necessarily provide an argument for “market” economies as we observe them. Indeed Friedman’s Capitalism and Freedom (1962), a standard text arguing for markets does not rely on these theorems. Arrow himself is counted as a market socialist and clearly had more in mind than arguing for markets when he introduced the theorems. We will pursue this question in the final section of the lecture.


10. Market Failure and Prices.

The theorems seem to make quite strong assumptions (such as universal perfect competition), and for this reason they could be taken to provide some ammunition for those who are critical of markets. Rather than jumping to any sort of policy conclusions, it might be wiser to think of our theorems as providing a framework for thinking about policy. On this argument, where conditions approximate those postulated we rely on the market to allocate resources; where conditions are far from those assumed, some form of public policy may be desirable. This approach reflects a widely used concept in economic theory: market failure.

To understand this general idea, take the second welfare theorem, and note that part of its proof involves the idea that attached implicitly to every Pareto optimum is a set of prices or, if we want to be a little less committal, opportunity cost ratios. Bator (1958) took market failure to mean a failure of agents within a market system to use these appropriate opportunity cost ratios as the prices that determined their own allocation decisions. The idea is not too difficult to grasp. We know from an earlier section that attached to each Pareto optimum there exists a set of opportunity cost ratios (MRS = MRT etc.). Each corresponds in some sense to prices that might be observed in an actual market. If the actual prices do not in fact equal these so-called shadow prices, then the economy will not achieve the desired Pareto optimum, and we have market failure. Why this might be so varies, and a quick list here will help.

Briefly, and technically, markets can fail as a result of:

• Failure by existence. It may in fact be impossible to find a set of prices that sustains a desired Pareto optimum. This is an interesting mathematical problem, but not one over which policy makers lose much sleep.
• Failure by signal. There may be prices attached to the desired Pareto optimum, but agents respond to prices in the wrong way. This can happen when there are non-convexities in the economy. The most likely possibility for this to occur is when there are economies of scale.
• Failure by incentive. Firms may recognise that at the Pareto optimal prices the profit maximising output is indeed the Pareto optimal output. However, it is possible that they make a loss when they produce this output (so it is really loss minimising). It is better not to produce at all. Firms have no incentive to choose the desired output.
• Failure by structure. In practice, firms choose price. Will they choose the desired price for the Pareto optimum? Not if they enjoy monopoly power. This is also the case with most models of oligopoly as well. The structure of the industry gives firms no incentive to set prices at the desired level. It is (at least potential) failure in this sense that underlies much of Industrial Economics.
• Failure by enforcement. This is the form of market failure we shall mostly be concerned with in this module. The broad category here is externalities. These may stem from the non-existence of proper markets for goods, perhaps through a failure to establish clear property rights. However, in some cases (those we will be mostly concerned with), it may well be impossible to do this. This is particularly so when we are dealing with public goods, to which most of the module is devoted.


The causes of market failure are attributed to some combination of the following:

• Ownership externalities
• Technical externalities
• Public good externalities