Sunday, 15 May 2011
Test-Driven Development (TDD): a critical review of the claimed advantages gained by using this technique.
Test-driven development (TDD) is a software development process concerned with the creation of concise, clean and testable code. The core principle of TDD asserts that testing should be done as part of the development process to drive the software’s progression (Beck, 2004). More specifically, tests should be created for isolated functionality prior to the implementation of code for that functionality (Erdogmus, Morisio, Torchiano, 2005). The TDD approach guides developers along a series of iterative steps to optimise the development and testing processes. The first step stipulates that a simple test must be established for an isolated requirement. Such a test will inevitably fail to compile due to the absence of production code. The process continues with the development of just enough code to enable the test to compile, producing a failing result. Once the test compiles, the production code for the requirement in question can be fully implemented so as to pass the test. The final stage of the TDD process involves refactoring both the test and production code in an attempt to reduce duplicate code and ensure that the existing design is optimal (Beck, 2004). As the code is refactored, the tests should be continually re-run to guarantee that the code continues to behave as expected. This cycle is then repeated for every required function. For each function the developer must create a test, get that test to fail, write code to pass the test and then refactor the implemented code, whilst ensuring that the test, and all previously established tests, still pass (see Figure 1.1 for a simplified diagrammatic explanation).
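To make the cycle concrete, the following minimal Python sketch (my own illustration, not drawn from the cited literature; the function name and values are hypothetical) shows a test for an isolated requirement together with the production code written to make it pass; in real TDD the test would live in its own file and be run, and fail, before the function exists:

# Hypothetical illustration of one red-green-refactor cycle.
# In real TDD the test is written first, in a separate file, and fails
# until the production code below is implemented.
import unittest

def apply_discount(price, percent):
    """Production code, written only after the test below has failed."""
    return price * (1 - percent / 100.0)

class ApplyDiscountTest(unittest.TestCase):
    def test_ten_percent_discount(self):
        # one isolated behaviour per test
        self.assertAlmostEqual(apply_discount(100.0, 10), 90.0)

if __name__ == "__main__":
    unittest.main()

The refactoring step would then tidy both the test and the production code while this test, and any previously written tests, continue to pass.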
The question naturally arises: what functionality should be tested, i.e. how complex should each test be? In answer to this, TDD dictates that tests should be as simple as possible, each focusing on a discrete behaviour, the principle being to ensure that each behaviour is tested in isolation (Astels, 2003). Accordingly, TDD implies that tests should be designed to minimise dependencies. Methods or components outside the context of the behaviour in question should not be exercised by the test. This ensures that the developed section of code behaves as expected in isolation (Janzen, Saiedian, 2008). With regards to defining the boundaries surrounding a particular test, design patterns known as test doubles (such as mocks, stubs and fakes) can be utilised to isolate the behaviour being tested, ensuring the test is kept as simple as possible (Astels, 2003).
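As a concrete sketch of a test double (again my own illustration with hypothetical names, using Python's built-in unittest.mock), the dependency on a real mail service is replaced by a mock so that only the behaviour under test is exercised:

import unittest
from unittest.mock import Mock

def send_reminder(user, mailer):
    """Behaviour under test: dispatch a reminder e-mail via the supplied mailer."""
    mailer.send(user["email"], "Reminder", "Your invoice is due.")
    return True

class SendReminderTest(unittest.TestCase):
    def test_reminder_is_sent_to_users_address(self):
        fake_mailer = Mock()  # test double standing in for the real mail service
        user = {"email": "jo@example.com"}
        self.assertTrue(send_reminder(user, fake_mailer))
        fake_mailer.send.assert_called_once_with(
            "jo@example.com", "Reminder", "Your invoice is due.")

if __name__ == "__main__":
    unittest.main()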
Thus far, this paper has considered TDD to be a development process employed at the genesis of a project to ensure the program code is testable from the bottom up. However, TDD can also be applied part way through a project, or indeed to legacy code. Under such circumstances, TDD can be utilised to great benefit, via the refactoring of existing code and the introduction of numerous function-specific tests (Linnamaa, 2008). Applying TDD to an existing project does, however, generate additional risks, particularly if the code being altered has no pre-existing testing procedures. Nevertheless, to deliver the benefits associated with TDD, this technical debt must be paid.
There are, however, a number of disadvantages associated with TDD procedures. Most notably, the use of TDD can greatly slow the development process by imposing strict testing procedures (Erdogmus, Morisio, Torchiano, 2005). Consequently, the costs associated with the project can escalate significantly due to the delay in development. Additionally, TDD places a significant burden on the developer in terms of maintaining the numerous tests, since each test will require maintenance as the code being tested develops and changes (Linnamaa, 2008). This requirement, of continuously tweaking the test code to accommodate changes in the program code, further delays the development process. Moreover, many projects evolve during the course of their development; at the beginning of a project the solution is not always foreseeable. Consequently, developers will be forced to redo tests, creating additional time delays (stackoverflow.com). Furthermore, many dependencies between classes and methods need to be broken to create testable code using TDD. For larger, more complex projects this breaking of dependencies to isolate individual test cases can be extremely difficult and may in fact add to the overall complexity of the project (Erdogmus, Morisio, Torchiano, 2005).
In summary, it is clear that there are a number of distinct drawbacks which can arise from the implementation of TDD, particularly with regard to the amount of time invested in the development phase. Nevertheless, despite these undesirable characteristics, TDD offers a logical and structured design approach, forcing developers to focus on the task at hand. Implicitly, TDD encourages developers to produce simple, maintainable code. TDD is therefore an extremely effective development style, helping to ensure the development process remains structured and agile.
APPENDIX:
Figure 1.1:
REFERENCES:
S. W. Ambler, (2008), Introduction to Test Driven Design (TDD), AgileData, accessed on the 24.11.2010, http://www.agiledata.org/essays/tdd.html
S. W. Ambler, (2007), Test-Driven Development of Relational Databases, IEEE Computer Society, Vol. 24, No. 3, p. 37 – 43.
D. Astels, (2003), Test-Driven Development: A Practical Guide, Prentice Hall PTR.
K. Beck, (2004), Test-Driven Development: By Example, Pearson Education, 5th Edition.
H. Erdogmus, M. Morisio, M. Torchiano, (2005), On the Effectiveness of the Test-First Approach to Programming, IEEE Transactions on Software Engineering, IEEE Computer Society, Vol.: 31, Iss. 3, p. 226 – 237.
D. S. Janzen, H. Saiedian, (2005), Test-Driven Development: Concepts, Taxonomy, and Future Direction, IEEE Computer Society, Vol. 38, Iss. 9, p. 43 – 50.
D. S. Janzen, H. Saiedian, (2008), Does Test-Driven Development Really Improve Software Design Quality? IEEE Software, Vol. 25, No. 2, p. 77 – 84.
L. Linnamaa, (2008), Test-Driven Development, University of Helsinki Computer Science Department, http://www.cs.helsinki.fi/u/linnamaa/linnamaa-Test-Driven-Development-final.pdf
R. C. Martin, (2007), Professionalism and Test-Driven Development, IEEE Computer Society, Vol. 24, No. 3. p. 32 - 36.
M. Natté, (2009), Introduction to Unit Testing, .Net blogs, accessed on 26.11.2010, http://martijnnatte.wordpress.com/2009/07/09/introduction-to-unit-testing/
stackoverflow.com, accessed on 26.11.2010, http://stackoverflow.com/questions/64333/disadvantages-of-test-driven-development
The cloud: whose fault is it when errors occur, and what is the way forward?
When considering a situation in which data is stored with a third party data service (in a ‘cloud’), if the data becomes lost or corrupted it can be difficult to determine who is at fault. Hence, when determining which party caused the problem it is necessary to examine each case independently.
To assess how the culprit of a data problem is identified, it is necessary to examine different potential data problem scenarios. For instance, if data were to become lost or corrupted because of an operational error, such as failing to back up the data, or a hardware failure, such as a server crash, it is clear that blame should primarily reside with the service provider, since such issues are out of the user’s control. However, it is also possible that the user could be partly responsible, if an inappropriate technical system design was in place. For instance, critical data which cannot tolerate down time should be part of an architecture which prevents down time. Thus it is possible that both the user and the provider are to blame. This was the case when the Amazon network, in particular the Elastic Compute Cloud, failed: only users with inappropriate system architectures suffered significant data problems as a result of the downtime, so both Amazon and users with inappropriate IT structures were at fault (K. Maurer, 2011). In addition, there are cases where data loss or corruption can be entirely the fault of the user; for instance, the user could directly cause data problems through the use of a problematic system architecture, or the inappropriate use of the provided services. Furthermore, it is possible that neither party is responsible for the loss or corruption of data. Consider the situation of an external phishing attack which breaks through reasonable security measures: whilst the failure occurs at the provider’s end, it is largely not the provider’s fault; both the service provider and the user are victims. Such was the case when the PlayStation Network was hacked in April, earlier this year (BBC News, 2011). Therefore, it is clear that the blame for the genesis of the problem could lie with either the user or the service provider, or possibly both. This further emphasises the need to play the blame game on a case-by-case basis.
Whilst it is not possible to draw blanket conclusions regarding who is to blame for data loss or corruption problems, it is possible to deduce where liability lies. In the majority of cases, regardless of who is to blame, the user is likely to be held responsible (M. Mowbray, 2009). This is largely due to the heavily one-sided user agreements, which typically demonstrate judicious application of disclaimers to ensure minimal responsibility is accepted (M. Mowbray, 2009). Consequently, the user will have to bear responsibility for the majority of failures, without compensation, even if they are entirely blameless. This issue is demonstrated by the recent news headlines involving Sony and the hacking of the PlayStation Network (BBC News, 2011). Although both parties were victims in this instance, since Sony cannot be held liable for the security failure (Sony is only required to take “appropriate measures” (Sony Playstation, 03.05.2011)), it is the Sony users who must bear responsibility for the failure. Further examples of responsibility falling on to users include the collapse of Linkup in 2008 (Richard Chow et al., 2009) as well as the loss of personal information by Danger in 2009, which affected millions of T-Mobile customers (J. Kincaid, 2009). In both cases, the users were held responsible despite being completely blameless. Hence, whilst it is possible in some cases to determine that the provider is to blame, it is very unlikely that they will be held responsible.
M. Mowbray, 2009, "The Fog over the Grimpen Mire: Cloud Computing and the Law", 6:1 SCRIPTed 129, http://www.law.ed.ac.uk/ahrc/script-ed/vol6-1/mowbray.asp
Richard Chow et al., 2009, Controlling Data in the Cloud: Outsourcing Computation without Outsourcing Control, Proceedings of ACM CCSW’09, November 13, www.parc.com/publication/2335/controlling-data-in-the-cloud.html
K. Maurer, 2011, Amazon’s Cloud Collapse: The Blame Game and the Future of Cloud Computing, April, http://blog.contentmanagementconnection.com/Home/32236
J. Kincaid, 2009, T-Mobile Sidekick Disaster: Danger’s Servers Crashed, And They Don’t Have a Backup, October, http://techcrunch.com/2009/10/10/t-mobile-sidekick-disaster-microsofts-servers-crashed-and-they-dont-have-a-backup/
L. H. Mills, 2009, Legal Issues Associated with Cloud Computing, Nixon Peabody attorneys at law LLP, May, http://www.secureit.com/resources/Cloud%20Computing%20Mills%20Nixon%20Peabody%205-09.pdf
BBC News Technology, 2011, Playstation outage caused by hacking attack, 25 April, http://www.bbc.co.uk/news/technology-13169518
The legal implications of data and applications being held by a third party are not well understood. What are the issues?
The third-party provision of computational and network resources for the purpose of storing data and applications comes under the umbrella term of cloud computing. Conceptually, cloud computing can be thought of as a remote computing utility; an underlying delivery mechanism to enable data and software to be accessed remotely via the internet (M. Mowbray, 2009). The theories underpinning cloud computing have become increasingly popular over recent years, supported by a larger, more general architectural shift within the computer industry towards increased flexibility, mobility and cost efficiency (R. Buyya, C. S. Yeo, S. Venugopal, 2008). However, despite significant support for the theories behind cloud computing, it has been slow to develop in practice (Richard Chow et al., 2009). The main reason for the delayed progression stems from an air of fear and uncertainty surrounding the storage of sensitive data and applications outside of the user’s control (Richard Chow et al., 2009). These concerns discourage many companies from storing their data in the ‘cloud’, impeding momentum and potentially compromising the concept of cloud computing altogether (R. Buyya, C. S. Yeo, S. Venugopal, 2008).
A key source of the concerns surrounding cloud computing is the issue of data privacy laws differing across country borders. An organisation utilising cloud computing services is likely to find its data stored in a different country to its own. The data is therefore bound by the privacy laws and jurisdiction of the country within which it is stored (M. Mowbray, 2009). Hence, in cases where data does not completely conform to these foreign laws, jurisdictional and legal disputes are likely to arise. This is clearly an unattractive factor for organisations considering whether or not to add data to the ‘cloud’.
Additionally, encompassed within this wider jurisdictional issue is the potential for foreign governments to access the data; the data is put at the mercy of the data privacy laws of the country within which it is stored (M. Mowbray, 2009). This issue is exacerbated by the fact that many cloud computing services are based in countries such as the US, where laws exist to enable government officials to access data without notifying the data owners; for example the 2001 Patriot Act (M. Mowbray, 2009). This point is illustrated by the reluctance of the French government to allow officials to use Blackberry email devices, since these devices use servers based in the US and the UK (M. Mowbray, 2009). Moreover, some regions such as the EU have stringent rules concerning the movement of data across borders (European data protection law), which creates further problems (J. Kiss, 2011). Although this issue alone is unlikely to discourage organisations from accepting cloud computing, when considered as part of the wider jurisdictional issue, it is clear to see why many organisations are reluctant to participate.
A further reason for concern about cloud computing stems from the highly one-sided nature of current user agreements. The current trend for the user agreements of companies offering cloud computing services is to offer very little in terms of assurance should data be lost or become corrupted (M. Mowbray, 2009). The aforementioned user agreements also ensure minimal liability with respect to the security of data; most simply promise ‘appropriate measures’ (Microsoft Terms of Use, 02.05.2011). This point is clearly demonstrated by the Amazon Web Services terms of use, which accept no liability “for any unauthorized access or use, corruption, deletion, destruction or loss of content or applications” (Amazon Web Services Terms of Use, 01.05.2011). This serves to discourage those considering adding data to the ‘cloud’, since little responsibility is taken by cloud service providers to ensure the safety or security of the data they maintain. Essentially, users are losing control over operational issues such as backing up data and data recovery, without receiving any guarantees regarding data safety and security from service providers (L. H. Mills, 2009).
A further issue which serves to hinder the progress of cloud computing relates to the use of subcontractors and the sharing of information. Most cloud service providers sub-contract much of their data storage for efficiency and cost-minimisation purposes (M. Mowbray, 2009). Beyond potential integration issues, this sharing of data may raise additional jurisdictional issues if subcontractors are located in different countries (M. Mowbray, 2009). This issue is amplified by the lack of user say with regards to the selection and use of subcontractors, since most cloud providers simply use stylised blanket statements contained within the terms of use; the Google terms of service state a “right for Google to make such content available to other companies, organisations or individuals with whom Google has relationships for the provision of syndicated services” (Google Terms of Service, 01.05.2011). More importantly, however, the sharing of data creates additional opportunities for the data to be lost, corrupted or stolen (Richard Chow et al., 2009). These factors serve to increase fears regarding the loss of control and further discourage organisations from accepting the ‘cloud’ as the future of data storage.
In summary, it is clear that the concerns surrounding cloud computing stem from a perceived loss of control. This loss of control is rooted in the jurisdictional issues arising from overseas data storage, coupled with heavily one-sided user agreements which fail to provide adequate reassurance as to the safety and security of the data being maintained. For third-party data storage to fully mature, ‘cloud’ providers such as Amazon and Microsoft need to bear a greater burden of responsibility in terms of the safety and security of stored data (Richard Chow et al., 2009), as this would alleviate many of the fears associated with cloud computing. As competition in cloud markets increases, this is likely to happen, with providers seeking to differentiate themselves on service quality by offering more attractive guarantees. Additionally, measures may need to be taken to regulate cloud computing, to adequately mitigate the risks users are exposed to (P. T. Jaeger, J. Lin, J. M. Grimes, 2008).
Google Terms of Service, accessed at 13:47 01.05.2011, http://www.google.com/accounts/TOS
M. Armbrust et al., 2010, A View of Cloud Computing, Communications of the ACM, April, Vol. 53, No. 4.
M. Armbrust et al., 2009, Above the clouds: A Berkeley View of Cloud Computing, February 10, University of California at Berkeley, Technical report no: UCB/EECS-2009-28, http://www.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-28.html
Richard Chow et al., 2009, Controlling Data in the Cloud: Outsourcing Computation without Outsourcing Control, Proceedings of ACM CCSW’09, November 13, www.parc.com/publication/2335/controlling-data-in-the-cloud.html
M. Mowbray, 2009, "The Fog over the Grimpen Mire: Cloud Computing and the Law", 6:1 SCRIPTed 129, http://www.law.ed.ac.uk/ahrc/script-ed/vol6-1/mowbray.asp
Microsoft Terms of Use, accessed at 17:49 02.05.2011, http://www.microsoft.com/About/Legal/EN/US/IntellectualProperty/Copyright/default.aspx
Amazon Web Services Terms of Use, accessed at 15:33 01.05.2011, http://aws.amazon.com/terms/
J. Kiss, 2011, Keeping your legal head above the cloud, January, the Guardian , accessed at 18:19 02.05.2011, http://www.guardian.co.uk/media-tech-law/cloud-computing-legal-issues
L. H. Mills, 2009, Legal Issues Associated with Cloud Computing, Nixon Peabody attorneys at law LLP, May, http://www.secureit.com/resources/Cloud%20Computing%20Mills%20Nixon%20Peabody%205-09.pdf
What responsibilities does Google take for storing your data?
The terms and conditions concerning Google Docs are encompassed within the broad terms and conditions governing the use of all Google services, and an extension document which relates solely to Google Docs (the Additional Terms of Service). Principally, Google accepts very little in terms of responsibility; the terms of service are replete with disclaimers removing the burden of responsibility from Google.
The main responsibilities adopted by Google are outlined in section 11 of the Additional Terms of Service document (which overrides section 11 of the original Terms of Service). Section 11 states that whilst copyright and ownership of submitted data are retained by the author, by submitting the data the author grants Google the ability to “reproduce, adapt, modify, translate, publish, publicly perform, publicly display and distribute” the data as Google deems appropriate (Google Additional Terms of Service, 27.04.2011). This condition grants Google significant power over the data it receives; Google is entitled to alter any data it receives to suit its own agenda. That aside, this power/licence assigns Google the responsibility of displaying and distributing the data as part of the provision of the Google service.
However, this responsibility is significantly minimized by both section 4 and section 15 of the agreement. Section 4 contains a number of clauses enabling Google to revoke its provision of service, with no justification required; “Google may stop providing the services to you or to users generally at Google’s sole discretion, without prior notice” (Google Terms of Service, 27.04.2011). Similarly, section 15 claims zero liability in the case of incomplete or unsatisfactory service provision; “Google, its subsidiaries and affiliates, and its licensors do not represent or warrant to you that: (A) your use of the service will meet your requirements, (B) your use of the services will be uninterrupted, timely, secure, or free from error” (Google Terms of Service, 27.04.2011). Hence, whilst Google assumes responsibility over the provision of its services and the display/distribution of the data it receives, Google remains uncommitted to upholding this responsibility. That said, Google does still have some liabilities regarding the service it provides, for instance UK users are partially protected by UK consumer protection laws (M. Mowbray, 2009).
As previously alluded to, the vast majority of the Google terms of service concern removing liability from Google. For instance, section 15 of the broad terms of service states that Google will not be held responsible for any loss or corruption of data. Similarly, section 15 also prevents Google from being held responsible for any negative effects induced by the use of its services, be they direct or indirect. For example, Google is not responsible for any “intangible loss”, such as “business reputation” (Google Terms of Service, 27.04.2011).
Likewise, Google makes no promises with regards to monitoring the content of the data it utilizes, nor the enforcement of the Google program policies in the case of inappropriate data. Section 8 of the Google terms of service states that Google has “no obligation to pre-screen, review, flag, filter, modify, refuse or remove any or all Content”. Furthermore, section 8 goes on to assert that responsibility for the content of data remains with the author/user. Thus, Google cannot be held liable for any inappropriate data it displays or distributes. More importantly, it should be noted that the refusal of responsibility over the content of data ensures that Google cannot be held liable should any legal action arise concerning ownership of the data. This is further achieved via the requirement that the user must confirm they possess the “rights, power and authority” necessary to submit the data in question (Google Terms of Service, 27.04.2011).
With regards to the storage and security of user data and personal information, section 7 dictates that data protection practices are governed by the Google privacy policy. The privacy policy offers very little in terms of how data will be protected, beyond the vague promise of “appropriate security measures to protect against unauthorized access to or unauthorized alteration, disclosure or destruction of data” (Google Privacy Policy, 29.04.2011). This clearly promises very little, acting as a ‘get out clause’ in the event of a security breach. Interestingly, the wording of the Google privacy policy resembles that of the Sony PlayStation Network privacy policy, which similarly promises to “take appropriate measures to protect your personal information” (Sony Playstation, 01.05.2011). The recent news headlines involving Sony and the hacking of the PlayStation Network (BBC News, 2011) therefore illustrate how little responsibility Google accepts in terms of ensuring user data and information are kept secure from unauthorized access.
A further important aspect of the terms of service when considering the responsibilities residing with Google concerns the ability to instigate changes to the terms and conditions. Section 19 of the original document dictates that changes can and will be made to the terms of service “from time to time”, after which the continued use of Google Docs is interpreted as an acceptance of the amended terms. This is clearly a highly important clause when considering the responsibilities Google acknowledge, since the sparse and limited responsibilities which Google does accept can be retracted or amended with minimal effort being made to inform users.
From an in-depth examination of the Google terms of service it is clear that Google is extremely careful to remain devoid of responsibility when considering the storage and integrity of user data. All in all, Google goes to great lengths to ensure minimal liability surrounding the data it is given; judicious application of disclaimers ensures that what little responsibility is acknowledged is rendered almost moot.
Google Terms of Service, accessed at 12:15 27.04.2011, http://www.google.com/accounts/TOS
Google Additional Terms of Service, accessed at 13:44 27.04.2011, http://www.google.com/google-d-s/intl/en/addlterms.html
Google Privacy Policy, accessed at 19:11 29.04.2011, http://www.google.com/intl/en/privacy/privacy-policy.html
Sony Playstation, accessed at 15:46 01.05.2011, http://legaldoc.dl.playstation.net/ps3-eula/psn/e/e_privacy_en.html
M Mowbray, "The Fog over the Grimpen Mire: Cloud Computing and the Law", (2009) 6:1 SCRIPTed 129, http://www.law.ed.ac.uk/ahrc/script-ed/vol6-1/mowbray.asp
BBC News Technology, 2011, Playstation outage caused by hacking attack, 25 April, http://www.bbc.co.uk/news/technology-13169518
Monday, 25 April 2011
Microeconomic Analysis - Lecture Notes
Topic 2.1: Consumer Theory
Topic 2.2: The Utility Function
Topic 2.3: Demand Analysis
Topic 2.4: Revealed Preference Theory
Topic 2.5: Measuring a Utility Curve
Topic 3.1: Production Function
Topic 3.2: Cost Minimisation
Topic 3.3: Homogeneous Production Function
Topic 4.0: Choice under Uncertainty
Topic 4.1: Decision Rules
Topic 4.2: Expected Values
Topic 4.3: Expected Utility
Topic 5.0: Theory of the Industry
Topic 5.1: Monopoly
Topic 5.2: Price Discrimination
Topic 5.3: Oligopoly
Topic 5.4: Collusive Oligopoly
Topic 6.1: Profit Maximisation
Topic 6.2: Managerial Discretion Models
Topic 7.1: Robinson-Crusoe Models
Topic 7.2: Robinson-Crusoe and Man Friday Economic Model
Topic 7.3: Several Outputs
Topic 7.4: International Trade
Topic 8.0: General Equilibrium Theory
Topic 8.1: Model Set-Up
Topic 8.2: Conditions for a Competitive Equilibrium
Topic 9.0: Welfare Economics
Topic 9.1: The Pareto Criteria
Topic 9.2: Welfare Maximisation
Topic 9.3: Welfare Properties of General Equilibrium
Topic 9.4: Criticisms of the Pareto Postulates
Topic 2.1: Consumer Theory
Neoclassical Consumer Theory (NCT)
- Usual indifference curves and utility function for year one.
Examine consumer preferences
Indifference Curves
Examine restrictions that must be placed on individual preferences in order to
derive the conventional indifference curves:
We work with bundles
of n goods,
x = (x1 , x2…, xn)
y = (y1, y2…, yn)
i) Preference Relation
Describes preferences of the representative individual
- Denoted by R (or ≥ or >=)
xRy Bundle x is at least as preferred as bundle y
xIy Indifferent between x and y
xRy and yRx
xPy x is strictly preferred to y
xRy and y NOT R x
ii) Indifference Set
For some given bundle r = (r1, r2…, rn) we define the following:
Upper Set of r: Ur = {x: xRr}
[Figure: preferences determine the indifference curves, and income and prices determine the budget constraint; together these determine choices, and hence the demand for goods. Figure: in (good 1, good 2) space, the upper set Ur, the lower set Lr and the indifference set Ir of a bundle r.]
Lower Set of r: Lr = {x: rRx}
This gives the indifference set of r: Ir = Ur ∩ Lr
Or Ir = {x: xRr and rRx}
Or Ir = {x: xIr}
In general, we have (2-good case)
iii) Restrictions on R
The question we ask is what minimal conditions must be placed on
individual preferences R such that Ir is just the conventional
indifference curve?
a. Completeness (or comparability)
For any two bundles, x and y, either the individual thinks xRy or
yRx – or possibly both
b. Transitivity
If xRy and yRz, then xRz.
This ensures the indifference curves do not cross.
c. Reflexivity
For any x, xRx
This ensures x is in its own indifference set
If (a), (b) and (c) are satisfied, then R is said to be a weak preference ordering, i.e. the bundles can be ordered from the most preferred to the least preferred with the possibility of indifference (i.e. a weak ordering).
d. Axiom of Greed (or (local) non-satiation or
monotonicity)
“More is preferred to less”
For any x and y, if xi ≥ yi for all i, and xj > yj for some j, then xPy
[Figure: in (good 1, good 2) space, the bundle r = (r1, r2) and its indifference curve Ir; by the Axiom of Greed, points above and to the right of Ir satisfy xPr, and points below and to the left satisfy rPx.]
E.g. n = 4
x = (2, 5, 7, 3) and y = (2, 3, 4, 3)
Then xPy by Axiom of Greed
Axiom of Greed is a powerful assumption with 3 main
implications:
1. The indifference set is a line (a curve, not a thick band)
Suppose the indifference set were not a line; then it would contain bundles x and y with xPy (by greed), so x NOT I y – a contradiction.
2. Indifference Set must slope downwards to the right
3. Individual must be on budget constraints
e. Continuity and Smoothness
Continuity – no breaks
MRS21 always exists
(Marginal Rate Substitution)
Smoothness – no kinks
MRS21 changes gradually
(Marginal Rate Substitution)
f. Strict Convexity
The Marginal Rate of Substitution (MRS) diminishes as we move along the indifference curve – “variety is the spice of life”
- consumers choose some of every good
We now have an indifference curve
Topic 2.2: The Utility Function
The Utility Function
i) Utility
Pleasure or satisfaction an individual gets from consumption.
For analytical purposes, it is useful to have a utility function –
summarising the information in the indifference curves.
For each bundle of goods, x, the utility function assigns a number
U = u(x), such that:
xPy ⟺ u(x) > u(y)
xIz ⟺ u(x) = u(z)
Utility, u, is measured on an ordinal number scale.
e.g. 1st, 2nd, 3rd, 4th…
i.e. no significance can be given to precise magnitudes, only to whether one utility is higher than (or the same as) another. It gives a ranking only.
NOTE: It used to be thought that utility could be measured exactly
(i.e. on a cardinal scale)
EXAMPLE:
X = {x, y, z, w}
xIyPzPw
The following utility functions are equivalent representations of
these preferences:
Utility Functions
Bundles U1 U2 U3
X 4 16 10
Y 4 16 10
Z 3 9 8
W 2 4 6
Two points to consider:
a. Utility is unique up to an order-preserving transformation (i.e. a positive monotone transformation)
e.g. U2 = (U1)², U3 = 2·U1 + 2
b. Interpersonal utility comparisons are not possible.
e.g. suppose individuals A and B have the same preference over
X, as above.
Let’s give A U1 and B U3 – this does not mean that B prefers the bundles more than A does, as we could equally have given A U2.
a. Existence of Utility Function
Any weak preference ordering (i.e. satisfying completeness,
transitivity, and reflexivity) can be represented by a utility
function, providing preferences are continuous. (See seminar
sheet one, on lexicographic preferences)
b. Quasi-concavity
If axiom of greed and strict convexity holds, the utility
function is quasi-concave
i.e. bundles of goods giving constant levels of utility will
give conventional indifference curves.
c. Differentiable
For this we need smoothness
FROM NOW ON WE USE x AND y TO DENOTE GOODS, RATHER
THAN BUNDLES
iii) Implications of Differentiability
Marginal Rate of Substitution
The utility function is U = u (x,y)
The total differential of this is:
dU = (∂u/∂x) * dx + (∂u/∂y) * dy
Along an indifference curve, utility does not change.
i.e du = 0
Impose this constraint:
0 = (∂u/∂x) * dx + (∂u/∂y) * dy
dy·(∂u/∂y) = − dx·(∂u/∂x)
dy/dx (= slope of indifference curve) = − (∂u/∂x)/(∂u/∂y)
i.e. MRSyx = − MUx / MUy
Hence, if we want to find the MRSyx (i.e. rate at which y can be
substituted for x at the marginal rate, keeping utility constant),
then we can find this from the marginal utilities.
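As a worked illustration (my own example, using a standard Cobb-Douglas utility rather than anything specified in the notes), the MRS follows directly from the two marginal utilities:

\[
u(x,y) = x^{a}y^{b} \;\Rightarrow\; MU_x = a\,x^{a-1}y^{b},\qquad MU_y = b\,x^{a}y^{b-1},
\]
\[
MRS_{yx} = -\frac{MU_x}{MU_y} = -\frac{a}{b}\cdot\frac{y}{x},
\]
which diminishes in absolute value as x rises and y falls along the indifference curve.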
Handout Notes
Optimisation
The basis for much of economic analysis is optimisation. Agents maximise (or
minimise) some objective function subject to constraints that define a feasible set of
alternatives available to the agent. The following problem illustrates the way in
which mathematics is used in economics.
Example: The Consumer
max u(x, y) subject to px x + py y ≤ M (x, y ≥ 0) (1)
x, y
x and y are choice variables whose values are pre-determined by the agent, while px
and py are parameters (ie. parametric prices) whose values are outside the agent's
control (ie. they are market determined and exogenous to the agent). In the above
problem, income M is also a parameter.
To solve this problem we form the Lagrangian L, which is a function of x, y and the
Lagrange multiplier λ:
max L(x, y, λ) = u(x, y) + λ (M - px x - py y) (x, y ≥ 0) (2)
x, y, λ
The solution to (2) solves (1). Since L is a function of more than one variable, we
solve this problem using partial differentiation:
∂L / ∂x = ∂u / ∂x - λ px = 0
∂L / ∂y = ∂u / ∂y - λ py = 0
Since ∂u / ∂x is the marginal utility of x (which we can write MUx), and ∂u / ∂y is the
marginal utility of y (which we can write MUy), the above conditions give:
MUx / px = MUy / py (= λ) (3)
This is the Principle of Equi-Marginal Returns.
The third condition ∂L / ∂λ = 0 gives the constraint px x + py y = M (4)
Solving (3) and (4) together it is possible to derive the consumer's demands for goods
x and y, denoted x* and y*. These are a function of the parameters of the model (ie. px,
py and M), so that we can derive the conventional demand functions as follows:
x* = x(px, py, M) and y* = y(px, py, M)
As well as the first-order conditions, we should check that the second-order
conditions are satisfied, to make sure we have a maximum rather than a minimum or
point of inflection (or more generally a saddle point).
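As a worked case (an illustrative example of my own, not part of the handout), solving the problem for the Cobb-Douglas utility u(x, y) = x^{1/2}y^{1/2} gives the familiar demand functions:

\begin{align*}
\mathcal{L} &= x^{1/2}y^{1/2} + \lambda\,(M - p_x x - p_y y)\\
\frac{\partial \mathcal{L}}{\partial x} &= \tfrac{1}{2}x^{-1/2}y^{1/2} - \lambda p_x = 0,\qquad
\frac{\partial \mathcal{L}}{\partial y} = \tfrac{1}{2}x^{1/2}y^{-1/2} - \lambda p_y = 0\\
&\Rightarrow\; \frac{y}{x} = \frac{p_x}{p_y}\ \text{(the equi-marginal condition)},\qquad p_x x + p_y y = M\\
&\Rightarrow\; x^{*} = \frac{M}{2p_x},\qquad y^{*} = \frac{M}{2p_y}.
\end{align*}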
Handout Notes
Mathematical Properties
(a) Sets
A set is convex if a straight line connecting any two points in the set lies entirely
within the set. More formally, a set Z is convex if for any x, y ∈ Z then {μ x + (1 - μ)
y} ∈ Z, where 0 ≤ μ ≤ 1.
[Figures: a convex set and a non-convex set.]
A set is closed if it includes the boundary, while a set is bounded if a circle can be
drawn around the set, no matter how large the circle.
[Figures: a closed (but not convex) set, and a set that is not bounded.]
(b) Functions
A function is continuous if there are no 'breaks' in the function. More formally, if lim
f(x) exists and equals f(h) as x tends to h for each h in the domain of the function f.
A function is smooth if it continuous and there are no 'kinks' in the function. A
smooth function has a continuous derivative, and it can therefore be differentiated.
Smoothness and twice differentiability are essentially equivalent.
A continuous function is convex if it 'looks convex' when viewed from ‘below’.
More formally, f is convex if for any x and y, then:
f(μ x + (1 - μ) y) ≤ μ f(x) + (1 - μ) f(y) where 0 ≤ μ ≤ 1.
Strict convexity rules out any straight-line segments (substitute '<' for '≤' in the above). Reversing the inequalities gives concavity and strict concavity.
b → c: x is an inferior good (exM < 0)

iii) Marshallian Demand Curves
In the case of good x, we examine how the demand for x (x*) varies with the price of x (px): changes in x* = Dx(px, py, M), holding py and M (income) constant.
This gives the offer curve, from which the demand curve is constructed.
- offer curve = locus of optimal positions as px varies
- Dx = demand curve, which retraces the offer curve on a different plane (px against x*).
If the offer curve bends backwards, we get an upward-sloping demand curve (Giffen good).
[Figure: optimal positions a, b, c in (x, y) space as px falls (budget line slope −px/py), tracing out the offer curve; plotting px against x*a, x*b, x*c gives the demand curve Dx.]
The Hicksian Demand Functions:
x* = hx (px, py, uo)
y* = hy (py, px, u0)
We can construct the Hicksian demand curve
In the case of x, we vary px, keeping py and u0 fixed.
[Figure: the expenditure-minimising bundles (x*, y*) on the indifference curve u0 as px varies.]
For any price py, the Hicksian Demand Curve (hx), tells us the
demand for x at each price px, which gives utility uo at minimum
expenditure.
Since utility is constant, hx gives the substitution effect. Hence,
hx is also known as the (income) compensated demand curve.
Since the substitution effect is inversely related to the price, then
hx always slopes downwards to the right.
v) Slutsky Equation
We can decompose the total effect from a price change into
substitution and income effects.
In algebraic form:
∂Dx/∂px = ∂hx/∂px − (∂Dx/∂M)·x
(total effect = substitution effect − income effect)
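A quick check of the Slutsky decomposition (my own illustration, using the Cobb-Douglas demand x* = M/(2p_x) derived in the optimisation handout above):

\[
D_x = \frac{M}{2p_x}\;\Rightarrow\;
\frac{\partial D_x}{\partial p_x} = -\frac{M}{2p_x^{2}},\qquad
\frac{\partial D_x}{\partial M} = \frac{1}{2p_x},\qquad x = \frac{M}{2p_x},
\]
\[
\frac{\partial h_x}{\partial p_x}
= \frac{\partial D_x}{\partial p_x} + x\,\frac{\partial D_x}{\partial M}
= -\frac{M}{2p_x^{2}} + \frac{M}{4p_x^{2}} = -\frac{M}{4p_x^{2}} < 0,
\]
so in this case exactly half of the total price effect is the (negative) substitution effect and half is the income effect.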
Handout Notes
The Marshallian demand function for good x is: x* = Dx (px, py, M).
There is an elasticity associated with each argument of the demand function, px, py
and M.
(a) Income Elasticity of Demand (exM)
exM = (∂x/x) / (∂M/M) = (∂x/∂M) / (x/M)
Since x and M are positive, the sign is determined by ∂x/∂M:
(i) ∂x/∂M > 0: exM > 0, and x is normal (if exM > 1 then x is a luxury; otherwise the normal good is known as a necessity).
(ii) ∂x/∂M < 0: exM < 0, and x is inferior.

(b) Cross-Price Elasticity of Demand (exy)
exy = (∂x/x) / (∂py/py) = (∂x/∂py) / (x/py)
In this case, the sign depends on ∂x/∂py, that is, how demand for x changes in response to a change in the price of the related good, y:
(i) ∂x/∂py > 0: exy > 0, and x and y are substitutes.
(ii) ∂x/∂py < 0: exy < 0, and x and y are complements.

(c) [Own-Price] Elasticity of Demand (ex)
ex = (∂x/x) / (∂px/px) = (∂x/∂px) / (x/px)
In this case, the sign is negative, as the downward-sloping demand curve means that ∂x/∂px < 0, so that ex < 0. For a Giffen good the elasticity is positive, but in general the sign is ignored, as it carries no information. Again, there are two cases:
(i) |ex| > 1: x is elastic.
(ii) |ex| < 1: x is inelastic.
Demand for an elastic good is relatively price responsive, and as a result the total revenue paid (the price multiplied by the quantity) decreases as the price increases. A firm would be silly to raise the price of such a good, as the revenue it receives would fall!
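As a worked illustration (my own example, again using the Cobb-Douglas demand x* = M/(2p_x)), the three elasticities can be read off directly:

\[
e_x^{M} = \frac{\partial x}{\partial M}\cdot\frac{M}{x}
= \frac{1}{2p_x}\cdot\frac{M}{M/(2p_x)} = 1
\quad(\text{normal; on the boundary between necessity and luxury}),
\]
\[
e_{xy} = \frac{\partial x}{\partial p_y}\cdot\frac{p_y}{x} = 0,\qquad
e_x = \frac{\partial x}{\partial p_x}\cdot\frac{p_x}{x}
= -\frac{M}{2p_x^{2}}\cdot\frac{p_x}{M/(2p_x)} = -1
\quad(\text{unit elastic}).
\]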
sufficient conditions only, but they are not necessary.
(ii) If preferences are transitive, then at most one of (a) and (b) can hold, but
possibly neither.
[Figures for Topic 2.5: demand curves in (x, px) space showing positions A and B, with the consumer surplus of A and the consumer surplus of B marked as areas under the demand curve.]
Topic 2.5: Measuring A Utility Curve
A fall in the price of a good makes the individual better off.
Q: But by how much?
We frequently need to answer this question, for example in cost-benefit analysis (e.g. a new airport being built).
Problem: Utility is measured
ordinally i.e. just gets given a
ranking.
i) Change in Consumer Surplus
Consumer Surplus: individual valuation of goods over and above
the price paid (Δ Consumer Surplus is in monetary units)
Problem: as we move between A and B the value of money changes.
Justification: as the price of x falls and real income changes, the real value of money is altered.
- Dx holds nominal income constant, but not real income.
ii) Monetary Valuation
Examine individual’s monetary valuation of the utility of change. To
keep the real value of income constant, we keep relative prices
constant (effectively eliminating income effect).
Two measures –as relative prices can be held constant at A or at B:
a. Compensating Variation (CV)
For a price fall, the CV is the sum of money which can be taken away in the new position and leave the individual as well off as before.
Here, relative prices are fixed at the new level (at B).
b. Equivalent Variation (EV)
For a price fall, the EV is the sum of money which, given in the initial position, yields the new utility level.
(For the EV, relative prices are those at A.)
Note: CV and EV are likely to differ. In practice, we tend to use the CV, as B is observed.
iii) Measuring the Monetary Valuation
The CV is the area under the Hicksian (or compensated) demand
curve, over the relevant price range.
[Figures: the CV and EV in (x, y) space between utility levels UA and UB, with relative prices held fixed; the monetary measure corresponds to a vertical shift of the budget line, so cv = py·Δy, i.e. Δy = cv/py (and similarly for the ev).]
This makes sense as the Hicksian Demand Curve eliminates the
income effects, so the real value of money is constant.
As the handout on ‘Measuring a Utility Change’ shows:
- The EV is the corresponding area under the Hicksian demand curve drawn at the utility level UB.
- We note: CV < ΔCS < EV. This is because x is normal. If x is inferior: CV > ΔCS > EV.
This is the theory, but Hicksian Demand Curves are not directly observed – so
what do we do?
In practice we use the aggregate Marshallian demand curve. However,
error is likely to be small if:
a. x is normal for some individuals and inferior for others
b. price change is small
c. x accounts for a small share of expenditure
d. preferences are quasi-linear (i.e. zero income effects)
Handout Notes
Topic 3.1: Production Function
Maximum output from combining inputs:
Q = output
inputs { L = labour (services)
{ K = capital (services)
Assume:
- Q, L, K are perfectly divisible (continuous rather than discrete)
- free disposal (leaving factors idle is costless)
There are two broad approaches:
1) Linear Programming Approach
Finite number of production processes (Pi) Each Pi combines factors in
fixed proportions (fixed-proportions technology)
Example:
Suppose P1 and P2 only:
1 Unit of Q P1 P2
Labour (L) 1 2
Capital (K) 2 1
P1 has a capital-labour ratio of 2, and P2 of ½
Considering this, we can now sketch the feasible region for L and K that
produces Q=4 using P1 and P2
P1 P2 Q L K
A 0 4 4 8 4
B 1 3 4 7 5
C 2 2 4 6 6
D 3 1 4 5 7
E 4 0 4 4 8
[Figure: the Q = 4 isoquant in (L, K) space through the points A (P2 only), B, C, D and E (P1 only). The isoquant goes flat at either end because the firm could take on extra capital (or extra labour) and simply not use it. Points to the right of the isoquant form the FEASIBLE REGION.]
• Positions to the right of the line (in the feasible region) are technically
possible, but only points on the boundary line itself are efficient
• Positions outside the feasible region are technically not possible with
the existing state of knowledge
• Only points on the boundary are efficient, and this boundary is the isoquant
• There is an isoquant for each output level.
Consider the addition of processes P3 and P4
1 Unit of Q P3 P4
Labour (L) 5/4 7/4
Capital (K) 5/4 7/4
How does this affect the isoquant?
P3 P4 Q L K
F 4 0 4 5 5
G 0 4 4 7 7
Note: G lies within the feasible region, so P4 is inefficient and never used.
F lies below the original isoquant, so P3 is efficient and changes the shape of the isoquant (the dashed line in the figure shows the new isoquant).
As more and more production processes are added, then in limit, the
isoquant becomes strictly convex to the origin.
2) The Neoclassical Approach
As the number of processes Pi → ∞, we get the smooth neoclassical isoquant.
[Figure: the Q = 4 isoquant redrawn with the additional points F (P3) and G (P4); the new boundary combines P2 with P3 and P3 with P1, while G lies inside the feasible region. Figure: a smooth neoclassical isoquant Q = Q0 in (L, K) space.]
The isoquants are summarised by production function, Q=f(L,K)
It gives the maximum output (Q) from L and K
[The slope of the isoquant is the Marginal Rate of (Technical) Substitution
of K for L (MRTSKL)]
- Rate at which K substitutes for L at the margin keeping output
constant
MRTSKL diminishes as moves down the isoquant.
The total differential of the production function Q = f(L, K) is:
dQ = (∂f/∂L)·dL + (∂f/∂K)·dK
Along the isoquant, output does not change (dQ = 0); rearranging:
dK/dL = − (∂f/∂L)/(∂f/∂K)
i.e. MRTSKL = − MPL / MPK
As we move down the isoquant: L rises, so MPL falls; K falls, so MPK rises; hence MRTSKL diminishes.
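For example (my own illustration, not from the notes), with a Cobb-Douglas technology the MRTS depends only on the factor ratio:

\[
Q = A L^{a}K^{b}\;\Rightarrow\; MP_L = \frac{aQ}{L},\quad MP_K = \frac{bQ}{K},\qquad
MRTS_{KL} = -\frac{MP_L}{MP_K} = -\frac{a}{b}\cdot\frac{K}{L},
\]
so as L rises and K falls along an isoquant, |MRTS| falls, as stated above.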
Topic 3.2: Cost Minimisation
Firms minimise costs of producing output:
• Implies efficiency
• Necessary condition for profit maximisation
If the firm is not minimising costs, then it is not on the short run total cost
(SRTC) curve, and profits are correspondingly lower.
To find the cost-minimisation position in (L, K) – space, then we need the
concept of an iso-cost curve:
Co = w * L + r * K
where: w = wage rate per unit of labour services
r = rental per unit of capital services
We can rearrange this iso-cost to get:
K = (C0/r) − (w/r)·L
NOTE: The smaller the cost (C0), the closer the iso-cost is to the origin
[Figure: an iso-cost line in (L, K) space with slope −w/r and vertical intercept C0/r. Figure: short-run total cost (reflecting diminishing marginal productivity) and short-run total revenue plotted against Q in £, with MC, MR, profit π and fixed costs FC marked.]
The Cost Function
The various cost curves, TC, AC and MC, are summarised algebraically by the
cost function.
It is derived as follows:
min (w*L + r*K)
L,K
s.t. Q = f(L,K) ≥ Qo
L* and K* produce Qo at minimum
cost. It occurs where there is a
tangency between isoquant (giving
Qo) and iso-cost curves
Now we will solve this problem (cost-minimisation) in general terms.
NOTE: In seminar 3 the cost-minimisation is solved giving explicit form to
production function.
Cost-minimisation problem has necessary conditions.
1) Tangency Condition: MRTSKL = − w/r (slope of isoquant = slope of iso-cost)
2) Constraint Condition: f(L,K) = Qo
Generally, 1) and 2) can be solved for L and K, to give the compensated factor demands:
L* = L (w,r, Qo)
K* = K(r,w,Qo)
Therefore the minimum cost of producing Qo is:
C (cost) = w * (L*) + r * (K*)
C = w * L(w,r,Qo) + r * K (r,w, Qo)
C = c(w,r, Qo)
Since this holds true for any Q0, the cost function is:
C = c(w,r,Q)
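A worked case (my own illustration; the explicit-form exercise referred to above is in seminar 3): for Q = L^{1/2}K^{1/2} the tangency and constraint conditions give

\[
\frac{MP_L}{MP_K} = \frac{K}{L} = \frac{w}{r}\;\Rightarrow\; K = \frac{w}{r}L,\qquad
L^{1/2}K^{1/2} = Q_0\;\Rightarrow\; L^{*}=Q_0\sqrt{r/w},\quad K^{*}=Q_0\sqrt{w/r},
\]
\[
C = wL^{*} + rK^{*} = 2Q_0\sqrt{wr}
\quad\Rightarrow\quad c(w,r,Q) = 2Q\sqrt{wr}.
\]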
[Figure: cost minimisation in (L, K) space at the tangency between the isoquant Q0 and the lowest attainable iso-cost line (vertical intercept C0/r), giving the factor choices L* and K*.]
The cost function gives the minimum cost of producing any output Q, given w
and r.
Plotting C against Q gives the total cost curve.
Likewise we can get the average and marginal costs:
AC = c(w, r, Q) / Q
LRMC = dc(w, r, Q) / dQ
Economies of Scale
From C=c(w,r,Q), consider the elasticity of cost (c) with respect to output(Q).
e = (dC/C) / (dQ/Q) = proportionate change in cost / proportionate change in output
  = (dC/dQ) / (C/Q) = LRMC / LAC = Long Run MC / Long Run AC
Suppose we find e < 1 ⇒ LRMC/LAC < 1 ⇒ LAC > LRMC ⇒ Economies of Scale
[Figure: LAC and LMC plotted against Q (C in £'s); economies of scale where LAC is falling, diseconomies of scale where LAC is rising.]
[Figure: the long-run total cost curve (LRTC), C (£'s) against Q. Figures: isoquant maps in (L, K) space.]
Topic 3.3: Homogeneous Production Function
Useful class of production function
For Q = f(L, K), we say f is homogeneous of degree n if:
f(sL, sK) = s^n · f(L, K)
where s (> 0) is the scale parameter
When n > 1: increasing returns to scale
When n = 1: constant returns to scale
When n < 1: decreasing returns to scale (DRS)
n = 1 is known as a linearly homogeneous production function.

i. Cobb-Douglas Production Function
Most widely used form: Q = A · L^θ · K^β
A > 0 is the efficiency parameter (technological progress); θ, β > 0 are distribution parameters (see seminar 3)
The C-D production function is homogeneous of degree θ + β.
Proof: Consider L0, K0 such that Q0 = A · L0^θ · K0^β
Now increase each input by the factor s, then:
QN = A · (s·L0)^θ · (s·K0)^β
   = A · s^θ · L0^θ · s^β · K0^β
   = s^(θ+β) · A · L0^θ · K0^β
   = s^(θ+β) · Q0
NOTE: CRS when θ + β = 1
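A quick numerical check (my own example): with θ = 0.3 and β = 0.7, doubling both inputs exactly doubles output, confirming CRS when θ + β = 1; with θ + β > 1 output more than doubles (IRS):

\[
Q = AL^{0.3}K^{0.7}:\quad A(2L)^{0.3}(2K)^{0.7} = 2^{0.3+0.7}\,AL^{0.3}K^{0.7} = 2Q,
\]
\[
\text{whereas for } Q = AL^{0.3}K^{0.9}:\quad A(2L)^{0.3}(2K)^{0.9} = 2^{1.2}\,Q \approx 2.30\,Q.
\]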
ii. Linearly Homogenous Production Function
e.g. Q = L^½ · K^½ exhibits CRS
They have the following properties:
a) MPL and MPK depend on L/K only (see sheet for proof, NOT examinable)
The implication of this is that the isoquants are just 'blown-up' versions of one another.
WHY? The slope of the isoquant = MRTSKL = −MPL/MPK, which depends on L/K only.
b) Output Expansion Paths are Linear
OEP = locus of all cost-minimisation positions keeping relative factor
prices constant.
Cost-minimisation condition: slope of isoquant = slope of isocost
MRTSKL = -w/r
c) Factor Demands are Non-Giffen
NOTE: a profit-maximising firm must minimise costs
Suppose the firm is initially at A. Focus on the demand for labour: position A gives one point on DL.
Now suppose w falls (and output adjusts). Consider the substitution and output effects (s.e. and o.e.):
- the fall in w flattens the iso-cost lines (slope −w/r); holding output constant, the firm substitutes towards labour (A → B);
- the firm then moves to the new profit-maximising output on the isoquant Qπmax (B → C).
Total effect: A → C
Substitution effect: A → B
Output effect: B → C
[Figures: the substitution and output effects of a fall in w in (L, K) space, showing points A and B, iso-cost slopes −w/r, and the profit-maximising isoquant Qπmax; the derived demand curve for labour DL against the wage rate w; and a linear output expansion path (OEP) traced out by tangencies with iso-cost lines of slope −w/r.]
Note: In response to a price (wage) fall, the demand for labour must increase. This is because the output effect always reinforces the substitution effect.
d) Euler’s Theorem
For a linear homogeneous pf the following holds:
Q = (∂f/∂L) * L + (∂f/∂K) * K
See handout
Implication:
In perfect competition, factors are employed up to the point where marginal products equal real factor payments:
i.e. MPL = ∂f/∂L = w/p and MPK = ∂f/∂K = r/p
Substituting these into Euler's equation:
Q = (w/p)·L + (r/p)·K
⇒ p·Q = w·L + r·K
Firms in perfect competition must make normal profits (revenue is exactly exhausted by factor payments).
Hence, this is also known as the Product Exhaustion Theorem.
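Euler's theorem can be verified directly for the linearly homogeneous example used earlier, Q = L^{1/2}K^{1/2} (my own check, not part of the notes):

\[
\frac{\partial f}{\partial L}\,L = \tfrac{1}{2}L^{-1/2}K^{1/2}\cdot L = \tfrac{1}{2}L^{1/2}K^{1/2},\qquad
\frac{\partial f}{\partial K}\,K = \tfrac{1}{2}L^{1/2}K^{-1/2}\cdot K = \tfrac{1}{2}L^{1/2}K^{1/2},
\]
\[
\frac{\partial f}{\partial L}\,L + \frac{\partial f}{\partial K}\,K = L^{1/2}K^{1/2} = Q.
\]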
For a homogeneous p.f. with I.R.S (increasing returns to scale) the
relationship is:
Q < (∂f/∂L)·L + (∂f/∂K)·K
This implies that a competitive firm experiencing IRS would make a loss! Hence, we never observe a competitive firm operating under IRS.
WHY? IRS ⇒ economies of scale ⇒ decreasing LAC.
[Figure: a price-taking firm faces a horizontal LAR = LMR line lying below the falling LAC; at the output Q* where MR = MC, AC > AR, so the firm makes a loss rather than breaking even.]
Under perfect competition the firm is a price taker, so LAR is constant. Maximum profit requires MR = MC, but here Q* actually minimises the loss.
For profit maximisation:
- MR = MC (first-order condition)
- MR cuts MC from above (second-order condition)
NOTE: such a market is likely to be served by a monopoly.
Natural Monopolies: telecoms, gas, water etc
If unregulated, then monopolies naturally occur in markets with very large
fixed costs giving rise to decreasing long run average costs over the whole
market.
Handout Notes
Result 1: For a linearly homogeneous production function (i.e.
exhibiting constant returns to scale) the marginal products, MPL and
MPK, depend on the labour-capital ratio only, i.e. L / K.
For production function Q = f(L, K), we know sQ = f(sL, sK), where s is the scale
parameter. So let s = 1 / K (i.e. whatever value K takes, set s equal to one over this).
Then, Q / K = f(L / K, 1), that is Q / K = g(L / K), or Q = K g(L / K).
Now, use product, quotient and chain rules to differentiate Q = K g(L / K) to get:
MPL = ∂Q / ∂L = K g’(L / K) (1 / K) = g’(L / K).
MPK = ∂Q / ∂K = g(L / K) + K g’(L / K) (-L / K2) = g(L / K) - g’(L / K) (L / K).
where g’(L / K) is the first derivative of g(L / K) with respect to L / K.
We see from the above that MPL and MPK depend on L / K only, and hence result.
Result 2 (Euler’s Theorem): For a linearly homogeneous production
function, Q = f(L, K), the following holds: Q = (∂f / ∂L) L + (∂f / ∂K)
K.
For a homogeneous production function: snQ = f(sL, sK), where Q = f(L, K).
Partially differentiate sn f(L, K) = f(sL, sK) with respect to s, to get:
n·s^(n−1)·f(L, K) = (1/s)·[(∂f/∂L)·L + (∂f/∂K)·K].
(Here, we use the fact that ∂f / ∂s = (∂f / ∂L) (∂L / ∂sL) (∂sL / ∂s), where ∂L / ∂sL = 1
/ s and ∂sL / ∂s = L. Koutsoyiannis (page 479) conducts the same proof, but makes an
error, as she leaves out the (1 / s) term from the right-hand side).
For a linearly homogeneous production function, n = 1, which (since n·s^(n−1) = 1)
gives:
Q = (1/s)·[(∂f/∂L)·L + (∂f/∂K)·K].
Further, this holds for any s, so set s = 1 to get the result.
Topic 4: Choice under uncertainty
Certainty: Associate a known outcome (payoff) with each action
e.g. consumer chooses bundle x (action) then gets utility u(x)
(payoff) with certainty
Uncertainty: breaks the 1-to-1 correspondence between actions and outcomes
Two types: 1) Strategic Interdependence
- Do not know how other agents will react (e.g.
oligopoly)
- Endogenous
2) State Contingency
- Do not know what state of nature will occur
(e.g. investor does not know if economy will
grow or not)
- Exogenous
Here we will focus on state contingency
Notation:
States of Nature
Action S1 S2 S3
a1 x11 x12 x13
a2 x21 x22 x23
xij = payoffs
E.g. Action: grow wheat or not
State of Nature: summer, rainy, sunny, cold
[Figure: the Hurwicz indices h(a1) = 5 − 3Ω and h(a2) = 7 − 6Ω plotted against Ω; they cross at Ω = 2/3. If relatively pessimistic (Ω > 2/3), choose a1. This avoids the power station problem.]
Topic 4.1: Decision Rules
Consider simple decision rules, based on individual psychologies.
1) Extreme Pessimist
Choose the action that gives the least-worst outcome, i.e. a maximin strategy
E.g.
Action S1 S2 S3
a1 100(ii) 50 -20(i)
a2 180(ii) -50(i) 50
Would choose a1 (its worst outcome, −20, beats a2's worst, −50)
2) Extreme Optimist
Adopt a maximax strategy
Would choose a2 (its best outcome, 180, beats a1's best, 100)
3) Hurwicz Criterion
(i) and (ii) may give implausible predictions, e.g. the extreme optimist may take actions that risk disaster
Example: Nuclear Power Station
             S1          S2
Build        £5 million  −£5 million
Do not build £3 million  £3 million
The extreme optimist builds.
The Hurwicz criterion avoids this problem, as a Hurwicz index is formed:
Index = Ω·(worst payoff) + (1 − Ω)·(best payoff),
where Ω (0 ≤ Ω ≤ 1) is the pessimism-optimism index.
E.g.
Action S1 S2 S3 S4
a1 2 3 5 4
a2 4 5 1 7
h(a1) = Ω·2 + (1 − Ω)·5 = 5 − 3Ω
h(a2) = Ω·1 + (1 − Ω)·7 = 7 − 6Ω
Previous example (nuclear power station):
h(build) = 5 − 5,000,005Ω
h(not build) = 3
h(build) > h(not build) ⇒ 5 − 5,000,005Ω > 3
i.e. only an extremely optimistic individual (Ω very close to 0) would build.
4) Minimax Regret
More basis in economics
Choose to minimise the opportunity cost, i.e. minimise the maximum regret.
Example
Action S1 S2 S3
a1 1 4 0
a2 7 2 5
a3 3 1 8
Form the Regret Matrix:
Action S1 S2 S3
a1 6 0 8
a2 0 2 3
a3 4 3 0
- In each column, regret = (highest payoff in that column) − (payoff of the action in question)
- E.g. for S1 the highest payoff is 7 (from a2), so the regret of a3 (payoff 3) is 7 − 3 = 4
Apply the minimax strategy to the regret matrix:
- choose a2 (its maximum regret, 3, is the smallest)
Avoids Nuclear Power Station problem.
Topic 4.2: Expected Values
A problem with these decision rules is that they ignore the likelihood with which the different states occur, which is likely to affect choice.
1) Subjective Probabilities
Suppose agent has subjective probabilities, pi, about each state si
To support this, we require:
- List of states exhaustive
- States are mutually exclusive
- States are exogenous
For states (s1, s2, …, sn) we define a lottery
p = (p1, p2, …, pn), where Σ pi = 1.
This leads to another decision rule, expected values: the agent chooses the
action with the highest expected value.
Example
Action S1 S2 S3 S4
a1 2 1 3 5
a2 6 4 1 2
a3 1 2 3 1
Suppose p = (1/10, 2/10, 3/10, 4/10)
EV (a1) = 1/10(2) + 2/10(1) + 3/10(3) + 4/10(5) = 3 3/10
EV (a2) = 1/10(6) + 2/10(4) + 3/10(1) + 4/10(2) = 2 1/2
EV (a3) = 1/10(1) + 2/10(2) + 3/10(3) + 4/10(1) = 1 4/5
Would therefore choose a1
2) Problems with expected values
a) St Petersburg Paradox
Choice is:
a1: £1 million with certainty
a2: a coin is tossed repeatedly; if there are n consecutive
heads followed by a tail, the prize is £2^n
An expected value maximiser chooses a2
EV (a2):
Number of heads and then a tail
n = 0 1 2 3 4 5 6
Payoff £1 £2 £4 £8 £16 £32 £64
Probability 1/2 1/4 1/8 1/16 1/32 1/64 1/128
EV(a2) = 1/2(£1) + 1/4(£2) + 1/8(£4) + 1/16(£8) + 1/32(£16) + 1/64(£32) + 1/128(£64) + …
= £1/2 + £1/2 + £1/2 + £1/2 + £1/2 + £1/2 + £1/2 + … = £∞
Hence EV(a2) > EV(a1) = £1million
b) Variance of Payoff
Once probabilities are introduced, then in effect the payoff from
action is a random variable
Expected value maximisation has advantages:
- takes into account all states
- takes account of central tendency
Also has disadvantages:
- ignores the variance of payoff
An expected value maximiser is indifferent between a1 and a2, but
a2 has a much larger variance, i.e. risk.
Example:
S1 S2
a1 -£1 +£1
a2 - House + House
EV (a1) = ½(-£1) + ½(+£1) = 0
EV (a2) = ½(-House) + ½(+House) = 0
Thus, expected values ignore attitude to risk.
[Figure: the payoff distributions of a1 and a2, from negative to positive payoffs; they share the same peak (expected value), but a2 is much more spread out.]
Topic 4.3: Expected Utility
1) Expected Utility Theorem
Change of notation – Agent chooses between prospects, P
P = [(p1, x1), (p2, x2), …, (pn, xn)]
pi = probability
xi = payoff
i = state of nature
Thus,
S1 S2
a1 60 40 p = (1/4, ¾)
a2 30 50
Choice between prospects:
P = [(1/4, 60)(3/4, 40)]
Q = [(1/4, 30)(3/4, 50)]
The expected utility of a prospect, P, is as follows:
EU(P) = p1·u(x1) + p2·u(x2) + … + pn·u(xn)
The central theorem of choice under uncertainty, associated
with von Neumann and Morgenstern, is:
Expected Utility Theorem
Given: weak preference ordering over prospects
axiom of greed
three technical axioms
Then: EU(P) > EU(Q) ⟺ P is preferred to Q
2) Attitude to risk
The Utility Function u=u(w), where w is wealth, is the expected utility
function
The shape of u(w) tells us about the individuals attitude to risk.
Suppose two prospects, P and Q, with the same expected value, but P
has no variance and Q has some risk.
a. u(w) is strictly concave
[Figure: a strictly concave u = u(w) plotted against wealth w; with EV(P) = EV(Q) marked on the wealth axis, reading off the utilities shows EU(P) > EU(Q).]
By the EU Theorem: P is preferred to Q (even though they have
the same expected value).
[A risk averse individual, as they have a strictly concave
ex-post utility function]
This individual derives dis-utility from the presence of risk –
the basis for insurance.
b. u(w) is strictly convex
This individual is said to be risk-loving.
Derives pleasure from risk
c. u(w) is linear
Here EU(P) = EU(Q).
This individual is said to be risk-neutral.
In this case EU maximisation gives the same result as EV maximisation.
Note: u(w) is not the same as the utility function used in consumer
theory
This is clear from the expression for expected utility: we add the
u(w) terms together, so u(w) must be measured cardinally.
We say that u(w) is unique up to an affine (or linear)
transformation. It is stronger than the monotonic (or order
preserving) transformations in consumer theory.
[Diagram: strictly convex u = u(w) against wealth w (the risk-loving case); here EU(Q) > EU(P).]
3) Measuring Risk Aversion
The degree of risk aversion depends on the curvature of the ex-post utility
function.
An index of risk aversion is the Arrow-Pratt measure of relative risk
aversion. Hence:
RRA = -w . u''(w) / u'(w)
where u = u(w), u'(w) = du/dw and u''(w) = d²u/dw²
For risk averse individuals, RRA > 0, and RRA increases with risk
aversion.
Example:
Calculate RRA for u(w) = log10w
du/dw = 1/(w.ln10)
d²u/dw² = -1/(w².ln10)
Hence, RRA = -w.[-1/(w².ln10)] / [1/(w.ln10)] = 1
This individual displays risk aversion. Indeed any utility function of the
form u(w) = a+b.log10w has RRA = 1
The RRA does have advantages:
- independent of linear transformations
- independent of how wealth is measured (i.e. £’s, £m’s or bags
of sugar!)
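As a check on the example, the following sympy sketch computes the Arrow-Pratt measure for the general form u(w) = a + b.log10(w); a and b are the arbitrary constants of the affine transformation:

import sympy as sp

w = sp.symbols("w", positive=True)
a, b = sp.symbols("a b", positive=True)

u = a + b * sp.log(w, 10)                 # utility of the form a + b.log10(w)

# Arrow-Pratt relative risk aversion: RRA = -w * u''(w) / u'(w)
rra = sp.simplify(-w * sp.diff(u, w, 2) / sp.diff(u, w))
print(rra)   # 1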
Handout Notes
Topic 5.0: Theory of the Industry
Focus on the competitive relations between firms – market structure.
May be only loosely related to the number of firms.
- e.g. a bakery may have a local monopoly
- e.g. British Gas had a monopoly on gas supply but operated in an
oligopolistic energy market.
Topic 5.1: Monopoly
Monopoly faces the market demand curve. In
inverse form: p = p (q)
This has implications.
i. Marginal Revenue < Price
Proof:
MR = dTR/dq = d(p.q)/dq = d[p(q).q]/dq
= p(q).1 + (dp/dq).q
= p + (dp/dq).q (* see diagram below)
= p[1 + (dp/dq).(q/p)]
Use the fact that the elasticity of demand, e, is equal to: e = - (dq/dp).(p/q)
Hence, MR = p[1 – (1/e)] and, since e > 0, MR < p.
(For a perfectly competitive firm, e = ∞, so MR = p(1 – 1/∞) = p.)
The monopolist therefore faces a dilemma: to sell more output it must cut the price on every unit sold.
d) Price Discrimination gives the firm a way out of the dilemma
- See below.
ii. Allocative Efficiency
For profit maximisation, MR = MC, but also MR = p(1 – 1/e) < p, so p > MR = MC
This implies inefficiency! The marginal benefit of the last unit produced
(price) exceeds the marginal cost (MC)
The price ratio pm/MCm is called the mark-up
Lerner suggests 1 – 1/mark-up as a measure of monopoly power. However, we
can simplify this for a profit maximising firm:
1 – 1/(P/MC) = 1 – MC/P
= 1 – P(1 – 1/e)/P
= 1 – (1 – 1/e)
= 1/e
Hence, monopoly power is given by the inverse of the elasticity of demand.
For a competitive firm, e = ∞, so market power is zero.
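The sketch below works through a monopoly with an assumed linear demand p = 100 – q and constant MC = 20 (illustrative numbers only), confirming that the Lerner mark-up (p – MC)/p equals 1/e at the profit-maximising point:

import sympy as sp

q, P = sp.symbols("q P", positive=True)

p_of_q = 100 - q          # hypothetical inverse demand
q_of_P = 100 - P          # the same demand written as q(P)
mc = 20                   # hypothetical constant marginal cost

mr = sp.diff(p_of_q * q, q)                  # MR = 100 - 2q
q_m = sp.solve(sp.Eq(mr, mc), q)[0]          # MR = MC  =>  q_m = 40
p_m = p_of_q.subs(q, q_m)                    # p_m = 60

e = (-sp.diff(q_of_P, P) * P / q_of_P).subs(P, p_m)   # elasticity at the monopoly point: 3/2
lerner = (p_m - mc) / p_m                             # (p - MC)/p
print(q_m, p_m, e, lerner, 1 / e)                     # 40 60 3/2 2/3 2/3 - the mark-up measure equals 1/e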
[Diagram: monopoly equilibrium – demand D, MR and MC; at output qM the price pM exceeds MCM, and the shaded triangle is the dead-weight welfare loss (measures the allocative inefficiency of monopoly).]
iii. Price Discrimination
To avoid cutting the price as it expands output, the monopolist can price
discriminate, so that it charges different prices in different 'markets'.
Topic 5.2: Price Discrimination
Necessary Conditions:
- More than one ‘market’
- willingness to pay must differ
- Resale possibilities must be limited
- barriers between markets:
- legal
- physical
- informational
- Price making not necessary
Suppose two markets
Condition for profit maximisation:
MR1 = MR2 = MC
Sum the MR curves 'horizontally'; then, by construction, q1* + q2* = q*, and this gives p1* and p2*.
Notes:
a) If MR1 > MR2 then the firm cannot be maximising profits: switch units from market
2 to market 1
b) Since MR1 = MR2:
p1(1 – 1/e1) = p2(1 – 1/e2)
When e1 > e2 then p1 < p2, i.e. charge the lower price in the more elastic market (see diagram)
c) Price discrimination reflects differences in demand (not costs). It is eliminated by resale.
d) In practice there are three types of price discrimination:
- 3rd Degree Price Discrimination
Prices vary between individuals, but every unit sold sells at the same price (direct price discrimination)
E.g. OAP haircuts on a Monday
- 2nd Degree Price Discrimination
Prices vary between units of a good, but every individual faces the same price schedule (indirect price discrimination)
E.g. bulk-buying discounts
Often 2nd degree price discrimination is accompanied by subtle differences in the quality of a good
E.g. leg-room on an aeroplane flight
- 1st Degree Price Discrimination
Both of the above
E.g. theatre tickets – standby tickets for students (3rd degree), different seats at different prices (2nd degree)
For a perfectly price-discriminating monopolist, MR = price, so the average revenue (AR) and marginal revenue (MR) curves are the same. This then eliminates the welfare cost.
[Diagram: third-degree price discrimination – Market 1, Market 2 and the Firm, with D1, MR1, D2, MR2, the horizontal sum MR1+2 and q*, q1*, q2*, p1*, p2*.]
[Diagram: first-degree price discrimination – the demand curve D coincides with MR, so output expands to where D = MC.]
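A small sketch of third-degree price discrimination, using two assumed linear demand curves and a constant marginal cost (made-up numbers, purely for illustration); it solves MR1 = MR2 = MC and shows the higher price being charged in the less elastic market:

import sympy as sp

q1, q2 = sp.symbols("q1 q2", positive=True)

p1 = 100 - 2 * q1         # hypothetical demand in market 1
p2 = 80 - q2              # hypothetical demand in market 2
mc = 20                   # hypothetical constant marginal cost

mr1 = sp.diff(p1 * q1, q1)
mr2 = sp.diff(p2 * q2, q2)

# Third-degree price discrimination: set MR1 = MR2 = MC and read off the two prices.
sol = sp.solve([sp.Eq(mr1, mc), sp.Eq(mr2, mc)], [q1, q2])
prices = (p1.subs(q1, sol[q1]), p2.subs(q2, sol[q2]))
print(sol, prices)   # {q1: 20, q2: 30} (60, 50) - the less elastic market 1 pays the higher price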
Topic 5.3: Oligopoly
Key feature: interdependence of firms
Strategic Uncertainty: the payoff from an action depends on how other firms react
Duopoly: Firms A and B
Inverse market demand function: p = p(q) = p(qA + qB)
Consider the total differential:
dp = (dp/dqA).dqA + (dp/dqB).dqB
Divide through by dqA:
dp/dqA = dp/dqA + (dp/dqB).(dqB/dqA)
Total effect on price of a change in A's output = direct effect + indirect effect as B changes its output in reaction to A's change in output
dqB/dqA reflects the strategic uncertainty. It is known as A's conjectural variation. A must therefore guess the variation in B's output. A similar expression exists for B, whereby dqA/dqB is known as B's conjectural variation.
There are many models of oligopoly. They vary according to two characteristics:
- firms set output or prices
- firms move simultaneously or in sequence
Note: Also, oligopolies may collude.
i) Cournot Model
Possibly the earliest model (Cournot (1838))
Developed in the context of a natural spring where MC = 0.
Firms set output and move simultaneously.
Assumed that firms ignore interdependence, i.e. 'zero conjectural variations':
dqB/dqA = 0 (firm A's belief) and dqA/dqB = 0 (firm B's belief)
But of course, these are only true in equilibrium (see handout)
Notes:
a. Cournot equilibrium is an example of a Nash Equilibrium: each agent is doing the best for him/herself given the actions of all other agents (it is an equilibrium or 'position of rest').
b. We ask four questions of an equilibrium:
i. Does it exist?
ii. Is it unique?
iii. Is it stable? (depends on whether A's reaction function cuts B's r.f. from above)
iv. Is it efficient? – see below
c. Each firm behaves as if independent, but the reaction functions tell us they are not.
A's rf: qA = 24 – qB/2 => dqA/dqB = -½ ≠ 0
So A always reacts (out of equilibrium)
ii) Stackelberg Model
One firm is sophisticated and recognises interdependence. Firms set
output, but move sequentially.
Sophisticated firm moves first (‘leader’), and other firm reacts
(‘follower’).
Leader knows follower’s reaction function. Assume ‘A’ is leader and ‘B’
is follower.
Example (corresponds to handout)
max (over qA): πA = p.qA – cA
s.t. p = 50-q
q = qA + qB
and qB = 12.5 – qA / 4 (B’s reaction function)
[Diagram: A's and B's reaction functions in (qA, qB) space, with the Stackelberg equilibrium marked.]
Sub the constraints into the objective function:
max (over qA): πA = {50 – qA – [12.5 – qA/4]}.qA – 2.qA
i.e. max (over qA): πA = {37½ – ¾qA}.qA – 2.qA
dπA/dqA = 37½ – 3/2.qA – 2 = 0
qA* = 23⅔
qB* = 6 7/12 (from B's rf)
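The same calculation can be verified with sympy, using the demand p = 50 – q, the leader's cost of 2qA (as implied by the –2.qA term in the substitution above) and B's reaction function from the handout:

import sympy as sp

qA = sp.symbols("qA", nonnegative=True)

qB = sp.Rational(25, 2) - qA / 4        # B's reaction function: qB = 12.5 - qA/4
p = 50 - (qA + qB)                      # demand with the follower's response substituted in

profit_A = p * qA - 2 * qA              # leader's profit (cost 2qA, as in the example)
qA_star = sp.solve(sp.Eq(sp.diff(profit_A, qA), 0), qA)[0]
qB_star = qB.subs(qA, qA_star)
print(qA_star, qB_star)                 # 71/3 (= 23 2/3) and 79/12 (= 6 7/12)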
Illustrate the above equilibrium
Iso-profit lines for A: a line showing all combinations of qA and qB that give
firm A a constant level of profit
a. Each iso-profit line has a maximum on the reaction function
b. A’s profits increase as we move down A’s reaction function
[Diagram: iso-profit lines for A in (qA, qB) space; A's profits increase towards the monopoly point M on the qA axis.]
Note: The Stackelberg equilibrium is an equilibrium i.e. position of rest
iii) Allocative Efficiency
For simplicity, suppose MC's = 0
e.g. Cournot’s natural spring
Demand: p = 100-q
[Diagram: A's and B's reaction functions with the Cournot equilibrium and the Stackelberg equilibrium marked, together with the corresponding iso-profit lines.]
Perfect Competition
p = MC => q= 100
Monopoly
MR = MC => q = 50
Cournot Equilibrium
qA = 33⅓, qB = 33⅓
Therefore, q = 66⅔
Inefficient, but not as inefficient as monopoly.
Stackelberg Equilibrium
Leader: qA = 50
Follower: qB = 25
q = 75
Inefficient, but more efficient than Cournot
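These four outputs can be reproduced with a short sympy sketch for the demand p = 100 – q with MC = 0, computing perfect competition, monopoly, Cournot and Stackelberg in turn:

import sympy as sp

q, qA, qB = sp.symbols("q qA qB", nonnegative=True)
p = 100 - q                                                     # market demand; MC = 0 throughout

q_pc = sp.solve(sp.Eq(p, 0), q)[0]                              # perfect competition: p = MC  =>  q = 100
q_m = sp.solve(sp.Eq(sp.diff(p * q, q), 0), q)[0]               # monopoly: MR = MC  =>  q = 50

# Cournot: each firm maximises its own profit taking the rival's output as given.
piA = (100 - qA - qB) * qA
piB = (100 - qA - qB) * qB
cournot = sp.solve([sp.Eq(sp.diff(piA, qA), 0), sp.Eq(sp.diff(piB, qB), 0)], [qA, qB])

# Stackelberg: the leader substitutes the follower's reaction function before maximising.
qB_rf = sp.solve(sp.Eq(sp.diff(piB, qB), 0), qB)[0]
qA_lead = sp.solve(sp.Eq(sp.diff(piA.subs(qB, qB_rf), qA), 0), qA)[0]

print(q_pc, q_m, cournot, qA_lead, qB_rf.subs(qA, qA_lead))
# 100, 50, {qA: 100/3, qB: 100/3}, 50, 25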
iv) Bertrand Model
Like Cournot (simultaneous moves, zero CV's), but firms set price rather than
output.
Rule: firm setting lower price supplies all market.
Leads to cut-throat competition
Suppose: MCA = MCB and constant
p = 100-q
[Diagram: the demand curve p = 100 – q with the Monopoly (q = 50), Cournot (q = 66⅔), Stackelberg (q = 75) and Perfect Competition (q = 100) outputs marked, together with MR.]
[Diagram: Bertrand undercutting – A and B alternately cut price along the demand curve D until price reaches MCA = MCB.]
Firms undercut each other until price = MC
Equilibrium is where one firm produces at price = MC
- It is an oligopoly
- Looks like monopoly (only one firm produces)
- Produces same outcome as perfect competition (P=MC)
Note: The market structure is oligopoly, as if the incumbent were to let P > MC
then other firms would enter.
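A crude way to see the undercutting logic is to simulate it directly; the sketch below uses made-up numbers (MC = 20, a starting price of 100, penny-sized price cuts) and lets the firms alternately undercut until price reaches marginal cost:

mc = 20.0
step = 0.01

price, setter = 100.0, "A"              # arbitrary starting price, set by firm A
while price - step >= mc:
    price = round(price - step, 2)      # the other firm undercuts and takes the whole market
    setter = "B" if setter == "A" else "A"

print(setter, price)                     # price is driven (essentially) down to MC = 20.0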
v) Price Leadership
Like Stackelberg (sequential), except that the leader sets price rather
than output.
Leader may be dominant firm. E.g. has lower costs and could win ‘price
war’.
Suppose: MCL = 0, and MCF > 0 and linear. The leader sets price, the
follower is a price-taker (so sets p = MCF), and the leader knows this.
Construct the leader's demand curve:
DL = D – (follower supply along MCF). At each price the leader calculates the follower's
reaction, and supplies the remainder of the market, so as to maximise profits.
Leader maximises profits where MRL = MCL
Note:
- qL + qF satisfies demand at this price
- it is an equilibrium
Overall there are many models of oligopoly. Which is appropriate will
depend on the nature of the industry we are seeking to analyse.
[Diagram: price leadership – market demand D, the follower's marginal cost MCF, the leader's residual demand DL and MCL = 0, with the leader's price P and output qL marked.]
Lecture Handout Notes:
Topic 5.4: Collusive Oligopoly
The previous models of oligopoly are all examples of competitive
equilibrium.
Since oligopolies together produce more than the monopoly output, there is
an incentive to collude.
This is secret collusion and in many countries it is illegal!
The incentive to collude can be shown in the Cournot Model:
If restrict output and locate in the shaded area, then both firms are better off
(i.e. on higher iso-profit lines).
There are two main types of cartel:
i) Joint-profit maximisation
Firms behave as if a multi-plant monopolist
Industry profits are maximised when:
MR = MCA = MCB
Diagrammatically, sum MC curves horizontally:
[Diagram: A's and B's reaction functions with the Cournot equilibrium and iso-profit lines πA and πB; the lens between the iso-profit lines shows the incentive to collude.]
[Diagram: joint-profit maximisation – three panels for FIRM A, FIRM B and the MARKET, with MCA, MCB and their horizontal sum MCA+B; MR = MCA+B gives q* and p*, and qA*, qB* are read off the individual MC curves.]
By construction, qA* + qB* = q*
Note: low-cost firm produces more output.
If MCA > MCB, switch production from A to B until
MCA = MCB
Example: (on handout)
Cournot Model:
p = 50 – q
CA = 2 * qA
CB = qB²
Conclusion:
MR = MCA = MCB
50 – 2q = 2 = 2qB
=> qB* = 1
Also, 50 – 2q = 2
48 = 2q
q = 24
24 = qA + qB
Implies => qA* = 23
Also, p = 50 – q
p = 26
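The cartel solution can be checked by maximising joint profit directly; the sympy sketch below uses the example's demand and cost functions and recovers qA = 23, qB = 1 and p = 26:

import sympy as sp

qA, qB = sp.symbols("qA qB", nonnegative=True)

q = qA + qB
joint_profit = (50 - q) * q - 2 * qA - qB**2     # p = 50 - q, CA = 2qA, CB = qB^2

sol = sp.solve([sp.Eq(sp.diff(joint_profit, qA), 0),
                sp.Eq(sp.diff(joint_profit, qB), 0)], [qA, qB])
print(sol, (50 - q).subs(sol))                   # {qA: 23, qB: 1} and p = 26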
We can illustrate this:
Note: When qA = 23 and qB = 1, then B is worse off
'cooperating' (colluding). Thus, A must make 'side-payments'
to B to make sure B participates – so both are better off.
[Diagram: A's and B's reaction functions with the Cournot equilibrium (qA = 20 2/7, qB = 7 3/7), iso-profit lines πA and πB, the cooperative equilibrium and the side payment marked.]
Note: Total output is smaller than when competing.
Problems with cartels are that there is an incentive to cheat on
agreements.
From this we see that in this case, both firms have an incentive
to expand outwards. This is an example of the well-known
Prisoners' Dilemma problem.
The Prisoners’ Dilemma:
- if stick to the agreement, then both better off
- but each has individual incentive to cheat
- if both cheat, both end up worse off
Hence, collusive agreements tend to be unstable equilibria.
ii) Market-sharing Agreement
Firms agree a common price, and divide up market and behave
independently, consistent with price.
e.g. OPEC (oil) and de Beers (diamonds)
Let DA and DB be divisions of markets.
[Diagram: A's and B's reaction functions with the Cournot equilibrium (qA = 20 2/7, qB = 7 3/7), iso-profit lines πA and πB, and the output when the firms collude marked.]
Common Price pC, such that pA ≤ pC ≤ pB
Firms produce qA* and qB*
Incentive to cheat: both firms are price-takers at pC
Low-cost firms have incentive to expand output
iii) Stability of Cartels
Cartels are unstable equilibria.
In both cases, firms have the incentive to expand output.
Potentially, this is observable (even indirectly) as market price
falls.
It suggests cheating is detectable and hence punishable.
However, punishment must be credible, i.e. must be in the true
interest of the firm to carry out threat – often not the case.
In practice, oligopolists tend to compete in ways that do not
affect price:
Non-price competition
- advertising
- sales promotions
Secret-price competition
- offer rebates
- set up subsidiary
- sell at higher quantity
There may also be external threats to a cartel from new
production.
e.g. North Sea Oil and OPEC
Russian Diamonds and de Beers
[Diagram: market-sharing agreement – markets A and B with demands DA, DB, marginal revenues MRA, MRB, marginal costs MCA, MCB, outputs qA*, qB*, individual prices pA, pB and the common price pC.]
Topic 6.1: Profit Maximisation
i. Why do firms exist?
Why not a collection of individuals undertaking production under
exchange?
Two reasons why:
1) Coase (1937)
Internalise transaction costs. Exchange is costly. Firms economise
on these costs through contracts
2) Alchian and Demsetz (1972)
Team-working. Higher output can be achieved from group
production
- division of labour
- problem: shirking
The traditional theory of the firm is owner-managed – this leads naturally
to profit maximisation
In the 1950’s and 1960’s, the owner-manager assumption was
questioned, and hence profit maximisation.
This is non-trivial => implications for firms' behaviour
ii. Traditional Profit Maximisation Theory
In the modern context, a good example of the owner-manager
assumption is the one-person business or the self-employed. Assume there
is no labour market; leisure is given up to produce output according to
some production function.
One-man business maximises the economic rent (measured in
utility terms)
[Diagram: income against leisure for the one-person business; the production function (whose slope flattens due to diminishing MPL) and the indifference curve determine the optimum at p* = c*, splitting income into transfer earnings and economic rent.]
In general the firm's problem is:
Max π = R(Q,p) – C(Q,w,r)
= Revenue function – cost function
s.t. production function
π = π (Q) is known as the profit function. We use this in our analysis.
A prediction of this model is that profit taxes do not affect the output
decision, Q*, and so do not distort firm behaviour.
(This is not the case for models considered below)
Lump-sum profit tax(T)
π = R – C – T
dπ/dQ = dR/dQ – dC/dQ – 0 = 0
=> MR = MC
Same output solution for Q* as when T was 0.
[Diagrams: total revenue R(Q, p) – derived from a linear demand curve (short run) – and total cost C(Q, w, r) against Q, showing the break-even points and the profit-maximising output where MR = MC; a second panel shows the break-even point in the long run.]
Proportionate profits tax (t)
Levied on profits at a rate of t
π = (1 – t)(R – C)
dπ/dQ = (1 – t)(dR/dQ – dC/dQ) = 0
Again, the solution is unaffected by tax.
Strong result of Traditional Theory: output is not affected by taxation.
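This invariance is easy to verify for a concrete (made-up) revenue and cost pair; the sketch below finds the profit-maximising Q with no tax, with a lump-sum tax T and with a proportionate tax t, and gets the same Q* each time:

import sympy as sp

Q, T, t = sp.symbols("Q T t", positive=True)

R = (100 - Q) * Q          # hypothetical revenue function
C = 20 * Q + 100           # hypothetical cost function

def argmax_Q(profit):
    # profit-maximising output from the first-order condition dπ/dQ = 0
    return sp.solve(sp.Eq(sp.diff(profit, Q), 0), Q)[0]

print(argmax_Q(R - C))              # 40 (no tax)
print(argmax_Q(R - C - T))          # 40 (lump-sum profits tax: unchanged)
print(argmax_Q((1 - t) * (R - C)))  # 40 (proportionate profits tax: unchanged)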
[Diagrams: the profit function π(Q) with and without a lump-sum tax T; π – T peaks at the same Q* as π.]
iii. Agency Theory
Traditionally, firms raised their capital from bank loans (small firms still
do – overdraft). Modern corporations rely on equity from shares.
Why?
- Loans are risky for firms
- Repayments are fixed
- Banks may call in the loans
Loans are also risky for banks:
- Large loans are problematic (default)
- Bank managers risk averse.
With equity, vary dividend with performance, and spread the risk.
In return for accepting the risk, shareholders receive ownership rights,
i.e. there is a separation of ownership from management
Topic 6.2: Managerial Discretion Models
Separation of ownership from management gives rise to a principal-agent
problem.
Principal (shareholders) own assets, but must operate through agent
(manager), who has stewardship of the assets. It is only a problem if the
principal can not monitor/observe the actions of the agent due to asymmetric
information.
Two main models:
i. Sales-Revenue Maximisation
Associated with Baumol (1959) – see seminar sheet six.
Managers maximise the size of the firm, as it brings them personal
prestige, but subject to earning some return to keep shareholders
happy.
Two cases:
πC denotes the minimum profits required by shareholders (πR = profits
at QR)
i. πR ≥ πC:
Profit constraint imposed by shareholders is non-binding
Manager produces at QR
ii. πR < πC:
The profit constraint is binding.
The manager produces at QR1.
Note: If there is symmetric information, then the owner can set πC high enough to make sure output is Qπ.
[Diagram: TR, TC and the profit function against Q, with πC, QR, QR1 and Qπ marked.]
Note: A profits tax now affects output where the profit constraint is (or becomes) binding.
ii. Expense Preference Model
Associated with Williamson (1964)
Managers maximise utility, which includes the scale of the firm (S) and discretionary profit (D) after the shareholders have been satisfied:
max u(S, D, perks) s.t. π ≥ πC
where D = π – πC
In this model, profits have some value to the manager.
Note: a profits tax may affect output even when the profit constraint is non-binding.
[Diagram: discretionary profit D against Q (≡ S) with the manager's indifference curves and the profit function net of πC; the profits tax shifts the chosen output QM.]
Note: a proportionate tax gives rise to a stronger substitution effect towards S (≡ Q), as D is now relatively more expensive – and output may increase!
iii. Criticisms
Managerial models are still based on the neo-classical economics approach:
- Managers are well informed
- They optimise (i.e. maximise)
Alternative models from the management literature are based on satisficing behaviour, i.e. the aim is to "get by" or avoid bankruptcy.
Simon: argued that managers have bounded rationality
Cyert and March: managers promote the interests of their own departments.
[Diagram: the expense-preference equilibrium – the manager's indifference curves against the discretionary-profit frontier, with the chosen output shifting from QM to QM1.]
Topic 7.1: Robinson-Crusoe Models
All on handout.
Topic 7.2: Robinson-Crusoe and Man Friday
Economic Model: 2 individuals x 1 good x 1 factor
Now there is a possibility of trade.
We show:
- producer and consumer decisions are separate (i.e. independent of each other)
- trade allows individuals to move off their production constraints, so there are gains from trade (i.e. everybody is better off)
Assume:
- Robinson Crusoe owns the means of production, and Man Friday behaves passively to ensure an 'equilibrium'
- We sketch the equilibrium, so w and p are equilibrium prices
Hence: RC = Robinson Crusoe, MF = Man Friday
XsRC = XdRC + XdMF
LdRC = LsRC + LsMF
a) Robinson Crusoe's Problem
Production: Robinson Crusoe chooses a labour demand and labour supply to maximise profits, as before in 7.1.
Consumption: The Consumption Opportunity Line (COL) is the same as before, but now Robinson Crusoe no longer has to consume at P, as he can hire labour from Man Friday.
[Diagram: X = f(L) with the highest iso-profit line in production at PRC; Robinson Crusoe's indifference curve ICRC determines CRC, with LdRC, XsRC, XdRC, XdMF, LsRC, LsMF and π* marked.]
Here, given the preferences shown by ICRC:
- Robinson Crusoe hires labour LsMF, which reduces his consumption by XdMF
- By construction: p.XdMF = w.LsMF
(the loss of income to Robinson Crusoe = the factor payment to Man Friday)
b) Man Friday's Problem
Man Friday does not produce, just consumes.
[Diagram: Man Friday chooses along the COL (slope = w/p) in (L, X) space, supplying LsMF and consuming XdMF.]
Conclusion:
- Production and consumption are separate for both agents (i.e. the decision over P is independent of that over C)
- Both agents are better off trading. This illustrates the gains from trade.
Topic 7.3: Several Outputs
Two goods, x and y, and two factors, L and K.
Resources are scarce, with fixed totals L̄ and K̄; they can be allocated to x and y:
Lx + Ly = L̄
Kx + Ky = K̄
The production functions:
x = fx(Lx, Kx)
y = fy(Ly, Ky)
i. Efficiency
The fundamental economic constraints can be shown diagrammatically in the Edgeworth-Bowley box.
Any position in the box shows the amount of L and K going to x and y.
Consider position R: it produces xR of x, and yR of y.
Note: R is inefficient. We can produce more x and y at S.
Note: S is also inefficient! Points of efficiency occur where the isoquants are tangential to one another, i.e. point T.
[Diagram: the Edgeworth-Bowley production box with origins 0x and 0y, points R, S and T, the isoquants through R, and the contract curve.]
The tangency at T is not unique. The locus of all tangencies (i.e. efficient points) is called the contract curve.
The efficiency condition in production is:
MRTSxKL = MRTSyKL
ii. Production Possibility Curves
Re-drawing the contract curve in the goods plane gives the production possibility frontier.
The production possibility frontier (PPF) shows the maximum x and y we can get given L̄ and K̄.
Outward shifts could occur if more L̄ and K̄ is found (e.g. North Sea Oil), or if there is an improvement in technology.
If both production functions are linearly homogeneous (i.e. constant returns to scale), the production possibility frontier is strictly concave, i.e. a production possibility curve.
The slope of the production possibility curve is called the marginal rate of transformation of y for x (MRTyx).
[Diagram: the production possibility curve in (X, Y) space with points R, S and T, the feasible region, and the MRT shown as the slope.]
MRTyx is related to the marginal costs of x and y as follows:
MRTyx = - MCx / MCy
Topic 7.4: International Trade
Now use a 2 x 2 x 2 model.
2 factors: L and K (labour and capital)
2 outputs: x and y
2 agents (countries): A and B
Assume:
a) Production technologies are linearly homogeneous
b) Trade is costless
c) Factors do not migrate
We can examine trade by allowing any of the following to vary (each will produce a comparative advantage):
- Factor Endowments
- Production Technologies
- Tastes
Below, we focus on changes in factor endowments only – i.e. the other two remain constant.
Suppose x is relatively labour intensive.
i. Differences in Endowments
A has more K and less L – and x is the more labour (L) intensive good.
No Trade:
[Diagram: the Edgeworth boxes for countries A and B with their contract curves, and the resulting production possibility curves PPCA and PPCB.]
Note: tastes are the same.
In the absence of trade, the countries must produce and consume on their
respective production possibility curves, at PA = CA and at PB = CB.
Note: At PA and PB: MRTAyx > MRTByx (ignoring signs)
This means that A has to give up more y to produce another unit of x,
than does B.
Hence, B has comparative advantage in the production of x. This is the
basis for trade. Conversely A has a comparative advantage in y.
Trade:
A and B must agree terms of trade, i.e. prices px and py at which to exchange
goods, such that:
MRTAyx ≥ px/py ≥ MRTByx
At these prices, countries produce to maximise the value of their output, and
then consume to maximise their utility.
The countries specialise in
producing the good in which
they have a comparative
advantage (B in x, and A in
y), and then diversify in
consumption.
Both countries gain from
trade.
[Diagrams: pre-trade production/consumption points PA = CA on PPCA and PB = CB on PPCB; with trade, production moves to PA' and PB' and consumption to CA' and CB' along the terms-of-trade line.]
Heckscher-Ohlin Result
Each country specialises in the good which uses most intensively its locally
abundant factor.
i.e. B has more L
x is L-intensive
=> B specialises in x
ii. Factor Mobility
We assumed factors are immobile
Result
Given that x is labour(L)-intensive and there are constant returns to scale, it
can be shown that as w/r ↑, MCx increases relative to MCy.
Pre-Trade:
MRTAyx > MRTByx
implies => (MCx/MCy)A > (MCx/MCy)B
=> (w/r)A > (w/r)B
If factors are mobile, then labour migrates B A and capital flows, A B
This alters dimensions of production possibility curves until ppcA ≡ ppcB,
so that the comparative advantage is eliminated.
Result:
Whether we observe flows of
goods or flows of factors will
depend on the relative speeds
of adjustment in these markets.
Openness of markets:
- Barriers to trade
- Barriers to migration/ capital
flows
Topic 8.0: General Equilibrium Theory
General Equilibrium Theory attempts to provide a complete description of a
decentralised market economy.
It is based only on assumptions about the optimising behaviour of microeconomic
agents (producers/consumers).
The General Equilibrium Theory attempts to find prices at which all markets
clear. (i.e. equilibrium)
Circular Flow of Income:
Prices at which D=S in every market.
Dates back to Walras(1870’s), revived by Hicks(1930’s) and developed
by Arrow and Debreu(1960’s)
[Diagram: the circular flow of income between Producers and Consumers through the Goods and Factors markets, with demand D and supply S in each.]
Topic 8.1: Model Set-Up
Many severe assumptions, which lead many to question reality of model:
a. No monopoly (no price setting)
b. No uncertainty (prices known)
c. No externalities (prices exist)
d. No public goods (prices exist)
e. No increasing returns to scale (leads to monopoly)
f. No government (no price distortions-taxes etc.)
Here, we focus on the 2x2x2 model, so there are four markets: x,y,L,K
Definition: excess demand function for good x is:
Zx = xA* + xB* – x*
Since, all markets are related, then:
Zx = Zx(px,py,w,r)
Ultimately, excess demand depends on all the prices in the economy. Likewise:
Zy = Zy(py,px,w,r)
ZL = ZL(w,r,px,py)
ZK= ZK(r,w,px,py)
Walrasian Equilibrium
Set of prices for, px, py, w and r, such that Zx=0, Zy=0, ZL=0, and ZK=0.
This is a general equilibrium.
When prices are determined by markets, it is known as a competitive
general equilibrium.
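To illustrate what 'a set of prices at which all excess demands are zero' means, the sketch below builds a toy two-good exchange economy with Cobb-Douglas agents (the endowments and preference shares are made-up numbers) and solves Zx = 0 for the relative price; by Walras' law the other market then clears as well:

import sympy as sp

px = sp.symbols("px", positive=True)
py = 1                                                 # good y is the numeraire

agents = [
    {"alpha": sp.Rational(1, 2), "ex": 10, "ey": 2},   # agent A: share spent on x, endowments of x and y
    {"alpha": sp.Rational(3, 10), "ex": 2, "ey": 10},  # agent B
]

def demand_x(agent):
    income = px * agent["ex"] + py * agent["ey"]
    return agent["alpha"] * income / px                # Cobb-Douglas: spend the share alpha on good x

Zx = sum(demand_x(a) for a in agents) - sum(a["ex"] for a in agents)
px_star = sp.solve(sp.Eq(Zx, 0), px)[0]
print(px_star)   # 5/8 - the relative price px/py at which the x market (and hence the y market) clears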
Topic 8.2: Conditions for a Competitive Equilibrium
i. Partial Equilibrium
a) Exchange Economy
Suppose production has already taken place, producing x̄ of x
and ȳ of y.
A and B own all the factors, and have claims on the total output.
Suppose A gets x̄A and ȳA, and B gets x̄B and ȳB (where x̄A + x̄B = x̄,
and ȳA + ȳB = ȳ).
These are known as A and B's initial endowments (before
exchange/trade).
(a) The Offer Curve
Consider A.
xA*, yA* are A's demands for x and y.
Consider another set of prices (where px↑ and/or py↓)
=> another set of demands.
The offer curve is the locus of all optimal positions as prices
change. Likewise, we can find an offer curve for B.
[Diagram: construction of A's offer curve – the Consumption Opportunity Line (COL, slope = –px/py) through A's initial endowment (IE), with the optimal bundles (xA*, yA*) on ICA and ICA' traced out as prices change (COL rotates to COL').]
(b) Edgeworth-Bowley Consumption Box
At these prices there is disequilibrium.
Excess supply of x: Zx = xA* + xB* – x̄ < 0
Excess demand for y: Zy = yA* + yB* – ȳ > 0
By the law of demand and supply:
Zx < 0 => px↓
Zy > 0 => py↑
Equilibrium occurs where the offer curves intersect:
[Diagram: the Edgeworth-Bowley consumption box with origins 0A and 0B, the initial endowment IE, the COL (slope = –px/py) through IE, A's and B's offer curves and indifference curves; equilibrium is at the intersection of the offer curves.]
ii. Production Economy
Now examine equilibrium in factor markets and ignore exchange
economy
Scarce factors, L̄ and K̄.
In order to maximise profits, it is necessary to hire factors to minimise
costs, i.e. MRTSKL = - w/r
Equilibrium at e:
ZL = Lx* + Ly* – L̄ = 0
ZK = Kx* + Ky* – K̄ = 0
Condition for equilibrium in production:
MRTSxKL = MRTSyKL = - w/r
iii. Product Mix
For general equilibrium the exchange and production economies must
be in equilibrium together.
- prices, px and py at which consumers maximise utility, must
be the same prices at which producer maximises profits.
- Prices, w and r, at which producers minimise costs must be
consistent with IE position.
Condition for profit maximisation (π-max):
[Diagram: the production Edgeworth box with origins 0x and 0y, equilibrium e on the contract curve, and the factor allocations Lx*, Kx*, Ly*, Ky*.]
At P: MRTyx = - px/py
Condition for Utility Maximisation:
MRSAyx = - px/py = MRSByx
Top level condition for general equilibrium is:
MRSAyx = MRSByx = MRTyx = - px/py
This ensures both sides of economy are in equilibrium.
Aside: This condition is usually written another way, using the Community
Indifference Curve (CIC). It shows the bundles of goods x
and y that can give constant levels of utility to each
individual, A and B.
[Diagram: the production possibility curve with the product-mix point P, where the slope is –px/py.]
Notes:
1. Community Indifference Curves are a useful analytical device
2. Not the same as a social welfare function
3. Community Indifference Curves can cross
The slope of a Community Indifference Curve is determined by MRScomyx.
Along a Community Indifference Curve: MRScomyx = MRSAyx = MRSByx.
Hence, we write the top-level condition as:
MRScomyx = MRTyx = - px/py
[Diagrams: construction of the Community Indifference Curve from ICA (UA = ŪA) and ICB (UB = ŪB), and the full general equilibrium – (1) the product mix x*, y* on the production possibility curve, (2) the production box with equilibrium e, and (3) the consumption box with the initial endowment IE – all consistent with the price ratio px/py.]
Topic 9.0: Welfare Economics
Statements about welfare of society from different allocations of
goods/services.
Example:
2 individuals, A and B, and a fixed total of 10 to divide:
(i) (ii) (iii)
A 7 5 0
B 3 5 10
Is society better off under (i), (ii), or (iii)?
Two problems:
a) Inter-Personal Utility Comparisons
- Utility is measured ordinally, so can not compare utility changes
across individuals
b) Ethical Judgements
- Even if we can measure utility cardinally, there is still a problem of how to treat
individuals.
- Suppose A = a millionaire and B = a beggar
This leads us to make value judgements
- These are statements which can’t be verified/falsified by reference
to the facts
Normative Economics
Contrast to positive economics (topics 1-8).
In making value judgements trade-off:
- analytically useful
- reasonable
Example:
A is a dictator (i.e. only A’s preferences count)
Then (i) is preferred to (ii), which is preferred to (iii)
Very useful but highly unreasonable.
Topic 9.1: The Pareto Criteria
Most widely used value judgements in Economics
- Vilfredo Pareto (1848-1923)
Pareto Postulates
1) Social Welfare function is of the form:
W = W (u1, u2…,un)
*Individual Approach*
2) Individuals are best judge of their well-being, ui
*Liberal Approach*
3) Social welfare, W, improves if at least one person is better off and no-one
is any worse off.
*The Pareto Criterion*
A Pareto Improvement (in welfare) is when 3) occurs. An allocation from which
no Pareto improvement is possible is Pareto Optimal/Efficient.
It is in this same sense that efficiency is used in economics
Example: The Consumption Box
R is not Pareto Optimal, as R S is a Pareto Improvement.
T is Pareto Optimal (can not make A better off without making B worse off)
Overall, only allocations on contract curve are Pareto Optimal.
[Diagram: the consumption box with origins 0A and 0B, points R, S and T, A's and B's indifference curves and the contract curve.]
Topic 9.2: Welfare Maximisation
2 individuals, A and B
i. Objective Functions
Social Welfare Function
W = W(UA, UB)
Iso-welfare line: line of constant welfare (W)
According to Pareto they look like:
ii. Constraint
Utility Possibility Frontier (u.p.f) – maximum utilities for A and B given
societies constraints:
- factor endowments
- production technology
- utility functions
Each output-mix on the production possibility curve has a utility
possibility curve associated with it (the maximum utilities given that output
mix).
[Diagram: a contract curve in the consumption box mapped into the corresponding utility possibility curve (u.p.c.) in (UA, UB) space.]
[Diagram: iso-welfare lines in (UA, UB) space, with ΔW > 0 to the north-east and ΔW < 0 to the south-west; the lines are drawn as 'squiggles' because utility is measured ordinally.]
Utility possibility frontier is outer boundary of all utility possibility
curves.
iii. Welfare Optimum
Occurs where iso-welfare line is tangential to the utility possibility
frontier.
Note:
a) W* is on a particular utility possibility curve, which implies a particular
position on the production possibility curve – i.e. some
'configuration' of the economy.
b) The utility possibility frontier gives the set of all Pareto Optimal
positions. The welfare optimum selects from these.
c) Formally, the tangency condition is:
- (dW/duA)/(dW/duB) = - (duB/dx)/(duA/dx) = - (duB/dy)/(duA/dy)
Rearranging: (dW/duA).(duA/dx) = (dW/duB).(duB/dx)
This says that, at the margin, the increase in welfare from giving an extra unit of x
to A is the same as from giving it to B (and the same holds for y).
[Diagram: the utility possibility frontier (the outer envelope of all utility possibility curves) in (UA, UB) space, with the welfare optimum W* where an iso-welfare line is tangential to it.]
d) The tangency condition is necessary for a welfare optimum, but not
sufficient.
[Diagram: a utility possibility frontier with two tangencies, w* and w**; w** is the welfare optimum, while w* satisfies the necessary condition but is not the optimum.]
Topic 9.3: Welfare Properties of General Equilibrium
In General Equilibrium Theory, each agent optimises (‘firms’ and consumers),
but does society?
i.e. does the General Equilibrium Theory maximise social welfare?
- First Theorem of Welfare Economics:
A competitive General Equilibrium is Pareto Optimal.
i.e. General Equilibrium puts us on the utility possibility frontier
i.e. General Equilibrium is efficient
Informal Proof
Focusing on the 2x2x2 model:
Exchange
Condition: MRSAyx = MRSByx
General Equilibrium generates this condition, as it is a set of prices such
that: - px/py = MRSAyx = MRSByx
Likewise, for the production economy, and for the product mix economy.
- Second Theorem of Welfare Economics
Any Pareto Optimum position can be represented as a competitive general
equilibrium provided there is some suitable distribution of income.
This says we can get to any position on the utility possibility frontier,
including w*.
Informal Proof
Again, focusing on the 2x2x2 model:
Exchange
[Diagram: the exchange box with origins 0A and 0B and the contract curve.]
Problem:
- at the initial endowment (IE), the equilibrium gives w' on the utility
possibility frontier, which is not the welfare optimum.
- we cannot get to w* from IE using markets.
- by moving from IE to IE' we can get to w*
Conclusions
Sufficient conditions for the General Equilibrium to generate the welfare
optimum are:
a) the 3 conditions for General Equilibrium
(=> Pareto Optimum, by the first theorem)
b) the fourth optimality condition
(by the second theorem):
(dW/duA)/(dW/duB) = (duB/dx)/(duA/dx)
This gets us to w*.
[Diagrams: the exchange box showing the move from IE to IE', and the corresponding utility possibility curve (= u.p.f.) showing the move from w' to the welfare optimum w*.]
Topic 9.4: Criticisms of the Pareto Postulates
Will be covered in more detail in seminar 8.
1) w=w(u1,…un)
2) Individual is best judge of utility
3) Pareto Improvement
i) Individualistic/Liberal
Will be discussed in seminars
ii) Inter-personal utility comparisons
We frequently observe both gainers and losers, but Pareto cannot say whether
welfare has been improved.
Example 1: Break up a monopoly. Consumers better off and
monopolists worse off.
Example 2: New bridge replacing ferry – gainers and losers
Compensation Principle
If gainers can in principle compensate the losers, and still be better-off,
then welfare has improved.
Note: Compensation is not actually paid! Hence, it is also known as a
Potential Pareto Improvement (i.e. the potential for a Pareto Improvement).
Sometimes known as the Kaldor-Hicks criterion.
P → Q: B gains, A loses. Is society better off?
Consider B paying compensation to A
[Diagram: points P and P' on a utility possibility curve, and point Q, in (UA, UB) space.]
This moves us up along the utility possibility curve (up the contract
curve) until we get to P'.
Hence, Q is a Potential Pareto Improvement on P because P' is a
Pareto Improvement on P.
So society is better off at Q than at P.
Problem
The compensation principle only considers
distributional issues (movements along the
utility possibility curve). It ignores
changes in the configuration of the economy (i.e.
the fact that P and Q represent different
positions on the production possibility
curve).
This leads to the Scitovsky Paradox.
Q is a Potential Pareto Improvement
on P because P' is a Pareto
Improvement on P.
Suppose we make the move from
P to Q.
The paradox is that P might be a
Potential Pareto Improvement
on Q!
P is a Potential Pareto Improvement
on Q because Q' is a Pareto
Improvement on Q.
Suggests unlimited improvements in welfare are possible.
[Diagrams: the consumption box with contract curve, and the crossing utility possibility curves through P (with P') and Q (with Q'), illustrating the Scitovsky Paradox.]
iii) Ethical Consideration
Pareto treats all individuals the same
Problem
If A is a millionaire and B is a beggar, then according to Pareto,
welfare is improved if ΔUA > 0 even though ΔUB = 0.
Alternative Welfare Systems
a) Rawls
Place individuals behind a 'veil of ignorance', such that they do not
know which member of society they will be.
w(u1,u2,…,un) = min{ u1,u2,…,un}
i.e. a maximin strategy
b) Egalitarianism
Welfare improves if utilities more equal
c) Utilitarianism, Jeremy Bentham
“maximise greatest good of the greatest number”
Interpreted as: w = uA + uB, assumes utility is cardinally measured
(will be discussed in further detail in seminars)
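To see how these systems can disagree, the sketch below applies the Rawlsian and utilitarian criteria to the three allocations from Topic 9.0, treating the numbers as cardinal utilities purely for illustration:

allocations = {"(i)": (7, 3), "(ii)": (5, 5), "(iii)": (0, 10)}

rawls = {k: min(u) for k, u in allocations.items()}      # Rawls: welfare = utility of the worst-off person
bentham = {k: sum(u) for k, u in allocations.items()}    # Utilitarian: welfare = sum of utilities

print(max(rawls, key=rawls.get))     # (ii) - the maximin choice
print(max(bentham, key=bentham.get)) # all three sum to 10, so the utilitarian is indifferent; max() just returns (i)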
[Diagram: iso-welfare contours in (UA, UB) space with the 45-degree line of equality.]
Index
A
Alchian and Demsetz................. 62
Allocative Efficiency..............47, 54
Arrow and Debreu .................... 79
average cost............................. 29
Axiom of Greed............ 2, 3, 17, 18
B
Bertrand .................................. 55
C
Cartels ..................................... 61
Coase ...................................... 62
Cobb-Douglas........................... 30
collude .............. See collusion
collusion .................................. 58
Completeness............................. 2
consumer preferences ................. 1
Consumer Surplus..................... 20
Continuity .................................. 3
contract curve ............... 74, 90, 97
Cournot ....... 51, 52, 54, 55, 58, 59
D
Differentiability ........................... 6
Duopoly ................................... 51
E
Economies of Scale ................... 29
Edgeworth-Bowley..................... 82
Edgeworth-Bowley box.............. 73
Efficiency ................................. 73
Egalitarianism........................... 98
Elasticities .............................. 11
Elasticity, Demand .................... 16
Elasticity, Income ..................... 12
Elasticity, Own-Price.................. 15
Elasticity, Cross-Price ............... 15
Elasticity, Income ..................... 15
Endowment.............................. 76
Equi-Marginal Returns ................. 7
Euler’s Theorem...................32, 34
Exchange Economy................... 81
Expected Utility Theorem........... 40
G
General Equilibrium Theory .. 79, 94
H
Heckscher-Ohlin Result ..............78
Hicks..............79, See Kaldor-Hicks
Hicksian ............ 13, 14, 18, 21, 22
homogeneous ................30, 32, 34
Hurwicz....................................36
I
Indifference Curves .....................1
Indifference Set ......................1, 3
interdependence ................. 51, 52
iso-cost .............................. 27, 28
isoquant ............ 25, 26, 28, 30, 31
J
Jeremy Bentham.......................98
K
Kaldor .................See Kaldor-Hicks
Kaldor-Hicks .............................96
L
Lagrange....................................7
Lagrangian ........................... 7, 16
Laspeyres.................................19
Lerner .....................................47
lexicographic preferences.............6
Linear Programming Approach ...24
M
Man Friday ......................... 71, 72
Marginal Revenue .....................46
marginal utility...................... 7, 16
Marshallian ........ 11, 12, 15, 16, 22
Monopoly ........................... 46, 55
monotonicity...............................2
N
Nash Equilibrium .......................52
Neoclassical Consumer Theory .....1
O
Offer Curve...............................81
Oligopoly.............................51, 58
Optimisation............................... 7
Optimist ................................... 36
P
Paasche ................................... 19
Pareto Criteria .......................... 90
Pareto Improvement ...... 90, 96, 97
Pareto Optimal ......................... 90
Pareto Postulates.................90, 96
Perfect Competition .................. 55
Pessimist.................................. 36
Potential Pareto Improvement ... 96
Preference Relation..................... 1
Price Discrimination.. 47, 48, 49, 50
Price Leadership ....................... 56
Prisoners’ Dilemma ................... 60
Product Mix .............................. 84
Production Economy ................. 84
Production Possibility Curve ....... 74
Profit Maximisation ................... 62
R
Reflexivity .................................. 2
Restrictions .......................... 2, 17
Revealed Preference Theory ...... 17
Risk Aversion............................ 42
Robinson-Crusoe .................69, 71
S
Sales-Revenue Maximisation ......66
Scitovsky Paradox .....................97
Sets ...........................................8
Slutsky ............................... 14, 18
St Petersburg Paradox ...............38
Stackelberg ............. 52, 54, 55, 56
Strategic Uncertainty .................51
substitution effect ..........14, 18, 68
T
Theory of the Industry...............45
Transitivity .................................2
U
uncertainty ........... 35, 40, 51, 80
Uniqueness Theorem.................10
Utilitarianism.............................98
Utility Change ...........................22
Utility Curve..............................20
Utility Function.................. 5, 6, 40
Utility Maximisation ...................85
W
Walras .....................................79
Walrasian .................................80
Weak Axiom of Revealed
Preference.............................18
weak preference ordering .. 2, 6, 40
Welfare Economics .............. 89, 94