Sunday 15 May 2011

Test-Driven Development (TDD): a critical review of the claimed advantages gained by using this technique.

Test-driven development (TDD) is a software development process concerned with the creation of concise, clean and testable code. The core principle of TDD asserts that testing should be done as part of the development process to drive the software’s progression (Beck, 2004). More specifically, tests should be created for isolated functionality prior to the implementation of the code for that functionality (Erdogmus, Morisio, Torchiano, 2005). The TDD approach guides developers along a series of iterative steps to optimise the development and testing processes. The first step stipulates that a simple test must be established for an isolated requirement. Such a test will inevitably fail to compile due to the absence of production code. The process therefore continues with the development of just enough code to enable the test to compile, producing a failing result. Once the test compiles, the production code for the requirement in question can be fully implemented so as to pass the test. The final stage of the TDD process involves the refactoring of both the test and production code in an attempt to reduce duplicate code and ensure that the existing design is optimal (Beck, 2004). As the code is refactored, the tests should be run continually to guarantee that the code continues to behave as expected. This cycle is then repeated for every required function. For each function the developer must create a test, get that test to fail, write code to pass the test and then refactor the implemented code, whilst ensuring that the test, and all previously established tests, still pass (see Figure 1.1 for a simplified diagrammatic explanation).
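To make the cycle concrete, the following minimal sketch condenses one red-green-refactor iteration into a single Python file using the standard unittest module. The requirement (totalling a shopping cart), the function name and the test cases are hypothetical details invented purely for illustration, not an example drawn from the literature cited above.

```python
import unittest


# Step 1 (red): write a test for a small, hypothetical requirement before any
# production code exists. Run on its own at this point, it fails because
# shopping_cart_total has not yet been implemented.
class ShoppingCartTotalTest(unittest.TestCase):
    def test_total_of_empty_cart_is_zero(self):
        self.assertEqual(shopping_cart_total([]), 0)

    def test_total_sums_item_prices(self):
        self.assertEqual(shopping_cart_total([2.50, 1.25]), 3.75)


# Step 2 (green): write just enough production code to make the tests pass.
def shopping_cart_total(prices):
    return sum(prices)


# Step 3 (refactor): tidy the test and production code while re-running the
# tests to confirm the behaviour is unchanged, then repeat the cycle for the
# next requirement.

if __name__ == "__main__":
    unittest.main()
```

In practice the test would be written, run and seen to fail before shopping_cart_total existed; the test and production code are only shown together here to keep the sketch self-contained.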

The question naturally arises: what functionality should be tested, and how complex should each test be? In answer to this, TDD dictates that tests should be as simple as possible, each focusing on a discrete behaviour. The principle is to ensure that each behaviour is tested in isolation (Astels, 2003). Consequently, TDD implies that tests should be designed to minimise dependencies; methods or components outside the context of the behaviour in question should not be exercised by the test. This ensures that the developed section of code behaves as expected in isolation (Janzen, Saiedian, 2008). With regard to defining the boundaries surrounding a particular test, test doubles (such as mocks, stubs and fakes) can be utilised to isolate the behaviour being tested, ensuring the test is kept as simple as possible (Astels, 2003).
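As an illustration of this isolation principle, the sketch below uses a mock object from Python's standard unittest.mock module to stand in for an external dependency, so that only the behaviour under test is exercised. The ReminderService class, its email gateway and the seven-day threshold are hypothetical details invented for this example.

```python
import unittest
from unittest.mock import Mock


# Hypothetical production code: the behaviour under test (deciding when to
# send a reminder) depends on an external email gateway.
class ReminderService:
    def __init__(self, email_gateway):
        self.email_gateway = email_gateway

    def remind(self, user, days_overdue):
        if days_overdue > 7:
            self.email_gateway.send(user, "Your loan is overdue")
            return True
        return False


class ReminderServiceTest(unittest.TestCase):
    def test_overdue_user_is_emailed(self):
        # A mock stands in for the real gateway, so the test exercises only
        # the reminder logic and involves no real email infrastructure.
        gateway = Mock()
        service = ReminderService(gateway)

        self.assertTrue(service.remind("alice", days_overdue=10))
        gateway.send.assert_called_once_with("alice", "Your loan is overdue")

    def test_recently_due_user_is_not_emailed(self):
        gateway = Mock()
        service = ReminderService(gateway)

        self.assertFalse(service.remind("bob", days_overdue=2))
        gateway.send.assert_not_called()


if __name__ == "__main__":
    unittest.main()
```

Because the gateway is replaced by a test double, the tests remain fast and focused on a single behaviour, which is exactly the isolation TDD asks for.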

Thus far, this paper has considered TDD as a development process employed at the genesis of a project to ensure the program code is testable from the bottom up. However, TDD can also be applied part way through a project, or indeed to legacy code. Under such circumstances, TDD can be utilised to great benefit, via the refactoring of existing code and the introduction of numerous function-specific tests (Linnamaa, 2008). Applying TDD to an existing project does, however, generate additional risks, particularly if the code being altered has no pre-existing tests. Nevertheless, to deliver the benefits associated with TDD, this technical debt must be paid.
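One common way to start paying down that debt is to pin the current behaviour of the legacy code with characterisation tests before refactoring it, so that any change in behaviour is detected immediately. The sketch below assumes a small, invented legacy pricing function; the function name and the 20% tax rate are hypothetical.

```python
import unittest


# Hypothetical legacy function whose behaviour is poorly documented but must
# not change during refactoring.
def legacy_price_with_tax(net_price):
    # Convoluted legacy logic, kept as-is for the example.
    total = net_price
    total = total + (net_price * 20) / 100.0
    return round(total, 2)


class LegacyPricingCharacterisationTest(unittest.TestCase):
    """Characterisation tests: record what the legacy code currently does,
    so that subsequent refactoring can be checked against this behaviour."""

    def test_adds_twenty_percent_tax(self):
        self.assertEqual(legacy_price_with_tax(100.0), 120.0)

    def test_rounds_to_two_decimal_places(self):
        self.assertEqual(legacy_price_with_tax(0.99), 1.19)


if __name__ == "__main__":
    unittest.main()
```

With these tests in place, the legacy function can be restructured with some confidence, and new functionality can then be added test-first in the usual TDD cycle.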

The main justification behind the TDD process is that it guarantees the testability of production code, since tests are used to guide the creation of the software. Naturally, extremely high code coverage is achieved, since all of the code is testable (Linnamaa, 2008). This in turn implies that the developed code will be clean and concise, since isolating code for testing typically requires that the code be simple and well written (Beck, 2004). Additionally, by creating the test first, TDD places emphasis on the specific behaviour in question and not on the interface or outside behaviour (Martin, 2007), further encouraging the development code to be uncomplicated and well written. The process of testing first also encourages good program design, in that developers are required to think about the code from a testability and task-oriented perspective right from the beginning (Beck, 2004). Before the code is written, the developer must assess why the code is being created and what result must be returned. This helps to retain focus on the task at hand, and also helps to document the code as it is written, since tests provide working documentation for the developed code (Ambler, 2008). Moreover, the principle of testing as the code is being developed assists programmers in detecting and fixing bugs early in the project’s life cycle, since tests provide instant feedback concerning the behaviour of the implemented code (Erdogmus, Morisio, Torchiano, 2005). Similarly, any attempts to alter the code, or optimise its design, can easily and quickly be tested; thus, if the existing program logic is broken, the breakage can quickly be detected and resolved (Beck, 2004). As a result, TDD greatly reduces regression testing time, since the testing procedures are already in place (Ambler, 2008), which could dramatically reduce project costs by minimising the time spent on the testing phase of the program’s life cycle. Additionally, the high code coverage and rigorous testing instil confidence in the quality and accuracy of the program code; the code is tested as it is developed, thus ensuring it works as intended (Erdogmus, Morisio, Torchiano, 2005). In this respect, TDD acts as a psychological tool for developers, reassuring them that the software works (Linnamaa, 2008). Furthermore, TDD introduces a great deal of quality into the code’s design from day one, since testable code requires components to be loosely coupled, thus promoting an agile and extensible structure (Natté, 2009).

However, there are also a number of disadvantages associated with TDD procedures. Most notably, the use of TDD can greatly slow the development process by imposing strict testing procedures (Erdogmus, Morisio, Torchiano, 2005), and the costs associated with the project can consequently escalate due to the delay in development. Additionally, TDD places a significant maintenance burden on the developer, since each of the numerous tests will require maintenance as the code being tested develops and changes (Linnamaa, 2008). This requirement to continuously tweak the test code to accommodate changes in the program code further delays the development process. Moreover, many projects evolve during the course of their development; at the beginning of a project the solution is not always foreseeable. Consequently, developers will be forced to redo tests, creating additional time delays (stackoverflow.com). Furthermore, many dependencies between classes and methods need to be broken to create testable code using TDD. For larger, more complex projects this breaking of dependencies to isolate individual test cases can be extremely difficult and may in fact add to the overall complexity of the project (Erdogmus, Morisio, Torchiano, 2005).

In summary, it is clear that there are a number of distinct drawbacks which can arise from the implementation of TDD, particularly with regard to the amount of time invested in the development phase. Nevertheless, despite these undesirable characteristics, TDD offers a logical and structured design approach, forcing developers to focus on the task at hand. Implicitly, TDD encourages developers to produce simple, maintainable code. TDD is therefore an extremely effective development style, helping to ensure the development process remains structured and agile.


APPENDIX:


Figure 1.1:



REFERENCES:

S. W. Ambler, (2008), Introduction to Test Driven Design (TDD), AgileData, accessed on the 24.11.2010, http://www.agiledata.org/essays/tdd.html

S. W. Ambler, (2007), Test-Driven Development of Relational Databases, IEEE Computer Society, Vol. 24, No. 3, p. 37 – 43.

D. Astels, (2003), Test-Driven Development: A Practical Guide, Prentice Hall PTR.

K. Beck, (2004), Test-Driven Development: By Example, Pearson Education, 5th Edition.

H. Erdogmus, M. Morisio, M. Torchiano, (2005), On the Effectiveness of the Test-First Approach to Programming, IEEE Transactions on Software Engineering, IEEE Computer Society, Vol. 31, Iss. 3, p. 226 – 237.

D. S. Janzen, H. Saiedian, (2005), Test-Driven Development: Concepts, Taxonomy, and Future Direction, IEEE Computer Society, Vol. 38, Iss. 9, p. 43 – 50.

D. S. Janzen, H. Saiedian, (2008), Does Test-Driven Development Really Improve Software Design Quality?, IEEE Software, Vol. 25, No. 2, p. 77 – 84.

L. Linnamaa, (2008), Test-Driven Development, University of Helsinki Computer Science Department, http://www.cs.helsinki.fi/u/linnamaa/linnamaa-Test-Driven-Development-final.pdf

R. C. Martin, (2007), Professionalism and Test-Driven Development, IEEE Computer Society, Vol. 24, No. 3. p. 32 - 36.

M. Natté, (2009), Introduction to Unit Testing, .Net blogs, accessed on 26.11.2010, http://martijnnatte.wordpress.com/2009/07/09/introduction-to-unit-testing/

stackoverflow.com, accessed on 26.11.2010, http://stackoverflow.com/questions/64333/disadvantages-of-test-driven-development

The cloud: whose fault is it when errors occur, and what is the way forward?

When considering a situation in which data is stored with a third party data service (in a ‘cloud’), if the data becomes lost or corrupted it can be difficult to determine who is at fault. Hence, when determining which party caused the problem it is necessary to examine each case independently.

To assess how the culprit of a data problem is determined, it is necessary to examine different potential scenarios. For instance, if data were to become lost or corrupted because of an operational error, such as a failure to back up the data, or a hardware failure, such as a server crash, it is clear that blame should primarily reside with the service provider, since such issues are out of the user’s control. However, it is also possible that the user could be partly responsible if an inappropriate technical system design was in place; for instance, critical data which cannot tolerate downtime should be part of an architecture which prevents downtime. Thus it is possible that both the user and the provider are to blame. This was the case when the Amazon network, in particular the Elastic Compute Cloud, failed: only users with inappropriate system architectures suffered significant data problems as a result of the downtime, so both Amazon and users with inappropriate IT structures were at fault (K. Maurer, 2011). In addition, there are cases where data loss or corruption is entirely the fault of the user; for instance, the user could directly cause data problems through a problematic system architecture, or the inappropriate use of the provided services. Furthermore, it is possible that neither party is responsible for the loss or corruption of data. Consider an external attack, such as phishing or hacking, which breaks through reasonable security measures: whilst the failure occurs at the provider’s end, it is largely not the provider’s fault, and both the service provider and the user are victims. Such was the case when the Playstation Network was hacked in April of this year (BBC News, 2011). Therefore, it is clear that the blame for a given problem could lie with the user, the service provider, or both, which further emphasises the need to play the blame game on a case-by-case basis.

Whilst it is not possible to draw blanket conclusions regarding who is to blame for data loss or corruption, it is possible to deduce where liability lies. In the majority of cases, regardless of who is to blame, the user is likely to be held responsible (M. Mowbray, 2009). This is largely due to heavily one-sided user agreements, which typically demonstrate judicious application of disclaimers to ensure minimal responsibility is accepted (M. Mowbray, 2009). Consequently, the user will have to bear responsibility for the majority of failures, without compensation, even if they are entirely blameless. This issue is demonstrated by the recent news headlines involving Sony and the hacking of the Playstation Network (BBC News, 2011). Although both parties were victims in this instance, since Sony cannot be held liable for the security failure (Sony is only required to take “appropriate measures” (Sony Playstation, 03.05.2011)), it is Sony’s users who must bear responsibility for the failure. Further examples of responsibility falling on users include the collapse of Linkup in 2008 (Richard Chow et al., 2009), as well as the loss of personal information by Danger in 2009 which affected millions of T-Mobile customers (J. Kincaid, 2009). In both cases, the users were held responsible despite being entirely blameless. Hence, whilst it is possible in some cases to determine that the provider is to blame, it is very unlikely that the provider will be held responsible.

In summary, it is clear that there is a great deal of asymmetry with respect to where responsibility lies. For significant progress to be made, a shift to a more balanced system must take place. Such a shift would make it easier to determine whether the provider is at fault through more transparent guarantees, and may also reduce the occurrence of data errors in the long run by encouraging a greater quality of service (Richard Chow et al., 2009). The way forward therefore involves greater legal obligations regarding data security and service provision, such as a promise to utilise cryptography. In addition, mechanisms such as business insurance may be needed to mitigate the risk associated with greater contractual obligations.





References:


M. Mowbray, 2009, "The Fog over the Grimpen Mire: Cloud Computing and the Law", 6:1 SCRIPTed 129, http://www.law.ed.ac.uk/ahrc/script-ed/vol6-1/mowbray.asp

Richard Chow et al., 2009, Controlling Data in the Cloud: Outsourcing Computation without Outsourcing Control, Proceedings of ACM CCSW’09, November 13, www.parc.com/publication/2335/controlling-data-in-the-cloud.html

K. Maurer, 2011, Amazon’s Cloud Collapse: The Blame Game and the Future of Cloud Computing, April, http://blog.contentmanagementconnection.com/Home/32236

J. Kincaid, 2009, T-Mobile Sidekick Disaster: Danger’s Servers Crashed, And They Don’t Have a Backup, October, http://techcrunch.com/2009/10/10/t-mobile-sidekick-disaster-microsofts-servers-crashed-and-they-dont-have-a-backup/

L. H. Mills, 2009, Legal Issues Associated with Cloud Computing, Nixon Peabody attorneys at law LLP, May, http://www.secureit.com/resources/Cloud%20Computing%20Mills%20Nixon%20Peabody%205-09.pdf

BBC News Technology, 2011, Playstation outage caused by hacking attack, 25 April, http://www.bbc.co.uk/news/technology-13169518

Sony Playstation, accessed at 15:46 03.05.2011, http://legaldoc.dl.playstation.net/ps3-eula/psn/e/e_privacy_en.html

The legal implications of data and applications being held by a third party are not well understood. What are the issues?

The third-party provision of computational and network resources for the purpose of storing data and applications comes under the umbrella term of cloud computing. Conceptually, cloud computing can be thought of as a remote computing utility: an underlying delivery mechanism enabling data and software to be accessed remotely via the internet (M. Mowbray, 2009). The ideas underpinning cloud computing have become increasingly popular over recent years, supported by a larger, more general architectural shift within the computer industry towards increased flexibility, mobility and cost efficiency (R. Buyya, C. S. Yeo, S. Venugopal, 2008). However, despite significant support for the theory behind cloud computing, it has been slow to develop in practice (Richard Chow et al., 2009). The main reason for this delayed progression is an air of fear and uncertainty surrounding the storage of sensitive data and applications outside of the user’s control (Richard Chow et al., 2009). These concerns discourage many companies from storing their data in the ‘cloud’, impeding momentum and potentially compromising the concept of cloud computing (R. Buyya, C. S. Yeo, S. Venugopal, 2008).

A key source of the concerns surrounding cloud computing is the issue of data privacy laws differing across country borders. An organisation utilising cloud computing services is likely to find its data stored in a country other than its own. The data is therefore bound by the privacy laws and jurisdiction of the country within which it is stored (M. Mowbray, 2009). Hence, in cases where data does not completely conform to these foreign laws, jurisdictional and legal disputes are likely to arise. This is clearly an unattractive factor for organisations considering whether or not to add data to the ‘cloud’.

Additionally, encompassed within this wider jurisdictional issue is the potential for foreign governments to access the data; the data is put at the mercy of the data privacy laws of the country within which it is stored (M. Mowbray, 2009). This issue is exacerbated by the fact that many cloud computing services are based in countries such as the US, where laws exist to enable government officials to access data without notifying the data owners, for example the USA PATRIOT Act of 2001 (M. Mowbray, 2009). This point is illustrated by the reluctance of the French government to allow officials to use Blackberry email devices, since these devices use servers based in the US and the UK (M. Mowbray, 2009). Moreover, some regions, such as the EU, have stringent rules concerning the movement of data across borders (European data protection legislation), which creates further problems (J. Kiss, 2011). Although this issue on its own is unlikely to discourage organisations from adopting cloud computing, when considered as part of the wider jurisdictional issue it is clear to see why many organisations are reluctant to participate.

A further reason for concern about cloud computing stems from the highly one-sided nature of current user agreements. The current trend in the user agreements of companies offering cloud computing services is to offer very little in terms of assurance should data be lost or become corrupted (M. Mowbray, 2009). These user agreements also ensure minimal liability with respect to the security of data; most simply promise ‘appropriate measures’ (Microsoft Terms of Use, 02.05.2011). This point is nicely demonstrated by the Amazon Web Services terms of use, which accept no liability “for any unauthorized access or use, corruption, deletion, destruction or loss of content or applications” (Amazon Web Services Terms of Use, 01.05.2011). This serves to discourage those considering adding data to the ‘cloud’, since little responsibility is taken by cloud service providers to ensure the safety or security of the data they maintain. Essentially, users are losing control over operational issues such as backing up data and data recovery, without receiving any guarantees regarding data safety or security from service providers (L. H. Mills, 2009).

Similarly, very little guarantee is made regarding the continued and appropriate provision of ‘cloud’ services. For instance, the Google terms of service claim zero liability in the case of incomplete or unsatisfactory service provision: “Google, its subsidiaries and affiliates, and its licensors do not represent or warrant to you that: (A) your use of the service will meet your requirements, (B) your use of the services will be uninterrupted, timely, secure, or free from error. (C) Any information obtained by you as a result of your use of the service will be accurate or reliable, and (D) that defects in the operation of functionality of any software provided to you as part of the services will be corrected” (Google Terms of Service, 01.05.2011). This underlying reluctance to assume responsibility and guarantee a certain standard of service raises questions as to the benefits of storing data in the ‘cloud’. Users are again forced to relinquish control without being compensated with guarantees regarding the standard of service provision (L. H. Mills, 2009). This encourages concerns and acts to discourage organisations and individuals from investing their data in the ‘cloud’.

A further issue which serves to hinder the progress of cloud computing relates to the use of subcontractors and the sharing of information. Most cloud service providers subcontract much of their data storage for efficiency and cost-minimisation purposes (M. Mowbray, 2009). Beyond potential integration issues, this sharing of data may raise additional jurisdictional issues if subcontractors are located in different countries (M. Mowbray, 2009). This issue is amplified by the lack of user say with regard to the selection and use of subcontractors, since most cloud providers simply use stylised blanket statements contained within the terms of use; the Google terms of service, for example, reserve the “right for Google to make such content available to other companies, organisations or individuals with whom Google has relationships for the provision of syndicated services” (Google Terms of Service, 01.05.2011). More importantly, however, the sharing of data creates additional opportunities for the data to be lost, corrupted or stolen (Richard Chow et al., 2009). These factors serve to increase fears regarding the loss of control and further discourage organisations from accepting the ‘cloud’ as the future of data storage.

In summary, it is clear that the concerns surrounding cloud computing stem from a perceived loss of control. This loss of control is rooted in the jurisdictional issues arising from overseas data storage, coupled with heavily one-sided user agreements which fail to provide adequate reassurance as to the safety and security of the data being maintained. For third-party data storage to fully mature, ‘cloud’ providers such as Amazon and Microsoft need to bear a greater burden of responsibility for the safety and security of stored data (Richard Chow et al., 2009), as this would alleviate many of the fears associated with cloud computing. As competition in cloud markets increases, this is likely to happen, with providers seeking to differentiate themselves on service quality by offering more attractive guarantees. Additionally, measures may need to be taken to regulate cloud computing, to adequately mitigate the risks to which users are exposed (P. T. Jaeger, J. Lin, J. M. Grimes, 2008).


References:

Google Terms of Service, accessed at 13:47 01.05.2011, http://www.google.com/accounts/TOS

M. Armbrust et al., 2010, A View of Cloud Computing, Communications of the ACM, April, Vol. 53, No. 4.

M. Armbrust et al., 2009, Above the clouds: A Berkeley View of Cloud Computing, February 10, University of California at Berkeley, Technical report no: UCB/EECS-2009-28, http://www.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-28.html

Richard Chow et al., 2009, Controlling Data in the Cloud: Outsourcing Computation without Outsourcing Control, Proceedings of ACM CCSW’09, November 13, www.parc.com/publication/2335/controlling-data-in-the-cloud.html

R. Buyya, C. S. Yeo, S. Venugopal, 2008, Market-Oriented Cloud Computing: Vision, Hype, and Reality for Delivering IT Services as Computing Utilities, hpcc, pp.5-13, 2008 10th IEEE International Conference on High Performance Computing and Communications

M. Mowbray, 2009, "The Fog over the Grimpen Mire: Cloud Computing and the Law", 6:1 SCRIPTed 129, http://www.law.ed.ac.uk/ahrc/script-ed/vol6-1/mowbray.asp

Microsoft Terms of Use, accessed at 17:49 02.05.2011, http://www.microsoft.com/About/Legal/EN/US/IntellectualProperty/Copyright/default.aspx

Amazon Web Services Terms of Use, accessed at 15:33 01.05.2011, http://aws.amazon.com/terms/

J. Kiss, 2011, Keeping your legal head above the cloud, January, the Guardian , accessed at 18:19 02.05.2011, http://www.guardian.co.uk/media-tech-law/cloud-computing-legal-issues

L. H. Mills, 2009, Legal Issues Associated with Cloud Computing, Nixon Peabody attorneys at law LLP, May, http://www.secureit.com/resources/Cloud%20Computing%20Mills%20Nixon%20Peabody%205-09.pdf

P. T. Jaeger, J. Lin, J. M. Grimes, 2008, Cloud Computing and Information Policy: Computing in a Policy Cloud?, Journal of Information Technology & Politics, Vol. 5, http://pdfserve.informaworld.com/69309__906675947.pdf

What responsibilities does Google take for storing your data?

The terms and conditions governing Google Docs are encompassed within the broad terms and conditions governing the use of all Google services, together with an extension document which relates solely to Google Docs (the Additional Terms of Service). Principally, Google accepts very little in terms of responsibility; the terms of service are replete with disclaimers removing the burden of responsibility from Google.

The main responsibilities adopted by Google are outlined in section 11 of the Additional Terms of Service document (which overrides section 11 of the original terms of service). Section 11 states that whilst copyright and ownership of submitted data are retained by the author, by submitting the data the author grants Google the ability to “reproduce, adapt, modify, translate, publish, publicly perform, publicly display and distribute” the data as Google deems appropriate (Google Additional Terms of Service, 27.04.2011). This condition grants Google significant power over the data it receives; Google is entitled to alter any data it receives to suit its own agenda. That aside, this licence assigns Google the responsibility of displaying and distributing the data as part of the provision of the Google service.

However, this responsibility is significantly diminished by both section 4 and section 15 of the agreement. Section 4 contains a number of clauses enabling Google to revoke its provision of service with no justification required: “Google may stop providing the services to you or to users generally at Google’s sole discretion, without prior notice” (Google Terms of Service, 27.04.2011). Similarly, section 15 claims zero liability in the case of incomplete or unsatisfactory service provision: “Google, its subsidiaries and affiliates, and its licensors do not represent or warrant to you that: (A) your use of the service will meet your requirements, (B) your use of the services will be uninterrupted, timely, secure, or free from error” (Google Terms of Service, 27.04.2011). Hence, whilst Google assumes responsibility for the provision of its services and the display and distribution of the data it receives, Google remains uncommitted to upholding this responsibility. That said, Google does still have some liabilities regarding the service it provides; for instance, UK users are partially protected by UK consumer protection laws (M. Mowbray, 2009).

As previously alluded to, the vast majority of the Google terms of service concerns removing liability from Google. For instance, section 15 of the broad terms of service states that Google will not be held responsible for any loss or corruption of data. Similarly, section 15 also prevents Google from being held responsible for any negative effects arising from the use of its services, be they direct or indirect. For example, Google is not responsible for any “intangible loss”, such as “business reputation” (Google Terms of Service, 27.04.2011).

Likewise, Google makes no promises with regard to monitoring the content of the data it utilizes, nor the enforcement of the Google program policies in the case of inappropriate data. Section 8 of the Google terms of service states that Google has “no obligation to pre-screen, review, flag, filter, modify, refuse or remove any or all Content”. Furthermore, section 8 goes on to assert that responsibility for the content of data remains with the author/user. Thus, Google cannot be held liable for any inappropriate data it displays or distributes. More importantly, it should be noted that this refusal of responsibility for the content of data ensures that Google cannot be held liable should any legal action arise concerning ownership of the data. This is further achieved via the requirement that the user confirm they possess the “rights, power and authority” necessary to submit the data in question (Google Terms of Service, 27.04.2011).

With regard to the storage and security of user data and personal information, section 7 dictates that data protection practices are governed by the Google privacy policy. The privacy policy offers very little in terms of how data will be protected beyond the vague promise of “appropriate security measures to protect against unauthorized access to or unauthorized alteration, disclosure or destruction of data” (Google Privacy Policy, 29.04.2011). This clearly promises very little, acting as a ‘get out clause’ in the event of a security breach. Interestingly, the wording of the Google privacy policy resembles that of the Sony Playstation Network privacy policy, which similarly promises to “take appropriate measures to protect your personal information” (Sony Playstation, 01.05.2011). The recent news headlines involving Sony and the hacking of the Playstation Network (BBC News, 2011) therefore illustrate how little responsibility Google accepts for keeping user data and information secure from unauthorized access.

A further important aspect of the terms of service, when considering the responsibilities residing with Google, concerns the ability to instigate changes to the terms and conditions. Section 19 of the original document dictates that changes can and will be made to the terms of service “from time to time”, after which continued use of Google Docs is interpreted as acceptance of the amended terms. This is clearly a highly important clause when considering the responsibilities Google acknowledges, since the sparse and limited responsibilities which Google does accept can be retracted or amended with minimal effort being made to inform users.

From an in-depth examination of the Google terms of service it is clear that Google is extremely careful to avoid responsibility for the storage and integrity of user data. All in all, Google goes to great lengths to ensure minimal liability surrounding the data it is given; judicious application of disclaimers ensures that what little responsibility is acknowledged is rendered almost moot.

References:

Google Terms of Service, accessed at 12:15 27.04.2011, http://www.google.com/accounts/TOS

Google Additional Terms of Service, accessed at 13:44 27.04.2011, http://www.google.com/google-d-s/intl/en/addlterms.html

Google Privacy Policy, accessed at 19:11 29.04.2011, http://www.google.com/intl/en/privacy/privacy-policy.html

Sony Playstation, accessed at 15:46 01.05.2011, http://legaldoc.dl.playstation.net/ps3-eula/psn/e/e_privacy_en.html

M. Mowbray, 2009, "The Fog over the Grimpen Mire: Cloud Computing and the Law", 6:1 SCRIPTed 129, http://www.law.ed.ac.uk/ahrc/script-ed/vol6-1/mowbray.asp

BBC News Technology, 2011, Playstation outage caused by hacking attack, 25 April, http://www.bbc.co.uk/news/technology-13169518