Why should I pay you to test your own work?
This is a question I have heard a lot over the years when discussing testing budgets with clients. To the uninitiated, it sounds like a fair question. However, anyone involved in software development knows how complex and time-consuming testing can be. Testing is, in fact, one of the most important parts of any software development project.
A large e-commerce platform is an incredibly complex thing with millions of lines of code, gigabytes of data and many integration points. There are so many inter-linked moving parts, so many links in the chain, that it is very easy for something to go wrong. The application will be used in millions of different ways through a multitude of browsers across numerous desktop and mobile devices. The development project will have lasted at least 6 months with many different people working on it. The number of areas and scenarios that could be tested is almost limitless. It’s a wonder anything works at all!
Testing can be split into a number of different areas, and each area is important to consider. Every project is a little different; some clients like to take on much of the testing themselves, others like to outsource it, while others expect their developer to do it all. Testing is also not a fixed entity; you can do a lot of testing or a little. The more you test, the more you de-risk the project, but the more time it will take and the more it will cost.
A unit test is one which tests small ‘units’ of code to ensure that they function as expected. For example, when a form is submitted, it should save the submitted details into a database table. It is a standalone test which specifically, and only, ensures that the unit functions as expected. Using a true test-driven development methodology, a developer will first write a test before writing any code, so that the code can be considered complete only when the test passes. In practice, unit testing is often applied only to key areas of the application to ensure that core functions work as expected. While unit testing can reduce the likelihood of functional issues occurring, it can also increase development time.
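As a sketch of the kind of unit test described above, the example below (in Python, with every name invented for illustration) checks that a stand-in form handler saves the submitted details into an in-memory database table:

```python
import sqlite3

def save_contact(conn, name, email):
    """Stand-in for the form handler: persist the submitted details."""
    conn.execute("INSERT INTO contacts (name, email) VALUES (?, ?)", (name, email))
    conn.commit()

def test_submitted_details_are_saved():
    # An in-memory database keeps the test standalone and fast.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE contacts (name TEXT, email TEXT)")
    save_contact(conn, "Jane Doe", "jane@example.com")
    row = conn.execute("SELECT name, email FROM contacts").fetchone()
    assert row == ("Jane Doe", "jane@example.com")

test_submitted_details_are_saved()  # a runner such as pytest would normally collect this
```

The test touches nothing outside the unit it exercises, which is what makes it standalone: if it fails, the fault is in that one unit.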
You will probably hear your development agency talk about smoke testing a lot. A smoke test is a pragmatic subset of test cases which cover the key user journeys and functions throughout your application. At the very least, your developer should be expected to carry out smoke tests before handing anything over to you for UAT.
User Interface (UI) testing can be very complex and time-consuming. The huge range of mobile, tablet and desktop devices, operating systems and browsers that will be used to access a website means that comprehensively testing every combination manually is almost impossible. Because of the vast number of variations that need to be covered, UI testing is a perfect candidate for automated testing. Automated test tools are able to follow a scripted journey through your website and test whether the expected results are achieved. They can also record each journey so that it can be played back. Although this method is not perfect, it can significantly reduce the number of major UI issues a website may face.
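To illustrate what a scripted journey is, the sketch below replays a hypothetical checkout journey against a stub “site” dictionary. Real tools such as Selenium or Playwright work on the same principle but drive an actual browser; every page, link and step here is invented:

```python
# Hypothetical sketch: a journey is just an ordered script of steps that an
# automated tool replays, logging each step so failures can be played back.
site = {
    "/product/123": {"text": "Blue Widget", "links": {"add-to-basket": "/basket"}},
    "/basket": {"text": "Your basket (1 item)", "links": {}},
    "/checkout": {"text": "Order summary", "links": {}},
}

checkout_journey = [
    ("visit", "/product/123"),
    ("click", "add-to-basket"),
    ("visit", "/checkout"),
    ("expect_text", "Order summary"),
]

def run_journey(site, steps):
    """Replay a journey step by step, recording each step for later playback."""
    log, page = [], None
    for action, target in steps:
        if action == "visit":
            page = site[target]
        elif action == "click":
            page = site[page["links"][target]]   # follow the named link
        elif action == "expect_text":
            assert target in page["text"], f"expected text missing: {target!r}"
        log.append((action, target))
    return log

print(len(run_journey(site, checkout_journey)), "steps passed")  # 4 steps passed
```

Because the journey is data rather than code, the same script can be replayed across many browser and device combinations, which is where the automation pays off.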
Some 3rd party testing services such as Bug Finders offer a crowd-sourced service where hundreds of freelance human testers from around the world are used to test a website and are paid when they find an issue. This approach can be a relatively cost-effective way of testing your application across hundreds of device/platform/browser combinations. It is normal for a test cycle to result in around 200 issues being raised. The challenge is often in categorising and prioritising the issues so that you focus your resources on dealing with the most important ones. Every website will have a constant backlog of low-level issues which are unlikely ever to be resolved.
User Acceptance Testing (UAT) is a critical part of any development project and involves the client carrying out full end-to-end testing of the platform prior to go live. UAT is the process that I see under-estimated most often. It is also the part of a project which is first to suffer when timelines are tight. However, cutting UAT short is likely to lead to a higher rate of failure. For any new website build, we would advise that at least 2 months of UAT is planned. Your e-commerce website is only one part of your commerce business, and the end-to-end process involving search, checkout, order management, payment, despatch, customer services, finance and all of the other parts of the chain needs to be tested.
UAT is often confused or merged with SIT (System Integration Testing), where you will be specifically testing the integration between multiple systems. SIT is the part of end-to-end testing which ensures that all parts of the chain are working correctly together.
Good UAT involves the creation of test cases and test plans. These generally take the form of a set of scripts (a script being a set of tasks to run through) that a manual tester will run through and either pass or fail the test according to the outcome. It is not unusual for an end-to-end UAT test plan to include over 500 test cases.
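As an illustration of what a test case in such a plan might look like, the sketch below models two hypothetical cases (ids, steps, codes and outcomes are all invented) and rolls the results up so progress through the plan can be reported:

```python
# Illustrative shape of a UAT test plan: each case is a script of manual
# steps plus an expected outcome; the tester records pass or fail.
test_plan = [
    {
        "id": "TC-041",
        "title": "Guest checkout with card payment",
        "steps": [
            "Add any product to the basket",
            "Proceed to checkout as a guest",
            "Pay with a test card",
        ],
        "expected": "Order confirmation page and email are produced",
        "result": "pass",
    },
    {
        "id": "TC-042",
        "title": "Apply an expired discount code",
        "steps": ["Enter code SUMMER10 at the basket"],
        "expected": "Code is rejected with a clear message",
        "result": "fail",
    },
]

def summarise(plan):
    """Roll results up so progress through the plan can be reported."""
    passed = sum(1 for case in plan if case["result"] == "pass")
    return {"total": len(plan), "passed": passed, "failed": len(plan) - passed}

print(summarise(test_plan))  # {'total': 2, 'passed': 1, 'failed': 1}
```

In practice a plan of 500+ cases lives in a spreadsheet or test-management tool rather than code, but the structure is the same: script, expected outcome, recorded result.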
The A in UAT is one of the reasons why it is so important. At the end of the UAT process, you will generally be deemed to have accepted the application, so it is important that you have thoroughly tested it to ensure that it works in exactly the way you expected. This does not mean that undiscovered bugs will not be covered under warranty, but if there is functionality that does not work in the way that you expected, this needs to be picked up in UAT. The other reason why it is so important is that it is the final chance to pick up issues before the site goes live. Any bugs and issues are likely to negatively impact the user experience.
UAT requires a lot of effort on the part of the client, something that is often underestimated. Some clients use external testing agencies to support them during UAT, which can significantly de-risk a project where the client does not have the manpower to carry out UAT effectively.
I am sometimes very surprised by how lightly some retailers take security testing. It is not unusual to find that the retailer does not know when they last ran a penetration test on their web platform. These are generally the ones who have not yet been hit with a cyber attack (or don’t yet know that they have been hit). In the current climate where cyber crime continues to grow in frequency and sophistication, and especially with GDPR on the horizon in Europe, security testing is increasingly important. All e-commerce web platforms should be penetration tested by a specialist 3rd party at least annually, but ideally twice a year. It is also advisable that your application is scanned for vulnerabilities on a regular basis using specialist software such as Nessus. At Envoy we tend to scan our clients’ web platforms on a weekly basis to ensure that application vulnerabilities are picked up very quickly. At the very least you should carry out application security scans before each release to production. It is no good waiting six months for the next penetration test when you have introduced a new application vulnerability in the meantime.
Performance testing is generally used to determine how much traffic, page requests, concurrent users and order volume your website can handle. It is a harder process than you may imagine as, to accurately test, you need to mimic real user behavior and, as you will know, real users do a lot of different things. The best you can do is mimic your key user journeys such as search, add to basket and checkout. You ideally want to carry out load testing on your production environment rather than a staging environment as it will give you a much truer picture, but this is also likely to take your platform offline at some point during the test.
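The shape of such a load test can be sketched as follows. The journey here is a stub standing in for a real search, add-to-basket and checkout flow; real load tests drive these journeys over HTTP with dedicated tools such as JMeter, Gatling or k6, and all names and figures below are invented:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def checkout_journey():
    """Stand-in for one user's search -> add to basket -> checkout journey."""
    time.sleep(0.01)   # pretend the journey takes some time end to end
    return True        # True means the journey completed successfully

def load_test(journey, concurrent_users, journeys_per_user):
    """Run many copies of a journey in parallel and summarise the results."""
    started = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        results = list(pool.map(lambda _: journey(),
                                range(concurrent_users * journeys_per_user)))
    elapsed = time.perf_counter() - started
    return {
        "journeys": len(results),
        "failures": results.count(False),
        "journeys_per_second": round(len(results) / elapsed, 1),
    }

print(load_test(checkout_journey, concurrent_users=20, journeys_per_user=5))
```

The interesting numbers are the throughput and failure rate as you ramp the concurrency up: the point where failures appear or response times balloon is your capacity ceiling.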
Most retailers tend to carry out load tests once a year, normally before peak trading periods such as Black Friday or Christmas. The problem that this can cause is that, since the last annual test, a large number of changes may have been made to the application, some of which may have had an incremental impact on performance. If an annual load test shows a drop in performance compared to the previous year, it is very hard to determine which change or changes over the past year have contributed to that drop. This also may not give you enough time to resolve the performance issues before peak trading starts.
To counter this, it is advisable to carry out performance benchmarks prior to each new code release. These do not need to be performed on a production environment as long as each test is carried out on the same environment, as the aim is to determine whether performance has increased or decreased relative to the last release. This approach enables development teams to understand where any increases or decreases in performance are coming from. This, of course, takes time and therefore will increase development time and costs.
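A minimal sketch of such a release-over-release benchmark, assuming a hypothetical code path and an illustrative baseline figure:

```python
import time

def render_search_page():
    """Stand-in for the journey being benchmarked (e.g. a search request)."""
    time.sleep(0.005)

def benchmark(fn, runs=20):
    """Average seconds per run over a fixed number of runs."""
    started = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - started) / runs

def within_tolerance(current_s, baseline_s, tolerance=0.10):
    """True if the new timing is no more than 10% slower than the baseline."""
    return current_s <= baseline_s * (1 + tolerance)

baseline = 0.0060                       # seconds per run, recorded at the last release
current = benchmark(render_search_page)
print("release gate:", "pass" if within_tolerance(current, baseline) else "fail")
```

Run on the same environment each time, the relative comparison is what matters: a failed gate points at the specific release that introduced the regression, while the absolute numbers are only meaningful on production-like hardware.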
While the list above is not exhaustive, you can see that the scope of testing within software development can be very large and complex. Each type of testing takes time and effort, and you should not just assume that it is all done as standard with no additional charge. Companies with a strong focus on testing will allocate up to 40% of any project time to testing, which can be a very surprising amount. Good testing can de-risk a project and can pay for itself in the long run, as it will result in fewer bugs, better performance and a better overall experience for your customers.