From being the last step squeezed in just before a project release to being the epicenter of modern SDLC models, software testing has grown in importance and criticality. And not without reason. Testing remained an afterthought for ages, even after software development became the soul of every new technological exploration. This led to companies, even some industry giants, finding defects a little too late and facing public outcry and humiliation.
For instance, in 1990 AT&T users were unable to make long-distance calls due to a software defect. The glitch cost the company $60 million. Such instances, among others, underline the need for software testing.
With the rapid increase in the complexity of software products, software testing has become imperative for companies of all sizes. Unless a company tests its product properly, it cannot inform stakeholders of the product's quality or make an informed decision about its launch.
As software development techniques have advanced over the last few decades, some basic principles of testing have also been established. Describing theoretical ideas and practical hints, these principles can serve as a basic guideline for software testing.
Here are the seven golden principles of software testing.
1. Testing Brings the Defects to the Surface
The rationale behind software testing is to find the defects that might impact the functioning of your software and hence pose a serious risk to its success. Testing lets you discover gaps and defects in an application, thereby helping establish the completeness and correctness of a solution. What testing does is bring defects to the surface; what it cannot do is guarantee a defect-free application. Because testing cannot prove the absence of defects, it is vital to have a strategy that focuses on the risks associated with a product and sets clear objectives to uncover the greatest number of the most important defects. A defect can lie dormant in a piece of software for ages, so the software must be tested to find defects and avoid the failures that impact the business and end users.
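A minimal sketch of this principle, using a hypothetical function and made-up test cases: passing tests reveal nothing about defects they never exercise, while a risk-based test case surfaces a dormant bug.

```python
def is_leap_year(year: int) -> bool:
    # Deliberately buggy: ignores the century rule (e.g. 1900 was NOT a leap year).
    return year % 4 == 0

# Happy-path tests pass -- which proves nothing about the absence of defects.
assert is_leap_year(2024) is True
assert is_leap_year(2023) is False

# A test case targeting a riskier input (a century year) surfaces the dormant defect:
print(is_leap_year(1900))  # prints True, but the correct answer is False
```

The point is not the leap-year logic itself but the asymmetry: a failing test demonstrates the presence of a defect, while any number of passing tests cannot demonstrate its absence.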
2. Exhaustive Testing is Impossible
Testing an application in its entirety sounds like the surest way to make it work when deployed. However, let us not pretend that you can test it all. Software is often too complex, and validating every possible system permutation, input combination, and feature is impractical and would take endless time and energy. Since we cannot test everything, we should focus our energies on testing what is important: designing test cases based on risk assessment and using techniques that increase the probability of catching bugs.
Exhaustive testing is feasible only when the program or the scope of a project is very small; for bigger projects it makes only theoretical sense. A sensible test approach covers all the bases and focuses on achieving the right kind of coverage instead of trying to cover it all.
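A little arithmetic shows why exhaustive testing collapses so quickly. This sketch assumes a hypothetical input form with 10 independent fields of 25 values each, and a made-up execution rate of 1,000 checks per second:

```python
# Hypothetical form: 10 independent fields, 25 possible values per field.
fields, values = 10, 25
exhaustive_cases = values ** fields            # every full input combination
print(f"{exhaustive_cases:,} cases")           # 95,367,431,640,625 cases

# At an assumed 1,000 automated checks per second:
seconds = exhaustive_cases / 1_000
print(f"about {seconds / (3600 * 24 * 365):,.0f} years to run them all")

# Pairwise (all-pairs) coverage grows roughly with values**2, not values**fields,
# because it only has to cover every *pair* of values across any two fields.
pairwise_lower_bound = values ** 2             # at least 625 cases
print(f"pairwise lower bound: {pairwise_lower_bound} cases")
```

This is why risk-based selection and combinatorial techniques such as pairwise testing exist: the budget buys coverage of what matters, not coverage of everything.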
3. Start Testing at the Earliest
When should testing start, and on what? Answering these key questions is important to reap the full benefits of testing. The testing activity is an integral part of any SDLC and should commence as soon as possible. If testing begins early, it not only saves time and reduces complexity but also saves money. Defects identified late in the SDLC are far more expensive to fix than defects identified at an early stage.
A study by a group of authors from the University of Memphis and the University of North Alabama, published in the Journal of Information Technology Management, found that the cost of finding and fixing a defect in software roughly increases 10 times with each passing stage of development. An error that costs $100 to rectify in the business-requirements stage would cost $10,000 in the high-level design stage and $1,000,000 in the implementation stage.
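The compounding effect of that rule of thumb is easy to see in a few lines. This is an illustration of the multiplier only; the dollar figures quoted above come from the cited study, while the stage offsets here are assumptions:

```python
def fix_cost(base: float, stages_later: int) -> float:
    """Estimated cost of fixing a defect found `stages_later` stages after
    it was introduced, under a ~10x-per-stage rule of thumb."""
    return base * 10 ** stages_later

# A $100 requirements-stage defect, left to age through later stages:
for n in range(5):
    print(f"{n} stage(s) later: ${fix_cost(100, n):,.0f}")
```

Whatever the exact multiplier turns out to be on a given project, the growth is geometric, which is the whole argument for starting testing at the earliest.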
4. Defect Clustering
Defect clustering refers to the observation that most defects are bunched in a few parts of the application. Since defects are not uniformly distributed, finding these clusters can ensure catching a significant number of bugs and, in effect, mitigating some risks. Such ‘hot spots’ are found particularly in large systems, where size, complexity, developer errors and frequent changes are major influences on the quality of the application. According to Testingexcellence.com, “Defect Clustering in Software Testing is based on the Pareto principle, also known as the 80-20 rule, where it is stated that approximately 80% of the problems are caused by 20% of the modules.” Aligning the test effort with this principle promises more effective testing, i.e. more output from the same amount of testing energy. Testing techniques and domain expertise can be leveraged to make an educated guess about where these defect clusters lie in your software, resulting in a well-strategized approach to software verification and validation.
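A quick sketch of how a tester might spot such hot spots, using entirely made-up defect counts per module as they might come out of a bug tracker:

```python
from collections import Counter

# Hypothetical defect counts per module (made-up data for illustration).
defects = Counter({
    "payments": 46, "auth": 31, "reporting": 6, "search": 5, "profile": 4,
    "settings": 3, "admin": 2, "export": 1, "help": 1, "about": 1,
})

total = sum(defects.values())                      # 100 defects, 10 modules
top_two = defects.most_common(2)                   # the 20% "hot spot" modules
clustered = sum(count for _, count in top_two)     # defects inside the hot spots

# 2 of 10 modules (20%) hold 77 of 100 defects (~80%) -- a Pareto-shaped pile.
print(f"{clustered}/{total} defects live in {len(top_two)}/{len(defects)} modules")
```

In practice the same tally, run over real bug-tracker data, tells you where the next round of test effort is most likely to pay off.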
5. Pesticide Paradox
Every method testers use to find bugs leaves a residue of subtler, unidentified bugs. These bugs go undetected by traditional, repeated testing techniques because each technique has a limited scope. Simply put, it is like insects that have developed immunity to regular pesticides: to kill such persistent insects, pesticide companies have to come up with new types of poison. This is called the pesticide paradox, and it applies to software too. One testing technique will not uncover all types of bugs, and repeating the same one will only work for so long. Hence a good testing approach requires a mixed bag of different and new techniques that change with the test level and objectives.
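The paradox can be sketched in code. Here a hypothetical buggy function survives a fixed regression suite indefinitely, while a different technique (randomized inputs over a wider domain) finds the survivors. The function, suite and input ranges are all assumptions for illustration:

```python
import random

def clamp_percentage(value: int) -> int:
    # Hypothetical bug: clamps the upper bound but forgets the lower one.
    return min(value, 100)

# The same "pesticide" applied release after release: these fixed cases
# all pass, and re-running them will never surface the remaining defect.
fixed_suite = [0, 50, 100, 150]
assert all(0 <= clamp_percentage(v) <= 100 for v in fixed_suite)

# A newer technique -- randomized inputs over a wider domain -- finds survivors.
random.seed(7)  # seeded so the run is reproducible
failures = [v for v in (random.randint(-200, 200) for _ in range(1000))
            if not 0 <= clamp_percentage(v) <= 100]
print(f"{len(failures)} failing inputs found, e.g. {failures[0]}")
```

The fixed suite is not useless, it has simply exhausted its pesticide: every bug it could catch is already dead, so fresh techniques are needed to reach the rest.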
6. Testing is Context Dependent
All software is purpose-built, and each product differs from the next. Every piece of software is developed to address challenges in a specific context. Keeping this context and these challenges in mind while defining the test strategy helps cater to that specific software and its possible failures. Testers should recognize and analyze the risks associated with an application and work towards mitigating them. As one size does not fit all, a common testing approach across projects is not sufficient to discover critical defects.
For instance, if you test ATM-related software, you will look at security testing, performance testing, load testing and validation as key types of testing to undertake. On the other hand, if you test a website, you should perform navigation testing, cookie testing, indexing, compatibility testing, server testing, etc.
The most powerful test strategy for an application revolves around the risks associated with that specific solution and maximizes its chances of success by uncovering the system-critical defects at the earliest opportunity.
7. Absence-of-Errors Fallacy
In the process of making error-free software, the basic purpose behind developing the solution must not be forgotten. If the software you have built is error-free but does not meet the requirements, the result is still failure: it will not fulfill customer needs and expectations, which means it will not be used, because although the application is error-free it is not the solution they wanted. It is important to step out of the absence-of-errors fallacy, i.e. to understand that the absence of defects does not guarantee the acceptance of a solution. The software should serve its end purpose and be able to handle the environment for which it was built.
Various testing strategies and techniques have evolved over the years and remain context dependent, but these principles, since their establishment, have consistently helped guide the test effort for all kinds of projects. Software testing is a tool for stakeholders to assess the quality of a piece of software, and these golden rules can help you get the best results from your testing activity.