What is Software Testing?
The internet commonly defines Software Testing as the process of executing a program or application with the intent of identifying bugs. More broadly, testing is the process of validating that a piece of software meets its business and technical requirements. Testing is the primary way to check that the built product adequately meets requirements.
Whatever the methodology, you need to plan for adequate testing of your product. Testing helps you ensure that the end product works as expected, and helps avoid live defects that can cause financial, reputational and sometimes regulatory damage to your product/organization.
Who are the key parties involved?
Will you violently disagree if I say that everyone on a project is a key contributor to quality, each an individual contributor, but a key contributor nonetheless? Contrary to popular belief, a dedicated Testing phase alone isn't sufficient to catch all the bugs in your product. Testing needs to be a way of life, and be part of every conversation and task that a project team performs.
In this sense, everyone involved in a project is a key party. Developers do DIT (development integration testing), Product Owners review copy and do hands-on testing, BAs constantly review requirements, and Project Managers and Scrum Masters regularly review plans to re-align priorities and extract the best value. In their own way, everyone is testing all the time. As they should.
To bring it all together, you have the Test Manager and Test Leads/Coordinators,
Project Manager/Scrum Master, Project Sponsor/Product Owner, and Business Analyst overseeing the Test phases of a project – with the support of Development Leads, Testers, Architects, and other support teams (like the Environments team).
Software Testing Process
Agile or Waterfall, Scrum or RUP, traditional or exploratory, there is a fundamental process to software testing. Let’s take a look at the components that make up the whole.
1. Test Strategy and Test Plan
Every project needs a Test Strategy and a Test Plan. These artefacts describe the scope for testing for a project:
- The systems that need to be tested, and any specific configurations
- Features and functions that are the focus of the project
- Non-functional requirements
- Test approach—traditional, exploratory, automation, etc.—or a mix
- Key processes to follow – for defect resolution and defect triage
- Tools—for logging defects, for test case scripting, for traceability
- Documentation to reference, and to produce as output
- Test environment requirements and setup
- Risks, dependencies and contingencies
- Test Schedule
- Approval workflows
- Entry/Exit criteria
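To make the plan components above concrete, here is a minimal sketch of a test plan captured as structured data, along with a completeness check. All field names and values are illustrative assumptions, not a standard schema.

```python
# Illustrative test plan as structured data; field names are hypothetical.
test_plan = {
    "systems_in_scope": ["mobile-app", "core-backend"],
    "test_approach": ["traditional", "exploratory", "automation"],
    "tools": {"defect_tracking": "Jira", "test_cases": "TestRail"},
    "entry_criteria": ["code complete", "test environment ready"],
    "exit_criteria": {"requirements_coverage": 1.0, "min_pass_rate": 0.9,
                      "open_critical_defects": 0},
    "risks": ["backend test environment may be unavailable"],
}

def plan_is_complete(plan, required_sections):
    """Check that every required section is present and non-empty."""
    return all(plan.get(section) for section in required_sections)

required = ["systems_in_scope", "test_approach", "exit_criteria", "risks"]
print(plan_is_complete(test_plan, required))  # True
```

Capturing the plan as data rather than only prose makes it easy to validate before sign-off, and to reuse across projects.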
2. Test Design
Now that you have a strategy and a plan, the next step is to dive into creating a test suite. A test suite is a collection of test cases that are necessary to validate the system being built against its original requirements.
Test design as a process is an amalgamation of the Test Manager’s experience of similar projects over the years, testers’ knowledge of the system/functionality being tested and prevailing practices in testing at any given point. For instance, if you work for a company in the early stages of a new product development, your focus will be on uncovering major bugs with the alpha/beta versions of your software, and less on making the software completely bug-proof.
When you are confident enough to release a version to your customers, you’ll want to employ more scientific testing to make it as bug-free as possible to improve customer experience. On the other hand, if you’re testing an established product or system, then you probably already have a stable test suite. You then review the core test suite against individual project requirements to identify any gaps that need additional test cases.
With good case management practices, you can build a test bank of the highest quality that helps your team significantly reduce planning and design efforts.
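As a small illustration of a test suite validating a system against its requirements, here is a sketch in Python's `unittest`. The system under test (a discount calculator) and its requirement are entirely hypothetical.

```python
import unittest

# Hypothetical system under test, with an assumed requirement:
# orders of 100 or more receive a 10% discount.
def apply_discount(order_total):
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    return order_total * 0.9 if order_total >= 100 else order_total

# The suite covers the happy path, the boundary case, and an error case.
class DiscountTests(unittest.TestCase):
    def test_below_threshold(self):
        self.assertEqual(apply_discount(99), 99)

    def test_at_threshold(self):  # boundary value
        self.assertAlmostEqual(apply_discount(100), 90.0)

    def test_negative_total_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(-1)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

Note how the cases map one-to-one to the stated requirement, including its boundary; that mapping is what makes a suite reusable as a core test bank.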
3. Test Execution
You can execute tests in many different ways—as single, waterfall SIT (System Integration Test) and UAT (User Acceptance Test) phases; as part of Agile sprints; supplemented with exploratory tests; or with test-driven development. Ultimately, you need to do an adequate amount of testing to ensure your system is (relatively) bug-free.
Let’s set methodology aside for a second, and focus on how you can clock adequate testing. Let’s go back to the example of building a mobile app that must be supported across operating systems, OS versions, and devices. The most important question that will guide your test efforts is “what is my test environment?”.
You need to understand your test environment requirements clearly to be able to decide your testing strategy. For instance, does your app depend on integration with a core system back end to display information and notifications to customers? If yes, your test environment needs to provide back end integration to support meaningful functional tests.
Given how Agile projects are run, you may only have a couple of weeks between initiating a project and starting delivery sprints, which isn’t enough time to commission an end-to-end test environment if one doesn’t already exist. If everything goes fine, you’ll have a test environment to your liking, configured to support your project, with all enablers built to specifications. If not, then your test strategy will be different.
Reviewing test environment requirements early on is now widely recognized as a cornerstone of good project management. Leaders are giving permanent, duplicate test environments a good deal of thought as an enabler for delivery at pace.
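The fallback logic described above can be sketched as a simple check: if an end-to-end environment with back-end integration is available, run integrated functional tests; otherwise, fall back to stubbed component tests. The environment variable name `BACKEND_TEST_URL` is an assumption for illustration.

```python
import os

def select_test_scope(env=None):
    """Choose a test scope based on environment availability (illustrative)."""
    env = os.environ if env is None else env
    if env.get("BACKEND_TEST_URL"):
        # Full environment available: integrated functional tests are possible.
        return "full-functional"
    # No back-end integration: limit scope to stubbed/component tests.
    return "stubbed-component"

print(select_test_scope({"BACKEND_TEST_URL": "https://test.example.com"}))
print(select_test_scope({}))
```

In practice this decision often lives in CI configuration or in test markers (e.g. skipping integration tests when the back end is unreachable), but the principle is the same: the environment you have determines the strategy you can run.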
4. Test Closure
Right—so you have done the necessary planning, executed tests and now want to green-light your product for release. You need to consider the exit criteria for signaling completion of the test cycle and readiness for a release. Let’s look at the components of exit criteria in general:
- 100% requirements coverage: all business and technical requirements have to be covered by testing.
- Minimum % pass rate: requiring at least 90% of all test cases to pass is considered best practice.
- All critical defects to be fixed: self-explanatory. They are critical for a reason.
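The three exit criteria above can be expressed as a simple automated check over test-execution results. The result fields and the thresholds mirror the text (100% coverage, a 90% pass rate, zero open critical defects); the data shape itself is an assumption.

```python
def exit_criteria_met(results):
    """Return True if the test cycle satisfies all three exit criteria."""
    executed = results["passed"] + results["failed"]
    pass_rate = results["passed"] / executed if executed else 0.0
    return (results["requirements_covered"] == results["requirements_total"]
            and pass_rate >= 0.90
            and results["open_critical_defects"] == 0)

# Example cycle: 188 of 200 cases passed (94%), full coverage, no criticals.
cycle = {"passed": 188, "failed": 12,
         "requirements_covered": 45, "requirements_total": 45,
         "open_critical_defects": 0}
print(exit_criteria_met(cycle))  # True
```

Encoding the criteria this way removes ambiguity at sign-off time: either the cycle meets the bar or it doesn't.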
Polish things off with a Test Summary and Defect Analysis: statistics about testing (how many high/medium/low-severity defects, which functions/features were affected, where defects were concentrated the most, and the approaches used to resolve them – defer vs. fix), plus a Traceability Matrix to demonstrate requirements coverage.
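At its core, a traceability matrix is just a mapping from each requirement to the test cases that cover it, which makes coverage gaps easy to spot. A minimal sketch, with hypothetical requirement and test-case IDs:

```python
# Hypothetical traceability matrix: requirement ID -> covering test cases.
traceability = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-201"],
    "REQ-003": [],  # no coverage yet: a gap to close before sign-off
}

uncovered = [req for req, cases in traceability.items() if not cases]
print(uncovered)  # ['REQ-003']
```

An empty list of uncovered requirements is exactly the "100% requirements coverage" exit criterion from the previous section.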