Error Guessing

"A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them", per ISTQB.

The error guessing technique involves the tester making guesses about mistakes (errors) a developer might make and then designing tests to expose them. Error guessing requires the tester to have knowledge and experience of common programming errors, their impact on the code produced, the nature of the bugs they introduce and how those bugs may be reproduced. The tester also needs some experience with programming and the technologies used by development. This enables the tester to anticipate potential errors and create tests to find the bugs associated with them. Error guessing may be used as a standalone technique or to complement other techniques. It can be applied at any stage of testing and may even be used to identify potential risks.

The effectiveness of the error guessing technique lies in the creativity and ability of the tester to guess errors and find bugs. Each tester is unique in this respect and is likely to approach the technique differently. Error guessing may also be used as a means to perform a quick smoke test. Trying to lay down guidelines and documentation requirements for this technique may constrain the tester's freedom and creativity, which are essential for error guessing to be effective.

Needless to say, error guessing is normally used as an additional test technique and not the sole or primary one. Error guessing can help find bugs that other techniques miss. Once error-guessing tests are executed, it is recommended to capture them and automate as much as possible.

As you may have realized by now, the success of this technique depends, to a certain extent, on both the developer making mistakes similar to those made in the past and the tester having some experience finding bugs similar to the ones in the current system under test.
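To make this concrete, here is a minimal sketch (in Python, with an invented `average` function) of how error-guessing tests might target mistakes developers commonly make: forgetting the empty-input case, off-by-one handling of single-element lists and accidental integer division.

```python
# Hypothetical example: error-guessing tests for an average() function.
# Each test targets a guessed developer mistake.

def average(numbers):
    """Return the arithmetic mean of a list of numbers."""
    if not numbers:
        raise ValueError("cannot average an empty list")
    return sum(numbers) / len(numbers)

def test_empty_list():
    # Guess: developer forgot the empty-list case (ZeroDivisionError)
    try:
        average([])
        assert False, "expected ValueError"
    except ValueError:
        pass

def test_single_element():
    # Guess: off-by-one mistake in length handling
    assert average([5]) == 5

def test_integer_division():
    # Guess: integer division truncating the result
    assert average([1, 2]) == 1.5

test_empty_list()
test_single_element()
test_integer_division()
```

Once such guesses find real bugs, the tests can be captured and folded into the automated regression suite, as noted above.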

Software Testing Types - comprehensive list

Aggregating all of the different types of software testing in one place. We touched upon nearly 70 different test types and a brief description of each testing type during the course of the past 3 posts (#1, #2, #3).

Read them all here.

  1. Types of Software Testing - Part 1

  2. Types of Software Testing - Part 2

  3. Types of Software Testing - Part 3 

And, the original exhaustive list of software testing types is available here (no descriptions included) if you are interested.

Types of Software Testing (3)

This is a continuation from my previous posts (post 1 & post 2) on types of software testing.

Static testing

Static testing involves testing an application without executing it. This is done either manually or using static analysis tools. Examples of static test types include desk checking, code walk-throughs, code reviews and inspections.

Scenario Testing

Scenario testing is a type of testing involving use of scenarios or stories pertaining to application usage.

Scripted Testing

Scripted Testing is testing that follows a scripted path designed by the tester. Step by step instructions and expected outcomes are defined making it easy for testers to follow.

Security Testing

Security Testing is a type of testing intended to identify defects in an application's security mechanism(s). Tests span vulnerability assessments, data integrity checks, fuzzing and verification of authentication, authorization, confidentiality, etc.

SME Testing

SME (Subject Matter Expert) Testing involves testing by a domain/subject matter expert. For example, when developing an HR application you would have a domain expert, such as an HR practitioner, as the SME doing the tests. Similarly, a finance professional would test a financial application, and so on. SMEs can also be experienced technical experts who can guide the team on technical aspects.

Smoke Testing

Smoke Testing is a subset of regression tests that are normally run to verify if a drop/build is ready for further more extensive testing. Sometimes referred to as BVT (Build Verification Tests) or BAT (Build Acceptance Tests).

Soak Testing

Soak Testing is a type of performance test involving a specified load (often intended to mimic real world usage) over an extended duration of time to verify the system's ability to sustain the load.

Specification Testing

Specification Testing involves using the application's specifications as the reference for designing tests, selection of data and determining adequacy.

Standards / Compliance Testing

Standards / Compliance Testing is a type of testing to verify if the application meets the required/specified standards and can be viewed as an audit of the system for compliance.

Section 508 accessibility testing

Quoting directly from the US government site - 'Section 508 of the Rehabilitation Act, as amended by the Workforce Investment Act of 1998 (P.L. 105-220) requires federal agencies to develop, procure, maintain and use information and communications technology (ICT) that is accessible to people with disabilities - regardless of whether or not they work for the federal government.' In summary, this means products are accessible to all users irrespective of their disability status. This could mean that products are compatible with assistive technology, such as screen readers.

SOX testing

SOX testing involves verification of compliance to the Sarbanes-Oxley act. The Sarbanes-Oxley Act is legislation passed by the U.S. Congress to protect shareholders and the general public from accounting errors and fraudulent practices in the enterprise, as well as improve the accuracy of corporate disclosures.

State Testing

State Testing involves testing for state transitions which may be impacted by change in input conditions and/or sequencing of events.
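A minimal sketch of state testing, assuming a hypothetical order workflow (the states and events here are invented for illustration): valid event sequences should land in the expected state, while illegal sequencing must be rejected.

```python
# Illustrative state-transition test for a hypothetical order workflow
# with states NEW -> PAID -> SHIPPED (or CANCELLED).

class Order:
    TRANSITIONS = {
        "NEW": {"pay": "PAID", "cancel": "CANCELLED"},
        "PAID": {"ship": "SHIPPED", "cancel": "CANCELLED"},
        "SHIPPED": {},
        "CANCELLED": {},
    }

    def __init__(self):
        self.state = "NEW"

    def apply(self, event):
        nxt = self.TRANSITIONS[self.state].get(event)
        if nxt is None:
            raise ValueError(f"illegal event {event!r} in state {self.state}")
        self.state = nxt

# Valid sequencing of events
o = Order()
o.apply("pay")
o.apply("ship")
assert o.state == "SHIPPED"

# Invalid sequencing: shipping an unpaid order must be rejected
o2 = Order()
try:
    o2.apply("ship")
    assert False, "expected ValueError"
except ValueError:
    pass
```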

Stress Testing

Stress Testing involves verifying a system's behavior under adverse conditions, such as load well beyond what it is designed for, until the system's performance degrades significantly or it fails.

System Testing

System Testing involves testing of the complete system or product with all its components/modules integrated. The system test looks at the system from the customer/client's perspective. System tests validate whether the software meets the requirements (functional and non-functional).

Testability Testing

Testability Testing involves testing the ability of each piece/functionality of the application to be tested. It indicates the ease with which the application and its features can be tested.

Unit Testing

Unit Testing involves testing of each unit (smallest testable piece) of software to validate it performs correctly as expected.
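A minimal unit-test sketch using Python's built-in unittest module; the `slugify` function is an invented unit under test, not from the post.

```python
import unittest

def slugify(title):
    """Hypothetical unit under test: turn a title into a URL slug."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Types of Software Testing"),
                         "types-of-software-testing")

    def test_extra_whitespace(self):
        self.assertEqual(slugify("  Unit   Testing "), "unit-testing")

# Run the tests programmatically and verify they all pass
result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify))
assert result.wasSuccessful()
```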

Upgrade & Migration Testing

Upgrade testing involves testing of the move or upgrade of an existing system from one version to a higher version. Migration testing involves testing of the move from one system to another.

Usability Testing

Quoting directly from the usability site - "Usability testing refers to evaluating a product or service by testing it with representative users. Typically, during a test, participants will try to complete typical tasks while observers watch, listen and take notes. The goal is to identify any usability problems, collect qualitative and quantitative data and determine the participant's satisfaction with the product."

White box Testing

White box Testing (also known as glass box testing, clear box testing, open box testing, transparent box testing) is testing based on knowledge of the internals of the application. Tests are designed based on knowledge and examination of the application's internal architecture, design and code. Types of white box testing include unit testing, code coverage testing, statement/path/function/condition testing, complexity testing (cyclomatic complexity) and mutation testing.


Types of Software Testing (2)

This is a continuation from my previous post on types of software testing.

Fault-Injection Testing

Is a test type involving injection of faults (compile or runtime) to test the error handling abilities of the system and its robustness.
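A runtime fault-injection sketch: wrap a dependency so it fails on demand and verify the caller's error handling survives. All names here (`FlakyStore`, `robust_get`) are invented for illustration.

```python
import random

class FlakyStore:
    """Wraps a real store and injects IOError faults at a given rate."""
    def __init__(self, real_store, failure_rate, rng=None):
        self.real = real_store
        self.failure_rate = failure_rate
        self.rng = rng or random.Random(42)  # seeded for repeatability

    def get(self, key):
        if self.rng.random() < self.failure_rate:
            raise IOError("injected fault")
        return self.real.get(key)

def robust_get(store, key, retries=3):
    """System under test: must survive transient store failures."""
    for _ in range(retries):
        try:
            return store.get(key)
        except IOError:
            continue
    return None  # graceful degradation after exhausting retries

store = FlakyStore({"a": 1}, failure_rate=0.5)
# Under injected faults, the caller must either succeed or degrade cleanly,
# never crash with an unhandled exception.
results = [robust_get(store, "a") for _ in range(100)]
assert all(r in (1, None) for r in results)
```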

Functional Testing

Is a test type used to verify that the software has all the required functionality specified in the requirements. Conformance to functional requirements is tested.

Fuzz Testing

Is a test technique used to discover security issues and errors in software by inputting large amounts of unexpected, invalid, random data. The aim is to make the system crash and reveal bugs. It is often executed in an automated manner.
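A minimal fuzzing sketch: feed random strings to a parser and check it never fails with anything other than its documented exception. The `parse_age` function is an invented stand-in for real code under test, and the seeded generator keeps failures reproducible.

```python
import random
import string

def parse_age(text):
    """Should return an int in [0, 150] or raise ValueError -- nothing else."""
    value = int(text.strip())
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

rng = random.Random(0)  # seeded so any failure can be replayed
for _ in range(1000):
    length = rng.randint(0, 20)
    junk = "".join(rng.choice(string.printable) for _ in range(length))
    try:
        result = parse_age(junk)
        assert isinstance(result, int)   # a valid parse
    except ValueError:
        pass                             # the documented rejection path
    # Any other exception (TypeError, IndexError, ...) would escape the
    # try block and fail the fuzz run, revealing a robustness bug.
```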

Gray Box Testing

Is a test type involving use of white and black box techniques. Here, the tester has some knowledge of the internals of the system under test unlike in black box testing where the tester has no knowledge of internals.

Guerilla Testing

Is a type of usability testing involving quick capture of user feedback about specific areas of the product. Users are approached and asked to help quickly test/use the product and give feedback.

Install & Configuration Testing

Used to test the various installation scenarios and configurations.

Integration Testing

Involves integration of the different software modules and testing them as a group.

System Integration Testing

Tests the system's integration point with other systems. It could also mean the testing performed on a system in an environment where all the required hardware and software components are integrated.

Top-down Integration Testing

Testing is carried out top down, from the main module to the sub-modules. If the lower-level sub-modules are not yet developed, stubs are created to simulate them.

Bottom-up Integration Testing

Testing is carried out bottom up, from the sub-modules to the main module. If the top-level/main modules are not yet developed, drivers are created to simulate them.

Bi-directional Testing / Sandwich Testing

Involves simultaneously performing Top down and Bottom up integration tests.

Interface Testing

Testing of interfaces & communication between systems and components.

Internationalization Testing

Testing the product's capability to be localized. Testing is done across language settings.

Interoperability Testing

Testing the ability of a system to inter-operate & interact with other system(s).

Load Testing

Is a non-functional test type used to test the product under real life load conditions. It can be used to determine the maximum capacity of the system without suffering performance degradation.
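A toy load-test sketch: call a target concurrently and record latencies. In practice a dedicated tool would drive this against a real system; `target()` below is only a stand-in that simulates 10 ms of work.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def target():
    time.sleep(0.01)  # simulate 10 ms of server-side work
    return "ok"

def timed_call(_):
    start = time.perf_counter()
    result = target()
    return result, time.perf_counter() - start

# Drive 100 requests through 20 concurrent workers
with ThreadPoolExecutor(max_workers=20) as pool:
    outcomes = list(pool.map(timed_call, range(100)))

latencies = sorted(t for _, t in outcomes)
p95 = latencies[int(len(latencies) * 0.95)]  # 95th-percentile latency
assert all(r == "ok" for r, _ in outcomes)   # no errors under load
print(f"100 requests, p95 latency = {p95 * 1000:.1f} ms")
```

Raising the worker count and request volume until latency or the error rate degrades is how the maximum capacity mentioned above would be probed.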

Localization (l10n) Testing

l10n testing is performed to verify a product's localization/translation for a specific locale/language and is executed on the localized version of the product.

Logic Testing

Is a type of testing performed to validate the correctness of the software's processing logic. Also includes testing of predicates.

Manual Testing

Is a process of executing tests manually by a tester as opposed to an automated test which is scripted and executed by a tool/program.

Walk-through Testing

Is a type of testing involving peer reviews of software.

Performance Testing

Is a type of testing used to determine how a system will perform under a specific workload. Metrics such as responsiveness, throughput, etc. are collected and analyzed.

Pilot Testing

Normally involves a group of users trying out/testing the product prior to deploying it for wider user/customer access. E.g. pre-Beta

Protocol Testing

Involves testing of various protocols such as LDAP, XMPP, IMAP, SIP, etc.

Recovery Testing

Involves testing the ability of the system to recover post failure and the time taken to recover. Integrity checks are also run post recovery.

Regression Testing

Is a type of testing to verify existing functionality is not broken due to new enhancements/fixes.

Reliability Testing

Is performed to verify the software's ability to perform consistently in a fault-free manner within a specified environment for a specific time duration.

Requirements Testing

Is an approach to designing tests (functional & non-functional) based on objectives and conditions that are derived from requirements.

Risk-based Testing

Is a type of software testing wherein prioritization of tests is done based on risk assessment.
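One common way to operationalize this is a likelihood-times-impact score per test area; the sketch below uses invented areas and scores purely for illustration.

```python
# Risk-based prioritization sketch: order test areas by risk score,
# where risk = likelihood of failure x impact of failure (1-5 scales).
tests = [
    {"name": "payment_flow",  "likelihood": 4, "impact": 5},
    {"name": "profile_photo", "likelihood": 2, "impact": 1},
    {"name": "login",         "likelihood": 3, "impact": 5},
]

for t in tests:
    t["risk"] = t["likelihood"] * t["impact"]

# Highest-risk areas get tested first
ordered = sorted(tests, key=lambda t: t["risk"], reverse=True)
assert [t["name"] for t in ordered] == ["payment_flow", "login", "profile_photo"]
```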

Sanity Testing

Is a subset of regression tests designed to run quickly while performing a sanity check of the application: verifying bug fixes, running a set of prioritized regression tests and checking new feature changes at a high level. Any failure results in the drop/build not proceeding to more extensive tests.

Scalability Testing

Is a type of testing done to measure the application's ability to scale up based on varying load profiles.


Types of Software Testing

In an earlier post I listed out several different types of software testing. This post will elaborate a little more on many of these types of software testing. In a subsequent post I shall cover the remaining types of software testing. As a professional tester, you will probably work on only a subset of these types of software testing for the most part. Some of you may even specialize in a limited subset of test types, e.g. performance/stress/load, security, i18n/l10n and so on. Nevertheless, it is useful for testers (and non-testers too) to be aware of the various types of software testing.

Elaborate definitions of all the popular types of tests will be covered in the posts to come.

Software Testing Types

Acceptance Testing

Performed after system testing is complete. Acceptance testing confirms that the software satisfies the specified requirements. It is normally a user-performed test exercise using black-box techniques to test the system against its specifications.

Ad hoc Testing/Random Testing/Monkey Testing

Also termed as unplanned or unstructured testing. It is a test type where test execution occurs in the absence of documented test cases and plans. It does not make use of any of the test design techniques such as boundary value analysis (BVA), equivalence partitioning, etc. Ad hoc testing is performed to explore the different areas in the product by applying intuition, knowledge of the product, technology, domain and experience.

Buddy Testing

Buddy testing essentially pairs a couple of team members to test a piece of code/functionality together. This could be two testers working together, or even two developers testing each other's code.

Paired Testing

Paired testing is a form of buddy testing where two testers work on the same system at the same workstation. Both testers may take turns to test the software while analyzing scenarios, reviewing each other's work and exchanging notes. Again, I say two testers here. It may as well be a combination of a tester and a developer working together as followed in some agile models. There are benefits to this approach and a few drawbacks too which we'll explore in subsequent posts.

Exploratory Testing

I am just going to directly quote James Bach here: 'The definition of exploratory testing is test design and test execution at the same time. Exploratory tests, unlike scripted tests, are not defined in advance and carried out precisely according to plan. The term "exploratory testing" -- coined by Cem Kaner in Testing Computer Software -- refers to a sophisticated, thoughtful approach to ad hoc testing.'

Iterative / Spiral model Testing

Here, testing is a process of continuous improvement as the system changes in each iteration. Testing needs to be closely integrated with development; often, unless testing is "done", progress cannot be made. New features and modifications are tested in each iteration/spiral, while regression tests are run either in the same or an upcoming iteration/spiral based on time and resource availability.

Extreme Testing

Practiced as part of TDD (Test-Driven Development) or test-first development (TFD). The developer writes their own tests and must write them before writing a single line of functional code. This approach was popularized by Extreme Programming (XP).
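A tiny test-first sketch of the red-green cycle: in TDD the failing test below would be written before `fizzbuzz()` exists, and the implementation is then written only to make the test pass (the example itself is invented, not from the post).

```python
# Step 1 (red): the test is written first and initially fails,
# because fizzbuzz() does not exist yet.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"

# Step 2 (green): the minimal implementation driven by the test.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

test_fizzbuzz()  # the cycle ends when the test passes
```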

Alpha Testing

Is testing performed in-house; a form of acceptance testing done when development is mostly complete on features/functionality. There may still be outstanding issues that need to be addressed.

Automated Testing

Is a technique of using software tools to run pre-written scripts to test applications. Essentially, many (not all) tests which are run manually can be automated and executed without manual intervention.

Beta Testing

Is performed by real users of the product in a real environment. This provides an opportunity for users to experience the product first hand and give feedback, which has a greater likelihood of getting into the product.

Black Box Testing

Is a method of testing wherein the tester is unaware of the internals (implementation/design/structure) of the system being tested.

Boundary Testing

Also known as Boundary Value Analysis (BVA), this is a type of testing wherein you test at the boundaries or corners of the input domain. Tests are designed based on both valid and invalid boundary values.
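A minimal BVA sketch for a hypothetical validator that accepts ages 18 to 65 inclusive: test just below, on and just above each boundary.

```python
# Hypothetical validator under test: eligible ages are 18-65 inclusive.
def is_eligible(age):
    return 18 <= age <= 65

# Boundary-value tests around each edge of the valid range:
assert is_eligible(17) is False   # just below the lower boundary
assert is_eligible(18) is True    # on the lower boundary
assert is_eligible(19) is True    # just above the lower boundary
assert is_eligible(64) is True    # just below the upper boundary
assert is_eligible(65) is True    # on the upper boundary
assert is_eligible(66) is False   # just above the upper boundary
```

Off-by-one mistakes (e.g. writing `<` instead of `<=`) are exactly the class of bug these boundary cases catch.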

Compatibility Testing

Is a type of non-functional test to validate the application's compatibility/ability to function correctly with various operating environments which include hardware, operating systems, other applications, clients/browsers, networking, storage, etc.

Conformance Testing

Is a set of tests performed to verify conformance/compliance to specified standards. E.g. section 508 compliance testing, IEEE standards, etc.

Consistency Testing

Is performed to verify consistency of the application across different environments. For example visual consistency across browsers and client OS platforms, across locales, etc.

Deployment Testing

Is performed on the staging or production environment to validate the deployment. Mostly involves a select set of tests to be executed to validate that the deployment has been successful.

Documentation Testing

Involves testing/verification of all documentation artifacts. Includes Online Help, Manuals, Guides, etc.

Domain Testing

Involves testing using a select subset of tests from a large/possibly infinite set of potential tests. Normally, a domain is divided into sub-domains/classes and individual members are picked from each class to be tested.
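A sketch of the sub-domain idea using equivalence classes: divide a function's input domain into classes and test one representative per class. The discount tiers below are invented for illustration.

```python
# Hypothetical function under test with four input sub-domains.
def discount(order_total):
    if order_total < 0:
        raise ValueError("negative total")
    if order_total < 100:
        return 0.0
    if order_total < 500:
        return 0.05
    return 0.10

# One representative value picked from each sub-domain/class:
assert discount(50) == 0.0       # class: small orders, [0, 100)
assert discount(250) == 0.05     # class: medium orders, [100, 500)
assert discount(1000) == 0.10    # class: large orders, [500, ...)
try:
    discount(-1)                 # class: invalid inputs, < 0
    assert False, "expected ValueError"
except ValueError:
    pass
```

Four tests stand in for an effectively infinite input space, on the assumption that all members of a class behave alike.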

End-to-End Testing

Type of testing to check the end-to-end workflow and use cases spanning modules/functional areas. Rather than focus on a specific functional area, cross functional integration and relationships are tested including dependencies with other components.

More types of software testing to follow in the next post.


What is Quality?

QA/QC/QE - Quality's the common thread. So, what is the definition of quality?

In simple terms, quality is value to someone. If that isn't enough, here are a few quality definitions.

As per Joseph Juran, quality means fitness for use.
According to Philip Crosby, it means conformance to requirements.

The American Society for Quality gives two meanings - 1. the characteristics of a product or service that bear on its ability to satisfy stated or implied needs; 2. a product or service free of deficiencies.

Quality is not exactly a uni-dimensional attribute. On the contrary, quality may be considered to comprise a set of attributes. For example, when purchasing a computer for personal use, one would look at the price, processor(s), memory, storage, type of storage (SSDs vs HDDs), display, OS, brand, model, etc. All of these different attributes of a computer may together be considered its quality attributes.

For software, there are several quality attributes. Some of the important ones include -

Robustness and failure handling ability
Performance - latency, throughput
Resource usage
MTBF (Mean time between failure)
Security (includes MTTD, MTTE)
Usability - intuitive, consistent UIs, simple and clean designs
Upgrade-ability and patching capabilities
Migration support
Platform support
Ability to integrate with existing systems if/as needed

Note these do not include the code & design level quality attributes such as standards compliance, modular designs, reusability, testability, sustainability, ability to modify code easily to changing requirements, etc.

Liked this post? Join my community of professional testers to receive fresh updates by email. Use this link to add your email address to the community. Rest assured, I will neither spam nor share your email address with anyone else. Your email id will remain confidential. Subscriptions are handled by Google's FeedBurner service.

TaaS - Testing as a Service

This is an introductory post to TaaS (Testing as a Service). If you have prior TaaS experience, feel free to share your thoughts.

What is TaaS?

TaaS leverages the cloud to offer scalable testing services to clients on an as-needed/on-demand basis. The goal is to offer highly accessible and available testing services at lower cost. Test tools reside and test execution occurs on the cloud, and interfaces to access the service are provided, e.g. via a web service, web app, etc. Normally, when talking of TaaS, people assume only automated tests would be supported. However, TaaS covers the following models: fully automated testing, manual (human-run) testing, or a hybrid of the two. Manual testing would be similar to the outsourced testing model most of you may be familiar with.

Why move to TaaS? 

Increasing costs (human resources, labs and equipment, hardware/software), challenges with handling larger and more complex products, the many different types of testing to be performed, and so on. As software gets more complex, inter-dependencies increase, support matrices multiply and the overall cost and complexity of testing keep rising.

TaaS offers to reduce the hardware costs associated with maintaining labs in-house by using elastic virtualized resources on the cloud at a much lower price point. Additionally, the number of testers needed in the TaaS model may be lower than in the traditional (non-cloud) model. In the non-cloud model, large test suites can take a long time to execute and consume significant hardware resources, which may block multiple parallel runs. On the cloud, given the ability to auto-scale and spawn systems on demand, it is possible to parallelize test execution across multiple different topologies/configurations.

What can TaaS do?

TaaS can handle various categories/types of testing. Here are the more popular ones -
  • Standalone product testing - upload a product/application and the test service runs a set of pre-defined checks and reports back on tests run and issues observed ranked by severity. More suited to small and some medium size apps
  • Continuous Testing - checkout latest code from a repository, build, deploy, run a defined set of tests and report results back to enable Developers to improve their code/fix issues
  • Application certification - offers more flexibility in determining what to test and provides a certification report. Useful to run against release/milestone drops of an application
  • Load/Stress/Performance testing - an advantage with TaaS is the ability to quickly and often seamlessly scale on demand, mimic real world usage easily - perform cross-geo deployments and test, offer the necessary bandwidth and resources as needed
  • Functional testing, localization (l10n) and i18n, Security testing, Unit testing, etc.

Benefits of TaaS

  • Efficient use of test infrastructure and tools - with TaaS you normally pay only for what you use unlike a traditional model where you have a significant outlay of investment for setting up the infrastructure, obtaining dedicated tools, getting resources, etc. TaaS payment models are generally of the type - pay-as-you-go or pay-per-unit
  • TaaS offers a scalable cloud based environment - unlike the traditional model where you are limited by the amount of hardware and platforms you have on site, with TaaS you can virtually scale up and down to the necessary extent based on your needs
  • Related to the above point - the benefit of being able to scale in a TaaS model allows you to run really large tests and simulations
  • With TaaS, you can share test tools and computing resources. Moreover, these tools and resources can be obtained when you need them - on-demand.
  • Potential savings in costs with TaaS - operational, maintenance, etc.
  • Potential for reduction in test times in a TaaS model which may help speed up releases

Testing in an Agile world (redux)

The last post touched upon Agile testing in brief.

In this post, I plan to do a redux and point to a series of posts which are based on a paper on Agile testing which was published by the Quality Assurance Institute (QAI).

These should provide a more extensive and relevant view of Testing in an Agile environment.
  1. Testing in an Agile world (post #1)

  2. Testing in an Agile world (post #2)

  3. Testing in an Agile world (post #3) 

  4. Testing in an Agile world (post #4)


Agile testing

In this post, we'll look at testing in an Agile environment. In subsequent posts, we'll explore this subject some more. Here's a link to an earlier post on Agile Development and Testing, if you are interested.

How different is agile testing from non-agile, a.k.a. traditional, models? Well, for one, in Agile testers are involved all through the release and are not restricted to a specific "phase" in the development cycle. Testing occurs through each iteration or Sprint. Testing, like development, is flexible and can/should accommodate change. There is also a higher degree of collaboration across functions in agile, where all members (Dev, Test) are part of one scrum team rather than being viewed as distinct members belonging to separate functional groups.

Testers are involved in release planning, reviews of stories, estimation, risk analysis and defining acceptance criteria for each story.

In an Agile team, testing doesn't wait until development is "done". Test types may overlap at times. Test team members can pick up builds frequently from the continuous integration system, test them and give quick feedback, in line with the Agile goal of providing early and regular feedback as the software is being built. Testing is not treated as a final phase to be performed once development has finished its work. Different stakeholders can test, often in parallel. For example, developers, testers, product managers, management, etc. can run a variety of tests and provide input as the software is being developed. Some agile methodologies may pair developers and testers together as a piece of the software is being built. In this case, testers provide input on the scenarios to be tested while the developer figures out how to address them. Testers gain a greater understanding of the code while developers get instant feedback on their work, enabling improvements on the fly. This helps the team produce quality software more quickly by eliminating the lag between Dev and Test.

While the software is being built, stakeholders can see how it is shaping up and can try it out and test it themselves. This calls for flexibility in requirements, which is possible when following an Agile model. Stakeholders can suggest changes to requirements based on what they have seen and used; they do not need to wait until development and testing are done to get their hands on the product. They can see it as it evolves. The other interesting aspect of Agile is the definition of done for each story. The done definition typically encompasses both development and testing of the developed artifact. Agile teams may even (ideally) mandate automated tests to be complete as part of the done definition, enabling the team to build up a regression test suite with each iteration.

Test automation is of key significance in an agile team. Manual testing may be performed for tests that are not automatable or that are exploratory in nature.

In comparison to the traditional models, Agile teams typically produce "just enough" documentation. Unlike the traditional approach of detailed documentation for requirements, test plan, test strategy, approach, etc. agile teams produce the essential level of documentation which is required by the various functions. Agile teams may use tools such as Atlassian's JIRA or similar to track epics, stories and tasks (and bugs too).

Testers use the same configuration management tools as Developers, check in test automation code to the same repositories and integrate their work with the CI system so that automated tests run with each build. In real practice, for large projects it may be the case that a subset of automated tests or even just unit tests run with every build while the full set of tests are run maybe once a day (at night usually) or once in a few days due to the time taken to complete a build and automated test run.


Ad hoc testing

Also termed as unplanned or unstructured testing. It is a test type where test execution occurs in the absence of documented test cases and plans. It does not make use of any of the test design techniques such as boundary value analysis (BVA), equivalence partitioning, etc.

Ad hoc testing is performed to explore the different areas in the product by applying intuition, knowledge of the product, technology, domain and experience. It is done to find bugs that were not uncovered during planned testing.

Ad hoc tests may be run either prior to or after execution of planned tests. Ad hoc testing when done prior to planned testing helps in evaluating the quality of the product before starting a formal testing campaign. It also helps clarify requirements better.

Ad hoc tests, when run after planned tests, help unearth newer defects that the planned tests may have missed. They highlight additional perspectives and scenarios that may not have been considered as part of the planned test exercise. Towards the end of the release, after the formal test cycles have been run, a round of ad hoc testing serves to increase confidence in the coverage of the planned cycle.

While ad hoc tests are executed without the need to document test cases, it is recommended to document the tests that were executed and the steps followed as much as possible, to enable us to enhance the existing planned test suite, ensure repeatability and increase coverage.

In Ad hoc testing, the tester "improvises" and attempts to find bugs with any feasible means. An approach to ad hoc testing would be to start our tests using the existing documented test cases and explore newer variations from there. Alternatively, the tester(s) can explore the product using their experience and knowledge without referring to documented tests.

Ad hoc testing enables discovery - of new issues, areas that may not have been touched by planned tests, new perspectives that question requirements and assumptions. Ad hoc testing can find holes in your test strategy. Ad hoc testing when run post planned testing, serves as a tool for verifying the completeness of your testing.

A drawback of ad hoc testing is that these tests are not documented and, hence, not repeatable. This prevents ad hoc tests from being used for subsequent regression testing. To overcome this, it is recommended that we document test cases as much as possible once they have been executed. Even so, some tests and steps may be left out as testers "jump" across functional areas to unearth issues. Ad hoc tests can complement the planned testing exercise; on their own, they don't inspire much confidence in coverage, and repeatability remains a concern despite efforts to document as many of the tests and steps as possible.


Software Testing Lifecycle (STLC)

Listed below are the typical steps in a Software Testing Life Cycle (STLC). Note that these are not set in stone and can change per your requirements; phases can collapse or become more granular as needed. I have tried to list the steps in a fairly granular fashion, and several of these can be combined into a larger "phase".

Requirements analysis and review

This covers functional and non-functional requirements and their impact on testing. An RTM (requirements traceability matrix) may be prepared, along with defining the acceptance criteria/definition of done for various requirements. Additionally, any specific testability requirements may be conveyed to stakeholders while considering the automatability of requirements. In the past, one might also consider a formal requirements sign-off with testing as one of the stakeholders. In the Agile world, a high-level set of requirements can be agreed upon (perhaps at an epic level or even high-level stories), with changes expected as teams go through each iteration or sprint (e.g. Scrum).
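At its simplest, an RTM maps each requirement to the tests that cover it, making coverage gaps visible at a glance. Here is a minimal sketch of that idea; the requirement and test case IDs are hypothetical:

```python
# A requirements traceability matrix (RTM) sketched as plain data:
# hypothetical requirement IDs mapped to the test cases that cover them.
rtm = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],  # no coverage yet
}

# Traceability lets us flag requirements with no associated tests.
uncovered = [req for req, tests in rtm.items() if not tests]
print("Requirements without test coverage:", uncovered)  # → ['REQ-003']
```

Whether you keep this in a spreadsheet or a test management tool, the underlying structure is the same mapping.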

Design review

Continuing from the previous stage, a greater degree of clarity around requirements is now available. Teams may review designs and mockups, which help the test team prepare the test plan and test cases. Teams may iterate until acceptable designs are identified.

Test planning

This is the stage where the test plan(s) is/are prepared, including initial effort estimates, resource plans, test tool identification, etc. I use "test plans" to mean both an overall test plan and individual test plans for sub-components or modules, as the case may be. Depending on the size and nature of your application, you may have just one overarching plan, or an overall plan supported by sub-level plans.

Test design

The test team creates test cases, followed by cross-functional reviews typically involving the development team and product managers/owners. These could be automated and/or manual tests.
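As a sketch of what an automated test case produced at this stage might look like, consider the hypothetical function below (both the `apply_discount` function and its rules are assumptions for illustration):

```python
# Hypothetical function under test: applies a percentage discount to a price.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

# Test cases derived from the (assumed) requirement: a typical value,
# a boundary value, and an invalid input that must be rejected.
def test_typical_discount():
    assert apply_discount(200.0, 10) == 180.0

def test_no_discount():
    assert apply_discount(99.99, 0) == 99.99

def test_invalid_percent_rejected():
    try:
        apply_discount(100.0, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass

test_typical_discount()
test_no_discount()
test_invalid_percent_rejected()
```

In practice these would live in a test framework (pytest, JUnit, etc.) rather than being invoked by hand, but the shape is the same.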

Test environment preparation

Here, the test bed/lab setup is performed. Any third-party software required is installed and integrated. Product builds are installed on this environment and sanity/smoke tested prior to starting an extensive test campaign. In summary, all required hardware and software components are set up, integrated and made ready.

A point to note: the ideal goal here is to mimic the real-world or production environment. Depending on your resourcing situation, you may have either a replica of your production deployment or a close-enough clone.
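A first sanity check on a freshly built environment can be as simple as verifying that every required setting is present before any tests run. A minimal sketch, assuming a hypothetical list of required settings:

```python
# Hypothetical settings the test environment must define before testing starts.
REQUIRED_SETTINGS = ["APP_URL", "DB_HOST", "API_KEY"]

def smoke_check(env):
    """Return the list of required settings missing from the environment."""
    return [name for name in REQUIRED_SETTINGS if not env.get(name)]

# Example: a partially configured environment fails the check.
env = {"APP_URL": "http://test.example", "DB_HOST": "db.test"}
missing = smoke_check(env)
print("Missing settings:", missing)  # → ['API_KEY']
```

A real smoke test would go further (pinging services, installing a build, running a login flow), but the principle is the same: fail fast before committing to a full test campaign.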

Test data preparation

The test data required to execute all identified tests is prepared. The nature of the data depends on the type of inputs accepted by the SUT. While this step may sound simple, there are decisions to be made about the data selected. Unless you have unlimited time and resources at your disposal, you will need to pick a subset of data that constitutes a representative sample, one whose successful execution provides a reasonable degree of confidence in your application's ability to handle most (ideally any) inputs. Let's call it the test data selection problem, which we will touch upon in a subsequent post. For now, know that selecting the right set of data has a significant bearing on the outcome of your testing campaign. Note also that the workload needs to be simulated to match real-world usage.
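One common way to pick such a representative sample is to take one value per equivalence partition plus the boundary values. A small sketch of the idea, using an assumed "valid age is 18 to 65" rule for illustration:

```python
# Sketch of the test data selection problem: instead of testing every
# possible age, pick one value per equivalence partition plus the
# boundaries. The valid range 18-65 is an assumption for illustration.
def representative_ages(low=18, high=65):
    return {
        "below_lower_boundary": low - 1,   # invalid partition
        "lower_boundary": low,             # edge of valid partition
        "typical": (low + high) // 2,      # middle of valid partition
        "upper_boundary": high,            # edge of valid partition
        "above_upper_boundary": high + 1,  # invalid partition
    }

print(representative_ages())
```

Five data points stand in for the whole input space, which is the trade-off at the heart of test data selection.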

Test oracle preparation, test stub and driver preparation, exit criteria definition

Here we identify a mechanism or entity used to confirm whether the software performed correctly or otherwise. An example would be the requirements definition itself, which the application must satisfy. For automated tests, we need to develop suitable test oracles (e.g. functions that return a boolean value or some such mechanism) to check whether the observed behavior is correct. Other tasks include preparing any needed stubs and drivers and determining the criteria for terminating your tests.
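A boolean-returning oracle can be as simple as checking a property the correct output must satisfy, rather than hard-coding expected values. A sketch, using a hypothetical square-root routine as the system under test:

```python
import math

# A simple test oracle for a hypothetical square-root function: instead of
# hard-coding expected outputs, it checks a property any correct result
# must satisfy (the result squared should be close to the input).
def sqrt_oracle(x, result, tolerance=1e-9):
    """Return True if `result` is an acceptable square root of `x`."""
    return result >= 0 and math.isclose(result * result, x, rel_tol=tolerance)

# Using the oracle to judge observed behaviour of the system under test:
print(sqrt_oracle(9.0, 3.0))  # correct result passes
print(sqrt_oracle(9.0, 3.1))  # incorrect result fails
```

Property-based oracles like this scale well to automated tests because one oracle can judge any number of generated inputs.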

Test execution

True to its name, this stage involves running the tests and reporting results.

Test results analysis and reporting defects

We could combine this with the previous stage, but I have separated it out for a little more clarity. In this stage, test results are analyzed and defects (bugs) are reported.

Fix verification, retest

Reported defects (bugs) are fixed and the fixes verified. Necessary regression tests are run to ensure the fixes haven't introduced new defects. In the real world, expect fix failures, unexpected regressions and a lot of "duh" moments.

Test closure

Prepare and submit a report of all testing performed, defects found, etc., along with relevant artifacts. You would normally archive these somewhere for later reference.