In Michael Bolton’s post on Testing vs Checking and the follow-up comments, he splits testing into exploratory testing and confirmatory testing (he would rather call the latter ‘checking’). I wasn’t convinced by this split, so I studied all his posts on the subject. I initially thought of leaving a comment on his post, but decided it was better suited to a post on Testing Perspective, because what I am saying has not been said earlier (at least not explicitly) and needs a treatment of its own.
So, let’s start by discussing what confirmatory testing is. The bad part is that ‘confirmatory testing’ is not as well-known a term as exploratory testing. The good part is that I am happy it isn’t; we already have half the English dictionary in the testing glossary. What this means is that I am not coining this term, I am just using it as Michael did. I am calling it confirmatory testing so that readers can correlate this post with the post and comments on Michael’s blog. Once this discussion is over, I have no intention of using or advocating the term ‘confirmatory testing’ unless a similar argument comes up.
Following is the entry for the word confirm in the online Oxford dictionary:
(verb) establish the truth or correctness of (something previously believed or suspected to be the case)
On the same page, “confirmatory” is mentioned as the adjective form of this verb.
Based on the meaning of the word ‘confirm’, I came up with the following description of confirmatory testing:
Confirmatory testing of software is done to test the correctness of your conscious and/or sub-conscious assumption(s) about how the software, and/or the overall system in which it exists, would behave when you, or someone or something (human(s)/machine(s)) acting on your behalf, interact with it in a given manner, directly or indirectly.
So, confirmatory testing would involve:
- Assumption – The assumption about the software’s behavior could be anything; it could be based on anything; it could be yours or someone else’s. The assumption could become known at any stage of a test – before test execution, after test execution, or at the time of report analysis. The assumption could be conscious on the part of the tester or sub-conscious, based on what the tester feels is right from his/her past experience, biases, etc.
- Interaction – The interaction could mean providing a given set of inputs to the software, interacting with its GUI controls, modifying the system on which the software is running in an attempt to affect the software, etc.
- Confirmation – A decision whether the test passed or failed or needs further investigation, based on observations made on the system as a result of the interaction. The observations are compared against the assumption(s).
Confirmation in the context of testing is not tied to test execution but to the analysis of the test results/observations, when we can say for sure whether a test passed or failed, or at least that, with all our current knowledge, we are unable to comment on the outcome. It is only at this stage that the test has provided useful information. Testing should be considered complete for a given interaction only when the result of confirmation, in terms of pass or fail, is available.
An important point to note is that if, at the time of confirmation, the tester or the automated test system is not aware of the assumption that needs to be confirmed, the test for that particular interaction is not complete. This is true irrespective of whether the testing is based on formal documented test design, whether the test is automated or manual, and whether the approach to testing was exploratory. Until the confirmation is done, the test is incomplete. Note that this final confirmation is contextual and could very well be incorrect from the perspective of a different tester or evaluator.
Despite the ambiguity in the above situation, it is still confirmatory testing. The only difference is that the tester is unable to confirm the results against any of his/her existing assumptions/beliefs/expectations. At this stage he/she would say that the test results need further investigation, either by the same tester or a different one. That might involve analysis of the same result set, or executing the test again by repeating the exact interaction to collect fresh results, possibly even over multiple iterations of the same test.
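The three components above – assumption, interaction, and confirmation with its three possible outcomes – can be sketched in a few lines of code. This is a minimal illustration with hypothetical names, not a real test harness:

```python
# A minimal sketch of confirmatory testing: an assumption held before (or
# surfaced after) execution, an interaction with the software, and a
# confirmation step that yields pass, fail, or "needs further investigation".

def confirm(observed, assumed):
    """Compare an observation against an assumption about the behavior."""
    if observed is None:
        # No usable observation was collected; we cannot yet confirm.
        return "needs further investigation"
    return "pass" if observed == assumed else "fail"

# Assumption: logging in with valid credentials shows a welcome page.
assumed_title = "Welcome"

# Interaction: simulated here; a real test would drive the software itself.
observed_title = "Welcome"

print(confirm(observed_title, assumed_title))  # pass
```

Until `confirm` has produced one of the first two verdicts, the test for that interaction is, in the terms used above, incomplete.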
Let’s see confirmatory testing in various contexts:
Testing with Formal Test Designing
- We could have documented detailed instructions/steps thereby making the assumptions pre-defined as “Expected Results/Behavior”. While testing, we would confirm only these assumptions.
- We could have been provided with a test case document written by someone else. If we stick to testing only the assumptions in the test design document, we are confirming someone else’s assumptions about the software behavior. The “someone else” in this case could represent one or more testers, developers, technical leads, managers, architects, product owners, and anyone else involved directly or indirectly in the creation of the test design document or its reviews.
Exploratory Testing
- We could be doing exploratory testing and, in the course of it, identify an interesting test we would like to execute. While conducting this test, we would assume/expect a certain kind of behavior (knowingly or unknowingly) and confirm it by analyzing the output.
- As exploratory testing is about learning during test execution, we may not be clear about the outcome of a given interaction. We would still go ahead and perform that interaction. By observing the behavior of the software/system, we would still confirm whether what happened could be tagged as pass/fail/needs further investigation. Knowingly or unknowingly, at this stage we are doing confirmation against our beliefs/expectations/previous experience. If we are unable to do so, the test under discussion should be treated as incomplete and a note should be made in the session notes.
Test Automation
- We could have written a small script, or developed or used an existing testing tool/framework, to automate test cases that we thought could and should be automated. We have assertions that return pass/fail, or set similar variable values, based on comparisons done using relational operators. With such comparisons, we are confirming that the value(s) seen at run time has/have the same relation to the predefined value(s) that we assumed before test execution.
- Similar to the above, we could have done this as part of formal test automation involving other team members through feedback, direct contribution, and reviews. We could also be involved in executing and analyzing such tests created by other testers. The resulting set of assertions is the collective set of assumptions made by one or more testers, developers, technical leads, managers, architects, product owners, and anyone else involved directly or indirectly in the creation of the test automation and its reviews. So, while testing, we are confirming the assumptions of all of these stakeholders.
- A small variant of this is automated tests which do not in themselves declare any test case as pass/fail, but rather provide information which has to be analysed post-execution to determine the pass/fail status. An example is client-server performance testing. This post-execution confirmation could be based on an SLA (Service Level Agreement), industry standards (e.g. web response time for an HTML-only page), company standards (e.g. a company with many web applications could have such standards), and/or the prior experience/expectations of the team/tester analyzing the results.
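The assertion-style automation described above can be sketched as follows. The function under test and the check helper are hypothetical stand-ins; real suites would use a framework such as unittest or pytest, but the principle is the same: each relational comparison confirms, at run time, an assumption fixed before execution.

```python
# A minimal sketch of assertion-based automated checking.

def apply_discount(price, percent):
    """Hypothetical stand-in for behavior of the software under test."""
    return price - price * percent / 100

def check(description, actual, expected):
    """Relational comparison of a run-time value against an assumed value."""
    verdict = "pass" if actual == expected else "fail"
    print(f"{description}: {verdict}")
    return verdict

# Assumptions encoded before test execution:
check("10% off 100 is 90", apply_discount(100, 10), 90)       # pass
check("0% off leaves the price unchanged", apply_discount(100, 0), 100)  # pass
```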
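The performance-testing variant, where confirmation happens only after execution, might look like this. The SLA threshold, violation tolerance, and sample data are all hypothetical:

```python
# A minimal sketch of post-execution confirmation: the test run itself only
# records response times; pass/fail is decided afterwards against an SLA.

SLA_MAX_RESPONSE_MS = 2000  # assumed limit from a contract or company standard

# Measurements collected during the run (made-up values):
response_times_ms = [850, 1200, 640, 2300, 910]

def confirm_against_sla(samples, limit_ms, allowed_violation_rate=0.05):
    """Confirm the recorded samples against the SLA after the run is over."""
    violations = sum(1 for t in samples if t > limit_ms)
    rate = violations / len(samples)
    return "pass" if rate <= allowed_violation_rate else "fail"

# 1 of 5 samples (20%) exceeds the limit, above the 5% tolerance:
print(confirm_against_sla(response_times_ms, SLA_MAX_RESPONSE_MS))  # fail
```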
All testing is confirmatory, because whenever we test, we do so with many conscious and sub-conscious assumptions, based on the information available about the software under test as well as our past experience. While testing, we confirm whether these assumptions are correct.
If all testing is confirmatory, I don’t see any reason why we should think of testing as exploratory testing versus confirmatory testing, because even exploratory testing is confirmatory testing.
Exploratory Testers: Here’s another statement for you: “All Testing is Exploratory”. If you are not getting my point already, stay tuned for further posts.