
After the confusion around not publishing Michael Bolton’s comment in response to my earlier post was sorted out, Michael has replied (thanks Michael!) with an elaborate post, All Testing is (not) Confirmatory, providing reasoning in favor of his concept of “Testing versus Checking”. Meanwhile, I had a discussion with Pradeep Soundararajan about the comment episode and he pointed out something very important: “We shouldn’t let software issues come in the way of relationships”. I couldn’t agree more, because the time that could have been spent discussing and building on the argument was spent instead on proving that I didn’t get Michael’s comment. Next time, I would rather reach out to the person with such objections via email than use Twitter as the platform.

Having said that, let’s start with the actual discussion. If you want to get the best out of this discussion, please read the following posts before reading any further.


I am not going to quote complete sections from Michael here, rather the gist of his discussion points. Please see his original wonderful post, All Testing is (not) Confirmatory, where he has challenged my statement that all testing is confirmatory.

Michael: Relates the example of a Greek mythological character – Procrustes – and claims that I tried to fit “Confirmation” into Procrustes’ bed.

This was an interesting analogy. Let me start with another one from Hindu Mythology (Indian Epic – Mahabharata):

Yudhishthira (one of the Pandavas) and Duryodhana (one of the Kauravas) were sent on a world tour to see how the world is doing. When they came back, Duryodhana said the world is very bad, everyone is bad, the world is as good as hell. In contrast, Yudhishthira said that the world is great, people are very good, the world is as good as heaven. (Yudhishthira and Duryodhana represent positive and negative characters in the epic, respectively.)

Moral of the story: We see the world the way we are ourselves.

I have related this story to express the irony that, to me, it is Michael’s division of testing that mimics Procrustes.

Whatever I think about the myth Michael related, my argument wouldn’t be complete if I didn’t answer him in the language of his analogy. So, let’s talk about the Greek myth. One essential point that Michael didn’t mention is that some experts think Procrustes had two beds, a smaller one and a larger one, and not one bed. So, no guest would ever fit, as he would change the choice of bed 🙂

Going by this analogy, here is why I see myself as different from Procrustes in the context of guests:

  • I have a single bed for guests – I treat “testing” as a big adjustable bed – a bed, big enough to accommodate all my guests.
  • I get very few guests. I don’t give the same guest a new name or calculate bed space differently just because s/he is wearing different clothes than before. Most of my learning about testing happens via a handful of guests. I can easily recollect their names.
  • At times people have weird names. Their names may not reflect their personality. Because I have only a few guests, I *know* them. So, their names really don’t make any difference to me. I have had a lot of discussion with them; I know their personalities and various moods. Other people might know them differently; I’m not worried about that.
  • As I always expect multiple guests to be at my home, I don’t think about cutting or stretching a guest to fit into the bed. Stretching is out of the question, as I can’t block the full bed for a single guest. Cutting is not required, as I would rather adjust the bed if needed so that all my guests can co-exist on the same bed comfortably.
  • The “exploration” and “confirmation” guests are permanently at my home. They never leave the bed.
  • As I don’t have multiple beds, I don’t need to divide my guests. I don’t need to attach special importance to a particular guest and provide him the longer bed. There are no such choices. I don’t differentiate between guests.

This is why I think Michael mimics Procrustes (I hope Michael doesn’t take this one personally, the way I didn’t take his analogy personally, but rather as a discussion to further the argument):

  • He appears to be a Procrustes with two beds – “Exploratory Testing” and “Confirmatory Testing”.
  • Exploratory testing bed for him is the longer one, comfortable, meant for humans. This is because he thinks that this bed is for those with “sapience”.
  • Confirmatory testing bed is the smaller one, no cushions or mattress, meant for machines. This is because Michael thinks that this is meant for guests who do “non-sapient” work and should be treated as machines.
  • Every time a guest comes, he checks the guest’s clothes and mood at that time and then decides which bed to offer. The guest might have to be stretched or cut to fit the bed, as for Michael, testing is either exploratory or confirmatory.
  • Exploration and confirmation don’t individually represent complete testing, so Michael would have to stretch/cut guests to fit into either of the beds.
  • He doesn’t welcome multiple guests and doesn’t acknowledge that they can co-exist.

Michael: Mentions Confirmatory testing as “an approach or a mindset to testing”.

As discussed in my previous post, in my opinion, confirmatory testing is not an approach or mindset. Whatever approach to testing you choose, whatever your mindset is: Confirmation is not optional. Confirmation is an essential step in any form of testing.

Michael: Stresses that his usage of “confirmatory” is twofold: its role in test design and not in test results, and my usage of the word “previously” in the definition of confirmation.

His discussion is justified based on his version of “confirmatory”.

I don’t confine confirmation to test results alone, as pointed out by him. Confirmation applies to assumptions (which might be part of a written test case, an automated test, or an exploratory test) as well as to results analysis. Things will become clearer if you start thinking of “Confirmation” rather than “Confirmatory Testing”.


When I say “All Testing is Confirmatory”, it means all types of testing include confirmation. It’s the same way as when I (or even the thought leaders behind exploratory testing) say “All Testing is Exploratory”. Exploration and Confirmation are essential parts of all types of testing. This should not be confused with whether it was *you* who explored and came up with a test case. It shouldn’t be confused with whether someone else (human/machine) executed that test and not you. It shouldn’t be confused with whether the confirmation was automated or based on human analysis (or sapience, if you like). Exploration and Confirmation are unavoidable steps in all sorts of testing.

Michael: Points out that confirmation is just about showing that a product can work

I disagree. Confirmation is independent of the type of assumption. An assumption could be that a product works or that it does not work. So, confirmation is very much applicable to fault injection, fuzzing, defect-based testing, etc. Such tests confirm that the product cannot work under certain circumstances.
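To make this concrete, here is a minimal sketch of confirming a negative assumption. The function and the failure mode are purely hypothetical (they are not from Michael’s post or mine); the point is only that the decision step confirms an assumption about failure:

```python
# A minimal, hypothetical sketch of confirming a *negative* assumption:
# "the product cannot work under these circumstances".

import pytest


def submit_order(payload):
    # Hypothetical system under test: it rejects orders without an "item" field.
    if "item" not in payload:
        raise ValueError("missing item")
    return {"status": "accepted"}


def test_malformed_order_is_rejected():
    # Confirmation step: we confirm the assumption that the product does
    # NOT work (it raises an error) for this fault-injected input.
    with pytest.raises(ValueError):
        submit_order({"quantity": 3})
```

The assumption here is about failure rather than success, yet the decision point is still a confirmation.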

Michael: While differentiating, “exploratory mindset” from “confirmatory mindset” mentions: “In this mindset, we’re typically seeking to find out how the program deals with whatever we throw at it, rather than on demonstrating that it can hit a pitch in the centre of the strike zone.”

In both these scenarios, there would be a decision point. It might be automated or done by human users. In his language, that decision point is “Is there a problem here?” When the tester attempts to answer this question, s/he is “confirming”.

Irrespective of the nature of the mission, the knowledge previously known or gained via exploration by you or someone else, and whether test cases are documented or undocumented, when it comes to the decision point – the point at which you form an opinion about how good the subject of your exploration is – you are doing confirmation.

That’s the reason I do not agree with Michael’s interpretation of the word “previously”. At the moment of decision, everything you know/do before it is “previous” to it.

Michael: “I don’t think of that as confirmation (“establishing the truth or correctness of something previously believed or suspected to be the case”). I think of that as application of an oracle; as a comparison of the observed behaviour with a principle or mechanism that would allow us to recognize a problem”

As per the BBST course developed by Cem Kaner and James Bach, “An oracle is the principle or mechanism by which you recognize a problem”. (I guess it was changed from the original text at this link: http://www.testingeducation.org/k04/OracleExamples.htm, where it is mentioned: “An oracle is a mechanism for determining whether the program has passed or failed a test.” I’ll come to the discussion of Pass/Fail versus “Is there a problem here?” later.) I am fine with both definitions; I interpret both in the same manner.

I agree with Michael’s point that confirmation involves the application of an oracle. For that matter, even in exploratory testing you would apply oracles, and hence confirmation is a part of “exploratory testing”.
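Here is a minimal sketch of what I mean, with purely illustrative names: the oracle is the mechanism by which a problem is recognized, and confirmation is the decision point at which that oracle is applied, whether by a human or by a machine.

```python
# Illustrative sketch only: an oracle as "a principle or mechanism by which
# you recognize a problem", applied at the decision point (confirmation).

def consistency_oracle(observed, expected):
    """Oracle: the product should behave consistently with a reference."""
    return observed == expected


def decide(observed, expected):
    # Confirmation step: applying the oracle yields the tester's opinion.
    if consistency_oracle(observed, expected):
        return "No problem here"       # roughly, "Pass"
    return "There is a problem here"   # roughly, "Fail"


print(decide(observed="42 items listed", expected="42 items listed"))
```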

Michael: “If your hypothesis is that the product works—that is, that the product behaves in a manner consistent with the oracle heuristics—then your approach might be described as confirmatory.”

An oracle is defined in a very generic manner. The definition doesn’t talk of “pre-defined” oracles the way Michael puts it. Rather, the definition of oracles is consistent with my interpretation of “previously” – at the moment of decision, everything you know/do before it is “previous” to it. The oracle forms the basis of that decision.

Every form of testing uses oracle(s) and hence confirmation.

Michael: “Yet the confirmatory mindset has been identified in both general psychological literature and testing literature as highly problematic. Klayman and Ha point out in their 1987 paper Confirmation, Disconfirmation, and Information in Hypothesis Testing that “In rule discovery, the positive test strategy leads to the predominant use of positive hypothesis tests, in other words, a tendency to test cases you think will have the target property.””

To me, confirmation and disconfirmation are two sides of the same coin. The moment you make an assumption “negative”, confirmation becomes disconfirmation. The paper that Michael quotes treats confirmation as a “positive testing strategy”. With all the criticism that the terms “positive testing” and “negative testing” get in various blogs and forums, I am surprised that Michael has such a limiting view of “confirmation”.

I disagree with the interpretation of “confirmation” as conducting only the tests that show the software works. You can conduct a test that confirms that a problem exists. Splitting tests into confirmation and disconfirmation calls for the same level of criticism as “positive” and “negative” testing (see one such critique by Pradeep Soundararajan: http://testertested.blogspot.com/2007/04/negatives-of-negative-testing.html). Testing is testing. Testing includes confirming whether the assumptions made about the subject of the test are correct. Assumptions could very well be about software failure.

Michael: “As I see it, if we’re testing the product (rather than, say, demonstrating it), we’re not looking for confirmation of the idea that it works; we’re seeking to disconfirm the idea that it works. Or, as James Bach might put it, we’re in the illusion demolition business.”

As mentioned earlier, I have a very generic view of confirmation which covers both “positive” and “negative” test strategies.

By the way, James’s view of the “testing business” is incomplete. We are not just in the illusion demolition business. Proving that the software works is also a part of our business.


Going by your discussion of confirmation and disconfirmation, James’s view of the testing business is just disconfirmation. For me, it’s just confirmation with different flavors of assumptions.

Michael: Challenges my opinion that a test has to “Pass/Fail”. Emphasizes that a better outlook would be to ask “Is there a problem here?”

I am perfectly fine with Michael’s usage of “Is there a problem here?” instead of “Pass/Fail” (and obviously a lot of testers today have visibly started using it; I am aware of a recent website with a similar theme). It’s a matter of interpretation. I interpret “Pass/Fail” as the subjective opinion of the tester (even if reflected in an automated test). So, if a tester replies to your question “Is there a problem here?”:

  • Yes: Means Fail
  • No: Means Pass
  • Do not know for sure: Needs further investigation

In my test reports, there’s a “remarks” column where I can elaborate on my findings for each such claim. Whether you name the column “Is there a problem here?” and answer Yes/No/Don’t Know, or you name the column Pass/Fail/Under Investigation and give the appropriate value, makes no difference to the value of the report.
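A minimal sketch of this equivalence (field names are illustrative only) could look like this:

```python
# The tester's answer to "Is there a problem here?" maps directly onto a
# Pass/Fail/Under Investigation verdict; the remarks column carries the
# elaboration. Names and values here are illustrative only.

ANSWER_TO_VERDICT = {
    "Yes": "Fail",
    "No": "Pass",
    "Don't know": "Under Investigation",
}


def report_row(test_id, answer, remarks=""):
    return {
        "test": test_id,
        "is_there_a_problem_here": answer,
        "verdict": ANSWER_TO_VERDICT[answer],
        "remarks": remarks,
    }


print(report_row("login-timeout", "Don't know", "Needs a longer observation window"))
```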

The problem with your approach to “Is there a problem here?” is that you are discussing it only partially. At first look, I confess, it looks pretty convincing. But the moment one starts thinking about how one would have to answer this question, the difference fades away.

Michael: “I think Rahul’s notion that a test must pass or fail is confused with the idea that a test should involve the application of a stopping heuristic.  For a check, “pass or fail” is essential, since a check relies on the non-sapient application of a decision rule.”

I was actually not able to understand Michael’s note on my confusion around “stopping heuristic”. How’s Pass/Fail related to “stopping heuristic”? I’ll have to check with Michael on what exactly he meant before commenting.

As an extension to my previous note, I don’t differentiate between Pass/Fail and “Is there a problem here?”. Secondly, I don’t agree that confirmation is confined to “non-sapient” testing. Confirmation is a part of every form of testing. Even the most “sapient” exploratory test has a “sapient” confirmation step, depending on the nature of the oracle.

Michael – “I might test a competitive product to discover the features that it offers; such tests don’t have a pass or fail component to them. A tester might be asked to compare a current product with a past version to look for differences between the two.” Michael further relates examples to illustrate that Pass/Fail is limiting.

Just comparing two products in terms of features is not testing unless there is a decision point. It is “exploration”, which, as I acknowledged earlier, is a part of all forms of testing. Exploration alone does not complete even a single test case. There has to be a confirmation step, where the tester expresses his/her opinion on the compared features (labeling it as Pass/Fail or answering “Is there a problem here?”). At the decision point, it is always confirmation. The basis of this confirmation – the assumption – should be available; otherwise, the tester wouldn’t be able to express any opinion. Till an opinion is expressed, it is exploration/learning (a part of testing) and not testing (complete).

As mentioned in my earlier post on the subject, “Testing should be considered complete for a given interaction only when the result of confirmation in terms of pass or fail is available”. You could extend/replace the last part as “when the result, in terms of an answer to the question ‘Is there a problem here?’, is made available by the tester as a Yes/No”.

Michael: (Thanks for relating a performance testing scenario.) Comments that load testing (hypothesis: “The system does handle 100,000 transactions per minute”) often gets addressed with a confirmatory mindset: the tester “sets up some automation to pump 100,000 transactions per minute” and, if the system stays up and exhibits no other problems, he asserts that the test passes. On stress testing he mentions: “have a different information objective here than for the load test, and we have a mission that can’t be handled by a check.”

“He sets up some automation to pump 100,000 transactions per minute” – I see that Michael has an over-simplified view of load testing.

For the scenario mentioned by Michael, confirmation of the assumption – “this site would handle 10,000 transactions per minute” or “this site wouldn’t handle 10,000 transactions per minute” – is not as straightforward as depicted. A lot of effort goes into designing a workload distribution pattern, user simulation, data collection, plotting, correlation and analysis. “Handling” in this case is a very ambiguous word, because the mere fact that the site didn’t go down at this load doesn’t mean that it was able to “handle” the load. “Exhibited no other problems” – what does Michael mean by no other problems, and how can establishing that “no other problems exist” be non-sapient? As a part of the analysis, for example, you look at the memory plot of one of the servers and observe that even after a reduction in load, memory consumption didn’t go down – you have found a memory leak. This “confirms” whether your assumption about the site was right or wrong. To reach this stage of confirmation takes a lot of “sapience”. Needless to say, there are pieces of work here shouting “I am exploration” as well as “I am confirmation” for a load test. In load tests, resources are never analyzed in isolation, so I fail to understand how load testing is non-sapient (Michael’s version of the confirmatory approach).
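As a minimal sketch of the kind of analysis I am describing (the numbers and the threshold are made up for illustration): after the load is reduced, memory on a server should come down; if it stays near its peak, that observation feeds the confirmation that the site did not really “handle” the load.

```python
# Illustrative sketch: deciding whether memory behaviour after a load test
# suggests a leak. The samples and the tolerance are hypothetical.

def possible_memory_leak(samples_during_load, samples_after_load, tolerance=0.9):
    """Flag a possible leak if memory after load reduction stays near its peak."""
    peak = max(samples_during_load)
    settled = min(samples_after_load)
    return settled > peak * tolerance


during_load = [1.2, 2.8, 3.9, 4.1]   # GB, while transactions are being pumped
after_load = [4.0, 3.9, 3.9]         # GB, after the load has been reduced

if possible_memory_leak(during_load, after_load):
    print("Possible memory leak: the site did not really handle the load")
else:
    print("Memory recovered: no leak observed in this run")
```

Designing the workload, choosing what to sample and deciding what such a plot means are the “sapient” parts; the final comparison is the confirmation.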

Load testing does not fall into Michael’s definition of “checking” or confirmatory testing; rather, just as in any other form of testing, exploration and confirmation are essential parts of it.

If, say, 10 testers are involved in this load testing project and deal with various stages of it, some of which require less sapience and others more, that does not make load testing non-sapient or confirmatory testing.

Ironically, many a time, stress testing of a website is “non-sapient”. You could blindly keep pumping load and call it a stress test. It wouldn’t need the “sapient” intricate design of a workload distribution pattern, effective state management, analysis of the results on servers that didn’t crash, and so on.

Michael: “In the latter test, there is a confirmatory dimension if you’re willing to look hard enough for it. We “confirm” our hypothesis that, given heavy enough stress, the system will exhibit some problem. When we apply an oracle that exposes a failure like a crash, maybe one could say that we “confirm” that the crash is a problem, or that behaviour we consider to be bad is bad. Even in the former test, we could flip the hypothesis, and suggest that we’re seeking to confirm the hypothesis that the program doesn’t support a load of 100,000 transactions per minute.”

I am happy to see how easy it was for Michael to put into a single paragraph what I have been saying so far. The reason for this is simplicity. It’s very simple to see how confirmation is a part of every form of testing, and then, in fact, never think about it :-). No need to think about and differentiate exploratory testing from confirmatory testing; just acknowledge that exploration and confirmation are essential parts of every form of testing. No need to think about whether you need to bring your sapience into the picture or not; have it handy all the time, whether it’s about exploration or confirmation.

Testing = Exploration + Confirmation + …

Rahul Verma

11 Responses to “All Testing is Confirmatory – Part II”

  1. Michael Bolton

    A few responses:

    – Exploratory testing bed for him is the longer one, comfortable, meant for humans. This is because he thinks that this bed is for those with “sapience”.

    – Confirmatory testing bed is the smaller one, no cushions or mattress, meant for machines. This is because Michael thinks that this is meant for guests who do “non-sapient” work and should be treated as machines.

    This is an incorrect spin on the point of testing vs. checking. I abhor and utterly reject the idea that any person should be treated like a machine. Indeed, this is my complaint with the state of testing in much of the world, but especially in India: organizations (and in particular Western organizations) are treating testing as non-sapient work, and are treating testers—real, human testers—as though they were machines. This is a waste of human potential, and, to my mind, it’s also immoral.

    In your summary of my arguments above, you’ve left out a point that I consider salient. I said, “When I describe a certain approach to testing as “confirmatory” in my discussion of testing vs. checking, I’m not trying to introduce another term. Instead, I’m using an ordinary English adjective to identify an approach or a mindset to testing.” This is in contrast to something that I consider a term of art, “exploratory testing”, for which we’ve addressed particular structures, skills, and tactics. I’ll return to that point in a moment.

    …confirmation of the assumption – “this site would handle 10,000 transactions per minute” or “this site wouldn’t handle 10,000 transactions per minute” is not as straight-forward as depicted.

    Of course it’s not, and that’s my point. My point is, and always has been, that the checking part is the least interesting part of any test; that the check can be applied by a machine; that the important stuff is the design, programming, and analysis of the check, all of which are testing activities. That said, you can choose to perform a load test in an exploratory or a confirmatory way. The risk that I see is that if your automation is oriented towards fulfilling a previously identified set of conditions for success or failure, and your analysis is focused on seeing a green bar (or some similar mechanism), then you are unlikely to observe problems that aren’t covered by your prior assumptions. That’s what I mean by a confirmatory mindset, and I see that as a problem.

    As mentioned earlier, I have a very generic view of confirmation which covers both “positive” and “negative” test strategies.

    Yes. My question for you is this: is that generic view helpful? How will it allow us to sharpen our approaches to testing? By expressing it, what decisions will it inform? Will it cause us to change our thinking or our actions in any material way? Similarly, one could assert that “all tests are based on some idea”. Okay, but so what?

    If you are suggesting that all testing is confirmatory and therefore there is danger in a kind of testing that is focused on validating our assumptions without challenging them, I’d agree with you (which is why I cited the Klayman and Ha paper). If you are suggesting that there is a kind of testing called “confirmatory testing”, and therefore we should learn the structures, skills, and tactics of confirmatory testing (as our community has done with exploratory testing), then that could be really cool. If, on the other hand, you’re suggesting that “confirmation is a part of every form of testing…and then in fact, never think about it”, then I suspect that the greater community will find that rather less helpful.

    —Michael B.

  2. Shrini

    >>>Whatever approach to testing you choose, whatever your mindset is: Confirmation is not optional. Confirmation is an essential step in any form of testing.

    I disagree. Many testing (not checking) activities might not even remotely use the term “confirmation”. Take, for example, this exploratory testing (not checking) mission: “Investigate the behavior of the gmail application when dealing with multibyte-character-based strings” or “analyze the behavior of the exchange server running on this new hardware”. Please note that I am not making any assumptions here (implicit or explicit) – I am conducting an OPEN-ended investigation (meaning it “closes” only when time is up, or money is over, or we have collected sufficient information to show to our stakeholders). No assumptions and no use of the term/phrase “confirm” – even remotely. What kind of testing is this? So the claim that confirmation is an essential step in any form of testing is falsified. Right?

    >> His discussion is justified based on his version of “confirmatory”.

    Can you compare and contrast Michael’s and your versions of “confirmation”? Where do these two depart from each other?

    >>> Confirmation applies to assumptions (which might be a part of a written test case/automated test/exploratory test) as well as results analysis.

    Applying or introducing “assumption” to this does not amount to extending “confirmation” to “test design” – assumptions are a part of test design, not everything, right?

    >>>When I say “All Testing is Confirmatory”, it means all types of testing include confirmation.

    This is the root of all the confusion. When you say “all testing is confirmatory”, I read it as: there is NO testing that is not confirmatory. Having a post titled “All testing is confirmatory” rather than saying “All testing involves elements of confirmation” (though even this can be falsified) is the source of confusion.

    >>> It’s the same way when I (or even the thought leaders behind exploratory testing) say “All Testing is Exploratory”.

    I am not sure if any ET thought leader has specifically said “All testing is exploratory” (please point me to a reference – I would challenge that). You might have heard statements like “All testing involves SOME element of exploration”. Note this is different from saying “all testing is exploratory”.

    >>> Exploration and Confirmation are essential parts of all types of testing
    I would not use the qualifier “All types” and there are examples of testing where these “essential” elements are clearly missing.

    >>> Exploration and Confirmation are unavoidable steps in all sorts of testing.
    I disagree.

    >>> Confirmation is independent of type of assumption.
    But in your previous post you mentioned that an assumption is an essential element of “confirmation”. If there is no assumption, what are you confirming?

    >>> In his language, that decision point is “Is there a problem here?” When the tester attempts to answer this question, s/he is “confirming”.
    Disagree. Let us say I launch a certain application and perform a sequence of operations, asking the question “is there a problem here?” – I am just observing and might be recording all that I can possibly observe. It could be an open-ended investigation/exploration. Where is the confirmation – are there assumptions or expectations to start with? No assumptions or expectations, hence no question of confirmation.

    >> when it comes to the decision point – the point at which you are forming an opinion about how good is the subject of your exploration, you are doing confirmation.

    What if there is no decision point? In an open-ended exploration there are no decision points.

    >>> I agree to Michael’s point that that confirmation involves the application of an oracle. For that matter even in exploratory testing you would apply oracles and hence confirmation is a part of “exploratory testing”.

    While the application of an oracle is required in confirmation, you cannot conversely say that every application of an oracle is confirmation. I may apply an oracle but may not make any confirmation. I might discover new information or behavior – no confirmation there either.

    >>> Every form of testing uses oracle(s) and hence confirmation.

    Disagree. Using an oracle doesn’t automatically make the process confirmation, as it might discover new information.

    >> Splitting tests into confirmation and disconfirmation calls for the same level of criticism as “positive” and “negative” testing.

    While I agree that calling something positive or negative testing might not be a good thing for testing, “confirmatory” and “exploratory” might have some interesting applications for the good. As Michael pointed out, a “confirmatory” mindset (if I am looking to see a black car on the road, I will see a black car) can blind you to certain types of thinking fallacies. A confirmatory mindset, while it might be useful in certain situations, is limiting in many others – “your eyes will see what you expect to see”. Hence a confirmatory mindset (checking if a pre-supposed idea is true) is often unidirectional and limiting. I would even say harmful for testers…
    >>> Testing includes confirming whether the assumptions made about the subject of the test are correct. Assumptions could very well be about software failure.

    This is a fair statement, with the qualifier that not all testing includes confirmation.

    >> We are not just in the illusion demolition business. Proving that the software works is also a part of our business.

    While the latter is the trivial one, the tough one is illusion demolition. For every “software works” claim, there are several “it does not work here” claims.

    >>> I was actually not able to understand Michael’s note on my confusion around “stopping heuristic”.

    A stopping heuristic deals with the various ways in which testing could be stopped. Your insistence that, for a test to be complete, there has to be a “pass/fail” is related to a stopping heuristic.

    >>> I don’t differentiate between Pass/Fail and “Is there a problem here?”

    This might be because you have that “remarks” field in your report where you address a result that is neither pass nor fail (an item for further investigation). Thus you have overcome the “binary” limitation of pass/fail by embracing the expansive definition of “is there a problem here”.
    >>> Just comparing two products in terms of features is not testing unless there is a decision point.

    Is this a new definition of testing? You seem to suggest that unless there is a decision point, nothing can qualify as testing – which does not appear to be correct. The mandatory requirement that there has to be a decision point is true for checking, not testing.

    >>> It is “exploration”, which, as I acknowledged earlier,

    So, exploration is not testing? Now I am surprised at how you have such a narrow viewpoint on testing.

    >>> At the decision point, it is always confirmation.

    Not always – physicists who went looking for the nature of electrons were puzzled by the dual nature of electrons. They were investigating the fundamental building block – they were puzzled by new information that we now know as “quantum theory”. More so, there might not be a decision point. Then what? Is that not testing?

    >> The basis of this confirmation – the assumption – should be available; otherwise, the tester wouldn’t be able to express any opinion.

    But you mentioned earlier in this very post that confirmation is independent of assumption. A tester, while testing, can ALWAYS express an opinion without any confirmation or assumption.

    >>> Till an opinion is expressed, it is exploration/learning (a part of testing) and not testing (complete).

    Again, refer to the stopping heuristic – you are now creating a new condition for stopping testing: that a confirmation be done. I disagree.

    >>> Needless to say, there are pieces of work here shouting “I am exploration” as well as “I am confirmation” for a load test.

    Situations in load testing (there is nothing called load checking, I believe) require sapience, as there can be no pre-programmed way to handle evolving situations. But strictly no confirmation. All we are doing is sapient exploration and investigation.

    >>> No need to think about and differentiate exploratory testing from confirmatory testing; just acknowledge that exploration and confirmation are essential parts of every form of testing.

    Why? I am not convinced that confirmation is an ESSENTIAL part of testing.

    >> No need to think about whether you need to bring your sapience into the picture or not; have it handy all the time, whether it’s about exploration or confirmation.

    I disagree – there is a need to distinguish between tasks that require sapience and those that do not. Sapience is not something we can store and keep handy whenever required. The tests-and-checks distinction is the single most powerful idea that can change the automation world in a big way. Automation folks should seriously study and work on it.

    Shrini

  3. Rahul Verma

    @Michael,

    I’ve worked in three software companies so far – one of them an IT services giant, another focused mostly on testing services, and now I am working for a product company (a western one, if you like). Throughout this experience I was employed in India. I didn’t experience the non-sapient testing work (or get the treatment) that you have generalized. I wish I could elaborate further here on the kind of exciting projects I’ve worked on. (I’d leave this discussion here as this is not the current context. We can continue discussing this elsewhere, if you wish.)

    There is a fundamental difference with respect to what I call “Confirmation” and what you call “Confirmatory Testing”. The discussion gets diluted when we start talking of general testing challenges faced by a tester, because these challenges exist when talking of either of these “terms”.

    If you say you had no intention to float the term “confirmatory testing”, I believe you. It’s just that the text of your posts reads differently from your intention. In the comments on your post on “Testing versus Checking”, you said – “a test is exploratory if you’re seeking new information; it’s confirmatory (and therefore a check, as I’d prefer to call it) if you’re simply making sure of something that you believe you already know”. So, with respect to floating the term “confirmatory”, it looks like you are splitting testing into exploratory and confirmatory (checking), and then you wrote extensively on the subject of “checking”, which you used as a synonym for confirmatory testing. The point here is not whether you elaborated further on skills and tactics for confirmatory testing or not. Which of them you treat as a term of art and which as an adjective is not the point either. Many testers have already started using the term ‘checking’ (which is your synonym for confirmatory testing). Also, whether you call these “types” of testing, “ways” of testing or “mindsets” of testing has no impact on the agenda of my posts – the agenda clearly is to say that “Testing = Exploration + Confirmation + …”, which is not what your posts on the subject say.

    >>>The checking part is the least interesting part of any test; that the check can be applied by a machine; that the important stuff is the design, programming, and analysis of the check, all of which are testing activities.

    Many of the tests that cannot be automated could be less interesting as well, and just because a check is uninteresting doesn’t make it automatable. Moreover, if checking is a “part of any test”, as mentioned by you, how can checking or confirmatory testing be an “approach” to testing? Going by the face of it, you are actually agreeing here that “confirmation (checking)” is a part of every test (though you describe it as uninteresting).

    >>>That said, you can choose to perform a load test in an exploratory or a confirmatory way.

    See my note on the difference in our interpretations of confirmation. I don’t treat it as a “way” of testing.

    >>>Yes. My question for you is this: is that generic view helpful? How will it allow us to sharpen our approaches to testing? By expressing it, what decisions will it inform? Will it cause us to change our thinking or our actions in any material way? Similarly, one could assert that “all tests are based on some idea”. Okay, but so what?

    Sometimes the best answer to a question is another question (or set of questions) 🙂 Before reading your posts, I had never thought of the division of testing approaches into exploratory and confirmatory. Now suppose I accept that; how does it help me? How does it change the way I test? I already do a careful assessment of what to automate and what not to automate. By calling “what can be automated” checks, what kind of additional skill set do I get, apart from another term I need to think about every time? What additional decisions did this approach inform? How did it change my thinking or actions?

    A generic approach helps by its simplicity, letting me focus on work rather than thinking about what I should name every little variant of the stuff I do. Question for you: with the overhead of understanding this new split, what benefit do I get in a material way?

    >>>If you are suggesting that all testing is confirmatory and therefore there is danger in a kind of testing that is focused on validating our assumptions without challenging them, I’d agree with you (which is why I cited the Klayman and Ha paper). If you are suggesting that there is a kind of testing called “confirmatory testing”, and therefore we should learn the structures, skills, and tactics of confirmatory testing (as our community has done with exploratory testing, then that could be really cool. If on the other hand, you’re suggesting that “confirmation is a part of every form of testing…and then in fact, never think about it”, then I suspect that the greater community will find that rather less helpful.

    As mentioned earlier, Confirmation to me is a step in every form of testing and not a type. At the cost of repetition, let me reiterate how I started this discussion in the earlier post – “So, let’s start by discussing what is Confirmatory testing. The bad part is that the term ‘confirmatory testing’ is not as famous a term as exploratory testing. The good part is that I am happy it isn’t; we already have half the English dictionary in the testing glossary. What this means is that I am not floating this term, I’m just using it as mentioned by Michael. I am calling it confirmatory testing so that the readers can correlate this post with the post and comments at Michael’s blog. Once this discussion is over, I don’t have any intention to use or advocate using the term ‘confirmatory testing’ unless a similar argument context is brought up.”

    I have written these posts to demonstrate that the division of testing into exploratory and confirmatory testing is unnecessary. Labeling them as sapient and non-sapient is unnecessary. It doesn’t add anything more to my existing knowledge or help me sharpen my skills in any way.

    Whether I discuss new skills and tactics for confirmation is an altogether different aspect, and in the coming time you will see that coming. You will see how my version of confirmation ties seamlessly with other concepts that I will highlight.

    The testing community is full of pretty intelligent people. When they read these posts, they can take better decisions for themselves (even if that means disagreeing with me completely). If even one tester benefits from my posts or website, I consider it a success. If even that isn’t the case, I still wouldn’t be disappointed, because in the course of writing these posts, I am studying and analyzing a lot. I am satisfied with just that as well.

  4. Callum Anderson

    Stepping out of mythology and analogy for a minute (for the benefit of a reader who requires a more direct lesson).

    Are you suggesting here, that all software testing requires a predefined confirmation to be proved or disproved?

    I can see some merit in this idea, especially in terms of automated checks. But for manual or exploratory testing? I think a tester works best when they use their intuition, rather than following a rigid script.

    Your point about the two beds is interesting though. I agree that testing can be split into two beds. One involves tests requiring sapience and human analysis. And another (what Michael calls ‘checks’) which in my mind are better performed by a computer.

    A very well respected test consultant in Europe once told me that humans are not very good at repeating things, but computers excel at repeated tasks. Encouraging humans to test under such tight conditions (this must pass or fail) is not the way I would use my most powerful resource.

    When you gather up the results of your confirmatory tests, what do you have? A list of Booleans. If you pass all of your confirmatory tests, does that mean your application works perfectly? Of course not. This technique seems to be one created solely with the aim of pleasing metrics junkies.

    I understand the benefit of a testing technique called ‘confirmatory testing’ (or ‘checking’, depending on your view), which offers a heuristic to create automated suites to assist exploratory testing. But this is certainly not a new technique (although plenty of so-called revelations in our industry before have been the result of hashing together previous techniques and applying a catchy name!). And I certainly don’t agree that all tests require some form of confirmatory statement.

    Callum.

  5. Rahul Verma

    @Callum,
    Welcome to Testing Perspective!

    >>> Are you suggesting here, that all software testing requires a predefined confirmation to be proved or disproved?

    A slightly different flavor of the same. The act of the tester taking a decision on whether the test passed/failed, or giving a Yes/No answer to “Is there a problem here?”, calls for confirmation. This is a decision point. It should have a basis for the decision. So, from that perspective, the “assumption” is always “pre-defined”.

    >>> I can see some merit in this idea, especially in terms of automated checks. But for manual or exploratory testing? I think a tester works best when they use their intuition, rather than following a rigid script.

    Confirmation (as I think of it) is not optional and hence not confined to “scripted testing”. At the decision point, taking your example, what is the outcome of a tester’s intuition? It’s an assumption against which the tester compares the observation to tell whether there’s a problem in the subject of the test, *from his/her perspective*.

    >>> Your point about the two beds is interesting though. I agree that testing can be split into two beds. One involves tests requiring sapience and human analysis. And another (what Michael calls ‘checks’) which in my mind are better performed by a computer.

    It’s good to hear that you are satisfied with the two-bed approach. I don’t see two beds, just one. For me it’s just testing, and as a part of it I analyse which tests are automatable. If you and Michael want to call automatable tests checks, and it helps your understanding of testing, that’s great. I’m pretty much satisfied with calling it testing.

    >>> A very well respected test consultant in Europe once told me that humans are not very good at repeating things, but computers excel at repeated tasks. Encouraging humans to test under such tight conditions (this must pass or fail) is not the way I would use my most powerful resource.

    That’s an interesting point, and such tests would be candidates for automation. But here’s the catch. Are all tests of a repetitive nature automatable? If they are automatable, what’s the cost and complexity of automation? How frequently would those tests be executed? So, for me, such tests are still tests, and a thorough analysis of “could be automated” versus “should be automated” must be done.

    >>> When you gather up the results of your confirmatory tests, what do you have? A list of Booleans. If you pass all of your confirmatory tests, does that mean your application works perfectly? Of course not. This technique seems to be one created solely with the aim of pleasing metrics junkies.

    “Testing can show the presence of bugs, not their absence.” Isn’t it? Would a “No” to all “Is there a problem here?” questions in “exploratory testing” prove the absence of bugs? Whatever testing approach we use, we can never say the application works perfectly. We can at most say “good enough”, “seems to work for all testing ideas we had so far”, “worked for all sorts of testing we conducted”. Even then, we release software with known bugs – low-severity or even high-severity deferred bugs. So, IMHO, we shouldn’t raise the question of “perfect software” in this discussion.

    If your point was more related to automated tests, there are quite a few interesting reports which could be generated. Reports could include open-ended observations for analysis by all stakeholders. Such reports are typical of performance testing projects. I have used them in automated functional testing reports as well.

    Metrics aren’t good or bad in themselves. What makes them good or bad is how we use them.

    >>> I understand the benefit of a testing technique called ‘confirmatory testing’ (or ‘checking’, depending on your view), which offers a heuristic to create automated suites to assist exploratory testing. But this is certainly not a new technique (although plenty of so-called revelations in our industry before have been the result of hashing together previous techniques and applying a catchy name!). And I certainly don’t agree that all tests require some form of confirmatory statement.

    In my view, there’s nothing called confirmatory testing or checking. I could certainly start calling certain parts of my job by these names, but I don’t see any benefit in that. Terms should be introduced for new innovations/concepts/techniques – checking/confirmatory testing don’t fit this. Terms could also be introduced to avoid confusion. For me, these are more on the confusing side than on the “resolving confusion” side. It’s good if these terms are helping you understand and practice testing better.

    The basis of our disagreement is more about the interpretation of the word “confirmation”. I never said you need a confirmatory statement before conducting the test. What I’m saying is that every test has a decision point where the tester expresses his/her opinion (if he/she doesn’t, the test is not yet complete; the tester just explored/learnt stuff). At this decision point, there is a confirmation step. Confirmation might also be done implicitly by the tester against his/her biases/choices.

    E.g., during exploratory testing, the tester sees a screen with two colors and declares that the color combination is not good. This is strictly exploratory testing. What did the tester actually do? Is the color combination really bad? Would another tester share the same opinion? There is no “pre-defined confirmatory statement” (from your and Michael’s perspective) as such here. For me, it is a confirmation against the assumption: “Color A and Color B don’t look good together.”

    Steps:
    – The tester is biased towards certain color combinations. (I would freely call these biases the tester’s assumptions about “good” color combinations.)
    – Explores the application and sees how the colors are used.
    – Asks the question “Is there a problem here?”
    – Confirmation against his/her bias on colors happens and he/she answers “Yes/No”. The test on analysis of color combinations is completed.
    – If the tester answers “Can’t say!”, did the tester really complete the test? In my opinion it was just exploration. Apart from increasing the tester’s knowledge of which colors are used by the application, there’s no other information the tester can share with the stakeholders. IMHO, the test is not completed.

  6. Rahul Verma

    @Shrini,

    >>> Many testing (not checking) activities might not even remotely use the term “confirmation”. Take, for example, this exploratory testing (not checking) mission: “Investigate the behavior of the gmail application when dealing with multibyte-character-based strings” or “analyze the behavior of the exchange server running on this new hardware”. Please note that I am not making any assumptions here (implicit or explicit) – I am conducting an OPEN-ended investigation (meaning it “closes” only when time is up, or money is over, or we have collected sufficient information to show to our stakeholders). No assumptions and no use of the term/phrase “confirm” – even remotely. What kind of testing is this? So the claim that confirmation is an essential step in any form of testing is falsified. Right?

    Let me clarify. You don’t need to use the term “confirmation” or call any part of your testing “confirmation”. It’s important to understand that we do confirmation in all tests, so there’s no need for additional terms like confirmatory testing or checking.
    The investigation scenario that you mentioned would have a lot of decision points during your exploration of the application. At all such decision points, you would do confirmation. You don’t need to *call* it anything.

    >>> Can you compare and contrast Michael’s and your versions of “confirmation”? Where do these two depart from each other?

    I’ve discussed this throughout this post. I’m not sure I can add more without a particular context.

    >>> Applying or introducing “assumption” to this does not amount to extending “confirmation” to “test design” – assumptions are a part of test design, not everything, right?

    Assumptions form the basis of your decision. They may not be explicitly expressed in a test design document.

    >>> This is the root of all the confusion. When you say “all testing is confirmatory”, I read it as: there is NO testing that is not confirmatory. Having a post titled “All testing is confirmatory” rather than saying “All testing involves elements of confirmation” (though even this can be falsified) is the source of confusion.

    “All dogs are hairy” doesn’t say that dogs are composed just of hair. It says: all dogs have hair.

    >>>I am not sure if any ET thought leader has specifically said “All testing is exploratory” (please point me to a reference – I would challenge that). You might have heard statements like “All testing involves SOME element of exploration”. Note this is different from saying “all testing is exploratory”.

    Please park this for a while. I’ve a post planned on the subject.

    >>>I would not use the qualifier “All types” and there are examples of testing where these “essential” elements are clearly missing.

    I look forward to hearing about examples that substantiate your claim.

    >>> But in your previous post you mentioned that an assumption is an essential element of “confirmation”. If there is no assumption, what are you confirming?

    Please read it clearly. Did I say “no assumption” anywhere?

    >>> Disagree. Let us say I launch a certain application and perform a sequence of operations, asking the question “is there a problem here?” – I am just observing and might be recording all that I can possibly observe. It could be an open-ended investigation/exploration. Where is the confirmation – are there assumptions or expectations to start with? No assumptions or expectations, hence no question of confirmation.

    You know what – though you say you disagree, you are in fact proving my point. Till you “confirm” your opinion about the observation, from my perspective you have just explored/learnt; your testing is not complete. The moment you express your opinion on the observation, it involves confirmation, and that completes the test (from your perspective).

    >>> What if there is no decision point? In an open-ended exploration there are no decision points.

    In a test, the decision point is not optional. It is there. Till you deal with it, you haven’t really tested; you have just explored.

    >>> While the application of an oracle is required in confirmation, you cannot conversely say that every application of an oracle is confirmation. I may apply an oracle but may not make any confirmation. I might discover new information or behavior – no confirmation there either.

    If the application of the oracle doesn’t let you arrive at a final decision/opinion, the test is not complete, as confirmation didn’t happen.

    >>> Using an oracle doesn’t automatically make the process confirmation, as it might discover new information.

    Please see my previous note.

    >>> While I agree that calling something positive or negative testing might not be a good thing for testing, “confirmatory” and “exploratory” might have some interesting applications for the good. As Michael pointed out, a “confirmatory” mindset (if I am looking to see a black car on the road, I will see a black car) can blind you to certain types of thinking fallacies. A confirmatory mindset, while it might be useful in certain situations, is limiting in many others – “your eyes will see what you expect to see”. Hence a confirmatory mindset (checking if a pre-supposed idea is true) is often unidirectional and limiting. I would even say harmful for testers…

    Please re-analyse what I call confirmation.

    >>> While the latter is the trivial one, the tough one is illusion demolition. For every “software works” claim, there are several “it does not work here” claims.

    Agreed. I never commented on which part is tough and which is easy. Take the example of cooking. “Tadka” is the most complex part of preparing an Indian curry; the easy part could be to cut a potato, add it, and close the pressure cooker. But without the latter, you wouldn’t be able to prepare the gravy.

    >>> A stopping heuristic deals with the various ways in which testing could be stopped. Your insistence that, for a test to be complete, there has to be a “pass/fail” is related to a stopping heuristic.

    Thanks for the clarification. My interpretation of Pass/Fail is contextual to a given “test” and not to “testing”. So, the stopping heuristic has no bearing on this discussion.

    >>> This might be because you have that “remarks” field in your report where you address a result that is neither pass nor fail (an item for further investigation). Thus you have overcome the “binary” limitation of pass/fail by embracing the expansive definition of “is there a problem here”.

    As I mentioned earlier, I don’t have any particular preference for Pass/Fail or “Is there a problem here?”. I can talk to testers using either of these and interpret what they mean. I can question them regarding their decisions on why they say a test passed/failed, or why they have a particular opinion about the problem in context.

    >>> Is this a new definition of testing? You seem to suggest that unless there is a decision point, nothing can qualify as testing – which does not appear to be correct. The mandatory requirement that there has to be a decision point is true for checking, not testing.

    I’m not sure whether I should call that a definition of testing, but this is pretty much my understanding about a crucial aspect of testing.

    >>> So, exploration is not testing? Now I am surprised at how you have such a narrow viewpoint on testing.

    Exploration alone is not testing; it is a part of testing. IMHO that’s not a narrow view of testing. In fact, it broadens my perspective on exploration and how well it fits into various stages of the SDLC. (Please park this. There’s a lot more I have to say on this subject.)

    >>> Not always – physicists who went looking for the nature of electrons were puzzled by the dual nature of electrons. They were investigating the fundamental building block – they were puzzled by new information that we now know as “quantum theory”. More so, there might not be a decision point. Then what? Is that not testing?

    Getting puzzled is very important for a tester and is in the best interest of the subject under test. It forms the basis of further exploration. As long as the tester is puzzled and has not resolved it, the test is not complete.

    >>> But you mentioned earlier in this very post that confirmation is independent of assumption. A tester, while testing, can ALWAYS express an opinion without any confirmation or assumption.

    Please re-analyse my view on the role of assumptions in confirmation and why I say there has to be at least one assumption for confirmation. Opinions are formed out of assumptions.

    >>> Again, refer to the stopping heuristic – you are now creating a new condition for stopping testing: that a confirmation be done. I disagree.

    I addressed this in a previous note.

    >>> Situations in load testing (there is nothing called load checking, I believe) require sapience, as there can be no pre-programmed way to handle evolving situations. But strictly no confirmation. All we are doing is sapient exploration and investigation.

    What happens at the end of the investigation? If you don’t do the confirmation and give a decision (e.g. a “go/no-go” for a website), you are “transferring” the confirmation step to someone else, who has to do the confirmation based on your investigation. There has to be a decision taken.

    >>>Why? I am not convinced that confirmation is an ESSENTIAL part of testing.

    🙂 That’s the beauty of discussions.

    >>>I disagree – there is a need to distinguish between tasks that require sapience and that do not. Sapience is not something that we can store and keep it handy whenever required. Tests and checks distinction is single most powerful idea that can change the automation world in a big way. Automation folks should seriously study and work on it.

    That distinction is necessary from an understanding perspective. Being interested in test automation frameworks, I make it. But I am able to make it without giving the two things names. What could be automated? What should be automated? These questions should be asked in any test automation effort. There’s value in this distinction, and that value has been understood for a long time. Then why introduce new terms as if something is being invented?

    Thanks for doing a detailed analysis of the post. I enjoyed replying to your comments.

    Regards,
    Rahul

  7. Shrini

    >>>You don’t need to use the term “confirmation” or call any part of your testing “confirmation”.

    Disagree. You need to whenever a confirmatory pattern of thinking dominates. We need words and labels to recognize such patterns of thinking. I suppose you understand the distinction between confirmatory and exploratory patterns of thinking. It just happens that confirmation-based thinking is limited and narrow, whereas exploratory thinking works on diverging aspects with pluralism. In testing, depending upon the situation, we need both in different shades and quantities. Testing is not a mono-colour picture – it is a rainbow.
    And we still need to recognize the patterns of confirmation and exploration for our own good. Hence labels like “confirmation” are required. They are an absolute must. How else will you point the finger and say “that is confirmatory thinking”?

    >>> It’s important to understand that we do confirmation in all tests,
    Strongly disagree. I have examples where I don’t (you might call them not testing – but that is your new definition).

    >>> there’s no need for additional terms like confirmatory testing or checking.

    Sure we need additional terms – as a matter of fact these are not new terms (coined in this century) – these are existing terms with which we associate new meanings. These labels and new associations are required to identify dominant patterns of thinking, and thus to improve our tools and methods based on our knowledge of how we think … it is epistemology, my dear friend.

    >>> The investigation scenario that you mentioned would have a lot of decision points during your exploration of the application. At all such decision points, you would do confirmation.
    I disagree with your assertion that there are decision points (we might have to explore the connotation and denotation of the word “decision”). But I am sure that in many situations I do not think in terms of “decision” or “confirmation”. I look at a bird, a rose, a cute kid – observe and appreciate their beauty – I am not making any decisions. Many tasks in testing are similar to simply observing and writing down an account of what you see and perceive.

    >>> You don’t need to *call* it anything.
    It is like saying – look, at some point in testing something important happens, and it is a significant part of your testing – but you don’t have to name it or call it anything. Why? Labels and metaphors are a great way to communicate. If you did not have a name for, say, a poem – what would you call it?

    >>> “All dogs are hairy” doesn’t say the dogs are comprised just of hair. It says – all dogs have hair.

    The statement “All dogs are hairy” is not similar to “All testing is confirmatory”. In the former, you are describing a property of an object that we can see in all members of that kind. In the latter, you are asserting that all of “something” is “something”. See the difference? You wanted to say “there might be confirmatory elements in all kinds of testing” but ended up saying “all testing is confirmatory” – note that omitting a few words such as “elements” or “some” can make a big difference in meaning. Unlike your example, “all testing is confirmatory” does not say “all testing has some elements of confirmation” but rather “all testing is nothing but confirmation”.

    >>> Till you “confirm” your opinion about observation, from my perspective, you have just explored/learnt, your testing in not complete.

    Well – I think that is your brand new definition of testing – my meaning of testing includes exploration and learning (but I would not claim that all testing is exploration and learning). When Michael mentioned “Procrustes”, he probably had this kind of thinking in mind. Procrustes did not have many beds – he just changed the dimensions of his guests to fit the bed he had. Here, by proposing (implicitly) a new definition of testing (something where confirmation, decision points etc. are a must), you are declaring that all testing is confirmatory. Like Procrustes, you are retrofitting your definition of confirmation onto the meaning of testing.

    Sometimes, depending upon what you are aiming at … just exploration and learning might satisfy the “testing mission”. Who decides that? The stakeholder. Let our stakeholders decide what “testing” should do.

    >>> The moment you express your opinion on the observation, it would involve confirmation and that would complete the test (from your perspective).

    I can express an opinion about an observation in a number of ways: “This is beautiful”, “This is terrible”, “Why am I seeing this?”, “What else is happening?”. Do you call these confirmation? I don’t. What completes a test is a matter for the “stopping” heuristics, hence it is very relevant to this discussion. You might choose many different conditions, other than “confirmation”, to declare a test complete.

    >>> In a test, decision point is not optional. It is there. Till you deal with it, you haven’t really tested, you just explored.
    I disagree. The entire notion of a decision is OPTIONAL, and exploration is very much part of the testing that I know of and subscribe to. Another example of Procrustes thinking.
    In reality, there is no problem with Procrustes thinking. We just need to explicitly ACCEPT that we are changing something to fit our thinking.

    >>> If the application of oracle doesn’t let you arrive at a final decision/opinion, the test is not complete as confirmation didn’t happen
    In my definition of testing, confirmation and decision points are OPTIONAL. Exploration and learning are very much part of my testing. Where the application of an oracle leads to new information, I call that a test – and I report the information as it is.

    >>> Just exploration is not testing, it is a part of testing.
    Now we are back to the “part–whole” discussion. You appear to be saying that exploration has to end in a confirmation and a decision point, and then you lead me to the conclusion that “exploration that does not end in a confirmation and decision is not testing” – hence all testing EVENTUALLY ends up as confirmation. I would say that sometimes I explore and report my viewpoints and do not necessarily conclude or pass any decision – I am still testing from start to end.

    We might finally end up digging into the definitions of “decision” and “confirmation”.

    >>> Till the tester is puzzled and has not solved it, the test is not complete.
    An unsolved puzzle is also a test.

    >>> If you don’t do confirmation and give a decision (e.g. “go-no go for a website), you are “transferring” the confirmation step to someone else who have to do confirmation based on your investigation. There has to be a decision taken.

    This is a classic QA vs QC debate. Testers do not take decisions. They provide information for stakeholders to take decisions. It is a poor business practice to let the tester take the go/no-go decision on the website. Providing information about breaking load, CPU utilization patterns, throughput, and response time graphs across key transactions – these are the finest examples of testing output where no CONFIRMATION and/or DECISION happens.

    Aren’t these examples enough to FALSIFY your hypothesis that all testing has to end in confirmation?
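
    To make this concrete, here is a rough sketch (hypothetical Python with simulated numbers – not any real tool) of the kind of load-test output I mean: the observations are reported as they are, with no assert and no verdict attached.

    ```python
    # Hypothetical sketch: a load-test "report" that only describes observations.
    # It computes response-time statistics and prints them, deliberately making
    # no pass/fail assertion and no go/no-go decision.
    import random
    import statistics

    def simulate_request() -> float:
        """Stand-in for a timed transaction; returns a response time in seconds."""
        return random.lognormvariate(-1.5, 0.6)

    def run_load(transactions: int = 1000) -> list[float]:
        """Fire a number of simulated transactions and collect their timings."""
        return [simulate_request() for _ in range(transactions)]

    def report(samples: list[float]) -> None:
        """Describe what was observed; the stakeholder draws the conclusions."""
        ordered = sorted(samples)
        p50 = ordered[len(ordered) // 2]
        p95 = ordered[int(len(ordered) * 0.95)]
        print(f"transactions  : {len(ordered)}")
        print(f"mean response : {statistics.mean(ordered):.3f}s")
        print(f"p50 / p95     : {p50:.3f}s / {p95:.3f}s")
        # Note: no assert and no verdict here -- the numbers go to the
        # stakeholder, who takes the go/no-go decision.

    if __name__ == "__main__":
        report(run_load())
    ```

    The numbers here are simulated; in a real run they would come from the tool’s measurements. The point is the shape of the output – description without a verdict.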

    >>> There’s value in this distinction and that value has been understood long back.
    Long back? When? I still see people making mistakes in picking tests and wrongly asserting that “this or that should be automated”. With new labels and ways of articulation, we attempt to bring people to recognize this difference.

    In a nutshell – here are my viewpoints:

    1. All testing involves “confirmation” elements to some degree, just as there are exploratory elements to some degree. Also, there are testing tasks where either or both of these might be TOTALLY missing. Testing cannot be stereotyped as being of only one nature, or as something that fundamentally has one specific type of work/thinking.

    2. How testing ends, or how one identifies when a test (or check) is COMPLETE, is a key question of this debate. You seem to suggest that “confirmation” vis-à-vis some set of assumptions is a necessary and sufficient condition for this COMPLETION. Enter stopping heuristics – a set of criteria that determine when to stop testing or when one can declare a test complete. There are, I believe, many occasions where a test can be declared “complete” without doing ANY (I repeat, ANY) confirmation.
    3. We constantly look to create new associations and metaphors (not new words, really) such as test vs check, to improve our understanding of how we think and how to articulate our viewpoints to others. In a plural world like ours, let us not attempt to standardize or create tunnel vision that discourages alternate, expansive, critical viewpoints of our beliefs, material practices and biases. There is no one universal language …

    I too enjoyed replying to your comments. It has been a long time since something got me hooked like this …

  8. Rahul Verma

    @Shrini,
    I liked the “in a nutshell” part of your comments very much. So, I’m breaking the pattern of commenting line by line; instead, I will list my understanding of the fundamental disagreements we have at the moment. We both seem to be happy with our own opinions 🙂

    – I stick to my interpretation of “confirmation”. “All Testing is Confirmatory” is meant to convey that Confirmation is a part of every form of testing. I don’t agree that there is anything called confirmatory testing or confirmatory mindset/approach. You disagree here primarily because you (and Michael) think about confirmation in terms of “confirmatory testing” – in your opinion it is non-sapient and hence different from exploratory testing.

    – I differentiate between testing and exploration. You consider exploration itself as a form of testing.

    – As an extension of the above, I insist that a test is complete only when we have a definitive answer for the decision point(s). You suggest that a decision point itself is optional.

    – You suggest that it’s OK to create new terms to aid the articulation of viewpoints. I insist that we could achieve the same by keeping a limited set of terms in the testing vocabulary, creating new terms or changing interpretations of existing ones only when there is a genuinely new thought/concept, or when it helps one become a better tester in a way the existing knowledge can’t.

    The above are pretty strong views for either of us. This debate is not about the testing challenges themselves. We would more or less agree on most of the testing problems being discussed here; our disagreement is mainly about how we solve them.

    There is a lot of difference between “lack of understanding” and “being convinced”. I guess we both understand the reasoning we are giving for our points but aren’t convinced by the other’s reasoning :-). To put it another way, if I put myself in your shoes, I know why you want a differentiation between testing and checking, but I am feeling pretty good in my own shoes because I have my answers to the same problems without this differentiation.

  9. Shrini

    Looks like we are reaching a rather premature “anticlimax” to this discussion. A few closing remarks (while remaining in my own shoes …) – in line-by-line comments, though.

    >>>You disagree here primarily because you (and Michael) think about confirmation in terms of “confirmatory testing”
    A qualification here — I don’t disagree because I think of “confirmation” in terms of “confirmatory testing”. I thought very hard and deep about the meaning of “confirmation” (stripping off the word “testing” and staying only with the word confirmation). To me, confirmation is checking whether what we believe is TRUE. We do it unconsciously many times in our day-to-day life. In a way, confirmation is a human’s natural, instinctive reaction to deal with uncertainty.

    But there is real danger in “confirmatory thinking” (note I am not talking about “confirmatory testing”) – unilateral thinking and blindness. When we look for confirmation, we see only confirmation and ignore the rest. Your eyes (senses) will see what you like to see (what you expect to confirm). Confirmation leads to closed-minded thinking. It is often useful and practical – but for testers (whose job involves finding problems) it can be dreadful. Note that I am now using a broader view of confirmation.

    Another manifestation of confirmation: we tend to read, and hang around with, people whose views match or CONFIRM our own thinking and beliefs (unlike magnetic poles – here, like poles attract). When a concept or line of thinking that is contrary to our beliefs is presented to us, we tend to fight it, desperately try to disprove it (both of us are doing exactly that) or ignore it. A reasonable thinker would accept the new thinking and reconcile it with his own, thereby expanding his value and knowledge system. Confirmatory thinking inhibits this progressive approach to acquiring and growing knowledge.
    Karl Popper says that we gain knowledge by falsifying theories and beliefs, not by confirming them. You are probably aware of Taleb’s Black Swan – a symbol hinting at the dangers of confirmation.

    Testers should be the last creatures on this planet to stick to confirmatory thinking. Critical thinking (meta-thinking, or questioning/being skeptical about one’s own beliefs) and exploratory thinking (an attempt to be free from biases or confirmatory tendencies) are, in my opinion, what a tester should practice and be good at.

    >>> in your opinion it is non-sapient and hence different from exploratory testing.

    I encourage you (rather, I look forward to it as your next blog post) to write about “How confirmation is sapient or non-sapient”. I think almost all confirmations can be stated like an ASSERT and can be “checked” without any sapience. If you find a counter-example, throw it at me – let me try to demonstrate how it can be handled without sapience.
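
    To illustrate what I mean (a hypothetical sketch in Python, with made-up names – not any particular product or framework), a confirmation such as “a valid login lands on the dashboard” reduces to a single assert that a machine can evaluate with no sapience at all:

    ```python
    # Hypothetical sketch: a "confirmation" expressed as a non-sapient check.
    # Once the expectation is encoded, a machine can evaluate it without human judgment.

    def login(username: str, password: str) -> str:
        """Stand-in for the application under test; returns the landing page name."""
        return "dashboard" if (username, password) == ("admin", "secret") else "error"

    def check_valid_login_lands_on_dashboard() -> None:
        # The whole confirmation is this single assertion.
        assert login("admin", "secret") == "dashboard", "valid login did not reach dashboard"

    if __name__ == "__main__":
        check_valid_login_lands_on_dashboard()
        print("check passed")
    ```

    Deciding that this is worth checking, and interpreting what a failure means, still takes a human – but the confirmation itself does not.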

    >>> but I am feeling pretty good in my own shoes because I have my answers for the same problems without this differentiation.

    Great!!! I will watch your blog for answers to those problems (test design vs automation) …

  10. Rahul Verma

    @Shrini,

    I agree with the points you listed on confirmation. The difference is that I would say that when one does confirmation in that manner, it is not confirmation, but rather “bad” confirmation. For “good” confirmation, one should ask the following questions at the decision point:
    – “Is my understanding about the observation correct? Is it complete/sufficient to give my opinion?”
    – “Are there any related observations which should be made before I conclude anything?”
    – “Could there be some other perspective about the observation?”.
    – “Whom should I consult to take a second/third… opinion on the observation?”
    – “Is there a possibility that I am biased? Biased about color combinations, sequence of operations, choice of input…”
    and so on…
    – “Can I take a decision/express my opinion?” If not, “Shouldn’t I pass on my observation, as an observation, to someone else who could confirm?”

    These questions would help in filling the gaps in the basis of confirmation.

    If you observe carefully, even during exploratory testing the tester faces the same challenges while answering the question “Is there a problem here?”.

    For me, the basis of confirmation is not fixed. It can be improved upon based on further exploration, study, discussion, etc.

    Sapience is the basis of your differentiation of exploratory and confirmatory testing. To me, coming up with an assumption, questioning our assumptions, improving upon them constantly requires sapience, thereby making confirmation sapient.

    It’s a good idea to demonstrate my version of things with examples. I definitely plan a blog post on the same, though I can’t assure you that it will be the next one.

  11. Shrini

    >>> These questions would help in filling the gaps in the basis of confirmation
    These questions are the hallmark of exploratory thinking (not testing … it is a pattern of thinking). You choose to call it “good” confirmation. Let us agree to disagree. It is semantics…

    >>> If you observe carefully, even during exploratory testing, the tester faces the same challenges while answering to the question “Is there a problem here?”.

    True. Hence, in my language, these questions represent “exploratory” thinking (critical thinking too … thinking about one’s own thinking and beliefs). The key difference between “Is there a problem here?” and “pass/fail” is that the former can have many answers (not just “yes, there is a problem”, “no, there is no problem” or “don’t know – further investigation required”). You can state many observations without attaching any “decisions” or “confirmations” to them.
    Hence, for testers, I recommend “Is there a problem here?”.

    >>> Sapience is the basis of your differentiation of exploratory and confirmatory testing.
    To be precise – Sapience is the basis of my differentiation between “check” and “test”.

    >>> To me, coming up with an assumption, questioning our assumptions, improving upon them constantly requires sapience, thereby making confirmation sapient.

    This is a mixed-up statement, with which I agree in pockets… Sapience is the application of human intelligence and spontaneous mental reflexes. It is something that cannot be “codified” and supplied in advance, or represented as an algorithm or formula.

    While the statement “coming up with an assumption, questioning our assumptions, improving upon them constantly requires sapience” is TRUE, that does not make “confirmation” sapient. Using whatever definition you want for “confirmation”, I can suggest at least one example in every case where the confirmation can be done without using sapience. Considering the activities of design, execution and analysis of results as generic testing tasks, I can set some of these up such that, barring design, the rest can be executed without requiring any sapience.

    Shrini
