In Michael Bolton’s post on Testing vs Checking and the follow-up comments, he splits testing into exploratory testing and confirmatory testing (he’d rather call the latter ‘checking’). I wasn’t convinced by this split and studied all his posts on the subject. I initially thought of leaving a comment on his post but found it more suitable to write it up as a post on Testing Perspective, because what I am saying has not been said earlier (at least explicitly) and needs a treatment of its own.
So, let’s start by discussing what confirmatory testing is. The bad part is that ‘confirmatory testing’ is not as well-known a term as exploratory testing. The good part is that I am happy it isn’t; we already have half the English dictionary in the testing glossary. What this means is that I am not coining this term, I’m just using it as mentioned by Michael. I am calling it confirmatory testing so that readers can correlate this post with the post and comments on Michael’s blog. Once this discussion is over, I don’t have any intention to use or advocate the term ‘confirmatory testing’ unless a similar argument context comes up.
Following is the entry for the word ‘confirm’ in the online Oxford dictionary:
(verb) establish the truth or correctness of (something previously believed or suspected to be the case)
On the same page, “confirmatory” is mentioned as the adjective form of this verb.
Based on the meaning of the word ‘confirm’, I came up with the following description of confirmatory testing:
Confirmatory testing of software is done to test the correctness of your conscious and/or sub-conscious assumption(s) about how the software, and/or the overall system in which it exists, would behave when you, or someone else on your behalf (human(s)/machine(s)), interact with it directly or indirectly in a given manner.
So, confirmatory testing would involve:
- Assumption – The assumption about the software behavior could be anything, could be based on anything, and could be yours or someone else’s. The assumption could become known at any stage of a test – before test execution, after test execution, or at the time of report analysis. The assumption could be conscious on the part of the tester as well as a sub-conscious one, based on what the tester feels is right from his/her past experience, biases, etc.
- Interaction – The interaction could mean providing a given set of inputs to the software, interacting with its GUI controls, modifying the system on which the software is running in an attempt to impact the software etc.
- Confirmation – A decision whether the test passed or failed or needs further investigation, based on observations made on the system as a result of the interaction. The observations are compared against the assumption(s).
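The three parts above can be sketched as a minimal Python model. All names here are hypothetical illustrations, not a real framework: the interaction is any callable that exercises the software, and the confirmation compares its observation against the assumption.

```python
# Minimal sketch of the assumption/interaction/confirmation structure
# described above. All names are hypothetical illustrations.

def confirm(observation, assumption):
    """Compare an observation against an assumption; return a verdict."""
    if assumption is None:
        # No assumption available at confirmation time:
        # the test is not complete.
        return "needs further investigation"
    return "pass" if observation == assumption else "fail"

def run_test(interaction, assumption):
    observation = interaction()  # interact with the software under test
    return confirm(observation, assumption)

# The interaction here is a stand-in for real software behavior.
print(run_test(lambda: 2 + 2, assumption=4))     # pass
print(run_test(lambda: 2 + 2, assumption=5))     # fail
print(run_test(lambda: 2 + 2, assumption=None))  # needs further investigation
```

Note how the third verdict mirrors the point below: without an assumption to confirm against, the interaction alone does not complete the test.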
Confirmation in the context of testing is not tied to test execution but to the analysis of the test results/observations, when we can say for sure whether we can declare a test as passed or failed, or at least that with all our current knowledge we are unable to comment on the outcome. It’s only at this stage that the test has provided useful information. Testing should be considered complete for a given interaction only when the result of confirmation, in terms of pass or fail, is available.
An important point to note is that if, at the time of confirmation, the tester or the automated test system is not aware of the assumption which he/she needs to confirm, the test for that particular interaction is not complete. This is true irrespective of whether the testing is based on formal documented test design, whether the test is automated or manual, and whether the approach to testing was exploratory. Till the confirmation is done, the test is incomplete. Note that this final confirmation is contextual and could very well be incorrect from the perspective of a different tester or evaluator.
Despite the ambiguity in the above situation, it is still confirmatory testing. The only difference is that the tester is unable to confirm the results against any of his/her existing assumptions/beliefs/expectations. At this stage he/she would say that the test results need further investigation, either by the same tester or a different one. It might involve analysis of the same result set, or executing the test again by repeating the exact interaction to collect fresh results, possibly over multiple iterations of the same test.
Let’s see confirmatory testing in various contexts:
Testing with Formal Test Designing
- We could have documented detailed instructions/steps thereby making the assumptions pre-defined as “Expected Results/Behavior”. While testing, we would confirm only these assumptions.
- We could have been provided with a test case document written by someone else. If we stick to testing only the assumptions in the test design document, we are confirming someone else’s assumptions about the software behavior. The “someone else” in this case could represent one or more testers, developers, technical leads, managers, architects, product owners and anyone else involved directly or indirectly in the creation of the test design document or its reviews.
- We are doing exploratory testing and, in the course of it, identify an interesting test we would like to execute. While conducting this test, we would assume/expect a certain kind of behavior (knowingly/unknowingly) and confirm it by analyzing the output.
- As exploratory testing is about learning during test execution, we may not be clear about the outcome of a given interaction. We would still go ahead and perform that interaction. By observing the behavior of the software/system, we would still confirm whether what happened can be tagged as pass/fail/needs further investigation. Knowingly or unknowingly, at this stage we are doing confirmation against our beliefs/expectations/previous experience. If we are not able to do so, the test under discussion should be treated as incomplete and a note should be taken in the session notes.
- We could have written a small script, or developed or used an existing testing tool/framework, to automate test cases that we thought could and should be automated. We have assertions that return Pass/Fail, or set similar variable values, based on comparisons done using relational operators. With such comparisons, we are confirming that the value(s) seen at run-time have the relation with the predefined value(s) that we assumed before test execution.
- Similar to the above, we could have done this as part of formal test automation involving other team members through feedback, direct contribution and reviews. We could also be involved in executing and analyzing such tests created by other testers. The resulting set of assertions is the collective assumptions made by one or more testers, developers, technical leads, managers, architects, product owners and anyone else involved directly or indirectly in the creation of the test automation and its review. So, while testing, we are confirming the assumptions of all the mentioned stakeholders.
- A small variant of this would be the automated tests which do not in themselves declare any test case as pass/fail rather provide information which has to be analysed post-execution to determine pass/fail status. An example of this is client server performance testing. This post-execution confirmation could be based on an SLA (Service Level Agreement) and/or industry standards (e.g. web response time for an HTML-only page) and/or company standards (e.g. a company with many web applications could have such standards set) and/or prior experience/expectations of the team/tester analyzing the results.
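As a hedged sketch (all numbers and names below are made up for illustration), the two automated-confirmation styles described in the list above, an in-run assertion and a post-execution comparison against an SLA, might look like this:

```python
# Style 1: in-run assertion. The assumption (250 * 4 == 1000) is fixed
# before execution and confirmed with a relational operator at run time.
def total_cents(unit_cents, quantity):
    return unit_cents * quantity

assert total_cents(250, 4) == 1000  # confirms the pre-execution assumption

# Style 2: post-execution confirmation. The run only records observations;
# pass/fail is decided afterwards against an agreed threshold.
response_times_ms = [850, 920, 1800, 760]  # collected during the run
sla_ms = 2000                              # hypothetical service level

verdict = "pass" if max(response_times_ms) <= sla_ms else "fail"
print(verdict)  # pass
```

In the second style, the same recorded observations could yield a different verdict if the SLA, company standard, or the analyst’s expectations change, which is exactly why the confirmation happens after execution.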
All testing is confirmatory because whenever we test, we do so with a lot of conscious and sub-conscious assumptions based on information made available for the software under test as well as our past experience. While testing we would confirm whether these assumptions are correct.
If all testing is confirmatory, I don’t see any reason why we should think of testing as exploratory testing versus confirmatory testing, because even exploratory testing is confirmatory testing.
Exploratory Testers: Here’s another statement for you: “All Testing is Exploratory”. If you are not getting my point already, stay tuned for further posts.
I mostly enjoy reading your posts because you think differently than most other famous bloggers and present your thoughts in an interesting way, and of course I also enjoyed your presentation at BWST2.
I shared this post on Twitter and got the following replies:
Replied by Sajjadul: @dedicatedtester He took the word “confirmatory” & missed @michaelbolton’s context. Would have been credible if he didn’t quote that post.
Replied by Michael Bolton: @sajjadul @dedicatedtester Not only that, but he failed to publish my comment in which I urged him to research “confirmation bias”.
So, I’m curious to know your opinion on the above comments.
I don’t agree with that. Testing is more than checking (confirmatory testing). In checking, only validation/verification of existing knowledge is executed, whether through an automated process or manually, and no investigation is done to find out the unknowns.
By the unknowns I mean the issues or scenarios that might not have been thought of, written, or scripted. Finding out the unthinkable and the unknowns through a process of exploration, observation, investigation and learning new knowledge about the product, and reporting the issues found through this process, is what is more appropriately called “testing”.
Confirmatory testing or checking is only a small part of testing.
Thanks for appreciating my post.
I have replied on Twitter to Michael’s claim that I didn’t publish his comment. In short, I haven’t received any comment from him so far :-). I have also replied to Sajjadul’s comment.
Here are some tweets for your reference:
When someone says on Twitter that you didn’t understand “my context” or someone else’s context, I really can’t understand what the context was.
For me, Twitter is a platform to announce and share interesting quotes. Its limit on the length of text constrains fruitful discussions.
Mentioning types of testing as “exploratory” versus “confirmatory” is wrong. Any sort of testing is exploratory + confirmatory.
@michaelbolton I’ll do my homework on “confirmation bias”. Meanwhile, I didn’t get any reasoning/logic from you on why you think I am wrong.
@michaelbolton You yourself took about 12 days to publish my comment on your blog where I mentioned my post.
@michaelbolton I was eagerly waiting for your view on my take on your division of testing into exploratory and confirmatory.
@michaelbolton There’s no reason for not publishing your comment. I didn’t get your comment. I checked SPAM folder as well.
@sajjadul I didn’t miss Michael’s context. I have challenged it 🙂
On “confirmation bias”: before commenting, I first have to know what it is and what Michael might have meant. (He never left this comment on my blog, but that doesn’t mean I wouldn’t look into it.) I don’t want to go by the very face of it. Words have been tweaked a lot in the testing world 🙂
We are saying the same thing. I didn’t say “Testing is completely confirmatory”. “All Testing is Confirmatory” means that all kinds of testing have confirmation step(s).
Michael mentioned confirmatory (checking) and exploratory testing as types of testing. Exploration and Confirmation are steps in any kind of testing when you look at it from end to end.
I disagree with him because even in exploratory testing, the way it is defined by him, has a confirmation step. So, how can he split testing into exploratory and confirmatory?
Calling these steps by different names doesn’t help me in any way in becoming a better tester. For me, it’s adding to the buffer overflow in testing glossary, whose glass was full long back.
We don’t need to know/create hundreds of testing terms for being better testers.
I think Michael Bolton is the appropriate person to answer your challenge. 🙂
But the key difference I find is that in confirmatory testing the checking is done against previously found knowledge, by verifying whether it holds or not.
In exploratory testing, the tester “recognizes” issues that might reduce the value of a product, based on continuous learning and observation of the product. In this method, discovery of new knowledge about the product occurs.
From what I understand of your perspective: in exploratory testing, the tester confirms the presence of issues based on some oracle. I agree. But not based on previously known information about the product.
Though I think in the field of testing both confirmation and exploration are necessary, they are two different things.
At the point of making a decision, it would always be a confirmation, even in case of exploratory testing.
You explored the application and found “new knowledge”. In your next exploration steps, you have to decide whether there’s a problem. What are you doing at that moment? You are confirming whether the software is behaving appropriately as per your newly formed assumptions, based on your new knowledge.
Confirmation is one of the final steps in any kind of testing.
Exploration and Confirmation are not “types” of testing, they are important parts of all types of testing.
I reply here.
To state all testing types as equivalent because they all share an attribute is sort of like saying dogs and cats are the same because they both have hair.
It seems that your argument generalizes all testing methods to a point where there are no distinctions. If that is something that works for you, that’s fine. I find it much more useful to at least categorize the approaches I use to test something into types of tests that require human thought/intervention/interpretation and those that don’t (those simple assertions about an application that require little or no interpretation).
Welcome to Testing Perspective!
I don’t consider confirmatory testing as a type of testing, method of testing or approach to testing. Confirmation is a “step” or a “part” of every form of testing.
Going by your example, my argument is not “dogs and cats are the same because they both have hair”, but simply “dogs and cats both have hair”, so “both are hairy”; that doesn’t mean they are made only of hair.
Michael’s argument is of the nature: there are two kinds of dogs, “hairy and non-hairy”. Hair could be optionally present in a dog; confirmation isn’t optional (for testing).
Exploration and Confirmation are essential for every type of testing and aren’t optional. When we start defining “exploratory testing”, “confirmatory testing”, we are suggesting that either one of exploration and confirmation is optional, which is not true.
I liked your statement – “If that is something that works for you, that’s fine.”. That should be the case for everyone and regarding any concept.
As a side note, “exploratory” versus “confirmatory” testing doesn’t work for me. “Testing = Exploration + Confirmation + …” does.
Towards the end of your comment, you are suggesting, just like Michael, that “confirmatory testing” is non-sapient. That’s not true (clearly Michael’s version of confirmatory testing is strikingly different from what I call confirmation). The complexity of confirmation depends on how easy or tough it is to identify the assumption and compare results against an oracle. It might need loads of sapience, so we should keep it handy 🙂
Thanks for your elaborate post in response to mine. I’m in the process of writing one to address your comments.
I think you can also say that exploratory testing “can” lead to having more test ideas for “confirmation tests”.
For me, the way I sometimes tackle exploratory testing is to go into an area of a product with no notion beforehand of what I’m checking. I’m a blind mouse in a maze. Now, are the bugs/results I’m achieving “confirmation”? I think not, because I don’t have a checklist of what I’m actually trying to confirm per se. I may build a checklist for a future run, and that can perhaps be called confirmation.
Anyhow, like other people have mentioned, checking, confirmation and exploratory are really just terms that people may or may not agree on.
Welcome to Testing Perspective!
I don’t look at confirmation as “confirmatory test”/”confirmatory approach” etc. Every test has a confirmatory step and no test is complete without confirmation.
A checklist is one means of confirmation, where you have clear-cut, pre-defined assumptions that you want to check (they could very well be someone else’s assumptions). A tester, during exploration, could develop new assumptions that s/he wants to confirm. These assumptions at times may not be realized explicitly by the tester but are based on his/her past experience, biases, and old/new knowledge.
E.g. during your exploration of an application, you see a screen and ask the question, “Is there a problem here?” (let me skip pass/fail for now). To answer this question you may not have a checklist, but the act of answering it itself requires some assumption on the part of the tester, in order to give an answer of Yes/No/Don’t Know/Can’t Say to the question about the possible existence of a problem.
I have written another post on the subject to answer Michael’s comments. It might answer some of yours as well:
I believe I see where you are going here. It’s the belief that since there is some sort of assumption that has to be made, this is like a confirmation of some sort.
It’s like a puzzle: I have to model everything, but at the end of the day the confirmation is really whether I can do it or not.
>>>he splits testing into exploratory testing and confirmatory testing
I think he is not splitting (cutting a whole into pieces) testing, but making a distinction between two forms of thinking and acting in testing. Checking: a non-sapient assertion (yes, in the programmatic sense) based on a (previously agreed) decision rule. Testing: one that requires spontaneous application of human sapience, leading to an open-ended exploration or investigation. I would say checking and testing are mutually exclusive when seen/applied in their “pure” form.
>>> So, let’s start by discussing what confirmatory testing is.
Let me say the opposite: nothing exists by the name ‘confirmatory testing’. In “checking”, confirmation is made to see if the check passes. Nothing more than that. Some testing might involve a “confirmatory” element.
>>> when we can say for sure whether we can declare a test as passed or failed, or at least that with all our current knowledge we are unable to comment on the outcome
Pass and fail are provisional labels, intimately linked to a reference or oracle and the observed behavior. Flip the oracle and the observation and you will have a different set of pass/fail. We can surely comment on the outcome of a test (not a check) without referring to pass or fail at all.
>>> It’s only at this stage that the test has provided useful information.
True for checking. But false for “testing”: you need not explicitly think about or have information about pass or fail, yet still make useful inferences.
>>>Testing should be considered complete for a given interaction only when the result of confirmation in terms of pass or fail is available.
Just replace the word “Testing” with “Checking” and the rest of the sentence will read fine.
>>> An important point to note is that if at the time of confirmation, the tester or the automated test system is not aware of the assumption which he/she needs to confirm, it means the test for that particular interaction is not completed.
Why an assumption? How about a known or true fact – checking whether a document is printed in landscape style?
>>> Till the confirmation is done, test is incomplete.
It is a “check”, my dear friend, not a “test” – a test can be declared complete in various non-confirmatory situations.
>> Despite the ambiguity in the above situation, it is still confirmatory testing.
>>> All testing is confirmatory because whenever we test, we do so with a lot of conscious and sub-conscious assumptions based on information made available for the software under test as well as our past experience.
Disagree. If we make assumptions, we need not necessarily do a “confirmatory” test. We can explore these assumptions to “check” if they are true, under what circumstances, what makes them true or false, and what personal experiences and biases are influencing our observations – a true exploration, no confirmation.
>>> While testing we would confirm whether these assumptions are correct.
True in “checking” and false in testing: in testing we do not work to confirm whether our assumptions are true. Well, that might be one of several things that we do, but there are others that do not fall in the “confirmation world”.
>>Calling these steps by different names doesn’t help me in any way in becoming a better tester. For me, it’s adding to the buffer overflow in testing glossary, whose glass was full long back.
Discovering new words, labels and metaphors is part of human beings’ constant struggle to be heard and understood in this complex, ever-evolving, multicultural world. We do this so that we can improve our understanding of the world around us and be able to communicate it to others. People, in most cases, do not do it for fun or any trivial reason. Those words that improve our understanding prevail, and those that do not perish over a period of time. The glass of words can never become full.
>>>> you are suggesting just like Michael that “confirmatory testing” is non-sapient. That’s not true
Well, I would say that is indeed true. Confirmatory testing (using your definition: “checking” the correctness of assumptions leading to a pass/fail result) is intrinsically structured as non-sapient. Having sapience would be useless, as a confirmatory test (or “check”, actually) is designed in such a way as to AVOID the need for sapience. This is made possible because a confirmatory test (“check”, actually) is designed that way – in an ASSERT(x == 0) kind of format.
>>> Complexity of confirmation would depend on how easy or tough it is to identify the assumption and comparing results against an oracle.
Regardless of the complexity of confirmation, when represented in “formal” form, the confirmation “check” becomes non-sapient. Identifying the assumption is part of test design (in an exploratory setup, it happens while you are executing the test). In any case, identifying the assumption is done “pre facto” and included in the structure of the check. Hence sapience is removed.
This is where Michael’s distinction of check vs. test comes in very handy. Once you call something a “check”, it automatically means that it is “sapience”-free, regardless of complexity. You might represent a complex 3rd-order differential equation for a computational fluid dynamics problem or a neuroscience model; once it is a check, it does not require sapience. What to check and what the underlying assumptions are – all are EMBEDDED in there.
>> It might need loads of sapience, so we should keep it handy
Let me mention a dimension or interpretation of sapience here. To me, sapience is similar to the “reflex” reaction of a goalkeeper defending a penalty shoot-out in a football match. A key characteristic of a sapient action is its spontaneity – it just comes out of nowhere. You cannot call the “thinking” required to solve a “well defined” problem sapience. Sapience makes you feel the real need for, and distinction of, human judgment that cannot be specified a priori.
Let us continue the discussion …
Thanks for the elaborate analysis and for sharing your thoughts.
I would like to re-insist that this post is not about Michael’s version of testing and Michael’s version of confirmation. I have expressed my thoughts on how confirmation fits in testing and how I think it is different from the way Michael puts it. For this reason, I’m not replying to your generic comments of the nature – Checking or confirmation is non-sapient, This is checking or confirmation etc. because that would be repetition of things already discussed in the post (or its follow up post).
Please see my post – All Testing Is Confirmatory – Part 2 for my detailed opinion on Pass/Fail versus “Is There a Problem Here?”
At a decision point, however a human gets the basis for the decision, whether spontaneously/sapiently or otherwise, logically it is always “previous” to the decision point. That’s the reason I consider confirmation a part of exploratory testing as well.
“Checking if a document is printed in landscape style” is not a “known or true fact” as mentioned by you. It is in fact an assumption that the document would get printed in landscape style with the given settings. Am I missing your point? I would like to hear any further examples which you think cannot be assumptions.
This is how Michael’s discussion on testing versus checking is confusing:
1. I identify a test, apply thought on how to automate it, and write the test in automated form. All this has “sapience” so far. The moment I execute it, it becomes a check? And not just that, the follow-up results could again call for sapience, so is that again testing? (For me it is still one test under discussion. And yes, I’m treating “test” as a noun here.)
2. Or is it not on execution that it becomes a check? The first time it is a test; when I regress it, it suddenly becomes a regression check instead of a test?
3. Take a third scenario: the tester doesn’t automate a test or develop a formal test design. Every time, he would like to do exploratory testing. He might be implicitly conducting a given set of tests over and over again, so a sub-set of what he’s doing becomes checks instead of tests!
4. A tester is given a test case document. While he is conducting testing based on that, for the purpose of the exact test cases mentioned, it is checking, isn’t it? And when the tester sees the results and uses sapience to locate any problem, it becomes a test?
5. If the written test case mentions, ‘Enter net and watch out for any unexpected behavior’, is it non-sapient because the input value is exactly defined, so the tester doesn’t have to come up with it, or is it sapient because the evaluation of results is open-ended?
Do we really need all the above confusion? How does the above thought process help me in being a better tester or add to my skill set? You might say I’ll be able to talk and convey my message clearly. But when some certification board says that its purpose is to make terminology uniform so that testers can talk the same language, why does that call for criticism? I see several discussions on how almost every term is confusing and invites different interpretations by different people, so why should we keep creating terms to solve this ambiguity? The new terms are mostly ambiguous themselves, setting off a chain reaction.
How I see this “testing versus checking” or “exploratory versus confirmatory” discussion is that names are being created for existing stuff without any value addition. Neither is it a new thing being invented that calls for a new name, nor does it add any dimension to my existing understanding of testing.
I am involved in designing test automation frameworks. This split-up, as mentioned by Michael, of testing “ways” and “mindsets” doesn’t help me at all. Those whom it seems to help are pretty excited about it and have accepted it whole-heartedly; I see an example in you. Believe me, I feel happy for you. I wish I could understand the value that the introduction of ‘checking’ or ‘confirmatory testing’ brings with it. So far, I don’t; rather, it’s the other way round – it seems to me totally unnecessary.
Let me address your questions about test vs. check – in a way that I understand it.
>>>1. I identify a test, apply thought on how to automate it, write the test in automated form. All this has “sapience” so far. The moment I execute it, it becomes a check?
The fundamental difference in dealing with checks and tests is the way we go about creating them. Like test design, I believe there is something called “check design”: identifying the observation(s) to make and the decision rule to apply (formulating the rule in such a way that the necessary non-sapient verification/comparison happens), and making sure all along that human sapience is not required at any time later when the check is executed. Instead of saying something becomes a check on execution, I would say you think of a check and design it (its elements: observation, decision rule, etc.). Once it is designed, there would be no requirement for sapience in its lifetime until its specification changes (either the observation, the decision rule or the method of comparison), in which case you either re-design the check or discard it and create a new one (both require specification again).
Thus, there is a definite life cycle for a check, and sapience is required ONLY while a check is being designed/redesigned – NEVER, and I repeat NEVER, when it is executed (no matter how complex the observation, decision rule and method of comparison are).
>>>And not just that, the follow up results could call again for sapience, so is that again testing?
See above. Results verification is so “formally specified” (almost like an ASSERT used in TDD) that a non-sapient check is possible (otherwise why would we call it a check?). It is this property of a check that makes it a highly suitable candidate for automation. I would say checks are SPECIFICALLY made for automation. No sapience is required at any time once a check is designed.
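The “check design” idea described in this comment could be sketched like this (the names and the landscape-orientation example are hypothetical): the observation, the expected value and the comparison are all fixed at design time, so executing the check never needs human judgment.

```python
# Sketch of "check design": everything a check needs -- the observation to
# make, the expected value, and the comparison -- is fixed when the check
# is designed, so running it requires no sapience.

def make_check(observe, expected):
    """Design time: fix the observation and the decision rule."""
    def check():
        return observe() == expected  # purely mechanical comparison
    return check

# Design the check once (hypothetical example: page orientation)...
orientation_check = make_check(observe=lambda: "landscape",
                               expected="landscape")

# ...then execute it any number of times; nothing about it changes
# between runs.
print(orientation_check())  # True
print(orientation_check())  # True
```

Changing either the observation or the expected value means designing a new check, which matches the life-cycle point made above.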
>>>is it that it’s not the first time that it becomes a check?
One of the biggest problems I have often encountered in automation is taking a manual test and trying to automate it – the typical GUI automation paradigm that we encounter in IT. This is a problem because the way the test case (not a test or a check) is written requires a hell of a lot of preparation (cut, join, grow, rephrase, etc.) to suck the essence out of it for a reasonably maintainable automated “check”.
It is not true that a test becomes a check and vice-versa. If you see test and check mixed up that way (one becoming the other, or one having traces of the other), as happens today in the IT scenario, it is a sign of poor test design and poses challenges in doing automation.
Precisely in this context (mixed-up test design), Michael’s distinction between test and check is important. A test is designed as such and a check is designed as such; they don’t undergo metamorphosis. Since the distinction is not so well known, you might see “mixed” tests/checks. When you see such things, apply the distinction and separate them.
>>>First time it is a test. When I regress it, it suddenly becomes a regression check instead of test?
For a check, every time you repeat (regress) it, nothing changes – the observation, the decision rule to match and the method of comparison are all fixed (being structural and constant elements of the check). For a test, human sapience makes it possible to follow different routes of observations, oracles and results each time the test is run – hence a test might have a different shape/size every time it is run.
I repeat: a check does not become a test and vice-versa, UNLESS the original artifact was poorly designed, mixing them up.
>>>Take a third scenario, the tester doesn’t automate a test or develop a formal test design. Every time he would like to do exploratory testing. He might be implicitly conducting a given set of tests over and again, so a sub-set of what he’s doing becomes checks instead of tests!
Something does not become a test or a check just because it is repeated over and over again. In this case, if the tester performs ET, he might be using all “tests”. On carefully observing what he/she does, we can separate out the “checking” elements of the “tests”.
>>> 4. A tester is given a test case document. While he is conducting testing based on that, for the purpose of the exact test cases mentioned, it is checking, isn’t it? And when the tester sees the results and uses sapience to locate any problem, it becomes a test?
Like I said before, typical test case documents that most testers (especially in IT) write are mixed items. We need to pass them through a test-check filter that distinguishes the sapient and non-sapient aspects and tags them as tests and checks. I am not sure if such a filter exists – but we can make one. That would be pretty useful in automation. In this case, it is not that the artifact becomes a test the moment the tester sees the result and uses sapience to locate a problem – things are just mixed (due to poor design). You would agree that you can see results and locate a problem without using sapience (mechanical application of the decision rule or the oracle – e.g. x = 35, ASSERT (x == 35)).
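The mechanical decision rule at the end of the paragraph above can be spelled out as a runnable sketch; `run_check` and its parameter names are illustrative, not from the original:

```python
# A check reaches its verdict by mechanically applying a fixed decision rule;
# no sapience is involved between observation and result.

def run_check(observed, expected):
    """Mechanical application of the oracle: ASSERT (observed == expected)."""
    return "PASS" if observed == expected else "FAIL"

x = 35                      # observation read from the system under test
verdict = run_check(x, 35)  # the decision rule: x == 35
```

Anything beyond this comparison – noticing that 35 is the right value but appears in the wrong field, say – is where sapience, and therefore the test, begins.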
>>>5. If the written test case mentions, ‘Enter net and watch out for any unexpected behavior’, is it non-sapient because the input value is exactly defined so the tester doesn’t have to come up with that or is it sapient because the evaluation of results is open-ended?
This is a classic case of a mixed-up test-check. If the test case says “Enter net and check if the value in field “b” is equal to “1000” and the radio button has selection “A”” – a true check – then this specification qualifies as a “check” (assuming that the observations and comparisons can be made in a non-sapient way). If the test case says “Enter net and watch out for any unexpected behavior”, it looks like a “test” to me (a vague one, though…).
A simple trick is to separate out the elements that require sapience from those that do not. The first part (the collection of items requiring sapience) becomes the “test” and the other becomes the “check”.
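The filter idea above can be sketched as a toy keyword-based splitter. The keyword list and step texts here are assumptions made for illustration; a real test-check filter would need human judgement rather than string matching:

```python
# A toy "test-check filter": split a mixed test-case document into steps a
# machine can decide (checks) and steps that need human sapience (tests).

MECHANICAL_HINTS = ("is equal to", "equals", "matches", "has selection")

def filter_steps(steps):
    checks, tests = [], []
    for step in steps:
        lowered = step.lower()
        if any(hint in lowered for hint in MECHANICAL_HINTS):
            checks.append(step)   # decidable by a fixed rule -> check
        else:
            tests.append(step)    # open-ended evaluation -> test
    return checks, tests

mixed_case = [
    "Check if the value in field b is equal to 1000",
    "Check if the radio button has selection A",
    "Watch out for any unexpected behavior",
]
checks, tests = filter_steps(mixed_case)
```

Run against the mixed test case from the previous answer, the two comparisons end up in the checks bucket (good candidates for automation) while the open-ended "watch out" step stays a test.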
>>> Do we really need all the above confusion?
I do not see any confusion here and would be happy to clarify any confusion that you still may have.
>>> How does the above thought process help me in being a better tester or add to my skill set?
It has IMMENSELY helped me in automation, by giving me a mechanism to deal with one of the biggest problems involving test design and automation (in the IT scenario especially). It has also helped me to optimize test cycles.
>>>You might say I’ll be able to talk and convey my message clearly.
It is much more than that… the distinction between test and check settles two things:
1. The problem linking test design and automation (IT scenario)
2. The relation between scripted testing and exploratory testing.
>>> But when some certification board says that its purpose is to make terminology uniform so that testers can talk in the same language, why does it call for criticism?
We live in a plural world, where no single framework or language works for all purposes. Even at the core of theoretical physics, theories are not unified. You see people criticizing the certification board’s purpose of creating a universal language because a few of us fundamentally believe that no such universal language exists – not even in the most exact forms of science, such as theoretical physics, let alone testing. More often than not, such attempts to unify or to create a universal language are motivated by vested business interests – making money. The language of test and check is neither an attempt to make all testers speak the same language nor an attempt to create new words or jargon.
It is just an attempt to use existing words in a new interpretation style to help us in this plural world. It recognizes that people will require different names/labels to indicate the same or different things.
>>> I see several discussions on how almost every term is confusing and calls for different interpretations by different people, so why should we keep creating terms to solve this ambiguity?
We need to create new terms to help us make sense of this ever-changing world. Humans have evolved from living in caves to living in space stations. We live in a world where every second something new is hitting us. We don’t create new terms to solve ambiguity, but to give us multiple ways to deal with the pluralism and cultural/social/economic diversity that exists (and is constantly changing) in this world.
You can’t help people who confuse terms by calling for a ban on new words or by imposing one universal language. It is like asking everyone to use HINDI because words in other languages like English or Tamil might be confusing and lead to different interpretations by different people.
>>> The new terms are mostly ambiguous in themselves setting off a chain reaction.
That is the beauty of new words. What do you suggest as a solution? Ban new words, or create a single, universal language where everyone speaks the same way?
>>>How I see this “testing versus checking” or “exploratory versus confirmatory” discussion is that, names are being created for existing stuff without any value edition.
Value edition? I think you meant value addition…
As I have demonstrated here – I see the value and you don’t. That is our multidimensional, plural world. We need labels – many of them – so we can use one depending upon the situation.
>>>Neither it’s a new thing being invented which calls for a new name nor for me it adds any dimension to my existing understanding of testing.
I have an exception to report to the above. The distinction between test and check has nearly solved my problem of dealing with manual test design and automation in my context (IT).
Change your context – you might see a value.
Let me know if you have any more questions or confusions around test vs check.
Michael’s argument is of this nature: there are two kinds of dogs – “hairy and non-hairy”. Hair can be optionally present in a dog; confirmation isn’t optional (for testing).
This is not what I’m saying.
There are two kinds of things in the world that we call dogs. One lives and breathes, and has some degree of sentience. The other looks like this.
You can get a lot out of each kind of “dog”, of course, and either might serve some useful purpose for you. But it would be a mistake to call the second kind “a dog” without seriously considering what a dog is, and what a dog does.
Thanks for sharing this interesting video 🙂
I interpret the video in terms of test automation. One area the video misses is that the robo-dog could have gadgets that let it fly / have a laser gun / ride a skateboard – which a “normal” dog wouldn’t :-).
The video is about a robo-dog trying to mimic a normal dog, probably because the context of this video is making a “robo-dog” that is close to what people *want to* call a dog.
Put your imagination into play and the robo-dog could do wonders. If we keep looking at this ‘limiting’ version of the robo-dog, we’ll never have a powerful robo-dog.
Don’t look at the robo-dog to draw conclusions. It could be the outcome of limited imagination or context (we want something demonstrable / something we could sell as a toy, etc.) or of technological limitations. If something is to be questioned, it’s not this dog – it’s the background work that went into it.
Consider this – even a robot of this nature was yesterday’s fiction but is today’s reality. With such limiting views on test automation, let’s not put tomorrow’s reality at stake 🙂
(Off the discussion thread, I enjoyed the chat session with you on test oracles. You discussed some pretty good examples which can be used to make the testers understand about the concept.)