
One of the famous Urdu poets once pointed out that “Democracy is a form of government in which people are counted, not weighed”. I do not want to fall into the trap of discussing democracy; rather, I have something very specific to say about software testing. We as software testers do not count bugs, we measure them. We find a bug and then talk in terms of the severity and priority associated with it. Not every bug has equal rights (poor bugs!). So, when we as software testers are in such a mature state, I really wonder how one can set bug count as a target for testers.
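
To make the counting-versus-weighing point concrete, here is a minimal sketch in Python. The severity weights and the bug lists are entirely hypothetical, not taken from any real project, and serve only to show how a weighted view can rank two testers’ weeks very differently from a raw count:

```python
# A minimal sketch: "weigh" bugs by severity instead of merely counting them.
# The weights and bug lists below are hypothetical, chosen only for illustration.

SEVERITY_WEIGHT = {"critical": 8, "major": 4, "minor": 2, "trivial": 1}

# Two imaginary testers' findings for the same week.
bugs_tester_a = ["critical", "major", "major"]            # 3 bugs
bugs_tester_b = ["trivial", "trivial", "minor", "trivial",
                 "minor", "trivial"]                      # 6 bugs

def weighted_score(bugs):
    """Sum of severity weights -- a crude 'measure' rather than a 'count'."""
    return sum(SEVERITY_WEIGHT[severity] for severity in bugs)

print("Tester A:", len(bugs_tester_a), "bugs, score", weighted_score(bugs_tester_a))  # 3 bugs, score 16
print("Tester B:", len(bugs_tester_b), "bugs, score", weighted_score(bugs_tester_b))  # 6 bugs, score 8
```

By raw count, Tester B looks twice as productive; by the weighted view, Tester A found the more valuable bugs. This is only an illustration of the principle, not a proposal for yet another metric.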

That sounds weird, doesn’t it? – Setting bug count as a weekly target, or as a measure for evaluating the efficiency of a tester! This is quite a controversial topic, so let me say upfront that whatever follows is my opinion, which owes its origin to my thought process, my experience and the literature I have been exposed to. I welcome views from one and all to encourage healthy discussion. If I am convinced otherwise, I assure you I will be a follower.

I have seen a practical case like this in an organization with dedicated testing teams for various domains, and every time I thought about it, I felt pity for my friends in those teams, getting screwed by a target so unrealistic and so illogical.

I discussed this point in my comments on the blog of Bj Rollison (alias I.M.Testy), a Test Architect in the Engineering Excellence group at Microsoft. He pointed me to one of his blog posts, which in turn linked to a post by Larry Osterman. I have listed these links in the References section at the end of this post.


These were good reads, and for the readers of this blog I would summarize the key points here:

Factors that impact bug count

  1. Code Complexity
  2. Code Maturity
  3. Developer’s Experience and Expertise
  4. Maturity of Code Reviews and Unit Testing
  5. Clarity in assessment and consolidation of customer requirements during the design phase

In any project, the bug count surely goes down as the testing effort progresses. So, whereas in the beginning the testers might be able to locate and report some obvious bugs and meet their targets, finding bugs becomes more and more difficult down the line. This is a known and understood fact. Take a small puzzle where you compare two pictures and find the mismatches. Some will be very obvious and will be found in the first few seconds; others will take time, depending on the quality of the puzzle. So, if you set the target as the number of mismatches found per minute, it might work for evaluating people on some simple puzzles, but as the complexity of the puzzle or the mismatch grows, finding the mismatch itself becomes the challenge, not the time taken. This is just an analogy; I do not want to get into a fight of words over the puzzle example, and I think I have conveyed my message. Based on this analogy, if you treat the software undergoing testing as one big puzzle, the quality of a bug dictates the time taken to find it.


Another irony is that even when bug count is set as the target, the bugs termed “Won’t Fix”, “By Design” or “Not Reproducible” are not considered in the count. Two comments. First – you yourself (here “you” is the manager or lead who sets such targets; for others, “you” is the third person) are indirectly forcing the tester to find low-quality bugs, and sometimes duplicate bugs, to meet the targets. This contradicts your own concept. Second – the quality of a bug is not measured by whether or not it gets fixed. Many factors influence the decision to fix a bug, not necessarily the quality of the bug.

Impacts of “Bug-Count-As-Performance-Measure” Approach

  1. Bug morphing – one bug is split up and reported as many bugs.
  2. The quality of discussion with developers deteriorates. The tester’s target becomes getting the bug marked “Fixed”, even if the bug is inaccurate, a duplicate or otherwise invalid.
  3. Bug tracking becomes difficult because of extraneous and often duplicate bugs, which bloat the defect database.
  4. Under such a situation, a metric like % Bugs Fixed or Resolved gives an unrealistic picture, and some bugs which need actual attention might get cornered (see the sketch after this list).
  5. When bugs are counted rather than measured, and the tester’s performance evaluation is based on the count of bugs found and fixed, the tester has no inclination to find good quality bugs.
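
A back-of-the-envelope sketch for point 4 above; all the numbers are made up purely for illustration. The point is only that padding the defect database with duplicates and invalid reports makes the % Resolved figure look healthier than the project actually is:

```python
# Hypothetical numbers illustrating how a padded defect database
# distorts the "% Bugs Fixed/Resolved" metric.

genuine_bugs  = 40   # real, well-isolated defects
genuine_fixed = 25   # of which these got fixed

padded_bugs   = 60   # duplicates, "Won't Fix", "By Design", "Not Reproducible"
padded_closed = 55   # these get closed/resolved almost automatically

honest   = genuine_fixed / genuine_bugs
inflated = (genuine_fixed + padded_closed) / (genuine_bugs + padded_bugs)

print(f"Honest view  : {honest:.1%} resolved")    # 62.5%
print(f"Inflated view: {inflated:.1%} resolved")  # 80.0%
```

The inflated figure suggests a healthier project than reality, while the 15 genuine bugs still open are that much easier to lose in the noise.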

References


Bug counts as key performance indicators (KPI) for testers

Larry’s rules of software engineering #2: Measuring testers by test metrics

Whose Bug is it anyway?

Rahul Verma

www.testingperspective.com

One Response to “Using Bug Count for Performance Evaluation of Testers”

  1. Bj Rollison

    Hi Rahul,

    Good post. I would also recommend a post by Harry Robinson at http://www.stickyminds.com/sitewide.asp?Function=edetail&ObjectType=COL&ObjectId=6838

    In my opinion some people (and managers) focus on bug counts because it is a cheap and very visible number. But we need to realize it is really only a number. Yes, we could analyze bug counts for trending information but that is about it.

    Wouldn’t it be great if, instead of measuring the productivity of testers by bug count, we measured the productivity of developers by the lack of bugs!

    -Bj-
