Friday 17 August 2012

Metrics

two reasons: you are measuring something that is hard to measure, and it takes time to record the metrics, time you could have spent testing.

9 comments:

  1. I get asked that question by a business stakeholder on about every other testing effort: "Can you get me a list of items that may have been missed in the testing effort?"

    I think that stems from a lack of trust in the testers.

    My answer, every time, is "Nothing." But I do make an effort to show our coverage and explain that we don't 'miss' anything. Set aside, perhaps, but not missed.

  2. Hi Evan, thanks for your comment. It's the paying clients' or users' privilege not to trust testers ;)
    I like your answer: "set aside, perhaps, but not missed".

  3. Depends on requirements.

  4. Hi Steven, what do you mean by "it depends on requirements"?
    Cheers, Andy.

  5. Hi Andy!
    That's another great cartoon from you!
    I think we testers have one thing in common: everyone wants data from us for statistics. How many test cases did you execute? How many bugs did you find? How many times did you repeat your test case for regression? And so on and so on. And that's the point! We don't have time to do all of this and test at the same time (we don't have enough hands, do we? :-) We have the choice: finding bugs or spending our time keeping tracking info up to date. But maybe showing this cartoon to our stakeholders and bosses will be thought-provoking, won't it?!? I'll try it, I swear!

    I have to add one thing: I'm a fan of the "rapid software testing" approach, especially exploratory testing. The problem is that with these methods it isn't any easier to keep our testing info up to date. But in my experience I can find many more bugs - especially in new development projects - with exploratory testing.
    So what did I do to log the test execution info from my exploratory testing sessions in a test tool which only allows scripted testing? I created a new test case in the testing tool named "exploratory testing, module xxx", opened a Word document, executed my exploratory testing session and typed all the info about it (including the bugs found) into that Word document. At the end of the session, I linked the document to the tool-based test case and set it to failed (because I found at least one bug during exploratory testing :-) ).

    KR, Ralf (I've just created my first blog: http://thespiritoftesting.wordpress.com/ )

  6. Hi Ralf, thanks for the comment. Sounds like you have been able to incorporate a more exploratory testing approach into a scripted one - which isn't always easy!

  7. This comment has been removed by the author.

  8. What is a number, anyhow?
    How many bugs did you find? 25. Wow, that's really good - is it? Or is it bad? Well, it's bad that there were 25, but good that you found 25. Yes, but there could be 1000 bugs; does that 25 look so good now?

    If required I will try to produce some kind of metric that links requirements to tests executed (I am not talking about scripted tests), which can get non-testing managers off your case. But essentially that's also nonsense; it doesn't mean that my tests were good tests.

    It's simply down to trust. I am employed to test software, and that's what I will do. I am a driven and experienced individual, so if I tell you that with the time and information I had I did the best job I could, then I did. Issues that arose will be digested to ensure that my next round of testing will be better than the last, so your confidence in me should increase, and therefore you put your trust in me and not in the numbers I could choose to produce.

  9. Thanks for this Andy, I just used it to illustrate a Trust IV blog post. I know you've given me permission to use your cartoons in the past (with acknowledgements), so I hope that this is still OK.

    I've been asked to produce some management information for a client to help them demonstrate the value of testing. This is almost impossible because it is difficult to find valid testing metrics that can't be "fudged" by the test team. Measuring the team's performance based on number of tests done or number of defects found could encourage too much unnecessary testing or over-reporting of defects.

    I think that poor-quality testing that is managed badly may encourage people to skip testing altogether or to do minimal testing. This goes hand in hand with the move towards more monitoring of production systems, coupled with forward-fixing defects in production.
