Thursday, 25 August 2011
The Cost of Automation
This cartoon was designed by Raimond Sinivee. Thanks Raimond!
Raimond had a few things to say about the cartoon [added to post on 30-Aug-2011]
The idea behind it is that it is hard to justify automation as a testing method to managers. When it is said that we are going to automate stuff, the first answer is: “Yes!”. The answer may change when the tester gives an estimate of the tool cost or the additional working hours. The manager might go: “Extra cost? I thought we were going to save!”
They want to see a direct money benefit or labor cost savings. I think this comes from assembly-line manufacturing, where there has been a lot of automation and a lot of cost savings through it. But testing is not assembly-line work – it is engineering! So if we want to compare, then we should compare the cost of tools in engineering, not on the assembly line.
Automation could be taken as a helping method to cover aspects of software that are hard or impossible to test manually. Usually these parts are not tested at all, and since nobody was testing these newly covered aspects before, testing them is mostly an extra expense.
A tester’s job is, among other things, to provide information about risks. A tester tries to find ways to gather information about risks and how to mitigate them. Sometimes a tester needs a tool to find out whether a risk is covered. The result of executing a test could be that there is no problem. That is also valuable information if it cleared a risk, but it is hard to actually calculate the amount of money that was saved. We could calculate the risk impact, but that might also be disputed and manipulated.
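To see why such a calculation is so easy to dispute, here is a back-of-the-envelope sketch using one common formula (risk exposure = probability × cost) – this isn’t from the post itself, and both input numbers are pure guesses, which is exactly the point:

```python
# Toy risk-exposure calculation. Both inputs below are guesses - change
# either one and the "money saved" figure moves with it.
probability_of_failure = 0.10   # guessed chance the bug reaches production
cost_of_failure = 50_000        # guessed cost if it does

exposure = probability_of_failure * cost_of_failure
print(f"Risk exposure: {exposure:,.0f}")  # 5,000 - only as solid as the guesses
```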
Tuesday, 23 August 2011
Robo-Cat
In testing we can be bombarded with the notion that test automation is best. I recently read an article claiming that all manual testing can be replaced by a new test automation tool – what craziness! I can’t wait for someone to say all development and coding can be automated too!
Test automation does have its place though. When a tester is repeating a step or task while testing, it’s worth investigating whether that can be automated. Sometimes writing a quick-and-dirty automated script can save a lot of time. For example, when testing a web app I often use a record-and-playback tool to log into the app; once logged in, I can carry on with manual testing, and I can re-run those steps as many times as needed. Once the testing is finished, I throw the automated script away. At other times, for a large testing project, it’s worth investing in a test automation framework if you expect to run the same checks over and over again as regression tests.
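To make the “quick and dirty” idea concrete, here’s a minimal sketch of what such a throwaway login script could look like, using Python with Selenium. The URL, element IDs and credentials are all invented for illustration – swap in your own app’s details:

```python
# Throwaway login script (Python + Selenium). Everything app-specific here
# (URL, element ids, credentials) is made up for illustration.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("https://example.com/login")  # hypothetical web app

driver.find_element(By.ID, "username").send_keys("tester")      # assumed field id
driver.find_element(By.ID, "password").send_keys("secret")      # assumed field id
driver.find_element(By.ID, "login-button").click()              # assumed button id

# From here, carry on testing manually in the same browser window.
# When the testing is done, the script can simply be deleted.
```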
Yet test automation always has flaws. One major flaw is that automation can only check what it is programmed to check. For example, if it is checking that the login screen is displayed, it will check for that but won’t check where on the screen it appears, what colours or font it uses, or whether anything else is displayed on the screen.
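For instance, continuing the Selenium sketch above (locators still invented), checks like these pass even if the page looks completely wrong to a human eye:

```python
# These checks pass even if the login page renders in the wrong place,
# the wrong colour, or the wrong font, or with an error banner alongside.
assert "Login" in driver.title                    # checks the title text only
assert driver.find_element(By.ID, "username")     # field exists - nothing more
```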
Another flaw is that test automation can be brittle. A developer changing the layout of a screen may generate a false positive in the test automation: the tester then needs to spend time investigating why the check failed and updating the script. All this adds to the maintenance cost.
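A sketch of the kind of locator that breaks this way (the markup is invented): the first version depends on the exact page structure, so moving the button fails the check even though logging in still works fine.

```python
from selenium.webdriver.common.by import By

# Brittle: tied to the exact page structure, so any layout change breaks it.
button = driver.find_element(
    By.XPATH, "/html/body/div[2]/form/table/tbody/tr[3]/td/button")

# Less brittle: assumes the button keeps a stable id across layout changes.
button = driver.find_element(By.ID, "login-button")
```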
From my experience, the biggest flaw is that test automation doesn’t find many bugs, and the bugs it does find tend not to be critical compared to those found through manual testing. The main reason is that test automation is typically used for regression: the application has been manually tested in the past and most of the bugs have already been identified, so the automation script exists to give confidence that the application has not regressed. One positive use of automation here is that regression testing is still required but is boring to most testers, as it means repeating the same tests executed previously without much hope of finding new bugs.
I recently read this article from Adam Goucher about test automation heuristics. It’s a good read :)
Watch out for the next cartoon from Raimond Sinivee, which has a similar theme.
Thursday, 11 August 2011
Critical
As testers, one of our main jobs is to find and report issues with the software. We are paid to find bugs. If there are bugs in the software, and inevitably there are, testers need to do what they can to uncover them.
A major problem with this is that we can come across as very negative people, always bearing bad news. And we don’t stop there – we also raise issues with the process, the documentation, the people: “Why didn’t you do any unit testing?”, “That developer doesn’t know what working in a team means!”, “This feature needs to be documented!”, “You’re not supposed to make any changes to the code now!”
One way to avoid coming across so negatively is to not raise as many bugs. But that approach is short-sighted! A far better way is to bear good news as well as bad. I know I’m not the best coder, so when a dev delivers a build for me to test, I know they’ve done a better job than I would have done. Having this attitude tends to make it easy to give positive feedback on the software I’m testing. Also, as a tester, I try my best to test, to find and report bugs, but even then I make mistakes on a regular basis; I’m only human, after all. But so are developers (believe it or not!), project managers and the pointy-haired boss.
Wednesday, 3 August 2011
Coverage
Coverage is tricky. You think you’ve got everything covered, and then, WHAM!! A bug suddenly appears. You’re forced to defend your testing and you end up saying something like: “well, when I said I had everything covered, I clearly didn’t mean E-V-E-R-Y-T-H-I-N-G!” Because testing everything is impossible. Everyone knows that, including your clients/manager/PM/developer... right?
As with automation, thinking about coverage can give a false sense of security. There are so many ways software can have test coverage. Take requirements coverage, for example: do your tests provide 100% coverage of the requirements? What about the requirements that aren’t written down in a document? What about requirements that are documented but ambiguous?
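As a toy illustration of why the number can mislead (all names invented), a requirements-coverage figure only ever counts the requirements that made it into the document – the undocumented and ambiguous ones never appear in it:

```python
# Toy requirements-coverage calculation. It can only count documented
# requirements - undocumented or ambiguous ones never show up in the figure.
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}
tests_to_requirements = {
    "test_login":  {"REQ-1"},
    "test_logout": {"REQ-1", "REQ-2"},
    "test_report": {"REQ-3"},
}

covered = set().union(*tests_to_requirements.values())
coverage = len(covered & requirements) / len(requirements)
print(f"Requirements coverage: {coverage:.0%}")  # 75% - REQ-4 is untested
```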
But I’m no expert; Cem Kaner is, and he wrote a great article on this topic – worth a look if you’ve not read it before (if you care about testing you should definitely read it). I came up with one type of coverage, “requirements coverage”; Cem Kaner comes up with 101 types in the article’s appendix. The next time you or your manager thinks you need 100% test coverage, think again!