Tuesday, February 11, 2014

Retest and close what you reported

A tester's job is to find bugs, right?
Not exactly, in my opinion: our job is to get rid of bugs and promote software quality.

Our job is to make sure bugs are fixed properly, with no new bugs introduced.
We should also make sure the fix is not lost, but:
  • deployed to the production environments
  • pushed to the right branch

Resolution:Fixed

I've often been misled by the Fixed resolution and the "Fix version" field in JIRA. The bug might be marked as fixed, and yet:

  • .. it is not fixed at all, or not well enough. Retest and Reopen!
  • .. it is fixed, but not deployed (not committed/pushed etc.). Retest and Reopen!
  • .. it is fixed by changing the system or removing functionality. Retest and consult the system architects.
  • .. it is really fixed. Retest and Close!

Keep in mind that an issue is truly fixed only when the fix is live on production and further system versions include it. What good is a fix if end users never see it, or if the next system version drops it (i.e. reintroduces the bug)?
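
A quick way to keep an eye on such issues is to query the bug tracker for everything resolved as Fixed but not yet verified. Below is a minimal sketch using the Python jira client; the server URL, credentials and the SHOP project key are made-up placeholders, not a real setup:

    # A minimal sketch: list JIRA issues resolved as Fixed that are not yet Closed,
    # i.e. the candidates for a retest. Server, credentials and project key are
    # hypothetical placeholders - adjust them to your own instance.
    from jira import JIRA

    jira = JIRA(server="https://jira.example.com", basic_auth=("tester", "secret"))

    jql = 'project = SHOP AND resolution = Fixed AND status != Closed ORDER BY updated DESC'
    for issue in jira.search_issues(jql, maxResults=50):
        print(issue.key, issue.fields.summary, [v.name for v in issue.fields.fixVersions])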

How do we get rid of bugs then?

  1. We retest the fix on the test/staging environment and Close the issue in the bug tracker.
  2. We make sure that the right repository branches include our fix (see the sketch after this list). They might not if:
    1. The client requests that only selected fixes from the branch deployed to test are to be deployed to production.
    2. The fix is committed on a branch that is never merged into the branch the next system version is built from.
  3. We periodically retest important fixes to make sure no regression has occurred.
  4. We understand the cause of the error and expand our test cases based on that knowledge: talk to the programmers and demand an analysis/cause description in the issue comments. A bare "done" comment is not good enough.
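
One way to check point 2 is to ask git directly whether the fix commit is reachable from the branch the next version is built from. A minimal sketch in Python; the commit hash and the release/2.5 branch name are made-up examples:

    # Check whether a fix commit is included in a given branch.
    # "git merge-base --is-ancestor" exits with 0 when the commit is reachable
    # from the branch, i.e. the fix will be part of whatever is built from it.
    import subprocess

    def fix_is_on_branch(commit: str, branch: str) -> bool:
        result = subprocess.run(
            ["git", "merge-base", "--is-ancestor", commit, branch],
            capture_output=True,
        )
        return result.returncode == 0

    # Hypothetical commit hash and branch name - take them from the issue comments.
    if not fix_is_on_branch("3f9c2ab", "release/2.5"):
        print("Fix is missing from release/2.5 - reopen the issue or ask for a merge.")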

Periodic retesting?

With 10 bugs every month, we have a whopping 120 test cases for each year of the project. That's quite a heap of things to retest, right?

To manage such a pile of important knowledge, we should document the test cases that need to be run regularly.
Some of them should become part of our automated regression testing scenarios.

Automated regression testing should be based on an analysis of the causes of errors in the system. For example, if the search engine has trouble when filtering is used, we should add such a test and run it often.
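
As a sketch of what such a test could look like, here is a small pytest-style check built around a hypothetical search_products() function from our own code base; the module name, filter shape and field names are assumptions, not a real API:

    # Regression test for a once-fixed bug: searching with a category filter
    # must still return the matching items. search_products() and the myshop
    # module are hypothetical stand-ins for the system under test.
    from myshop.search import search_products

    def test_search_with_category_filter_returns_matching_items():
        results = search_products(query="laptop", filters={"category": "electronics"})
        assert results, "filtered search returned nothing for a known-matching query"
        assert all(item.category == "electronics" for item in results)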

If we don't do this, nobody will

Because only we know in detail what was broken, only we can make sure it's really fixed.
No programmer, chief programmer, or project manager is going to take up the task of hunting down these detailed system problems for us.