Beware of Automation

Only use automation when it advances your mission of testing, and evaluate your automation by how much it helps you achieve that mission. This post describes four things you should be wary of when automating. Automation without good test design, or without considering the more important things you could be doing instead, will add little value to your mission.

Review your development process

Test automation is often treated as a way to reduce testing costs, so it does not get the attention and resources needed to do it well. The question then becomes: “How do I get more attention so that more testing gets done?”

To get more automated testing done, position it to the team as a way to minimise development failures. Automation adds power to the team by giving developers quick feedback.

Here is an example of a technique to support the pace of development:

Automate Smoke Tests - The term “smoke testing” comes from hardware testing: plug in a new board, turn on the power, and if smoke comes out you know something is wrong. If nothing comes out, you can probably assume it’s working fine. These tests focus on finding glaring bugs, ensuring no time is wasted moving into the next stage of the development cycle, and any failures become the developer’s priority to fix.

Smoke tests can be run by anyone at any time. They should form part of a checklist for deciding whether a build qualifies to move to the next stage. Once the build has passed the automated smoke tests and the developer’s part of the testing is done, the build can be given the green light for the next stage.

These smoke tests are extremely valuable: they will be run so many times that they pay back the time invested in creating them over and over again.
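
As a rough illustration, here is what a few automated smoke checks might look like in Python using pytest and requests. The base URL and endpoints are hypothetical placeholders for your own application; the point is that each check only asks whether anything obviously “smokes”:

    # Minimal smoke-test sketch with pytest and requests. BASE_URL and the
    # endpoints below are hypothetical placeholders for your own application.
    import requests

    BASE_URL = "http://localhost:8080"  # hypothetical app under test

    def test_app_responds():
        # The most basic check: does the application answer at all?
        response = requests.get(BASE_URL, timeout=5)
        assert response.status_code == 200

    def test_login_page_loads():
        # A glaring-bug check: the login page should at least render.
        response = requests.get(BASE_URL + "/login", timeout=5)
        assert response.status_code == 200

Because these checks are shallow and fast, they can sit in front of every build and gate it before deeper testing begins.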

Manual Testing ≠ Automated Testing

When you are doing manual tests, you bring the human aspects of interaction that automated tests do not give. You will most likely go off-track from the main test cases when you find something you did not anticipate. Automation is completely different: it just checks whether X has the value you told it to have, which is not an intellectual process.

Automation does not make the computer do the testing that you do. It performs the testing you specify, and for the most part it cannot take into account your internal knowledge and awareness of what’s going on. An automated suite runs the same thing each time, at the same speed and in the same order. Manual testing goes after the areas that haven’t been looked at by the automated testing, and often finds issues there.

One of the main advantages of automation is minimising development failures by finding glaring issues, and by running repetitive tests where human concentration starts lagging after a few hours of doing the same thing. Ideally, while the automated tests are running you get some form of notification of failures, so they can be reported to the developer as soon as possible. It is quite troublesome to have an automated test suite that runs for a whole day, only to arrive at work the next morning and find that everything is broken.
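
As a sketch of that notification idea, the wrapper below runs the suite and posts a message the moment it fails, instead of leaving the results to be discovered the next morning. The webhook URL is a hypothetical placeholder (for example, a Slack or Teams incoming webhook):

    # Run the suite and notify on failure. WEBHOOK_URL is hypothetical.
    import subprocess
    import requests

    WEBHOOK_URL = "https://chat.example.com/hooks/test-alerts"

    def run_suite_and_notify():
        # pytest exits non-zero on any failure; stop at the first one
        # so feedback reaches the developer as early as possible.
        result = subprocess.run(
            ["pytest", "tests/", "--maxfail=1", "-q"],
            capture_output=True,
            text=True,
        )
        if result.returncode != 0:
            # Send the tail of the output so the developer sees the
            # failure without digging through logs.
            requests.post(
                WEBHOOK_URL,
                json={"text": "Automated tests failed:\n" + result.stdout[-1000:]},
                timeout=10,
            )
        return result.returncode

    if __name__ == "__main__":
        raise SystemExit(run_suite_and_notify())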

Manual tests and automated tests are not the same. Look at it this way: by having some automated tests running alongside, you extend your reach and can test more scenarios manually.

Don’t estimate the value of a test in terms of how often you run it

Testing is a service. The value of testing comes from the information it provides to you, the developers, and other stakeholders. Estimating testing is difficult. Take this scenario: you estimated that testing a certain feature would be complete in a day, but you found an issue that might take some time to fix. Will your testing still be completed within the time you gave? No. You have to wait for the bug to be fixed, question and find out the implications of the fix, then give another estimate. Skilled testing is a matter of exercising judgement well.

Your manager tells you to estimate whether automating will provide an ROI (return on investment) by comparing the costs of automated tests to the costs of running the tests manually.

Here are two equations you might’ve come across:

  • Manual Testing Cost = Manual preparation cost + (N × Manual execution cost)
  • Automated Testing Cost = Automation preparation cost + (N × Automation execution cost)
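
To make the arithmetic concrete, here is a toy calculation with entirely made-up numbers that solves for the break-even number of runs N. The reasons below argue that this comparison is misleading, not that the arithmetic is hard:

    # Toy break-even calculation for the two equations above.
    # All numbers are hypothetical.
    import math

    manual_prep = 2.0   # hours to prepare the manual test
    manual_exec = 1.5   # hours per manual run
    auto_prep = 20.0    # hours to script and stabilise the test
    auto_exec = 0.1     # hours of attention per automated run

    # Manual cost:    manual_prep + N * manual_exec
    # Automated cost: auto_prep + N * auto_exec
    # Break-even:     N = (auto_prep - manual_prep) / (manual_exec - auto_exec)
    break_even = (auto_prep - manual_prep) / (manual_exec - auto_exec)
    print(f"Automation 'pays back' after {math.ceil(break_even)} runs")  # 13 runs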

The assumption is that even though automation has a higher upfront cost, over time it pays itself back, whereas manual testing is always an ongoing cost. While these equations are fair in their simplest form, automation efforts should not be based on saving testing costs, for the following reasons:

  1. From one of the points above: “Automation does not make the computer do the testing that you do. It performs the testing you specify, unable to take into account your internal knowledge and awareness of what’s going on.” An automated suite runs the same thing each time, at the same speed and in the same order; manual testing goes after the areas automation hasn’t looked at, and often finds issues there.
  2. You cannot measure a return on investment simply by running the automated tests. Say the developers are working on a new feature that has no impact on other parts of the software. Running your regression suite against it is to no avail: the new feature does not affect the existing features. Remember, the value of tests comes from the information they provide, and in this case they provide none.
  3. Automated tests need regular maintenance - they decay for reasons such as the following (not necessarily all of them):
    1. Changes to the user interface
    2. Fixes to errors in the tests introducing further errors
    3. The creator of the tests moving on
    4. Test suites breaking when moved to different machines

Like all activities, when automating tests you have to weigh the cost of creating them against the benefit they provide.

What are your tests not finding?

From experience, the percentage of bugs found by automated tests is lower than you would expect. I have found that regression tests find more during the development-testing phase (just before the formal testing phase) than when run during the testing phase itself.

When calculating the cost of automation, also think about what else you could be doing that might be more important than automating. Consider the following questions: What tests aren’t you running? What bugs aren’t you finding? What is your mission? As the ‘sole’ tester on the team, I find that when I spend my time automating, it only delays bug finding.

Poor Coverage - Do not focus on the number of tests in a test suite. It is extremely easy to make up test cases that are not useful. For example, if a system has a dropdown of 500 items, is it more worthwhile to test every single item in that list, or to test different parts of the system? (See the sketch below.)
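
As a sketch of how easy it is to inflate a test count, parametrising one trivial check over all 500 dropdown items reports 500 “tests”, yet exercises a single code path. The helper here is a hypothetical stand-in for driving the real UI:

    # 500 passing tests, very little information. Names are hypothetical.
    import pytest

    DROPDOWN_ITEMS = [f"item-{i}" for i in range(500)]  # hypothetical data

    def select_from_dropdown(item):
        # Stand-in for driving the real UI; every one of these 500
        # selections exercises the same single code path.
        return item

    @pytest.mark.parametrize("item", DROPDOWN_ITEMS)
    def test_dropdown_item_is_selectable(item):
        assert select_from_dropdown(item) == item

A suite padded this way looks impressive in a report while leaving most of the system untested.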

Is your test automation up to date? - With the last few releases, have new automated test suites been created? How about updating the existing suites?

Good test suites are forever changing: new tests are added, old tests removed, other tests updated. If this isn’t happening, something is wrong. Imagine an automated test suite that covers 100% of the functionality today; after many releases, that same suite might only cover 50%. When you run those tests, would you be satisfied that you had covered everything?

With these points in mind, be more wary of what you’re automating and why you’re automating it.

Happy Testing!
