When is a human tester better than a machine at checking your product? When does the machine win hands down?
These questions have been at the heart of the testing process for a long time. They have even influenced the way that testing is structured.
There are obvious advantages to having a mechanised testing solution in place. An automated system can run thousands of test iterations in far less time than a human tester. It delivers testing reports that contain only relevant data, controllable in minute detail, with no emotive terms or poorly phrased descriptions. Automated systems follow their testing instructions to the letter, without a slip or omission; the same cannot be said for even the best human testers. But an automated testing solution is only as good as the person who defines the conditions for the test – anything the human omits, the automated system will also omit without question. The human tester has the advantage of being able to think around the situation and raise concerns where necessary.
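To make the repetition advantage concrete, here is a minimal sketch of an automated check run identically over thousands of inputs, producing a terse factual report. The function under test (`normalise`) and the generated cases are purely illustrative, not taken from any real product.

```python
# Minimal sketch of why machines win at repetition: the same check applied
# to thousands of inputs without a slip, reporting only relevant data.

def normalise(s: str) -> str:
    """Toy function under test: trims whitespace and lowercases input."""
    return s.strip().lower()

def run_suite(cases):
    """Run every case identically; report pass/fail counts and the failures."""
    failures = [(given, want, normalise(given))
                for given, want in cases if normalise(given) != want]
    return {"run": len(cases), "failed": len(failures), "failures": failures}

# Ten thousand iterations take a human days; a machine, a fraction of a second.
cases = [(f"  Word{i} ", f"word{i}") for i in range(10_000)]
report = run_suite(cases)  # {"run": 10000, "failed": 0, "failures": []}
```

Note that the report is only as complete as the cases supplied: if the human who wrote `cases` forgot an input class, the suite silently forgets it too.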
By contrast, the advantages of a human mind over a machine when reviewing software are not as immediately obvious. Nonetheless, a human eye can spot a discrepancy in the layout of a page that an automated system may not even be able to see. A machine can check content against a predetermined set of parameters many times faster than a human, but the meaning of that content is beyond the machine’s ability to process. Think about how a spell checker misses a spelling error when the misspelled word happens to be in its onboard dictionary. A human may not always know the right spelling, but most users can spot the semantic error in “the seven deadly sons.” Your spell checker, a simple automated test process, would miss this obvious mistake. Identifying errors that affect human scales of meaning remains firmly a human-only test.
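The spell-checker blind spot is easy to demonstrate. Below is a minimal sketch of a dictionary-based checker with a deliberately tiny, illustrative word list: because “sons” is a valid word, the semantically wrong phrase passes clean, while a genuine typo is flagged.

```python
# Minimal sketch of a dictionary-based spell checker. The word list is
# illustrative; a real checker's dictionary is vastly larger, but the
# blind spot is the same: it checks spelling, never meaning.

DICTIONARY = {"the", "seven", "deadly", "sins", "sons"}

def spell_check(text: str) -> list[str]:
    """Return the words not found in the dictionary."""
    return [word for word in text.lower().split() if word not in DICTIONARY]

print(spell_check("the seven deadly sons"))  # [] - the semantic error passes
print(spell_check("the sevn deadly sins"))   # ['sevn'] - only typos are caught
```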
Some tests are exclusively human-driven. When a UK publisher released in the United Arab Emirates a series of books made for the UK market, some of the images used in the books inadvertently broke the social and cultural taboos of the Middle Eastern state. Human testing is the only way to make discriminating localisation judgements about the social ramifications of an image in another culture.
Other test types are completely automated processes, for similarly practical reasons. Validation of the markup used to produce a huge international database website like Wikipedia, for example, is simply too big and too exacting a task for fallible, expensive humans to perform. Systems that must be 100% secure and bug-free, such as international money transfer systems, control software for nuclear power stations and air traffic control systems, must be repeatedly tested and retested until no detectable bugs remain. This is a long-winded, slow process that becomes unfeasibly expensive if done by humans.
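A markup validation pass is exactly the kind of exacting, repetitive check a machine handles well. As a hedged sketch (using XML well-formedness via Python’s standard parser, not Wikipedia’s actual validation pipeline), the same check can sweep any number of documents tirelessly:

```python
# Sketch of automated markup validation: a well-formedness check swept over
# many documents. The sample pages are invented for illustration.
import xml.etree.ElementTree as ET

def is_well_formed(markup: str) -> bool:
    """True if the markup parses as well-formed XML."""
    try:
        ET.fromstring(markup)
        return True
    except ET.ParseError:
        return False

pages = [
    "<article><h1>Title</h1><p>Body text.</p></article>",
    "<article><h1>Title<p>Broken nesting.</p></article>",  # unclosed <h1>
]
results = [is_well_formed(page) for page in pages]  # [True, False]
```

A human would tire and start missing unclosed tags within hours; the parser applies the same rules to document one and document one million.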
Many modern software companies use the ‘public beta’ method to collect a vast amount of error data generated by user actions, often reported via a standardised, automated reporting system (we’ve all seen the “Would you like to tell Microsoft about this problem?” dialog at least once). Although the bugs are triggered by human users, testing and reporting are automated. The users don’t decide whether what they have encountered is ‘a bug’; that decision is made by the error reporting software. Nor do humans have qualitative input into the reporting process: their only involvement is the ability to read the error log and decide whether it should be sent. This is an example of an automated testing process that concentrates on the bugs most commonly encountered by human users, rather than laboriously testing every part of the product. In this way, the product can be patched where users actually find errors, improving the user experience without the expense of running comprehensive tests.
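The division of labour described above can be sketched in a few lines. This is an assumed, simplified model of such a reporting system, not any vendor’s actual implementation: the software decides what counts as a bug (an unhandled exception) and builds the report; the user’s sole input is consenting to send it.

```python
# Hedged sketch of automated error reporting: the program flags the bug and
# writes the log; the human only approves or declines sending it.
import traceback

def build_error_report(exc: BaseException) -> str:
    """Serialise an exception into a log the user may review before sending."""
    return "".join(
        traceback.format_exception(type(exc), exc, exc.__traceback__)
    )

def run_with_reporting(fn, send=lambda report: None,
                       user_consents=lambda report: False):
    try:
        return fn()
    except Exception as exc:       # the software, not the user, flags the bug
        report = build_error_report(exc)
        if user_consents(report):  # the user's only involvement: send or not
            send(report)
        return None
```

Aggregating many such reports is what lets a vendor patch the product where users actually hit errors, instead of testing every code path up front.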
This is just one example of how the question of ‘automated versus human testing’ is increasingly answered by neither one nor the other, but by a combination of both.
Epicentre Says: “There are advantages and disadvantages to using either automated testing or human-driven testing. Whether to go with one option or the other, or a mix of both, is a choice that can profoundly affect the final results when your product hits the shelves. At Epicentre we fully understand the need for the right mix of human and automated testing solutions, enabling us to match the best kind of testing with your product, to ensure it will be at its best on release day.”