Today’s Community Contributor is Chris Hyde, a software QA lead with extensive experience in test architecture, test suite/case management, and test direction across multiple projects. His background as a developer helps him break major test efforts into testable units for both manual and automated test solutions. Check out his reviews of Sauce Labs and JIRA.
Why you should use automated testing
This is one of the hottest subjects in software development and testing today. What is automation? How much should I automate? What should I automate? How do I fit it into my development cycle? The topic raises myriad questions. The first thing to understand is what automated testing is (and should be). Let’s be clear: automated testing is not intended to replace skilled, senior human testers. These people are adept at using their experience to uncover tests and test scenarios that are often overlooked.
Automated testing can be technically defined as converting a test from manual execution to execution by a machine. This is typically done with one of the many automated test frameworks on the market, often integrated with tools like Sauce Labs for cross-browser testing. Knowing what automated testing is, however, is only part of the equation; knowing how to automate properly, and what to automate, is the rare skill in our field.
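To make that definition concrete, here is a minimal sketch of a manual check ("enter 2 and 3, verify the result shown is 5") rewritten so a machine can execute it on every build. The `add` function is a hypothetical stand-in for whatever application behavior is under test; in practice a framework such as pytest would discover and run the `test_` functions automatically.

```python
# Stand-in for the application behavior a manual tester exercises by hand.
def add(a, b):
    return a + b

# The manual step "enter 2 and 3, verify the result shown is 5",
# expressed as code a test runner can execute on every release.
def test_adds_positive_numbers():
    assert add(2, 3) == 5

# Once automated, edge cases cost nothing extra to re-check each run.
def test_handles_negative_numbers():
    assert add(-2, -3) == -5
```

The value is not in the arithmetic, of course; it is that these checks now run identically every time, with no human attention required.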
Best tests to automate
The first tests to be automated are usually the boring, tedious, and repetitive regression tests executed every release. This makes sense, because these are the tests most prone to human error. That may seem counterintuitive, since they cover the best-understood functionality; however, studies and experience show that people become desensitized to this kind of test execution, especially when they run the same regression suite release after release. Automating this much testing can prove daunting, but if an iterative process is used, it can be accomplished faster than anticipated.
The best way to deliver automated testing continuously is to make the work part of sprints, either as its own stories or as sub-tasks of the stories it supports. If the work is done separately from sprint delivery, in a disjointed manner, two things will happen:
- The work will be expected to be done for “free,” as it is not represented in sprint velocity
- The development of automated tests will lag behind actual feature/story development, and the gap will continue to grow over time
Making automated testing part of the sprint process ensures it is treated as what it is: a product under development! Test automation should be treated as a development project because it is one. It involves the same inherent processes as feature development, including design, prioritization, architecture, and technical debt. Treating it as a product that gets worked on in “free time” (which nobody has) will doom the project before it gets started.
Another key point to remember is to keep the tests small and purposeful. Think MVP (Minimum Viable Product) not only when developing product code, but also when developing automated tests. Practicing good object-oriented (OO) principles such as encapsulation reduces repetitive code and leads to a more maintainable automation framework in the long run. It also makes tests easier to write, which means more tests get written. The end result of all this is increased quality! Every automated test written removes an opportunity for human error. It also lets your test team focus on the flows of the application that are either hard to automate, or that you don’t want to automate because of the complexity of doing so.
Automated tests can be run all the time, and every run is effectively “free” testing! Once the initial investment in developing the framework is made, the ongoing cost of automation drops dramatically. The team can then “rinse and repeat” each release, keeping automated test development in line with feature development on the product. This encourages more buy-in from upper management and C-level executives. The more tedious tests get automated, the more defects are caught earlier in the process. A bug caught as far left as possible is the cheapest to find, and is looked upon much more favorably than one found in production.