Test automation has been adopted and used effectively in the IT industry for over a decade now. One of the core objectives of test automation - alongside conventional testing - is to repeatedly exercise specific actions, logic, and business functionality with the end goal of increasing the effectiveness, efficiency, and test coverage of the software under test.
As you may recall, we tackled the issue of “myths” associated with test automation back in the summer. Now Online’s team of QA experts has banded together to bring you five important concepts that will make your testing efforts more effective.
1) When developing automated scripts, design considerations make a big difference
While it is true that we are moving towards script-less automation, test automation is largely development work, and many aspects and factors need to be investigated and considered before we jump into writing test scripts. I recommend investing time into formalizing a proof of concept, identifying the testing tools required for the different phases of testing, and analyzing the environment in which the application will be used – before any pure testing work begins. By working through these design-oriented considerations, we can write code that executes the test steps while minimizing the impact of changes to the application.
2) The development of automated scripts can be independent of test data
By definition, data-driven testing uses external input and output sources that are developed independently by members of the automation team. In this kind of automation development, data-retrieval patterns are used that associate data with keywords, actions, or identifiers, and the data is stored in a database, an Excel or CSV file, a property file, or plain text. In my experience, the automation engineer provides the test logic, while the test data is prepared by another member of the automation team and is used to improve test coverage. In addition, data-driven frameworks are preferable in test automation because changes at the data level have minimal effect on the script development itself. This concurrently improves the robustness, efficiency, and maintainability of the automated scripts.
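The separation described above can be sketched in a few lines of Python. This is a minimal illustration, not a framework recommendation: the `login` function is a hypothetical system under test, and the CSV content stands in for an external data file that a teammate would maintain separately. The test logic never changes when new rows are added.

```python
import csv
import io

# Hypothetical test data; in practice this would live in an external
# file (e.g., login_cases.csv) owned by a separate data author.
TEST_DATA = """username,password,expected
alice,correct-horse,success
bob,wrong-pass,failure
,any,failure
"""

def login(username, password):
    """Stand-in for the system under test (assumed behavior)."""
    if username == "alice" and password == "correct-horse":
        return "success"
    return "failure"

def run_data_driven_tests(csv_text):
    """Fixed test logic: each CSV row supplies inputs and the expected output."""
    results = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        actual = login(row["username"], row["password"])
        results.append((row["username"], actual == row["expected"]))
    return results

if __name__ == "__main__":
    for name, passed in run_data_driven_tests(TEST_DATA):
        print(f"{name or '<empty>'}: {'PASS' if passed else 'FAIL'}")
```

Adding a new scenario means adding a CSV row, with no script change - which is exactly why maintainability improves.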
3) Agile and Waterfall testing methods can co-exist in a Test Automation Environment/Project
Today, automation is a critical component of supporting an Agile project. Automation can be used to support much more than automated tests:
- Continuous integration/builds
- Unit, functional, and integration test execution
In a Waterfall model, projects don’t stop after one release; they need bug fixing, change requests, and additional functionality, which is then followed by regression testing. If the automation solution is developed diligently, the same scripts can support both methodologies. I am currently working on a project where I am automating test cases for early releases of software that have reached stabilization and are currently in production. The resulting automated scripts are executed regularly as part of the regression tests. In the meantime, the second release of this software is in progress, and I am using the same platform and tools to support its Agile-based automated testing.
4) Automated scripts can be developed for parallel execution
Automation solutions can be executed in parallel using different technologies and platforms. It is very important to design a proper testing environment and to use technologies that support parallel execution, such as virtual machines, to create independent and isolated test environments. By using cron jobs or continuous integration tools (e.g., Jenkins), we can trigger different test phases and types - for example unit tests, smoke tests, sanity tests, functional tests, integration tests, regression tests (partial or full), end-to-end tests, or acceptance tests.
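As a rough sketch of the fan-out pattern, the snippet below runs several independent suites concurrently with Python’s standard `concurrent.futures`. The suite names and the `run_suite` body are placeholders: in a real setup each call would launch a suite in its own isolated environment (a VM or container), typically triggered by a CI tool such as Jenkins rather than an in-process pool.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical, independent test phases; isolation between them is what
# makes parallel execution safe in a real environment.
SUITES = ["smoke", "sanity", "functional", "regression"]

def run_suite(name):
    """Stand-in for launching one isolated test suite and collecting its verdict."""
    # A real implementation might shell out to a runner or call a CI API here.
    return (name, "passed")

def run_in_parallel(suites):
    """Execute all suites concurrently and map each to its result."""
    with ThreadPoolExecutor(max_workers=len(suites)) as pool:
        return dict(pool.map(run_suite, suites))
```

The key design point is that the suites share nothing: any suite can fail or be rerun without affecting the others.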
5) Visibility and accessibility of the automated test results should be an integral part of a solution
Test automation is not only about developing scripts; it should add value to the entire team by providing insight into how the solution is operating overall. Visibility in this context means test results: the in-depth analysis of those results and any appropriate error reports. I usually create two sets of results: one that holds the details of the test execution, and one that is a summary of the results along with a failure description, if there is any.
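The two-tier reporting described above can be sketched as a simple reduction from detailed execution records to a summary. The record layout here is an assumption for illustration; a real runner would emit its own detail format.

```python
# Hypothetical detailed execution records (the first result set).
DETAILS = [
    {"test": "login_valid", "status": "passed", "error": None},
    {"test": "login_locked", "status": "failed", "error": "expected lockout message"},
    {"test": "logout", "status": "passed", "error": None},
]

def summarize(details):
    """Condense per-test detail into the summary result set,
    carrying a failure description for each failed test."""
    failures = [d for d in details if d["status"] == "failed"]
    return {
        "total": len(details),
        "passed": len(details) - len(failures),
        "failed": len(failures),
        "failure_descriptions": {d["test"]: d["error"] for d in failures},
    }
```

The detailed set supports debugging, while the summary is what gets shared with the wider team or attached to a notification.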
Accessibility covers both the scripts and the results, which should be stored in a shared, accessible location. We can use the same version control tools used for development to store these results. It is also recommended to have some kind of notification that indicates the completion of a test run, through e-mail or a continuous integration tool such as Jenkins.

Do you have any other tips or tricks that you would like to share? Feel free to leave a comment below! If you would like to learn more about Online’s Business Consulting practice, click here.