The Test Automation Pyramid


What is it?

The test automation pyramid is a concept related to automated testing as part of agile methods. The pyramid in the name comes from the pictorial representation of the concept: a triangle with horizontal bands representing unit testing, service testing and UI testing. The concept and its pictorial representation show the number of tests one might normally expect to perform at each of those levels of automated testing, with many more tests expected at the unit test level than at the UI test level.

The concept is credited to Mike Cohn of Mountain Goat Software, although I suspect it was a product of its time and of the developments taking place in the agile community then. Mike introduced the concept in his book “Succeeding with Agile” and then mentioned it in a blog post. It has been discussed by Martin Fowler, and an interesting review of the concept and its variants exists at the tar pit.

There has been some debate over the names of the layers and over what each of them represents, but broadly speaking the following seems to apply:

  • The UI layer – the user interface layer, also called the end-to-end layer or system testing layer. This layer normally requires testing the whole system through the user interface or another external interface, so that the entire functionality of the system can be exercised at once. Because it tests the whole system, it can also be known as, or co-opted to become, the “acceptance testing” layer. This method of automated testing represents what the user might experience (which is why it lends itself to acceptance testing) or what the system as a whole might experience. These tests are often brittle (i.e. easy to break when the code changes), especially when they have to simulate UI actions; they can be difficult to write and can take some time to run. This is why, in general, we try to minimise the number of tests in this layer.
  • The service layer – also called the “API” layer, the “integration” layer or the “component” layer. Testing at this layer avoids going through the user interface, in favour of using the service, API or component interfaces directly. This makes the test code less brittle, easier to write and faster to execute. The tests are, however, still testing multiple elements of the system at the same time. Where the system is UI based and designed around a common pattern such as MVC, MVP or MVVM, this level normally tests at the Controller, Presenter or View Model level, respectively. In general, there are more of these tests than there are UI tests.
  • The unit tests – which are rarely called anything else, are the unit tests for classes and the like, which software developers commonly write using an xUnit-style framework. These tests allow the smallest units of code to be tested thoroughly, normally in isolation, to check that they do just what you require. As such, there are normally many more of these tests than there are of either service level or UI level tests (see the sketch after this list).
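
To make the lower two layers more concrete, the sketch below uses Python’s unittest module (an xUnit-style framework). The DiscountCalculator, OrderService and InMemoryRepository classes are hypothetical examples invented for illustration, not taken from any particular system: the first test exercises a single class in isolation (the unit level), while the second drives a service-layer component through its API, with no user interface involved.

    import unittest

    class DiscountCalculator:
        """Smallest unit under test: a pure business rule with no dependencies."""
        def discount_for(self, order_total):
            return 0.10 if order_total >= 100 else 0.0

    class InMemoryRepository:
        """Stand-in for a real data store, keeping the service-level test fast."""
        def __init__(self):
            self.saved = []
        def save(self, order):
            self.saved.append(order)

    class OrderService:
        """Service-layer component combining the calculator and a repository."""
        def __init__(self, repository, calculator):
            self._repository = repository
            self._calculator = calculator
        def place_order(self, order_total):
            discount = self._calculator.discount_for(order_total)
            order = {"total": order_total * (1 - discount)}
            self._repository.save(order)
            return order

    class UnitLevelTest(unittest.TestCase):
        # Unit layer: one class, tested in isolation.
        def test_discount_applies_at_threshold(self):
            self.assertEqual(DiscountCalculator().discount_for(100), 0.10)

    class ServiceLevelTest(unittest.TestCase):
        # Service layer: exercises the service API directly, bypassing any UI.
        def test_order_is_saved_with_discount_applied(self):
            repository = InMemoryRepository()
            service = OrderService(repository, DiscountCalculator())
            order = service.place_order(200)
            self.assertAlmostEqual(order["total"], 180.0)
            self.assertEqual(len(repository.saved), 1)

    if __name__ == "__main__":
        unittest.main()

A UI-level test of the same behaviour would instead drive the application through a browser or other external interface, which is exactly why such tests tend to be slower and more brittle, and why the pyramid suggests having far fewer of them.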

My early experiences of testing

Many years ago, I dread to think how many now, it was commonplace for systems to be broken into sub-systems and then into modules; indeed, this may still be in use, but probably less so these days. For each of the modules, specifications were developed and tests written to prove the operation of those modules. These were like the unit tests we now see in agile; however, in my experience they were rarely (if ever) automated. These days I struggle to see why that leap to automation was not taken earlier! The benefit of hindsight is, of course, a wonderful thing.

Manually running module tests was time-consuming. Therefore, once you had managed to pass them, you often did not run them again, unless you were fixing code within that module and needed to re-run them for regression purposes. I’m sure this left a lot to be desired in terms of regression test coverage.

Above the module tests were integration tests. An integration plan was normally developed to define how modules would be integrated into subsystems and subsystems into the system. At certain points in this integration cycle, when sets of elements had been joined together, integration tests were used to ensure those elements functioned correctly together. Once again, these tests were normally run manually.

Once the system had been integrated (and sometimes before, with various integration products), system level tests would be run. These would test the limits of the system: how fast it could go, how much it could store, how many users could use it simultaneously, and the like. These tests often took a great deal of organising and were often somewhat artificial, because driving the system to its limits normally required contrived circumstances.

The eventual aim was to prove to the user that the system designed and built for them functioned as they expected, and to allow them to sign off delivery of the system. These acceptance tests were normally the top level of tests, putting the system through its paces exactly as the user expected to see it operate.

As per the normal V model lifecycle, if a test failed at the system, acceptance or user testing level, a fix would be made, the relevant module and integration tests would have to be re-run and passed, and then either all, or the relevant parts, of the system, acceptance or user tests would have to be re-run. Given the manual nature of all of these tests, this repeated cycle through the various levels of testing was expensive and painful, leading to long timescales to prepare a system for release and to longer release cycles. This is still true for many large software systems released today.

Similarities and differences …

The test automation pyramid from today’s agile methods bears some similarities to those earlier, more conventional methods of development. In particular, the number of manual module tests would normally have been larger than the number of integration tests, which in turn would likely have been larger than the number of acceptance tests. This pattern matches the relative numbers of automated tests run at the unit, service and UI levels in the test automation pyramid. The primary difference is that one set is run manually and the other is automated.

Having tried both approaches, I definitely favour the agile methods and automated testing. I would contrast the warm feeling of seeing a green test bar or a clean automated build after changing some fundamental piece of code with the fear of not having run enough manual regression tests at each level to be sure of catching any bug, or (at worst) of having to re-run all the tests!

Even better, bug fixes, refactoring and feature enhancements can be rolled out with relative ease because of the confidence engendered by being able to run a full test suite quickly, rather than having to fight through a large number of manual tests.

I find myself these days trying to meld the automated testing of agile methods with the standard and conventional methods I used to employ to drive the best outcome for the projects I am involved in. So far, I have been pleased with the results.

 

Mark Davison, Terzo Digital, April 2016