My July 25 post listed an impediment, “Need to automate functional tests, and non-functional (governance-related) tests whenever possible”. Let’s consider performance testing: it is a pretty universal non-functional test that enterprises perform.
Not too long ago I had to create a performance testing tool from scratch and establish a performance testing process for an agile team. The client was a government agency, and luckily it was forward-thinking: it allowed us to define the performance testing process and to perform the tests ourselves. This is important because during development one needs to be able to conduct performance tests in multiple environments, repeatedly. Multiple environments are needed because it is generally very difficult to conduct tests or diagnose problems in a production-like environment that is controlled by a release management process. It therefore pays big dividends if you can conduct most of your performance testing in a scalable cloud that you have direct access to, and then later run the tests in the controlled environment as a final check.
In contrast, many projects for large organizations do not think to ask for a scalability testing environment that they will have direct access to – i.e., an environment that is not under release management control. As a result, they are stuck doing performance testing in a “test” or “staging” environment that requires hands-off deployments. This greatly inhibits diagnosis of the issues that arise; and when performance testing, there will be many issues, some very hard to diagnose without direct shell access to the machines. I used to run a performance testing lab, so I am very familiar with the kinds of issues that can arise.
Performance testing is a specialized skill, and so many organizations have a centralized performance testing function. This is nice to have if it operates according to a “coaching” model, in which the performance testing staff instruct and assist the development team in setting up performance testing. In contrast, it does not work well if the performance testing group insists on doing it all themselves: i.e., designing and possibly even executing all performance tests. To use a metaphor, agile performance testing needs to be approached as an airplane flight rather than as a rocket launch: it needs to be something that you can do again and again without a huge amount of preparation, and without an entire mission control center supporting you.
In the spirit of agile, the team needs to be responsible for building its own continuous integration tests, and those tests need to include performance testing. Performance testing is not something that you do at the end of a release: it is something that you should do repeatedly and early throughout an agile release development cycle. It should be done at least once per sprint, and ideally every day. It should not be part of the build that runs when someone checks in code; rather, it should run as a separate, regularly scheduled process (e.g., a Jenkins job).
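As an illustration only, here is a minimal sketch of the kind of harness such a scheduled job might invoke. The target URL, duration, concurrency, and latency threshold are hypothetical placeholders (not from this project), and a real harness would use a proper load-testing tool and richer metrics; the point is simply that a regularly scheduled run can fail the job when performance regresses.

```python
# A minimal sketch (not the tool described in this post) of a load-test run
# that a nightly scheduled job could invoke, separate from the per-commit build.
# TARGET_URL, DURATION_SECONDS, CONCURRENCY, and P95_THRESHOLD_MS are
# illustrative placeholders.
import time
import concurrent.futures
import urllib.request

TARGET_URL = "http://perf-env.example.internal/health"  # hypothetical endpoint
DURATION_SECONDS = 300      # length of the scheduled run
CONCURRENCY = 50            # number of simultaneous workers
P95_THRESHOLD_MS = 400      # fail the job if the 95th-percentile latency exceeds this


def worker(stop_at):
    """Hit the target repeatedly until the deadline; return observed latencies in ms."""
    latencies = []
    while time.time() < stop_at:
        start = time.time()
        try:
            with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
                resp.read()
            latencies.append((time.time() - start) * 1000.0)
        except Exception:
            latencies.append(float("inf"))  # count an error as an unbounded latency
    return latencies


def main():
    stop_at = time.time() + DURATION_SECONDS
    with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        results = pool.map(worker, [stop_at] * CONCURRENCY)
    samples = sorted(s for r in results for s in r)
    p95 = samples[int(len(samples) * 0.95)]
    throughput = len(samples) / DURATION_SECONDS
    print(f"requests: {len(samples)}  throughput: {throughput:.1f}/s  p95: {p95:.0f} ms")
    # A non-zero exit code marks the scheduled job as failed, flagging a regression.
    raise SystemExit(1 if p95 > P95_THRESHOLD_MS else 0)


if __name__ == "__main__":
    main()
```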
In addition, the development team needs to be able to run their own performance tests at will, to see if changes to the code affect performance. They will do this as they try out solutions to performance problems or other issues. For example, they might want to implement a new design pattern and need to see what the impact on performance will be: rather than make them wait until the evening’s run, why not allow them to kick off a run right then and give them fast feedback? The run can be configured to run for only five minutes, but at high load; that will tell them what they need to know. The run needs to be executed in an isolated performance testing environment so that it does not interfere with functional testing (and vice versa). If cloud services are being used, this is not a big deal: you spin up some more VMs and release them after the tests have completed. (Note: tools like Docker make it even easier to create environments on the fly very quickly.)
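To make that concrete, here is a hedged sketch of an on-demand run in a throwaway environment: start the application in a container, run a short high-load test against it, then tear the container down. The image name, port, and the load_test.py harness (and its flags) are hypothetical; the same pattern applies if you spin up cloud VMs instead of containers.

```python
# Hedged sketch: spin up an isolated copy of the application, run a short
# high-load test against it, and release the environment afterward.
# APP_IMAGE, APP_PORT, and load_test.py (and its flags) are hypothetical.
import subprocess
import time

APP_IMAGE = "myapp:latest"   # hypothetical application image
APP_PORT = 8080              # port the application listens on


def run(cmd):
    """Run a command, echo it, and return its trimmed stdout."""
    print("+", " ".join(cmd))
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout.strip()


def main():
    # Start an isolated instance of the application just for this run.
    container_id = run(["docker", "run", "-d", "-p", f"{APP_PORT}:{APP_PORT}", APP_IMAGE])
    try:
        time.sleep(10)  # crude startup wait; polling a health endpoint is better
        # Five minutes at high load is usually enough to answer
        # "did my change hurt performance?"
        run(["python", "load_test.py",
             "--url", f"http://localhost:{APP_PORT}/",
             "--duration", "300",
             "--concurrency", "200"])
    finally:
        # Tear the environment down as soon as the run completes.
        run(["docker", "rm", "-f", container_id])


if __name__ == "__main__":
    main()
```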
Let me give you an example of how it should not work. Not too long ago I worked with a client that had a centralized performance testing group. In order to use its services, you had to do the following:
(Image: an Apollo 11 checklist)
- Fill out a form to request a bridge to the test environment, which was under release management control.
- Wait for the form to be processed.
- Request the appropriate permissions for the application, which runs in the controlled environment.
- Request the required VM instances (a separate form), request that the needed ports be opened, and request that test data be transferred.
- Create a detailed performance profile, according to a predefined format.
- Wait for the performance test plan to be created and approved.
- Wait for the performance test environment to be set up.
- Coordinate with the Security team and wait for them to approve the setup.
- Obtain access to monitoring servers.
- Perform the tests.
Indeed, many of these things need to be done anyway, and writing the scripts to set up a performance testing environment can be very time consuming. But if you have to wait for a busy group to do it for you, and fill out forms along the way, then you don’t have the turnaround and diagnostic access that you need for an agile project that is changing code continuously, and you have allowed a bottleneck to exist: the performance testing group. It is far better to use this group for exploring new methods and instructing teams – elevating the capabilities of the teams – rather than having the performance testing group do all of the hard work related to performance testing. Also, you will be pleasantly surprised at the clever techniques that development teams come up with for streamlining the performance testing process.
The use of separate teams to do performance testing originates from the waterfall approach, in which there is a test phase at the end of development. This thinking has even found its way into agile projects in the form of “hardening sprints” at the end of a release. This is a bad practice because agile and devops rely on making all processes continuous – not batching things up into phases that can drag on for an indeterminate time. By performing all kinds of testing all along, one can accurately gauge the team’s rate of progress, whereas if you save one of the hardest tasks for the end, you can find that issues are unexpectedly hard to resolve – as occurred when the Healthcare.gov effort left its integration testing for the last two weeks of the project and it turned out that much more time was needed, but by then it was too late to delay the release.
The solution:
- Require your teams to do performance testing at regular intervals throughout development – not as a hardening phase at the end.
- Provide the teams with a development test environment that is just for performance testing and that they can access directly without going through release management procedures. Alternatively, allow the teams to use commercial cloud environments for routine performance testing, with a final check in your organization’s controlled test environment.
- Have your agile teams do their own performance test design and setup: make it a requirement that they do it, so that they learn how, and require them to run the tests frequently – at least once per development iteration.
- If you have a performance testing group, make them available to instruct and assist the development teams to learn how to do this type of testing: do not let the performance testing group do it for the teams! In other words, turn the members of your performance team into coaches who help others to learn how to do performance testing. This helps to make your organization a “learning organization”!