Once you have set up multiple environments for your application, the next step is to define a proper release management process: a well-defined set of steps describing how code moves between the development, staging, and production environments, how testing is performed, and who fixes bugs and how.
What is a release
In an agile project, the team works in iterations. Each iteration is completed within a sprint, its results are evaluated, and the next sprint is planned. A release is a bigger chunk of work on the roadmap that usually contains a number of user stories that together comprise a feature. While each user story and each iteration should result in working, testable code deployed to the development server, a release is a set of user stories that together solve a user’s problem and let them complete tasks that were not possible before.
You can think of a release as a milestone on a roadmap. This milestone is assigned a deadline, and a set of user stories is expected to be delivered by that deadline. Releases are vital because they bring order to the somewhat chaotic flow of agile software development. Knowing your priorities for a release and being constrained by a fixed date lets you make better product management decisions.
Release cycle
A typical release consists of three to four iterations during which a set of user stories is implemented and tested. During work on a release, all critical bugs should be fixed before the new functionality is shown to the public. Because the release code needs to be tested, you should plan for two releases to overlap. A typical approach is to split the team so that there is an implementation team and a release support team. Alternatively, you can split the time so that the same developers spend half of their time implementing user stories for the new release and half fixing bugs in the previous one. The release cycle flow in your team can look something like this:
- Week 1. Release 1, Implementation week 1
- Week 2. Release 1, Implementation week 2
- Week 3. Release 1, Implementation week 3
- Week 4. Release 1, Implementation week 4
- Week 5. Release 1, Testing week 1. Release 2, Implementation week 1
- Week 6. Release 1, Bug fixing week 1. Release 2, Implementation week 2
- Week 7. Release 1, Production deploy and testing week 1. Release 2, Implementation week 3
- Week 8. Release 1, Production bug fixing week 1. Release 2, Implementation week 4
Of course, your actual release duration and the amount of time you need for testing and implementation will depend on your project and your team. Unfortunately, nothing ever goes according to plan. Sometimes implementation takes far more time than planned, or critical bugs appear that take a long time to fix. In that case, you either have to simplify the features and user stories that go into the release or drop certain user stories from the release entirely, depending on priorities. When planning the release, you should already have a plan A and a plan B, so that when you need to cut functionality, you already know exactly what you will skip.
Release debt
Some time ago I stumbled upon a great article by William Holroyd about release debt. You create release debt when you don’t release changes implemented by the team as soon as they become available. While William Holroyd talks about the traditional waterfall process compared to the agile process, I see that many agile teams also accumulate release debt, just in a different form.
The primary source of release debt in an agile team is not deploying from staging to production when you are supposed to. This might happen because a critical date is approaching, or because scope creep left the team without enough time to test the release code thoroughly. The only way to avoid it is to plan less for a release (split big critical pieces into multiple phases, simplify the implementation, and add complexity incrementally) or to extend the amount of time allotted to the release.
As the amount of unreleased functionality grows, so do the bugs and issues in the new release. The number of problems from integrating new changes will also grow, and soon your team might find itself in a tough situation where it has to implement new features while the code it builds on keeps changing because of bug fixes.
Automated tests and continuous deployment
In an early-stage startup, every hour and dollar spent counts, and automated tests are often seen as unnecessary work. After all, they don’t guarantee that there will be no bugs, they take time to write and maintain (usually about as much time as writing the functionality itself), and they even fail spuriously from time to time. Yet automated tests exist for two main reasons, and unfortunately, the only way to learn them is to get burned by the lack of tests.
The first and biggest reason is that tests prevent most of the regressions caused by code updates. Every time the code is changed, the test suite runs to make sure that the functionality that was there before still works as expected. Of course, you can discover such issues by manual testing, but that often happens later in the release cycle, delaying the fix, while an automated test suite runs every time the code is merged.
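To make this concrete, here is a minimal sketch of such a regression test using Python’s built-in unittest module. The apply_discount function is hypothetical, a stand-in for whatever existing functionality you want to protect from future changes:

```python
import unittest

# Hypothetical application code under test.
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_regular_discount(self):
        # Guards existing behavior: a later change that breaks
        # the formula or the rounding will fail this test.
        self.assertEqual(apply_discount(100.0, 15), 85.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```

Run on every merge, a suite of tests like this catches a regression within minutes instead of weeks into the release cycle.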
The other big reason is continuous deployment, one of the best practices of release management. It is a strategy where updates are deployed as soon as the code is committed, merged, and passes the automated tests. Continuous deployment removes most of the burden from QA engineers, as many bugs introduced during code merges are caught by the automated tests.
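In practice this is usually wired into a CI service (Jenkins, GitHub Actions, and the like), but the gating logic boils down to something like the following sketch. Both commands here are assumptions: the test command depends on your project layout, and deploy_to_staging.sh stands in for whatever script pushes the merged code to your staging environment:

```python
import subprocess
import sys

def run(command):
    """Run a shell command, echoing it, and report success."""
    print(f"$ {command}")
    return subprocess.run(command, shell=True).returncode == 0

def main():
    # Gate 1: the full automated test suite must pass.
    if not run("python -m unittest discover tests"):
        sys.exit("Tests failed; nothing gets deployed.")

    # Gate 2: on success, deploy to staging only.
    # deploy_to_staging.sh is a hypothetical deployment script.
    if not run("./deploy_to_staging.sh"):
        sys.exit("Staging deploy failed.")

    # The production deploy stays a separate, manually tested
    # step of the release cycle.
    print("Deployed to staging.")

if __name__ == "__main__":
    main()
```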
Sometimes continuous deployment is understood as pushing code from the development server directly to production, so that all users, or just a portion of them, get the most recent changes as soon as possible. I don’t know any team or product where that approach is used, mostly because you can’t get away from manual testing. Don’t turn your users into testers: your code has to be tested manually so that the release ships without the most apparent issues.